id
stringlengths 14
15
| text
stringlengths 35
2.07k
| embedding
sequence | source
stringlengths 61
154
|
---|---|---|---|
05e8e554f6ac-3 | get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object | [
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
1483,
2484,
60,
55609,
198,
1991,
279,
4037,
29460,
1701,
279,
87272,
5963,
6462,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
05e8e554f6ac-4 | Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAIChat.html |
c310321705d5-0 | langchain.llms.gooseai.GooseAI¶
class langchain.llms.gooseai.GooseAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'gpt-neo-20b', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, min_tokens: int = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, model_kwargs: Dict[str, Any] = None, logit_bias: Optional[Dict[str, float]] = None, gooseai_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency. | [
5317,
8995,
60098,
1026,
1326,
14070,
2192,
1246,
14070,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
1326,
14070,
2192,
1246,
14070,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
70,
418,
41078,
78,
12,
508,
65,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1973,
29938,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
16,
11,
1332,
29938,
25,
528,
284,
220,
16,
11,
11900,
83386,
25,
2273,
284,
220,
15,
11,
9546,
83386,
25,
2273,
284,
220,
15,
11,
308,
25,
528,
284,
220,
16,
11,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
284,
2290,
11,
63237,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
5377,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1825,
2192,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
12890,
76734,
15836,
11669,
6738,
743,
449,
701,
5446,
1401,
627,
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1825,
2192,
2581,
1650,
649,
387,
5946,
198,
258,
11,
1524,
422,
539,
21650,
6924,
389,
420,
538,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
82207,
15836,
198,
3427,
974,
2192,
284,
82207,
15836,
7790,
1292,
429,
70,
418,
41078,
78,
12,
508,
65,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
3016,
25,
5884,
284,
2290,
55609,
198,
913,
11900,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
c310321705d5-1 | param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param gooseai_api_key: Optional[str] = None¶
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
param min_tokens: int = 1¶
The minimum number of tokens to generate in the completion.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-neo-20b'¶
Model name to use
param n: int = 1¶
How many completions to generate for each prompt.
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | [
913,
11900,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
627,
913,
63237,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
510,
15669,
60,
55609,
198,
39716,
279,
19463,
315,
3230,
11460,
1694,
8066,
627,
913,
1973,
29938,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
12,
16,
4780,
439,
1690,
11460,
439,
3284,
2728,
279,
10137,
323,
198,
1820,
4211,
54229,
2317,
1404,
627,
913,
1332,
29938,
25,
528,
284,
220,
16,
55609,
198,
791,
8187,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
70,
418,
41078,
78,
12,
508,
65,
6,
55609,
198,
1747,
836,
311,
1005,
198,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
627,
913,
9546,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
198,
913,
1948,
623,
25,
2273,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
c310321705d5-2 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ | [
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
c310321705d5-3 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶ | [
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
c310321705d5-4 | property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic config.
extra = 'ignore'¶ | [
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
2242,
627,
15824,
284,
364,
13431,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.gooseai.GooseAI.html |
e11c7e3557b8-0 | langchain.llms.openllm.OpenLLM¶
class langchain.llms.openllm.OpenLLM(model_name: Optional[str] = None, *, model_id: Optional[str] = None, server_url: Optional[str] = None, server_type: Literal['grpc', 'http'] = 'http', embedded: bool = True, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, llm_kwargs: Dict[str, Any])[source]¶
Bases: LLM
Wrapper for accessing OpenLLM, supporting both in-process model
instance and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM model locally managed by OpenLLM:from langchain.llms import OpenLLM
llm = OpenLLM(
model_name='flan-t5',
model_id='google/flan-t5-large',
)
llm("What is the difference between a duck and a goose?")
For all available supported models, you can run ‘openllm models’.
If you have a OpenLLM server running, you can also use it remotely:from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose?")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶ | [
5317,
8995,
60098,
1026,
5949,
657,
76,
13250,
4178,
44,
55609,
198,
1058,
8859,
8995,
60098,
1026,
5949,
657,
76,
13250,
4178,
44,
7790,
1292,
25,
12536,
17752,
60,
284,
2290,
11,
12039,
1646,
851,
25,
12536,
17752,
60,
284,
2290,
11,
3622,
2975,
25,
12536,
17752,
60,
284,
2290,
11,
3622,
1857,
25,
50774,
681,
57685,
518,
364,
1277,
663,
284,
364,
1277,
518,
23711,
25,
1845,
284,
3082,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
9507,
76,
37335,
25,
30226,
17752,
11,
5884,
41105,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
369,
32888,
5377,
4178,
44,
11,
12899,
2225,
304,
51194,
1646,
198,
4956,
323,
8870,
5377,
4178,
44,
16692,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1825,
657,
76,
6875,
10487,
512,
52601,
4685,
1825,
657,
76,
198,
24762,
810,
520,
25,
3788,
1129,
5316,
916,
3554,
306,
316,
75,
38744,
657,
76,
198,
13617,
4401,
459,
445,
11237,
1646,
24392,
9152,
555,
5377,
4178,
44,
25,
1527,
8859,
8995,
60098,
1026,
1179,
5377,
4178,
44,
198,
657,
76,
284,
5377,
4178,
44,
1021,
262,
1646,
1292,
1151,
1517,
276,
2442,
20,
756,
262,
1646,
851,
1151,
17943,
59403,
276,
2442,
20,
40248,
756,
340,
657,
76,
446,
3923,
374,
279,
6811,
1990,
264,
37085,
323,
264,
63237,
71928,
2520,
682,
2561,
7396,
4211,
11,
499,
649,
1629,
3451,
2569,
657,
76,
4211,
529,
627,
2746,
499,
617,
264,
5377,
4178,
44,
3622,
4401,
11,
499,
649,
1101,
1005,
433,
39529,
25,
1527,
8859,
8995,
60098,
1026,
1179,
5377,
4178,
44,
198,
657,
76,
284,
5377,
4178,
44,
22136,
2975,
1151,
1277,
1129,
8465,
25,
3101,
15,
1329,
657,
76,
446,
3923,
374,
279,
6811,
1990,
264,
37085,
323,
264,
63237,
71928,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
e11c7e3557b8-1 | param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param embedded: bool = True¶
Initialize this LLM instance in current process by default. Should
only set to False when using in conjunction with BentoML Service.
param llm_kwargs: Dict[str, Any] [Required]¶
Key word arguments to be passed to openllm.LLM
param model_id: Optional[str] = None¶
Model Id to use. If not provided, will use the default model for the model name.
See ‘openllm models’ for all available model variants.
param model_name: Optional[str] = None¶
Model name to use. See ‘openllm models’ for all available models.
param server_type: ServerType = 'http'¶
Optional server type. Either ‘http’ or ‘grpc’.
param server_url: Optional[str] = None¶
Optional server URL that currently runs a LLMServer with ‘openllm start’.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
23711,
25,
1845,
284,
3082,
55609,
198,
10130,
420,
445,
11237,
2937,
304,
1510,
1920,
555,
1670,
13,
12540,
198,
3323,
743,
311,
3641,
994,
1701,
304,
32546,
449,
426,
17996,
2735,
5475,
627,
913,
9507,
76,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
8327,
60,
55609,
198,
1622,
3492,
6105,
311,
387,
5946,
311,
1825,
657,
76,
1236,
11237,
198,
913,
1646,
851,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
1747,
5336,
311,
1005,
13,
1442,
539,
3984,
11,
690,
1005,
279,
1670,
1646,
369,
279,
1646,
836,
627,
10031,
3451,
2569,
657,
76,
4211,
529,
369,
682,
2561,
1646,
27103,
627,
913,
1646,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
1747,
836,
311,
1005,
13,
3580,
3451,
2569,
657,
76,
4211,
529,
369,
682,
2561,
4211,
627,
913,
3622,
1857,
25,
8588,
941,
284,
364,
1277,
6,
55609,
198,
15669,
3622,
955,
13,
21663,
3451,
1277,
529,
477,
3451,
57685,
529,
627,
913,
3622,
2975,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
15669,
3622,
5665,
430,
5131,
8640,
264,
20072,
4931,
2906,
449,
3451,
2569,
657,
76,
1212,
529,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
e11c7e3557b8-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text. | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
e11c7e3557b8-3 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property runner: openllm.LLMRunner¶
Get the underlying openllm.LLMRunner instance for integration with BentoML.
Example:
.. code-block:: python | [
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
3784,
23055,
25,
1825,
657,
76,
1236,
11237,
20051,
55609,
198,
1991,
279,
16940,
1825,
657,
76,
1236,
11237,
20051,
2937,
369,
18052,
449,
426,
17996,
2735,
627,
13617,
512,
497,
2082,
9612,
487,
10344
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
e11c7e3557b8-4 | Example:
.. code-block:: python
llm = OpenLLM(model_name=’flan-t5’,
model_id=’google/flan-t5-large’,
embedded=False,
)
tools = load_tools([“serpapi”, “llm-math”], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service(“langchain-openllm”, runners=[llm.runner])
@svc.api(input=Text(), output=Text())
def chat(input_text: str):
return agent.run(input_text)
model Config[source]¶
Bases: object
extra = 'forbid'¶ | [
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
284,
5377,
4178,
44,
7790,
1292,
28,
529,
1517,
276,
2442,
20,
529,
345,
2590,
851,
28,
529,
17943,
59403,
276,
2442,
20,
40248,
529,
345,
70964,
5725,
345,
340,
16297,
284,
2865,
40823,
2625,
2118,
805,
79,
2113,
9520,
1054,
657,
76,
1474,
589,
863,
1145,
9507,
76,
28,
657,
76,
340,
8252,
284,
9656,
26814,
1021,
16297,
11,
9507,
76,
11,
8479,
28,
17230,
941,
70948,
6977,
1831,
2241,
6966,
39268,
198,
340,
59194,
284,
30280,
316,
75,
14181,
7,
2118,
5317,
8995,
26770,
657,
76,
9520,
39380,
5941,
657,
76,
42328,
2608,
31,
59194,
6314,
5498,
28,
1199,
1535,
2612,
28,
1199,
2455,
755,
6369,
5498,
4424,
25,
610,
997,
693,
8479,
7789,
5498,
4424,
340,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openllm.OpenLLM.html |
369e58a46baf-0 | langchain.llms.octoai_endpoint.OctoAIEndpoint¶
class langchain.llms.octoai_endpoint.OctoAIEndpoint(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: Optional[str] = None, model_kwargs: Optional[dict] = None, octoai_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around OctoAI Inference Endpoints.
OctoAIEndpoint is a class to interact with OctoAICompute Service large language model endpoints.
To use, you should have the octoai python package installed, and the
environment variable OCTOAI_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms.octoai_endpoint import OctoAIEndpoint
OctoAIEndpoint(
octoai_api_token="octoai-api-key",
endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
model_kwargs={
"max_new_tokens": 200,
"temperature": 0.75,
"top_p": 0.95,
"repetition_penalty": 1,
"seed": None,
"stop": [],
},
)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶ | [
5317,
8995,
60098,
1026,
14778,
302,
78,
2192,
37799,
8548,
302,
78,
15836,
28480,
55609,
198,
1058,
8859,
8995,
60098,
1026,
14778,
302,
78,
2192,
37799,
8548,
302,
78,
15836,
28480,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
15233,
2975,
25,
12536,
17752,
60,
284,
2290,
11,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
11,
18998,
78,
2192,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
5020,
78,
15836,
763,
2251,
4060,
7862,
627,
18544,
78,
15836,
28480,
374,
264,
538,
311,
16681,
449,
5020,
78,
15836,
47354,
5475,
3544,
4221,
1646,
37442,
627,
1271,
1005,
11,
499,
1288,
617,
279,
18998,
78,
2192,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
67277,
46,
15836,
11669,
19199,
743,
449,
701,
5446,
4037,
11,
477,
1522,
198,
275,
439,
264,
7086,
5852,
311,
279,
4797,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
14778,
302,
78,
2192,
37799,
220,
1179,
5020,
78,
15836,
28480,
198,
18544,
78,
15836,
28480,
1021,
262,
18998,
78,
2192,
11959,
6594,
429,
42792,
78,
2192,
24851,
16569,
761,
262,
15233,
2975,
429,
2485,
1129,
76,
418,
12,
22,
65,
59993,
12934,
74,
15,
22491,
83,
3534,
83,
3172,
14778,
302,
78,
2192,
17365,
4951,
13523,
761,
262,
1646,
37335,
18013,
286,
330,
2880,
6046,
29938,
794,
220,
1049,
345,
286,
330,
35658,
794,
220,
15,
13,
2075,
345,
286,
330,
3565,
623,
794,
220,
15,
13,
2721,
345,
286,
330,
265,
56867,
83386,
794,
220,
16,
345,
286,
330,
23425,
794,
2290,
345,
286,
330,
9684,
794,
10450,
262,
1173,
340,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
369e58a46baf-1 | param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint_url: Optional[str] = None¶
Endpoint URL to use.
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param octoai_api_token: Optional[str] = None¶
OCTOAI API Token
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages. | [
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
15233,
2975,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
28480,
5665,
311,
1005,
627,
913,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
55609,
198,
1622,
3492,
6105,
311,
1522,
311,
279,
1646,
627,
913,
18998,
78,
2192,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
46,
1182,
46,
15836,
5446,
9857,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
369e58a46baf-2 | Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it. | [
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
369e58a46baf-3 | validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.octoai_endpoint.OctoAIEndpoint.html |
a330eff80ffd-0 | langchain.llms.bedrock.Bedrock¶
class langchain.llms.bedrock.Bedrock(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, region_name: Optional[str] = None, credentials_profile_name: Optional[str] = None, model_id: str, model_kwargs: Optional[Dict] = None)[source]¶
Bases: LLM
LLM provider to invoke Bedrock models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param credentials_profile_name: Optional[str] = None¶
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
param model_id: str [Required]¶ | [
5317,
8995,
60098,
1026,
91446,
21161,
1823,
291,
21161,
55609,
198,
1058,
8859,
8995,
60098,
1026,
91446,
21161,
1823,
291,
21161,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
5654,
1292,
25,
12536,
17752,
60,
284,
2290,
11,
16792,
14108,
1292,
25,
12536,
17752,
60,
284,
2290,
11,
1646,
851,
25,
610,
11,
1646,
37335,
25,
12536,
58,
13755,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
4178,
44,
9287,
311,
20466,
13394,
21161,
4211,
627,
1271,
34289,
11,
279,
24124,
3016,
5829,
279,
2768,
5528,
311,
198,
28172,
7167,
2865,
16792,
512,
2485,
1129,
65,
2117,
18,
29871,
916,
5574,
16,
86686,
10729,
34249,
4951,
35805,
14,
33453,
2628,
198,
2746,
264,
3230,
41307,
5643,
1288,
387,
1511,
11,
499,
2011,
1522,
198,
1820,
836,
315,
279,
5643,
505,
279,
41058,
8805,
14,
33453,
1052,
430,
374,
311,
387,
1511,
627,
8238,
2771,
279,
16792,
611,
13073,
1511,
617,
279,
2631,
10396,
311,
198,
5323,
279,
13394,
21161,
2532,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
16792,
14108,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
836,
315,
279,
5643,
304,
279,
41058,
8805,
14,
33453,
477,
41058,
8805,
15072,
3626,
11,
902,
198,
4752,
3060,
2680,
7039,
477,
3560,
2038,
5300,
627,
2746,
539,
5300,
11,
279,
1670,
41307,
5643,
477,
11,
422,
389,
459,
21283,
17,
2937,
345,
33453,
505,
6654,
6061,
690,
387,
1511,
627,
10031,
25,
3788,
1129,
65,
2117,
18,
29871,
916,
5574,
16,
86686,
10729,
34249,
4951,
35805,
14,
33453,
2628,
198,
913,
1646,
851,
25,
610,
510,
8327,
60,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
a330eff80ffd-1 | param model_id: str [Required]¶
Id of the model to call, e.g., amazon.titan-tg1-large, this is
equivalent to the modelId property in the list-foundation-models api
param model_kwargs: Optional[Dict] = None¶
Key word arguments to pass to the model.
param region_name: Optional[str] = None¶
The aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config in case it is not provided here.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text. | [
913,
1646,
851,
25,
610,
510,
8327,
60,
55609,
198,
769,
315,
279,
1646,
311,
1650,
11,
384,
1326,
2637,
39516,
739,
13145,
2442,
70,
16,
40248,
11,
420,
374,
198,
26378,
12031,
311,
279,
1646,
769,
3424,
304,
279,
1160,
2269,
4159,
29344,
82,
6464,
198,
913,
1646,
37335,
25,
12536,
58,
13755,
60,
284,
2290,
55609,
198,
1622,
3492,
6105,
311,
1522,
311,
279,
1646,
627,
913,
5654,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
32621,
5654,
384,
1326,
2637,
603,
38702,
12,
17,
13,
30743,
1445,
311,
24124,
14131,
40279,
6233,
3977,
198,
269,
5654,
5300,
304,
41058,
8805,
15072,
304,
1162,
433,
374,
539,
3984,
1618,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
a330eff80ffd-2 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python | [
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
a330eff80ffd-3 | Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that AWS credentials to and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
24124,
16792,
311,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.bedrock.Bedrock.html |
318083b09c2e-0 | langchain.llms.cohere.completion_with_retry¶
langchain.llms.cohere.completion_with_retry(llm: Cohere, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call. | [
5317,
8995,
60098,
1026,
522,
2319,
486,
916,
14723,
6753,
63845,
55609,
198,
5317,
8995,
60098,
1026,
522,
2319,
486,
916,
14723,
6753,
63845,
36621,
76,
25,
84675,
486,
11,
3146,
9872,
25,
5884,
8,
11651,
5884,
76747,
60,
55609,
198,
10464,
5899,
4107,
311,
23515,
279,
9954,
1650,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.completion_with_retry.html |
fe7afdc6a659-0 | langchain.llms.petals.Petals¶
class langchain.llms.petals.Petals(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, tokenizer: Any = None, model_name: str = 'bigscience/bloom-petals', temperature: float = 0.7, max_new_tokens: int = 256, top_p: float = 0.9, top_k: Optional[int] = None, do_sample: bool = True, max_length: Optional[int] = None, model_kwargs: Dict[str, Any] = None, huggingface_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import petals
petals = Petals()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
The client to use for the API calls.
param do_sample: bool = True¶
Whether or not to use sampling; use greedy decoding otherwise.
param huggingface_api_key: Optional[str] = None¶ | [
5317,
8995,
60098,
1026,
80962,
1147,
1087,
295,
1147,
55609,
198,
1058,
8859,
8995,
60098,
1026,
80962,
1147,
1087,
295,
1147,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
47058,
25,
5884,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
16548,
40657,
3554,
18981,
2320,
295,
1147,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1973,
6046,
29938,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
15,
13,
24,
11,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
11,
656,
17949,
25,
1845,
284,
3082,
11,
1973,
5228,
25,
12536,
19155,
60,
284,
2290,
11,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
305,
36368,
1594,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
11586,
1147,
25517,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
96740,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
473,
3014,
50537,
20342,
11669,
6738,
743,
449,
701,
5446,
1401,
627,
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1650,
649,
387,
5946,
198,
258,
11,
1524,
422,
539,
21650,
6924,
389,
420,
538,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
96740,
198,
7005,
1147,
284,
11586,
1147,
746,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
3016,
25,
5884,
284,
2290,
55609,
198,
791,
3016,
311,
1005,
369,
279,
5446,
6880,
627,
913,
656,
17949,
25,
1845,
284,
3082,
55609,
198,
25729,
477,
539,
311,
1005,
25936,
26,
1005,
57080,
48216,
6062,
627,
913,
305,
36368,
1594,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.petals.Petals.html |
fe7afdc6a659-1 | param huggingface_api_key: Optional[str] = None¶
param max_length: Optional[int] = None¶
The maximum length of the sequence to be generated.
param max_new_tokens: int = 256¶
The maximum number of new tokens to generate in the completion.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call
not explicitly specified.
param model_name: str = 'bigscience/bloom-petals'¶
The model to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use
param tokenizer: Any = None¶
The tokenizer to use for the API calls.
param top_k: Optional[int] = None¶
The number of highest probability vocabulary tokens
to keep for top-k-filtering.
param top_p: float = 0.9¶
The cumulative probability for top-p sampling.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
305,
36368,
1594,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1973,
5228,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
791,
7340,
3160,
315,
279,
8668,
311,
387,
8066,
627,
913,
1973,
6046,
29938,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
502,
11460,
311,
7068,
304,
279,
9954,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
198,
1962,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
16548,
40657,
3554,
18981,
2320,
295,
1147,
6,
55609,
198,
791,
1646,
311,
1005,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
198,
913,
47058,
25,
5884,
284,
2290,
55609,
198,
791,
47058,
311,
1005,
369,
279,
5446,
6880,
627,
913,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
791,
1396,
315,
8592,
19463,
36018,
11460,
198,
998,
2567,
369,
1948,
12934,
33548,
287,
627,
913,
1948,
623,
25,
2273,
284,
220,
15,
13,
24,
55609,
198,
791,
40944,
19463,
369,
1948,
2320,
25936,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.petals.Petals.html |
fe7afdc6a659-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶ | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.petals.Petals.html |
fe7afdc6a659-3 | get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object | [
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.petals.Petals.html |
fe7afdc6a659-4 | model Config[source]¶
Bases: object
Configuration for this pydantic config.
extra = 'forbid'¶ | [
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
2242,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.petals.Petals.html |
67a5a9e0f47d-0 | langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference¶
class langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, max_new_tokens: int = 512, top_k: Optional[int] = None, top_p: Optional[float] = 0.95, typical_p: Optional[float] = 0.95, temperature: float = 0.8, repetition_penalty: Optional[float] = None, stop_sequences: List[str] = None, seed: Optional[int] = None, inference_server_url: str = '', timeout: int = 120, server_kwargs: Dict[str, Any] = None, stream: bool = False, client: Any = None, async_client: Any = None)[source]¶
Bases: LLM
HuggingFace text generation inference API.
This class is a wrapper around the HuggingFace text generation inference API.
It is used to generate text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when generating text.
- repetition_penalty: The repetition penalty to use when generating text.
- stop_sequences: A list of stop sequences to use when generating text.
- seed: The seed to use when generating text.
- inference_server_url: The URL of the inference server to use. | [
5317,
8995,
60098,
1026,
870,
36368,
1594,
4424,
16724,
1265,
2251,
3924,
36368,
16680,
1199,
10172,
644,
2251,
55609,
198,
1058,
8859,
8995,
60098,
1026,
870,
36368,
1594,
4424,
16724,
1265,
2251,
3924,
36368,
16680,
1199,
10172,
644,
2251,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
1973,
6046,
29938,
25,
528,
284,
220,
8358,
11,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
11,
1948,
623,
25,
12536,
96481,
60,
284,
220,
15,
13,
2721,
11,
14595,
623,
25,
12536,
96481,
60,
284,
220,
15,
13,
2721,
11,
9499,
25,
2273,
284,
220,
15,
13,
23,
11,
54515,
83386,
25,
12536,
96481,
60,
284,
2290,
11,
3009,
59832,
25,
1796,
17752,
60,
284,
2290,
11,
10533,
25,
12536,
19155,
60,
284,
2290,
11,
45478,
12284,
2975,
25,
610,
284,
9158,
9829,
25,
528,
284,
220,
4364,
11,
3622,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
4365,
25,
1845,
284,
3641,
11,
3016,
25,
5884,
284,
2290,
11,
3393,
8342,
25,
5884,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
39,
36368,
16680,
1495,
9659,
45478,
5446,
627,
2028,
538,
374,
264,
13564,
2212,
279,
473,
36368,
16680,
1495,
9659,
45478,
5446,
627,
2181,
374,
1511,
311,
7068,
1495,
505,
264,
2728,
10137,
627,
10738,
512,
12,
1973,
6046,
29938,
25,
578,
7340,
1396,
315,
11460,
311,
7068,
627,
12,
1948,
4803,
25,
578,
1396,
315,
1948,
12934,
11460,
311,
2980,
994,
24038,
1495,
627,
12,
1948,
623,
25,
578,
40944,
19463,
12447,
369,
24038,
1495,
627,
12,
14595,
623,
25,
578,
14595,
19463,
12447,
369,
24038,
1495,
627,
12,
9499,
25,
578,
9499,
311,
1005,
994,
24038,
1495,
627,
12,
54515,
83386,
25,
578,
54515,
16750,
311,
1005,
994,
24038,
1495,
627,
12,
3009,
59832,
25,
362,
1160,
315,
3009,
24630,
311,
1005,
994,
24038,
1495,
627,
12,
10533,
25,
578,
10533,
311,
1005,
994,
24038,
1495,
627,
12,
45478,
12284,
2975,
25,
578,
5665,
315,
279,
45478,
3622,
311,
1005,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
67a5a9e0f47d-1 | - inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- server_kwargs: The keyword arguments to pass to the inference server.
- client: The client object used to communicate with the inference server.
- async_client: The async client object used to communicate with the server.
Methods:
- _call: Generates text based on a given prompt and stop sequences.
- _acall: Async generates text based on a given prompt and stop sequences.
- _llm_type: Returns the type of LLM.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param async_client: Any = None¶
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param inference_server_url: str = ''¶
param max_new_tokens: int = 512¶
param repetition_penalty: Optional[float] = None¶
param seed: Optional[int] = None¶
param server_kwargs: Dict[str, Any] [Optional]¶
param stop_sequences: List[str] [Optional]¶
param stream: bool = False¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.8¶
param timeout: int = 120¶
param top_k: Optional[int] = None¶
param top_p: Optional[float] = 0.95¶
param typical_p: Optional[float] = 0.95¶
param verbose: bool [Optional]¶
Whether to print out response text. | [
12,
45478,
12284,
2975,
25,
578,
5665,
315,
279,
45478,
3622,
311,
1005,
627,
12,
9829,
25,
578,
9829,
907,
304,
6622,
311,
1005,
1418,
21583,
311,
45478,
3622,
627,
12,
3622,
37335,
25,
578,
16570,
6105,
311,
1522,
311,
279,
45478,
3622,
627,
12,
3016,
25,
578,
3016,
1665,
1511,
311,
19570,
449,
279,
45478,
3622,
627,
12,
3393,
8342,
25,
578,
3393,
3016,
1665,
1511,
311,
19570,
449,
279,
3622,
627,
18337,
512,
12,
721,
6797,
25,
53592,
1495,
3196,
389,
264,
2728,
10137,
323,
3009,
24630,
627,
12,
721,
582,
543,
25,
22149,
27983,
1495,
3196,
389,
264,
2728,
10137,
323,
3009,
24630,
627,
12,
721,
657,
76,
1857,
25,
5295,
279,
955,
315,
445,
11237,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
3393,
8342,
25,
5884,
284,
2290,
55609,
198,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
3016,
25,
5884,
284,
2290,
55609,
198,
913,
45478,
12284,
2975,
25,
610,
284,
3436,
55609,
198,
913,
1973,
6046,
29938,
25,
528,
284,
220,
8358,
55609,
198,
913,
54515,
83386,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
913,
10533,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
913,
3622,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
913,
3009,
59832,
25,
1796,
17752,
60,
510,
15669,
60,
55609,
198,
913,
4365,
25,
1845,
284,
3641,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
23,
55609,
198,
913,
9829,
25,
528,
284,
220,
4364,
55609,
198,
913,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
913,
1948,
623,
25,
12536,
96481,
60,
284,
220,
15,
13,
2721,
55609,
198,
913,
14595,
623,
25,
12536,
96481,
60,
284,
220,
15,
13,
2721,
55609,
198,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
67a5a9e0f47d-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
67a5a9e0f47d-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
67a5a9e0f47d-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html |
80e32b4580da-0 | langchain.llms.huggingface_hub.HuggingFaceHub¶
class langchain.llms.huggingface_hub.HuggingFaceHub(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, repo_id: str = 'gpt2', task: Optional[str] = None, model_kwargs: Optional[dict] = None, huggingfacehub_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param huggingfacehub_api_token: Optional[str] = None¶
param model_kwargs: Optional[dict] = None¶
Key word arguments to pass to the model.
param repo_id: str = 'gpt2'¶
Model name to use.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace. | [
5317,
8995,
60098,
1026,
870,
36368,
1594,
95096,
3924,
36368,
16680,
19876,
55609,
198,
1058,
8859,
8995,
60098,
1026,
870,
36368,
1594,
95096,
3924,
36368,
16680,
19876,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
16246,
851,
25,
610,
284,
364,
70,
418,
17,
518,
3465,
25,
12536,
17752,
60,
284,
2290,
11,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
11,
305,
36368,
1594,
27780,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
473,
36368,
16680,
19876,
220,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
305,
36368,
1594,
95096,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
473,
3014,
50537,
20342,
39,
4594,
11669,
19199,
743,
449,
701,
5446,
4037,
11,
477,
1522,
198,
275,
439,
264,
7086,
5852,
311,
279,
4797,
627,
7456,
11815,
1495,
43927,
11,
1495,
17,
1342,
43927,
323,
29385,
2065,
369,
1457,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
473,
36368,
16680,
19876,
198,
45854,
284,
473,
36368,
16680,
19876,
51708,
851,
429,
70,
418,
17,
498,
305,
36368,
1594,
27780,
11959,
6594,
429,
2465,
24851,
16569,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
305,
36368,
1594,
27780,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
55609,
198,
1622,
3492,
6105,
311,
1522,
311,
279,
1646,
627,
913,
16246,
851,
25,
610,
284,
364,
70,
418,
17,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
80e32b4580da-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param task: Optional[str] = None¶
Task to call the model with.
Should be a task that returns generated_text or summary_text.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM. | [
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
3465,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
6396,
311,
1650,
279,
1646,
449,
627,
15346,
387,
264,
3465,
430,
4780,
8066,
4424,
477,
12399,
4424,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
80e32b4580da-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting. | [
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
80e32b4580da-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.huggingface_hub.HuggingFaceHub.html |
3d4083c366ac-0 | langchain.llms.cohere.Cohere¶
class langchain.llms.cohere.Cohere(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: Optional[str] = None, max_tokens: int = 256, temperature: float = 0.75, k: int = 0, p: int = 1, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, truncate: Optional[str] = None, max_retries: int = 10, cohere_api_key: Optional[str] = None, stop: Optional[List[str]] = None)[source]¶
Bases: LLM
Wrapper around Cohere large language models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import Cohere
cohere = Cohere(model="gptd-instruct-tft", cohere_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param cohere_api_key: Optional[str] = None¶
param frequency_penalty: float = 0.0¶
Penalizes repeated tokens according to frequency. Between 0 and 1.
param k: int = 0¶ | [
5317,
8995,
60098,
1026,
522,
2319,
486,
732,
2319,
486,
55609,
198,
1058,
8859,
8995,
60098,
1026,
522,
2319,
486,
732,
2319,
486,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
25,
12536,
17752,
60,
284,
2290,
11,
1973,
29938,
25,
528,
284,
220,
4146,
11,
9499,
25,
2273,
284,
220,
15,
13,
2075,
11,
597,
25,
528,
284,
220,
15,
11,
281,
25,
528,
284,
220,
16,
11,
11900,
83386,
25,
2273,
284,
220,
15,
13,
15,
11,
9546,
83386,
25,
2273,
284,
220,
15,
13,
15,
11,
57872,
25,
12536,
17752,
60,
284,
2290,
11,
1973,
1311,
4646,
25,
528,
284,
220,
605,
11,
1080,
6881,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
84675,
486,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1080,
6881,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
7432,
4678,
11669,
6738,
743,
449,
701,
5446,
1401,
11,
477,
1522,
198,
275,
439,
264,
7086,
5852,
311,
279,
4797,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
84675,
486,
198,
1030,
6881,
284,
84675,
486,
7790,
429,
70,
418,
67,
3502,
1257,
2442,
728,
498,
1080,
6881,
11959,
3173,
429,
2465,
24851,
16569,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
1080,
6881,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
11900,
83386,
25,
2273,
284,
220,
15,
13,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
13,
28232,
220,
15,
323,
220,
16,
627,
913,
597,
25,
528,
284,
220,
15,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.Cohere.html |
3d4083c366ac-1 | param k: int = 0¶
Number of most likely tokens to consider at each step.
param max_retries: int = 10¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
Denotes the number of tokens to predict per generation.
param model: Optional[str] = None¶
Model name to use.
param p: int = 1¶
Total probability mass of tokens to consider at each step.
param presence_penalty: float = 0.0¶
Penalizes repeated tokens. Between 0 and 1.
param stop: Optional[List[str]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.75¶
A non-negative float that tunes the degree of randomness in generation.
param truncate: Optional[str] = None¶
Specify how the client handles inputs longer than the maximum token
length: Truncate from START, END or NONE
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
597,
25,
528,
284,
220,
15,
55609,
198,
2903,
315,
1455,
4461,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
1973,
1311,
4646,
25,
528,
284,
220,
605,
55609,
198,
28409,
1396,
315,
61701,
311,
1304,
994,
24038,
627,
913,
1973,
29938,
25,
528,
284,
220,
4146,
55609,
198,
24539,
6429,
279,
1396,
315,
11460,
311,
7168,
824,
9659,
627,
913,
1646,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
1747,
836,
311,
1005,
627,
913,
281,
25,
528,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
9546,
83386,
25,
2273,
284,
220,
15,
13,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
13,
28232,
220,
15,
323,
220,
16,
627,
913,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
2075,
55609,
198,
32,
2536,
62035,
2273,
430,
55090,
279,
8547,
315,
87790,
304,
9659,
627,
913,
57872,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
71152,
1268,
279,
3016,
13777,
11374,
5129,
1109,
279,
7340,
4037,
198,
4222,
25,
1183,
27998,
505,
21673,
11,
11424,
477,
43969,
198,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.Cohere.html |
3d4083c366ac-2 | Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text. | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.Cohere.html |
3d4083c366ac-3 | Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.Cohere.html |
3d4083c366ac-4 | Configuration for this pydantic object.
extra = 'forbid'¶ | [
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.cohere.Cohere.html |
dc33c6e7377e-0 | langchain.llms.nlpcloud.NLPCloud¶
class langchain.llms.nlpcloud.NLPCloud(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'finetuned-gpt-neox-20b', temperature: float = 0.7, min_length: int = 1, max_length: int = 256, length_no_input: bool = True, remove_input: bool = True, remove_end_sequence: bool = True, bad_words: List[str] = [], top_p: int = 1, top_k: int = 50, repetition_penalty: float = 1.0, length_penalty: float = 1.0, do_sample: bool = True, num_beams: int = 1, early_stopping: bool = False, num_return_sequences: int = 1, nlpcloud_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param bad_words: List[str] = []¶
List of tokens not allowed to be generated.
param cache: Optional[bool] = None¶ | [
5317,
8995,
60098,
1026,
1276,
13855,
12641,
2112,
43,
4977,
53278,
55609,
198,
1058,
8859,
8995,
60098,
1026,
1276,
13855,
12641,
2112,
43,
4977,
53278,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
5589,
295,
49983,
2427,
418,
41078,
5241,
12,
508,
65,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1332,
5228,
25,
528,
284,
220,
16,
11,
1973,
5228,
25,
528,
284,
220,
4146,
11,
3160,
6673,
6022,
25,
1845,
284,
3082,
11,
4148,
6022,
25,
1845,
284,
3082,
11,
4148,
6345,
24667,
25,
1845,
284,
3082,
11,
3958,
19518,
25,
1796,
17752,
60,
284,
10277,
1948,
623,
25,
528,
284,
220,
16,
11,
1948,
4803,
25,
528,
284,
220,
1135,
11,
54515,
83386,
25,
2273,
284,
220,
16,
13,
15,
11,
3160,
83386,
25,
2273,
284,
220,
16,
13,
15,
11,
656,
17949,
25,
1845,
284,
3082,
11,
1661,
21960,
4214,
25,
528,
284,
220,
16,
11,
4216,
1284,
7153,
25,
1845,
284,
3641,
11,
1661,
12794,
59832,
25,
528,
284,
220,
16,
11,
308,
13855,
12641,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
33260,
4977,
53278,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
308,
13855,
12641,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
33260,
4977,
48745,
11669,
6738,
743,
449,
701,
5446,
1401,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
33260,
4977,
53278,
198,
77,
13855,
12641,
284,
33260,
4977,
53278,
7790,
429,
70,
418,
41078,
5241,
12,
508,
65,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
3958,
19518,
25,
1796,
17752,
60,
284,
3132,
55609,
198,
861,
315,
11460,
539,
5535,
311,
387,
8066,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
dc33c6e7377e-1 | param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param do_sample: bool = True¶
Whether to use sampling (True) or greedy decoding.
param early_stopping: bool = False¶
Whether to stop beam search at num_beams sentences.
param length_no_input: bool = True¶
Whether min_length and max_length should include the length of the input.
param length_penalty: float = 1.0¶
Exponential penalty to the length.
param max_length: int = 256¶
The maximum number of tokens to generate in the completion.
param min_length: int = 1¶
The minimum number of tokens to generate in the completion.
param model_name: str = 'finetuned-gpt-neox-20b'¶
Model name to use.
param nlpcloud_api_key: Optional[str] = None¶
param num_beams: int = 1¶
Number of beams for beam search.
param num_return_sequences: int = 1¶
How many completions to generate for each prompt.
param remove_end_sequence: bool = True¶
Whether or not to remove the end sequence token.
param remove_input: bool = True¶
Remove input text from API response
param repetition_penalty: float = 1.0¶
Penalizes repeated tokens. 1.0 means no penalty.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param top_k: int = 50¶
The number of highest probability tokens to keep for top-k filtering.
param top_p: int = 1¶
Total probability mass of tokens to consider at each step. | [
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
656,
17949,
25,
1845,
284,
3082,
55609,
198,
25729,
311,
1005,
25936,
320,
2575,
8,
477,
57080,
48216,
627,
913,
4216,
1284,
7153,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
3009,
24310,
2778,
520,
1661,
21960,
4214,
23719,
627,
913,
3160,
6673,
6022,
25,
1845,
284,
3082,
55609,
198,
25729,
1332,
5228,
323,
1973,
5228,
1288,
2997,
279,
3160,
315,
279,
1988,
627,
913,
3160,
83386,
25,
2273,
284,
220,
16,
13,
15,
55609,
198,
849,
60925,
16750,
311,
279,
3160,
627,
913,
1973,
5228,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
913,
1332,
5228,
25,
528,
284,
220,
16,
55609,
198,
791,
8187,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
913,
1646,
1292,
25,
610,
284,
364,
5589,
295,
49983,
2427,
418,
41078,
5241,
12,
508,
65,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
13855,
12641,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1661,
21960,
4214,
25,
528,
284,
220,
16,
55609,
198,
2903,
315,
51045,
369,
24310,
2778,
627,
913,
1661,
12794,
59832,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
627,
913,
4148,
6345,
24667,
25,
1845,
284,
3082,
55609,
198,
25729,
477,
539,
311,
4148,
279,
842,
8668,
4037,
627,
913,
4148,
6022,
25,
1845,
284,
3082,
55609,
198,
13319,
1988,
1495,
505,
5446,
2077,
198,
913,
54515,
83386,
25,
2273,
284,
220,
16,
13,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
13,
220,
16,
13,
15,
3445,
912,
16750,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
1948,
4803,
25,
528,
284,
220,
1135,
55609,
198,
791,
1396,
315,
8592,
19463,
11460,
311,
2567,
369,
1948,
12934,
30770,
627,
913,
1948,
623,
25,
528,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
dc33c6e7377e-2 | Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
dc33c6e7377e-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
dc33c6e7377e-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.nlpcloud.NLPCloud.html |
c4008245a115-0 | langchain.llms.modal.Modal¶
class langchain.llms.modal.Modal(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = '', model_kwargs: Dict[str, Any] = None)[source]¶
Bases: LLM
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint_url: str = ''¶
model endpoint to use
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not
explicitly specified.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | [
5317,
8995,
60098,
1026,
29605,
24002,
278,
55609,
198,
1058,
8859,
8995,
60098,
1026,
29605,
24002,
278,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
15233,
2975,
25,
610,
284,
9158,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
22017,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
13531,
31111,
10344,
6462,
10487,
627,
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1650,
649,
387,
5946,
198,
258,
11,
1524,
422,
539,
21650,
6924,
389,
420,
538,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
22017,
198,
5785,
284,
22017,
55969,
2975,
64841,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
15233,
2975,
25,
610,
284,
3436,
55609,
198,
2590,
15233,
311,
1005,
198,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
198,
94732,
398,
5300,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.modal.Modal.html |
c4008245a115-1 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶ | [
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.modal.Modal.html |
c4008245a115-2 | Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids. | [
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.modal.Modal.html |
c4008245a115-3 | Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic config.
extra = 'forbid'¶ | [
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
2242,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.modal.Modal.html |
395a6f07d6b3-0 | langchain.llms.promptlayer_openai.PromptLayerOpenAIChat¶
class langchain.llms.promptlayer_openai.PromptLayerOpenAIChat(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'gpt-3.5-turbo', model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_proxy: Optional[str] = None, max_retries: int = 6, prefix_messages: List = None, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]¶
Bases: OpenAIChat
Wrapper around OpenAI large language models.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAIChat LLM can also
be passed here. The PromptLayerOpenAIChat adds two optional
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat | [
5317,
8995,
60098,
1026,
66499,
10546,
11563,
2192,
1087,
15091,
9368,
5109,
15836,
16047,
55609,
198,
1058,
8859,
8995,
60098,
1026,
66499,
10546,
11563,
2192,
1087,
15091,
9368,
5109,
15836,
16047,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
70,
418,
12,
18,
13,
20,
2442,
324,
754,
518,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
11,
1973,
1311,
4646,
25,
528,
284,
220,
21,
11,
9436,
24321,
25,
1796,
284,
2290,
11,
17265,
25,
1845,
284,
3641,
11,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
16857,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
518,
628,
16735,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
471,
6451,
851,
25,
12536,
58,
2707,
60,
284,
3641,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5377,
15836,
16047,
198,
11803,
2212,
5377,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1825,
2192,
323,
10137,
10546,
10344,
198,
1757,
10487,
11,
323,
279,
4676,
3977,
30941,
15836,
11669,
6738,
198,
438,
68788,
2898,
43,
15108,
11669,
6738,
743,
449,
701,
1825,
15836,
5446,
1401,
323,
198,
41681,
10546,
1401,
15947,
627,
2460,
5137,
430,
649,
387,
5946,
311,
279,
5377,
15836,
16047,
445,
11237,
649,
1101,
198,
1395,
5946,
1618,
13,
578,
60601,
9368,
5109,
15836,
16047,
11621,
1403,
10309,
198,
9905,
198,
501,
16735,
1389,
1796,
315,
9246,
311,
4877,
279,
1715,
449,
627,
693,
6451,
851,
1389,
1442,
3082,
11,
279,
60601,
9368,
1715,
3110,
690,
387,
198,
78691,
304,
279,
9659,
3186,
2115,
315,
279,
198,
38238,
1665,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
60601,
9368,
5109,
15836,
16047
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
395a6f07d6b3-1 | Generation object.
Example
from langchain.llms import PromptLayerOpenAIChat
openaichat = PromptLayerOpenAIChat(model_name="gpt-3.5-turbo")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed。
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed。
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo'¶
Model name to use.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param pl_tags: Optional[List[str]] = None¶
param prefix_messages: List [Optional]¶
Series of messages for Chat input.
param return_pl_id: Optional[bool] = False¶
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text. | [
38238,
1665,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
60601,
9368,
5109,
15836,
16047,
198,
2569,
64,
718,
266,
284,
60601,
9368,
5109,
15836,
16047,
7790,
1292,
429,
70,
418,
12,
18,
13,
20,
2442,
324,
754,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
4792,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
5535,
9174,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
6,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
539,
5535,
9174,
913,
1973,
1311,
4646,
25,
528,
284,
220,
21,
55609,
198,
28409,
1396,
315,
61701,
311,
1304,
994,
24038,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
70,
418,
12,
18,
13,
20,
2442,
324,
754,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
628,
16735,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
913,
9436,
24321,
25,
1796,
510,
15669,
60,
55609,
198,
26625,
315,
6743,
369,
13149,
1988,
627,
913,
471,
6451,
851,
25,
12536,
58,
2707,
60,
284,
3641,
55609,
198,
913,
17265,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
4365,
279,
3135,
477,
539,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
395a6f07d6b3-2 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
395a6f07d6b3-3 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
29460,
1701,
279,
87272,
5963,
6462,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
395a6f07d6b3-4 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.promptlayer_openai.PromptLayerOpenAIChat.html |
2fbf6115c291-0 | langchain.llms.vertexai.VertexAI¶
class langchain.llms.vertexai.VertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: _LanguageModel = None, model_name: str = 'text-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5, tuned_model_name: Optional[str] = None)[source]¶
Bases: _VertexAICommon, LLM
Wrapper around Google Vertex AI large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param model_name: str = 'text-bison'¶
The name of the Vertex AI large language model.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls. | [
5317,
8995,
60098,
1026,
48375,
2192,
73694,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
48375,
2192,
73694,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
721,
14126,
1747,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
1342,
1481,
3416,
518,
9499,
25,
2273,
284,
220,
15,
13,
15,
11,
1973,
7800,
29938,
25,
528,
284,
220,
4386,
11,
1948,
623,
25,
2273,
284,
220,
15,
13,
2721,
11,
1948,
4803,
25,
528,
284,
220,
1272,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
2447,
25,
12536,
17752,
60,
284,
2290,
11,
3813,
25,
610,
284,
364,
355,
85181,
16,
518,
16792,
25,
5884,
284,
2290,
11,
1715,
61725,
2191,
25,
528,
284,
220,
20,
11,
33519,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
721,
8484,
15836,
11076,
11,
445,
11237,
198,
11803,
2212,
5195,
24103,
15592,
3544,
4221,
4211,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
16792,
25,
5884,
284,
2290,
55609,
198,
791,
1670,
2587,
16792,
320,
17943,
9144,
75854,
732,
16112,
8,
311,
1005,
198,
913,
3813,
25,
610,
284,
364,
355,
85181,
16,
6,
55609,
198,
791,
1670,
3813,
311,
1005,
994,
3339,
5446,
6880,
627,
913,
1973,
7800,
29938,
25,
528,
284,
220,
4386,
55609,
198,
3404,
4017,
27667,
279,
7340,
3392,
315,
1495,
2612,
505,
832,
10137,
627,
913,
1646,
1292,
25,
610,
284,
364,
1342,
1481,
3416,
6,
55609,
198,
791,
836,
315,
279,
24103,
15592,
3544,
4221,
1646,
627,
913,
2447,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1670,
480,
7269,
2447,
311,
1005,
994,
3339,
24103,
5446,
6880,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
2fbf6115c291-1 | The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
Sampling temperature, it controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output, the next token is selected from
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their
param tuned_model_name: Optional[str] = None¶
The name of a tuned model. If provided, model_name is ignored.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult. | [
791,
1670,
480,
7269,
2447,
311,
1005,
994,
3339,
24103,
5446,
6880,
627,
913,
1715,
61725,
2191,
25,
528,
284,
220,
20,
55609,
198,
791,
3392,
315,
15638,
2191,
5535,
369,
7540,
11136,
311,
24103,
15836,
4211,
627,
913,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
15669,
1160,
315,
3009,
4339,
311,
1005,
994,
24038,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
15,
55609,
198,
99722,
9499,
11,
433,
11835,
279,
8547,
315,
87790,
304,
4037,
6727,
627,
913,
1948,
4803,
25,
528,
284,
220,
1272,
55609,
198,
4438,
279,
1646,
50243,
11460,
369,
2612,
11,
279,
1828,
4037,
374,
4183,
505,
198,
913,
1948,
623,
25,
2273,
284,
220,
15,
13,
2721,
55609,
198,
30400,
527,
4183,
505,
1455,
35977,
311,
3325,
3156,
279,
2694,
315,
872,
198,
913,
33519,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
836,
315,
264,
33519,
1646,
13,
1442,
3984,
11,
1646,
1292,
374,
12305,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
2fbf6115c291-2 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶ | [
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
2fbf6115c291-3 | Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the python package exists in environment.
property is_codey_model: bool¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
task_executor: ClassVar[Optional[Executor]] = None¶
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
279,
10344,
6462,
6866,
304,
4676,
627,
3784,
374,
4229,
88,
5156,
25,
1845,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
8366,
82307,
25,
3308,
4050,
58,
15669,
58,
26321,
5163,
284,
2290,
55609,
198,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.vertexai.VertexAI.html |
7b92b4c527c5-0 | langchain.llms.openai.AzureOpenAI¶
class langchain.llms.openai.AzureOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None, deployment_name: str = '', openai_api_type: str = 'azure', openai_api_version: str = '')[source]¶
Bases: BaseOpenAI
Wrapper around Azure-specific OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed | [
5317,
8995,
60098,
1026,
5949,
2192,
58927,
5109,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
5949,
2192,
58927,
5109,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1973,
29938,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
16,
11,
11900,
83386,
25,
2273,
284,
220,
15,
11,
9546,
83386,
25,
2273,
284,
220,
15,
11,
308,
25,
528,
284,
220,
16,
11,
1888,
3659,
25,
528,
284,
220,
16,
11,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
11,
7309,
2424,
25,
528,
284,
220,
508,
11,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
11,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
284,
2290,
11,
1973,
1311,
4646,
25,
528,
284,
220,
21,
11,
17265,
25,
1845,
284,
3641,
11,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
16857,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
518,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
11,
24047,
1292,
25,
610,
284,
9158,
1825,
2192,
11959,
1857,
25,
610,
284,
364,
40595,
518,
1825,
2192,
11959,
9625,
25,
610,
284,
364,
13588,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
5109,
15836,
198,
11803,
2212,
35219,
19440,
5377,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1825,
2192,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
30941,
15836,
11669,
6738,
743,
449,
701,
5446,
1401,
627,
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1825,
2192,
2581,
1650,
649,
387,
5946
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-1 | Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import AzureOpenAI
openai = AzureOpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed。
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param deployment_name: str = ''¶
Deployment name to use.
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed。
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified. | [
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1825,
2192,
2581,
1650,
649,
387,
5946,
198,
258,
11,
1524,
422,
539,
21650,
6924,
389,
420,
538,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
35219,
5109,
15836,
198,
2569,
2192,
284,
35219,
5109,
15836,
7790,
1292,
429,
1342,
1773,
402,
49697,
12,
6268,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
4792,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
5535,
9174,
913,
7309,
2424,
25,
528,
284,
220,
508,
55609,
198,
21753,
1404,
311,
1005,
994,
12579,
5361,
9477,
311,
7068,
627,
913,
1888,
3659,
25,
528,
284,
220,
16,
55609,
198,
5648,
988,
1888,
3659,
3543,
919,
3622,
25034,
323,
4780,
279,
1054,
16241,
863,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
24047,
1292,
25,
610,
284,
3436,
55609,
198,
76386,
836,
311,
1005,
627,
913,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
6,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
539,
5535,
9174,
913,
11900,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
627,
913,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
510,
15669,
60,
55609,
198,
39716,
279,
19463,
315,
3230,
11460,
1694,
8066,
627,
913,
1973,
1311,
4646,
25,
528,
284,
220,
21,
55609,
198,
28409,
1396,
315,
61701,
311,
1304,
994,
24038,
627,
913,
1973,
29938,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
12,
16,
4780,
439,
1690,
11460,
439,
3284,
2728,
279,
10137,
323,
198,
1820,
4211,
54229,
2317,
1404,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-2 | Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_api_type: str = 'azure'¶
param openai_api_version: str = ''¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here. | [
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
6,
320,
15305,
364,
2590,
873,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
627,
913,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
11959,
1857,
25,
610,
284,
364,
40595,
6,
55609,
198,
913,
1825,
2192,
11959,
9625,
25,
610,
284,
3436,
55609,
198,
913,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
9546,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
627,
913,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
55609,
198,
7791,
369,
7540,
311,
5377,
15836,
9954,
5446,
13,
8058,
374,
220,
5067,
6622,
627,
913,
17265,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
4365,
279,
3135,
477,
539,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1646,
836,
311,
1522,
311,
87272,
5963,
994,
1701,
420,
538,
627,
51,
1609,
5963,
374,
1511,
311,
1797,
279,
1396,
315,
11460,
304,
9477,
311,
80799,
198,
49818,
311,
387,
1234,
264,
3738,
4017,
13,
3296,
1670,
11,
994,
743,
311,
2290,
11,
420,
690,
198,
1395,
279,
1890,
439,
279,
40188,
1646,
836,
13,
4452,
11,
1070,
527,
1063,
5157,
198,
2940,
499,
1253,
1390,
311,
1005,
420,
38168,
7113,
538,
449,
264,
1646,
836,
539,
198,
18717,
555,
87272,
5963,
13,
1115,
649,
2997,
994,
1701,
35219,
71647,
477,
198,
9493,
1701,
832,
315,
279,
1690,
1646,
12850,
430,
29241,
459,
5377,
15836,
12970,
198,
7227,
719,
449,
2204,
4211,
13,
763,
1884,
5157,
11,
304,
2015,
311,
5766,
1493,
287,
198,
9493,
87272,
5963,
374,
2663,
11,
499,
649,
14158,
264,
1646,
836,
311,
1005,
1618,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-3 | when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶ | [
9493,
87272,
5963,
374,
2663,
11,
499,
649,
14158,
264,
1646,
836,
311,
1005,
1618,
627,
913,
1948,
623,
25,
2273,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
3261,
44095,
76,
5400,
92383,
25,
5884,
11,
52032,
25,
1796,
17752,
1145,
4037,
32607,
25,
30226,
17752,
11,
528,
2526,
11651,
445,
11237,
2122,
55609,
198,
4110,
279,
445,
11237,
2122,
505,
279,
11709,
323,
52032,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-4 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_token_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example | [
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
5341,
48977,
13044,
9094,
25,
30226,
17752,
11,
5884,
1145,
52032,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
1796,
53094,
17752,
5163,
55609,
198,
1991,
279,
1207,
52032,
369,
9507,
76,
1650,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
29460,
1701,
279,
87272,
5963,
6462,
627,
2880,
29938,
5595,
62521,
73353,
25,
610,
8,
11651,
528,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
10137,
627,
9905,
198,
41681,
1389,
578,
10137,
311,
1522,
1139,
279,
1646,
627,
16851,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
369,
264,
10137,
627,
13617,
198,
2880,
29938,
284,
1825,
2192,
6817,
6594,
5595,
62521,
446,
41551,
757,
264,
22380,
13352,
2020,
1646,
609,
2401,
8634,
2190,
7790,
609,
25,
610,
8,
11651,
528,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
1646,
627,
9905,
198,
2590,
609,
1389,
578,
1646,
609,
584,
1390,
311,
1440,
279,
2317,
1404,
369,
627,
16851,
198,
791,
7340,
2317,
1404,
198,
13617
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-5 | Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]¶
Prepare the params for streaming.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator¶
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_azure_settings » all fields[source]¶ | [
16851,
198,
791,
7340,
2317,
1404,
198,
13617,
198,
2880,
29938,
284,
1825,
2192,
3272,
609,
2401,
8634,
2190,
446,
1342,
1773,
402,
49697,
12,
6268,
1158,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
72874,
12962,
287,
6887,
61270,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
51690,
279,
3712,
369,
17265,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
4116,
73353,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
29458,
55609,
198,
7368,
5377,
15836,
449,
17265,
5292,
323,
471,
279,
13239,
14143,
627,
33,
21352,
25,
420,
374,
264,
13746,
4668,
1418,
584,
7216,
704,
279,
1314,
59851,
627,
12805,
430,
8741,
11,
420,
3834,
1436,
2349,
627,
9905,
198,
41681,
1389,
578,
52032,
311,
1522,
1139,
279,
1646,
627,
9684,
1389,
12536,
1160,
315,
3009,
4339,
311,
1005,
994,
24038,
627,
16851,
198,
32,
14143,
14393,
279,
4365,
315,
11460,
505,
5377,
15836,
627,
13617,
198,
36951,
284,
1825,
2192,
15307,
446,
41551,
757,
264,
22380,
13352,
2000,
4037,
304,
14143,
512,
262,
7692,
4037,
198,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
62,
40595,
11090,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
7b92b4c527c5-6 | validator validate_azure_settings » all fields[source]¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | [
16503,
9788,
62,
40595,
11090,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
3784,
1973,
8634,
2424,
25,
528,
55609,
198,
1991,
1973,
2317,
1404,
369,
420,
1646,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
7331,
75672,
3795,
5121,
1292,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.AzureOpenAI.html |
5ccab1621d09-0 | langchain.llms.deepinfra.DeepInfra¶
class langchain.llms.deepinfra.DeepInfra(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, model_id: str = 'google/flan-t5-xl', model_kwargs: Optional[dict] = None, deepinfra_api_token: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around DeepInfra deployed models.
To use, you should have the requests python package installed, and the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import DeepInfra
di = DeepInfra(model_id="google/flan-t5-xl",
deepinfra_api_token="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param deepinfra_api_token: Optional[str] = None¶
param model_id: str = 'google/flan-t5-xl'¶
param model_kwargs: Optional[dict] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text. | [
5317,
8995,
60098,
1026,
22597,
93417,
56702,
19998,
969,
55609,
198,
1058,
8859,
8995,
60098,
1026,
22597,
93417,
56702,
19998,
969,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
1646,
851,
25,
610,
284,
364,
17943,
59403,
276,
2442,
20,
32046,
518,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
11,
5655,
93417,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
18682,
19998,
969,
27167,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
7540,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
3467,
9377,
37509,
5726,
11669,
19199,
743,
449,
701,
5446,
4037,
11,
477,
1522,
198,
275,
439,
264,
7086,
5852,
311,
279,
4797,
627,
7456,
11815,
1495,
43927,
323,
1495,
17,
1342,
43927,
369,
1457,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
18682,
19998,
969,
198,
8747,
284,
18682,
19998,
969,
7790,
851,
429,
17943,
59403,
276,
2442,
20,
32046,
761,
504,
5655,
93417,
11959,
6594,
429,
2465,
24851,
16569,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
5655,
93417,
11959,
6594,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1646,
851,
25,
610,
284,
364,
17943,
59403,
276,
2442,
20,
32046,
6,
55609,
198,
913,
1646,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
5ccab1621d09-1 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
5ccab1621d09-2 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
5ccab1621d09-3 | property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.deepinfra.DeepInfra.html |
d507c7831080-0 | langchain.llms.azureml_endpoint.DollyContentFormatter¶
class langchain.llms.azureml_endpoint.DollyContentFormatter[source]¶
Bases: ContentFormatterBase
Content handler for the Dolly-v2-12b model
Methods
__init__()
format_request_payload(prompt, model_kwargs)
Formats the request body according to the input schema of the model.
format_response_payload(output)
Formats the response body according to the output schema of the model.
Attributes
accepts
The MIME type of the response data returned form the endpoint
content_type
The MIME type of the input data passed to the endpoint
format_request_payload(prompt: str, model_kwargs: Dict) → bytes[source]¶
Formats the request body according to the input schema of
the model. Returns bytes or seekable file like object in the
format specified in the content_type request header.
format_response_payload(output: bytes) → str[source]¶
Formats the response body according to the output
schema of the model. Returns the data type that is
received from the response.
accepts: Optional[str] = 'application/json'¶
The MIME type of the response data returned form the endpoint
content_type: Optional[str] = 'application/json'¶
The MIME type of the input data passed to the endpoint | [
5317,
8995,
60098,
1026,
71340,
1029,
37799,
920,
8788,
2831,
14517,
55609,
198,
1058,
8859,
8995,
60098,
1026,
71340,
1029,
37799,
920,
8788,
2831,
14517,
76747,
60,
55609,
198,
33,
2315,
25,
9059,
14517,
4066,
198,
2831,
7158,
369,
279,
423,
8788,
8437,
17,
12,
717,
65,
1646,
198,
18337,
198,
565,
2381,
33716,
2293,
8052,
33913,
73353,
11,
4194,
2590,
37335,
340,
45699,
279,
1715,
2547,
4184,
311,
279,
1988,
11036,
315,
279,
1646,
627,
2293,
9852,
33913,
11304,
340,
45699,
279,
2077,
2547,
4184,
311,
279,
2612,
11036,
315,
279,
1646,
627,
10738,
198,
10543,
82,
198,
791,
58577,
955,
315,
279,
2077,
828,
6052,
1376,
279,
15233,
198,
1834,
1857,
198,
791,
58577,
955,
315,
279,
1988,
828,
5946,
311,
279,
15233,
198,
2293,
8052,
33913,
73353,
25,
610,
11,
1646,
37335,
25,
30226,
8,
11651,
5943,
76747,
60,
55609,
198,
45699,
279,
1715,
2547,
4184,
311,
279,
1988,
11036,
315,
198,
1820,
1646,
13,
5295,
5943,
477,
6056,
481,
1052,
1093,
1665,
304,
279,
198,
2293,
5300,
304,
279,
2262,
1857,
1715,
4342,
627,
2293,
9852,
33913,
11304,
25,
5943,
8,
11651,
610,
76747,
60,
55609,
198,
45699,
279,
2077,
2547,
4184,
311,
279,
2612,
198,
17801,
315,
279,
1646,
13,
5295,
279,
828,
955,
430,
374,
198,
42923,
505,
279,
2077,
627,
10543,
82,
25,
12536,
17752,
60,
284,
364,
5242,
9108,
6,
55609,
198,
791,
58577,
955,
315,
279,
2077,
828,
6052,
1376,
279,
15233,
198,
1834,
1857,
25,
12536,
17752,
60,
284,
364,
5242,
9108,
6,
55609,
198,
791,
58577,
955,
315,
279,
1988,
828,
5946,
311,
279,
15233
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.azureml_endpoint.DollyContentFormatter.html |
3e93f2fbaa49-0 | langchain.llms.forefrontai.ForefrontAI¶
class langchain.llms.forefrontai.ForefrontAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, endpoint_url: str = '', temperature: float = 0.7, length: int = 256, top_p: float = 1.0, top_k: int = 40, repetition_penalty: int = 1, forefrontai_api_key: Optional[str] = None, base_url: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around ForefrontAI large language models.
To use, you should have the environment variable FOREFRONTAI_API_KEY
set with your API key.
Example
from langchain.llms import ForefrontAI
forefrontai = ForefrontAI(endpoint_url="")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param base_url: Optional[str] = None¶
Base url to use, if None decides based on model name.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param endpoint_url: str = ''¶
Model name to use.
param forefrontai_api_key: Optional[str] = None¶
param length: int = 256¶
The maximum number of tokens to generate in the completion.
param repetition_penalty: int = 1¶
Penalizes repeated tokens according to frequency.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace. | [
5317,
8995,
60098,
1026,
55669,
7096,
2192,
10749,
7096,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
55669,
7096,
2192,
10749,
7096,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
15233,
2975,
25,
610,
284,
9158,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
3160,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
16,
13,
15,
11,
1948,
4803,
25,
528,
284,
220,
1272,
11,
54515,
83386,
25,
528,
284,
220,
16,
11,
52301,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
2385,
2975,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
8371,
7096,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
4676,
3977,
45023,
10725,
10443,
15836,
11669,
6738,
198,
751,
449,
701,
5446,
1401,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
8371,
7096,
15836,
198,
1348,
7096,
2192,
284,
8371,
7096,
15836,
55969,
2975,
64841,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
2385,
2975,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
4066,
2576,
311,
1005,
11,
422,
2290,
28727,
3196,
389,
1646,
836,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
15233,
2975,
25,
610,
284,
3436,
55609,
198,
1747,
836,
311,
1005,
627,
913,
52301,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
3160,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
913,
54515,
83386,
25,
528,
284,
220,
16,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
3e93f2fbaa49-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param top_k: int = 40¶
The number of highest probability vocabulary tokens to
keep for top-k-filtering.
param top_p: float = 1.0¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM. | [
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
1948,
4803,
25,
528,
284,
220,
1272,
55609,
198,
791,
1396,
315,
8592,
19463,
36018,
11460,
311,
198,
13397,
369,
1948,
12934,
33548,
287,
627,
913,
1948,
623,
25,
2273,
284,
220,
16,
13,
15,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
3e93f2fbaa49-2 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting. | [
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
3e93f2fbaa49-3 | This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
extra = 'forbid'¶ | [
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.forefrontai.ForefrontAI.html |
484ee1624185-0 | langchain.llms.openai.OpenAI¶
class langchain.llms.openai.OpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]¶
Bases: BaseOpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI | [
5317,
8995,
60098,
1026,
5949,
2192,
13250,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
5949,
2192,
13250,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1973,
29938,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
16,
11,
11900,
83386,
25,
2273,
284,
220,
15,
11,
9546,
83386,
25,
2273,
284,
220,
15,
11,
308,
25,
528,
284,
220,
16,
11,
1888,
3659,
25,
528,
284,
220,
16,
11,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
11,
7309,
2424,
25,
528,
284,
220,
508,
11,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
11,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
284,
2290,
11,
1973,
1311,
4646,
25,
528,
284,
220,
21,
11,
17265,
25,
1845,
284,
3641,
11,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
16857,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
518,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
5109,
15836,
198,
11803,
2212,
5377,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
1825,
2192,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
30941,
15836,
11669,
6738,
743,
449,
701,
5446,
1401,
627,
8780,
5137,
430,
527,
2764,
311,
387,
5946,
311,
279,
1825,
2192,
2581,
1650,
649,
387,
5946,
198,
258,
11,
1524,
422,
539,
21650,
6924,
389,
420,
538,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
5377,
15836
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-1 | Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed。
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param client: Any = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed。
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt. | [
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
5377,
15836,
198,
2569,
2192,
284,
5377,
15836,
7790,
1292,
429,
1342,
1773,
402,
49697,
12,
6268,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
4792,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
5535,
9174,
913,
7309,
2424,
25,
528,
284,
220,
508,
55609,
198,
21753,
1404,
311,
1005,
994,
12579,
5361,
9477,
311,
7068,
627,
913,
1888,
3659,
25,
528,
284,
220,
16,
55609,
198,
5648,
988,
1888,
3659,
3543,
919,
3622,
25034,
323,
4780,
279,
1054,
16241,
863,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
3016,
25,
5884,
284,
2290,
55609,
198,
913,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
6,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
539,
5535,
9174,
913,
11900,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
627,
913,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
510,
15669,
60,
55609,
198,
39716,
279,
19463,
315,
3230,
11460,
1694,
8066,
627,
913,
1973,
1311,
4646,
25,
528,
284,
220,
21,
55609,
198,
28409,
1396,
315,
61701,
311,
1304,
994,
24038,
627,
913,
1973,
29938,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
12,
16,
4780,
439,
1690,
11460,
439,
3284,
2728,
279,
10137,
323,
198,
1820,
4211,
54229,
2317,
1404,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
6,
320,
15305,
364,
2590,
873,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-2 | param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text. | [
913,
308,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
627,
913,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
9546,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
627,
913,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
55609,
198,
7791,
369,
7540,
311,
5377,
15836,
9954,
5446,
13,
8058,
374,
220,
5067,
6622,
627,
913,
17265,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
4365,
279,
3135,
477,
539,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1646,
836,
311,
1522,
311,
87272,
5963,
994,
1701,
420,
538,
627,
51,
1609,
5963,
374,
1511,
311,
1797,
279,
1396,
315,
11460,
304,
9477,
311,
80799,
198,
49818,
311,
387,
1234,
264,
3738,
4017,
13,
3296,
1670,
11,
994,
743,
311,
2290,
11,
420,
690,
198,
1395,
279,
1890,
439,
279,
40188,
1646,
836,
13,
4452,
11,
1070,
527,
1063,
5157,
198,
2940,
499,
1253,
1390,
311,
1005,
420,
38168,
7113,
538,
449,
264,
1646,
836,
539,
198,
18717,
555,
87272,
5963,
13,
1115,
649,
2997,
994,
1701,
35219,
71647,
477,
198,
9493,
1701,
832,
315,
279,
1690,
1646,
12850,
430,
29241,
459,
5377,
15836,
12970,
198,
7227,
719,
449,
2204,
4211,
13,
763,
1884,
5157,
11,
304,
2015,
311,
5766,
1493,
287,
198,
9493,
87272,
5963,
374,
2663,
11,
499,
649,
14158,
264,
1646,
836,
311,
1005,
1618,
627,
913,
1948,
623,
25,
2273,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-3 | param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM. | [
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
3261,
44095,
76,
5400,
92383,
25,
5884,
11,
52032,
25,
1796,
17752,
1145,
4037,
32607,
25,
30226,
17752,
11,
528,
2526,
11651,
445,
11237,
2122,
55609,
198,
4110,
279,
445,
11237,
2122,
505,
279,
11709,
323,
52032,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-4 | dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int]¶
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_token_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example | [
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
5341,
48977,
13044,
9094,
25,
30226,
17752,
11,
5884,
1145,
52032,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
1796,
53094,
17752,
5163,
55609,
198,
1991,
279,
1207,
52032,
369,
9507,
76,
1650,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
29460,
1701,
279,
87272,
5963,
6462,
627,
2880,
29938,
5595,
62521,
73353,
25,
610,
8,
11651,
528,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
10137,
627,
9905,
198,
41681,
1389,
578,
10137,
311,
1522,
1139,
279,
1646,
627,
16851,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
369,
264,
10137,
627,
13617,
198,
2880,
29938,
284,
1825,
2192,
6817,
6594,
5595,
62521,
446,
41551,
757,
264,
22380,
13352,
2020,
1646,
609,
2401,
8634,
2190,
7790,
609,
25,
610,
8,
11651,
528,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
1646,
627,
9905,
198,
2590,
609,
1389,
578,
1646,
609,
584,
1390,
311,
1440,
279,
2317,
1404,
369,
627,
16851,
198,
791,
7340,
2317,
1404,
198,
13617
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-5 | Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any]¶
Prepare the params for streaming.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator¶
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶ | [
16851,
198,
791,
7340,
2317,
1404,
198,
13617,
198,
2880,
29938,
284,
1825,
2192,
3272,
609,
2401,
8634,
2190,
446,
1342,
1773,
402,
49697,
12,
6268,
1158,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
72874,
12962,
287,
6887,
61270,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
51690,
279,
3712,
369,
17265,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
4116,
73353,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
29458,
55609,
198,
7368,
5377,
15836,
449,
17265,
5292,
323,
471,
279,
13239,
14143,
627,
33,
21352,
25,
420,
374,
264,
13746,
4668,
1418,
584,
7216,
704,
279,
1314,
59851,
627,
12805,
430,
8741,
11,
420,
3834,
1436,
2349,
627,
9905,
198,
41681,
1389,
578,
52032,
311,
1522,
1139,
279,
1646,
627,
9684,
1389,
12536,
1160,
315,
3009,
4339,
311,
1005,
994,
24038,
627,
16851,
198,
32,
14143,
14393,
279,
4365,
315,
11460,
505,
5377,
15836,
627,
13617,
198,
36951,
284,
1825,
2192,
15307,
446,
41551,
757,
264,
22380,
13352,
2000,
4037,
304,
14143,
512,
262,
7692,
4037,
198,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
484ee1624185-6 | validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | [
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
3784,
1973,
8634,
2424,
25,
528,
55609,
198,
1991,
1973,
2317,
1404,
369,
420,
1646,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
7331,
75672,
3795,
5121,
1292,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.OpenAI.html |
de9f4885de6a-0 | langchain.llms.google_palm.GooglePalm¶
class langchain.llms.google_palm.GooglePalm(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, google_api_key: Optional[str] = None, model_name: str = 'models/text-bison-001', temperature: float = 0.7, top_p: Optional[float] = None, top_k: Optional[int] = None, max_output_tokens: Optional[int] = None, n: int = 1)[source]¶
Bases: BaseLLM, BaseModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param google_api_key: Optional[str] = None¶
param max_output_tokens: Optional[int] = None¶
Maximum number of tokens to include in a candidate. Must be greater than zero.
If unset, will default to 64.
param model_name: str = 'models/text-bison-001'¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
Run inference with this temperature. Must by in the closed interval
[0.0, 1.0]. | [
5317,
8995,
60098,
1026,
5831,
623,
7828,
61493,
47,
7828,
55609,
198,
1058,
8859,
8995,
60098,
1026,
5831,
623,
7828,
61493,
47,
7828,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
11819,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
6644,
37371,
1481,
3416,
12,
4119,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
11,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
11,
1973,
7800,
29938,
25,
12536,
19155,
60,
284,
2290,
11,
308,
25,
528,
284,
220,
16,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
4178,
44,
11,
65705,
198,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
11819,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1973,
7800,
29938,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
28409,
1396,
315,
11460,
311,
2997,
304,
264,
9322,
13,
15832,
387,
7191,
1109,
7315,
627,
2746,
18484,
11,
690,
1670,
311,
220,
1227,
627,
913,
1646,
1292,
25,
610,
284,
364,
6644,
37371,
1481,
3416,
12,
4119,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
2903,
315,
6369,
3543,
919,
311,
7068,
369,
1855,
10137,
13,
7181,
430,
279,
5446,
1253,
198,
1962,
471,
279,
2539,
308,
3543,
919,
422,
43428,
527,
8066,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
6869,
45478,
449,
420,
9499,
13,
15832,
555,
304,
279,
8036,
10074,
198,
58,
15,
13,
15,
11,
220,
16,
13,
15,
948
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.google_palm.GooglePalm.html |
de9f4885de6a-1 | [0.0, 1.0].
param top_k: Optional[int] = None¶
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
param top_p: Optional[float] = None¶
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶ | [
58,
15,
13,
15,
11,
220,
16,
13,
15,
27218,
913,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
33664,
1701,
1948,
12934,
25936,
25,
2980,
279,
743,
315,
1948,
4803,
1455,
35977,
11460,
627,
32876,
387,
6928,
627,
913,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
33664,
1701,
62607,
25936,
25,
2980,
279,
25655,
743,
315,
11460,
6832,
198,
88540,
2694,
374,
520,
3325,
1948,
623,
13,
15832,
387,
304,
279,
8036,
10074,
510,
15,
13,
15,
11,
220,
16,
13,
15,
27218,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.google_palm.GooglePalm.html |
de9f4885de6a-2 | Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it. | [
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.google_palm.GooglePalm.html |
de9f4885de6a-3 | validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate api key, python package exists.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
6464,
1401,
11,
10344,
6462,
6866,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.google_palm.GooglePalm.html |
d35b2c96038d-0 | langchain.llms.openai.BaseOpenAI¶
class langchain.llms.openai.BaseOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'text-davinci-003', temperature: float = 0.7, max_tokens: int = 256, top_p: float = 1, frequency_penalty: float = 0, presence_penalty: float = 0, n: int = 1, best_of: int = 1, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, batch_size: int = 20, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, logit_bias: Optional[Dict[str, float]] = None, max_retries: int = 6, streaming: bool = False, allowed_special: Union[Literal['all'], AbstractSet[str]] = {}, disallowed_special: Union[Literal['all'], Collection[str]] = 'all', tiktoken_model_name: Optional[str] = None)[source]¶
Bases: BaseLLM
Wrapper around OpenAI large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_special: Union[Literal['all'], AbstractSet[str]] = {}¶
Set of special tokens that are allowed。
param batch_size: int = 20¶ | [
5317,
8995,
60098,
1026,
5949,
2192,
13316,
5109,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
5949,
2192,
13316,
5109,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
518,
9499,
25,
2273,
284,
220,
15,
13,
22,
11,
1973,
29938,
25,
528,
284,
220,
4146,
11,
1948,
623,
25,
2273,
284,
220,
16,
11,
11900,
83386,
25,
2273,
284,
220,
15,
11,
9546,
83386,
25,
2273,
284,
220,
15,
11,
308,
25,
528,
284,
220,
16,
11,
1888,
3659,
25,
528,
284,
220,
16,
11,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
11,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
11,
7309,
2424,
25,
528,
284,
220,
508,
11,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
11,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
284,
2290,
11,
1973,
1311,
4646,
25,
528,
284,
220,
21,
11,
17265,
25,
1845,
284,
3641,
11,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
16857,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
518,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
4178,
44,
198,
11803,
2212,
5377,
15836,
3544,
4221,
4211,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
13822,
1681,
17752,
5163,
284,
4792,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
5535,
9174,
913,
7309,
2424,
25,
528,
284,
220,
508,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-1 | Set of special tokens that are allowed。
param batch_size: int = 20¶
Batch size to use when passing multiple documents to generate.
param best_of: int = 1¶
Generates best_of completions server-side and returns the “best”.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param disallowed_special: Union[Literal['all'], Collection[str]] = 'all'¶
Set of special tokens that are not allowed。
param frequency_penalty: float = 0¶
Penalizes repeated tokens according to frequency.
param logit_bias: Optional[Dict[str, float]] [Optional]¶
Adjust the probability of specific tokens being generated.
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: int = 256¶
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'text-davinci-003' (alias 'model')¶
Model name to use.
param n: int = 1¶
How many completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param presence_penalty: float = 0¶
Penalizes repeated tokens.
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶ | [
1681,
315,
3361,
11460,
430,
527,
5535,
9174,
913,
7309,
2424,
25,
528,
284,
220,
508,
55609,
198,
21753,
1404,
311,
1005,
994,
12579,
5361,
9477,
311,
7068,
627,
913,
1888,
3659,
25,
528,
284,
220,
16,
55609,
198,
5648,
988,
1888,
3659,
3543,
919,
3622,
25034,
323,
4780,
279,
1054,
16241,
863,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
834,
21642,
42729,
25,
9323,
58,
17802,
681,
543,
4181,
11348,
17752,
5163,
284,
364,
543,
6,
55609,
198,
1681,
315,
3361,
11460,
430,
527,
539,
5535,
9174,
913,
11900,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
4184,
311,
11900,
627,
913,
1515,
275,
37481,
25,
12536,
58,
13755,
17752,
11,
2273,
5163,
510,
15669,
60,
55609,
198,
39716,
279,
19463,
315,
3230,
11460,
1694,
8066,
627,
913,
1973,
1311,
4646,
25,
528,
284,
220,
21,
55609,
198,
28409,
1396,
315,
61701,
311,
1304,
994,
24038,
627,
913,
1973,
29938,
25,
528,
284,
220,
4146,
55609,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
304,
279,
9954,
627,
12,
16,
4780,
439,
1690,
11460,
439,
3284,
2728,
279,
10137,
323,
198,
1820,
4211,
54229,
2317,
1404,
627,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
21650,
5300,
627,
913,
1646,
1292,
25,
610,
284,
364,
1342,
1773,
402,
49697,
12,
6268,
6,
320,
15305,
364,
2590,
873,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
4438,
1690,
3543,
919,
311,
7068,
369,
1855,
10137,
627,
913,
1825,
2192,
11959,
7806,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
83452,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1825,
2192,
30812,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
9546,
83386,
25,
2273,
284,
220,
15,
55609,
198,
29305,
278,
4861,
11763,
11460,
627,
913,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-2 | param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param top_p: float = 1¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | [
913,
1715,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
55609,
198,
7791,
369,
7540,
311,
5377,
15836,
9954,
5446,
13,
8058,
374,
220,
5067,
6622,
627,
913,
17265,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
4365,
279,
3135,
477,
539,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1646,
836,
311,
1522,
311,
87272,
5963,
994,
1701,
420,
538,
627,
51,
1609,
5963,
374,
1511,
311,
1797,
279,
1396,
315,
11460,
304,
9477,
311,
80799,
198,
49818,
311,
387,
1234,
264,
3738,
4017,
13,
3296,
1670,
11,
994,
743,
311,
2290,
11,
420,
690,
198,
1395,
279,
1890,
439,
279,
40188,
1646,
836,
13,
4452,
11,
1070,
527,
1063,
5157,
198,
2940,
499,
1253,
1390,
311,
1005,
420,
38168,
7113,
538,
449,
264,
1646,
836,
539,
198,
18717,
555,
87272,
5963,
13,
1115,
649,
2997,
994,
1701,
35219,
71647,
477,
198,
9493,
1701,
832,
315,
279,
1690,
1646,
12850,
430,
29241,
459,
5377,
15836,
12970,
198,
7227,
719,
449,
2204,
4211,
13,
763,
1884,
5157,
11,
304,
2015,
311,
5766,
1493,
287,
198,
9493,
87272,
5963,
374,
2663,
11,
499,
649,
14158,
264,
1646,
836,
311,
1005,
1618,
627,
913,
1948,
623,
25,
2273,
284,
220,
16,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-3 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
create_llm_result(choices: Any, prompts: List[str], token_usage: Dict[str, int]) → LLMResult[source]¶
Create the LLMResult from the choices and prompts.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input. | [
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
3261,
44095,
76,
5400,
92383,
25,
5884,
11,
52032,
25,
1796,
17752,
1145,
4037,
32607,
25,
30226,
17752,
11,
528,
2526,
11651,
445,
11237,
2122,
76747,
60,
55609,
198,
4110,
279,
445,
11237,
2122,
505,
279,
11709,
323,
52032,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-4 | Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_sub_prompts(params: Dict[str, Any], prompts: List[str], stop: Optional[List[str]] = None) → List[List[str]][source]¶
Get the sub prompts for llm call.
get_token_ids(text: str) → List[int][source]¶
Get the token IDs using the tiktoken package.
max_tokens_for_prompt(prompt: str) → int[source]¶
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt – The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Example
max_tokens = openai.max_token_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname: str) → int[source]¶
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname – The modelname we want to know the context size for.
Returns
The maximum context size
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text. | [
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
5341,
48977,
13044,
9094,
25,
30226,
17752,
11,
5884,
1145,
52032,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
1796,
53094,
17752,
28819,
2484,
60,
55609,
198,
1991,
279,
1207,
52032,
369,
9507,
76,
1650,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
1483,
2484,
60,
55609,
198,
1991,
279,
4037,
29460,
1701,
279,
87272,
5963,
6462,
627,
2880,
29938,
5595,
62521,
73353,
25,
610,
8,
11651,
528,
76747,
60,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
10137,
627,
9905,
198,
41681,
1389,
578,
10137,
311,
1522,
1139,
279,
1646,
627,
16851,
198,
791,
7340,
1396,
315,
11460,
311,
7068,
369,
264,
10137,
627,
13617,
198,
2880,
29938,
284,
1825,
2192,
6817,
6594,
5595,
62521,
446,
41551,
757,
264,
22380,
13352,
2020,
1646,
609,
2401,
8634,
2190,
7790,
609,
25,
610,
8,
11651,
528,
76747,
60,
55609,
198,
48966,
279,
7340,
1396,
315,
11460,
3284,
311,
7068,
369,
264,
1646,
627,
9905,
198,
2590,
609,
1389,
578,
1646,
609,
584,
1390,
311,
1440,
279,
2317,
1404,
369,
627,
16851,
198,
791,
7340,
2317,
1404,
198,
13617,
198,
2880,
29938,
284,
1825,
2192,
3272,
609,
2401,
8634,
2190,
446,
1342,
1773,
402,
49697,
12,
6268,
1158,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-5 | Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
prep_streaming_params(stop: Optional[List[str]] = None) → Dict[str, Any][source]¶
Prepare the params for streaming.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
stream(prompt: str, stop: Optional[List[str]] = None) → Generator[source]¶
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt – The prompts to pass into the model.
stop – Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor. | [
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
72874,
12962,
287,
6887,
61270,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
30226,
17752,
11,
5884,
1483,
2484,
60,
55609,
198,
51690,
279,
3712,
369,
17265,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
4116,
73353,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
29458,
76747,
60,
55609,
198,
7368,
5377,
15836,
449,
17265,
5292,
323,
471,
279,
13239,
14143,
627,
33,
21352,
25,
420,
374,
264,
13746,
4668,
1418,
584,
7216,
704,
279,
1314,
59851,
627,
12805,
430,
8741,
11,
420,
3834,
1436,
2349,
627,
9905,
198,
41681,
1389,
578,
52032,
311,
1522,
1139,
279,
1646,
627,
9684,
1389,
12536,
1160,
315,
3009,
4339,
311,
1005,
994,
24038,
627,
16851,
198,
32,
14143,
14393,
279,
4365,
315,
11460,
505,
5377,
15836,
627,
13617,
198,
36951,
284,
1825,
2192,
15307,
446,
41551,
757,
264,
22380,
13352,
2000,
4037,
304,
14143,
512,
262,
7692,
4037,
198,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
d35b2c96038d-6 | serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
property max_context_size: int¶
Get max context size for this model.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | [
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
3784,
1973,
8634,
2424,
25,
528,
55609,
198,
1991,
1973,
2317,
1404,
369,
420,
1646,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
7331,
75672,
3795,
5121,
1292,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.openai.BaseOpenAI.html |
118421debe0c-0 | langchain.llms.human.HumanInputLLM¶
class langchain.llms.human.HumanInputLLM(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, input_func: Callable = None, prompt_func: Callable[[str], None] = None, separator: str = '\n', input_kwargs: Mapping[str, Any] = {}, prompt_kwargs: Mapping[str, Any] = {})[source]¶
Bases: LLM
A LLM wrapper which returns user input as the response.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param input_func: Callable [Optional]¶
param input_kwargs: Mapping[str, Any] = {}¶
param prompt_func: Callable[[str], None] [Optional]¶
param prompt_kwargs: Mapping[str, Any] = {}¶
param separator: str = '\n'¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | [
5317,
8995,
60098,
1026,
870,
7282,
3924,
7282,
2566,
4178,
44,
55609,
198,
1058,
8859,
8995,
60098,
1026,
870,
7282,
3924,
7282,
2566,
4178,
44,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
1988,
9791,
25,
54223,
284,
2290,
11,
10137,
9791,
25,
54223,
15873,
496,
1145,
2290,
60,
284,
2290,
11,
25829,
25,
610,
284,
5307,
77,
518,
1988,
37335,
25,
39546,
17752,
11,
5884,
60,
284,
16857,
10137,
37335,
25,
39546,
17752,
11,
5884,
60,
284,
4792,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
32,
445,
11237,
13564,
902,
4780,
1217,
1988,
439,
279,
2077,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
1988,
9791,
25,
54223,
510,
15669,
60,
55609,
198,
913,
1988,
37335,
25,
39546,
17752,
11,
5884,
60,
284,
4792,
55609,
198,
913,
10137,
9791,
25,
54223,
15873,
496,
1145,
2290,
60,
510,
15669,
60,
55609,
198,
913,
10137,
37335,
25,
39546,
17752,
11,
5884,
60,
284,
4792,
55609,
198,
913,
25829,
25,
610,
284,
5307,
77,
6,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
118421debe0c-1 | Check Cache and run the LLM on the given prompt and input.
async agenerate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(prompts: List[str], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Run the LLM on the given prompt and input.
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶ | [
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
84432,
13044,
25,
1796,
17752,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
6869,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
627,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
118421debe0c-2 | get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
save(file_path: Union[Path, str]) → None¶
Save the LLM.
Parameters
file_path – Path to file to save the LLM to.
Example:
.. code-block:: python
llm.save(file_path=”path/llm.yaml”)
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶ | [
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
445,
11237,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
445,
11237,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
657,
76,
5799,
4971,
2703,
45221,
2398,
14,
657,
76,
34506,
863,
340,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
118421debe0c-3 | property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.human.HumanInputLLM.html |
209211477b34-0 | langchain.llms.sagemaker_endpoint.LLMContentHandler¶
class langchain.llms.sagemaker_endpoint.LLMContentHandler[source]¶
Bases: ContentHandlerBase[str, str]
Content handler for LLM class.
Methods
__init__()
transform_input(prompt, model_kwargs)
Transforms the input to a format that model can accept as the request Body.
transform_output(output)
Transforms the output from the model to string that the LLM class expects.
Attributes
accepts
The MIME type of the response data returned from endpoint
content_type
The MIME type of the input data passed to endpoint
abstract transform_input(prompt: INPUT_TYPE, model_kwargs: Dict) → bytes¶
Transforms the input to a format that model can accept
as the request Body. Should return bytes or seekable file
like object in the format specified in the content_type
request header.
abstract transform_output(output: bytes) → OUTPUT_TYPE¶
Transforms the output from the model to string that
the LLM class expects.
accepts: Optional[str] = 'text/plain'¶
The MIME type of the response data returned from endpoint
content_type: Optional[str] = 'text/plain'¶
The MIME type of the input data passed to endpoint | [
5317,
8995,
60098,
1026,
516,
15003,
4506,
37799,
1236,
11237,
2831,
3126,
55609,
198,
1058,
8859,
8995,
60098,
1026,
516,
15003,
4506,
37799,
1236,
11237,
2831,
3126,
76747,
60,
55609,
198,
33,
2315,
25,
9059,
3126,
4066,
17752,
11,
610,
933,
2831,
7158,
369,
445,
11237,
538,
627,
18337,
198,
565,
2381,
33716,
4806,
6022,
73353,
11,
4194,
2590,
37335,
340,
9140,
82,
279,
1988,
311,
264,
3645,
430,
1646,
649,
4287,
439,
279,
1715,
14285,
627,
4806,
7800,
11304,
340,
9140,
82,
279,
2612,
505,
279,
1646,
311,
925,
430,
279,
445,
11237,
538,
25283,
627,
10738,
198,
10543,
82,
198,
791,
58577,
955,
315,
279,
2077,
828,
6052,
505,
15233,
198,
1834,
1857,
198,
791,
58577,
955,
315,
279,
1988,
828,
5946,
311,
15233,
198,
16647,
5276,
6022,
73353,
25,
27241,
4283,
11,
1646,
37335,
25,
30226,
8,
11651,
5943,
55609,
198,
9140,
82,
279,
1988,
311,
264,
3645,
430,
1646,
649,
4287,
198,
300,
279,
1715,
14285,
13,
12540,
471,
5943,
477,
6056,
481,
1052,
198,
4908,
1665,
304,
279,
3645,
5300,
304,
279,
2262,
1857,
198,
2079,
4342,
627,
16647,
5276,
7800,
11304,
25,
5943,
8,
11651,
32090,
4283,
55609,
198,
9140,
82,
279,
2612,
505,
279,
1646,
311,
925,
430,
198,
1820,
445,
11237,
538,
25283,
627,
10543,
82,
25,
12536,
17752,
60,
284,
364,
1342,
38071,
6,
55609,
198,
791,
58577,
955,
315,
279,
2077,
828,
6052,
505,
15233,
198,
1834,
1857,
25,
12536,
17752,
60,
284,
364,
1342,
38071,
6,
55609,
198,
791,
58577,
955,
315,
279,
1988,
828,
5946,
311,
15233
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.sagemaker_endpoint.LLMContentHandler.html |
b7151905407f-0 | langchain.llms.aviary.get_models¶
langchain.llms.aviary.get_models() → List[str][source]¶
List available models | [
5317,
8995,
60098,
1026,
85652,
661,
673,
31892,
55609,
198,
5317,
8995,
60098,
1026,
85652,
661,
673,
31892,
368,
11651,
1796,
17752,
1483,
2484,
60,
55609,
198,
861,
2561,
4211
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.aviary.get_models.html |
9c05e59c0c6d-0 | langchain.llms.base.update_cache¶
langchain.llms.base.update_cache(existing_prompts: Dict[int, List], llm_string: str, missing_prompt_idxs: List[int], new_results: LLMResult, prompts: List[str]) → Optional[dict][source]¶
Update the cache and get the LLM output. | [
5317,
8995,
60098,
1026,
9105,
5430,
11790,
55609,
198,
5317,
8995,
60098,
1026,
9105,
5430,
11790,
95714,
48977,
13044,
25,
30226,
19155,
11,
1796,
1145,
9507,
76,
3991,
25,
610,
11,
7554,
62521,
69746,
25,
1796,
19155,
1145,
502,
13888,
25,
445,
11237,
2122,
11,
52032,
25,
1796,
17752,
2526,
11651,
12536,
58,
8644,
1483,
2484,
60,
55609,
198,
4387,
279,
6636,
323,
636,
279,
445,
11237,
2612,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.base.update_cache.html |
ed2728354986-0 | langchain.llms.stochasticai.StochasticAI¶
class langchain.llms.stochasticai.StochasticAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, api_url: str = '', model_kwargs: Dict[str, Any] = None, stochasticai_api_key: Optional[str] = None)[source]¶
Bases: LLM
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param api_url: str = ''¶
Model name to use.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not
explicitly specified.
param stochasticai_api_key: Optional[str] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(prompt: str, stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Check Cache and run the LLM on the given prompt and input. | [
5317,
8995,
60098,
1026,
1258,
67054,
2192,
7914,
67054,
15836,
55609,
198,
1058,
8859,
8995,
60098,
1026,
1258,
67054,
2192,
7914,
67054,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
6464,
2975,
25,
610,
284,
9158,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
284,
2290,
11,
96340,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
445,
11237,
198,
11803,
2212,
800,
67054,
15836,
3544,
4221,
4211,
627,
1271,
1005,
11,
499,
1288,
617,
279,
4676,
3977,
4015,
46,
2198,
6483,
1341,
15836,
11669,
6738,
198,
751,
449,
701,
5446,
1401,
627,
13617,
198,
1527,
8859,
8995,
60098,
1026,
1179,
800,
67054,
15836,
198,
267,
67054,
2192,
284,
800,
67054,
15836,
25865,
2975,
64841,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6464,
2975,
25,
610,
284,
3436,
55609,
198,
1747,
836,
311,
1005,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
1646,
37335,
25,
30226,
17752,
11,
5884,
60,
510,
15669,
60,
55609,
198,
39,
18938,
904,
1646,
5137,
2764,
369,
1893,
1650,
539,
198,
94732,
398,
5300,
627,
913,
96340,
2192,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
41681,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
4061,
20044,
323,
1629,
279,
445,
11237,
389,
279,
2728,
10137,
323,
1988,
13
] | https://langchain.readthedocs.io/en/latest/llms/langchain.llms.stochasticai.StochasticAI.html |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.