id (stringlengths 14-15) | text (stringlengths 35-2.07k) | embedding (sequence) | source (stringlengths 61-154)
---|---|---|---|
7fbd9431852f-0 | langchain.callbacks.streamlit.mutable_expander.ChildType¶
class langchain.callbacks.streamlit.mutable_expander.ChildType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]¶
Bases: Enum
Attributes
MARKDOWN
EXCEPTION
EXCEPTION = 'EXCEPTION'¶
MARKDOWN = 'MARKDOWN'¶ | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streamlit.mutable_expander.ChildType.html |
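The entry above documents a two-member enum. A minimal sketch of how such an enum behaves, using the standard library `Enum` as a stand-in rather than importing the Streamlit-dependent module (assumption: the real class in `langchain.callbacks.streamlit.mutable_expander` needs Streamlit installed):

```python
from enum import Enum


class ChildType(Enum):
    # Mirrors the two documented members of the real class.
    MARKDOWN = "MARKDOWN"
    EXCEPTION = "EXCEPTION"


# Members can be looked up by value or by name, as with any Enum.
assert ChildType("MARKDOWN") is ChildType.MARKDOWN
print(ChildType.EXCEPTION.value)  # -> EXCEPTION
```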
99ca55ab2290-0 | langchain.callbacks.flyte_callback.FlyteCallbackHandler¶
class langchain.callbacks.flyte_callback.FlyteCallbackHandler[source]¶
Bases: BaseMetadataCallbackHandler, BaseCallbackHandler
This callback handler is designed specifically for usage within a Flyte task.
Initialize callback handler.
Methods
__init__()
Initialize callback handler.
get_custom_callback_meta()
on_agent_action(action, **kwargs)
Run on agent action.
on_agent_finish(finish, **kwargs)
Run when agent ends running.
on_chain_end(outputs, **kwargs)
Run when chain ends running.
on_chain_error(error, **kwargs)
Run when chain errors.
on_chain_start(serialized, inputs, **kwargs)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run when LLM generates a new token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts.
on_retriever_end(documents, *, run_id[, ...])
Run when Retriever ends running.
on_retriever_error(error, *, run_id[, ...])
Run when Retriever errors.
on_retriever_start(query, *, run_id[, ...])
Run when Retriever starts running.
on_text(text, **kwargs)
Run when agent is ending.
on_tool_end(output, **kwargs)
Run when tool ends running.
on_tool_error(error, **kwargs) | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html |
99ca55ab2290-1 | Run when tool ends running.
on_tool_error(error, **kwargs)
Run when tool errors.
on_tool_start(serialized, input_str, **kwargs)
Run when tool starts running.
reset_callback_meta()
Reset the callback metadata.
Attributes
always_verbose
Whether to call verbose callbacks even if verbose is False.
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
raise_error
run_inline
get_custom_callback_meta() → Dict[str, Any]¶
on_agent_action(action: AgentAction, **kwargs: Any) → Any[source]¶
Run on agent action.
on_agent_finish(finish: AgentFinish, **kwargs: Any) → None[source]¶
Run when agent ends running.
on_chain_end(outputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain ends running.
on_chain_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when chain errors.
on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any) → None[source]¶
Run when chain starts running.
on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running. | [6869, 994, 5507, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html |
99ca55ab2290-2 | Run when LLM ends running.
on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when LLM errors.
on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run when LLM generates a new token.
on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts.
on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever ends running.
on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever errors.
on_retriever_start(query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → Any¶
Run when Retriever starts running.
on_text(text: str, **kwargs: Any) → None[source]¶
Run when agent is ending.
on_tool_end(output: str, **kwargs: Any) → None[source]¶
Run when tool ends running.
on_tool_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when tool errors.
on_tool_start(serialized: Dict[str, Any], input_str: str, **kwargs: Any) → None[source]¶
Run when tool starts running.
reset_callback_meta() → None¶
Reset the callback metadata.
property always_verbose: bool¶
Whether to call verbose callbacks even if verbose is False.
property ignore_agent: bool¶
Whether to ignore agent callbacks. | [6869, 994, 445, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html |
99ca55ab2290-3 | property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
raise_error: bool = False¶
run_inline: bool = False¶ | [3784, 10240, 26814, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.flyte_callback.FlyteCallbackHandler.html |
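A minimal sketch of wiring the handler above into an LLM call. It assumes the code runs inside a Flyte task with flytekit installed and that an OpenAI API key is available; neither assumption comes from the entry itself.

```python
from langchain.callbacks.flyte_callback import FlyteCallbackHandler
from langchain.llms import OpenAI

# The handler takes no constructor arguments; per the docstring above it is
# meant to record chain/LLM events for the surrounding Flyte task.
handler = FlyteCallbackHandler()

llm = OpenAI(temperature=0, callbacks=[handler])
llm("Summarize what a callback handler does in one sentence.")
```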
32f8b9493a1c-0 | langchain.callbacks.openai_info.get_openai_token_cost_for_model¶
langchain.callbacks.openai_info.get_openai_token_cost_for_model(model_name: str, num_tokens: int, is_completion: bool = False) → float[source]¶
Get the cost in USD for a given model and number of tokens.
Parameters
model_name – Name of the model
num_tokens – Number of tokens.
is_completion – Whether the model is used for completion or not.
Defaults to False.
Returns
Cost in USD. | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.openai_info.get_openai_token_cost_for_model.html |
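A short sketch using the signature documented above; the model name and token counts are illustrative, and the name must be one the pricing table recognizes:

```python
from langchain.callbacks.openai_info import get_openai_token_cost_for_model

# Prompt tokens are priced with is_completion=False (the default),
# completion tokens with is_completion=True.
prompt_cost = get_openai_token_cost_for_model("gpt-3.5-turbo", num_tokens=1000)
completion_cost = get_openai_token_cost_for_model(
    "gpt-3.5-turbo", num_tokens=200, is_completion=True
)
print(f"Estimated cost: ${prompt_cost + completion_cost:.6f} USD")
```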
f6424eae9f32-0 | langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler¶
class langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler[source]¶
Bases: AsyncCallbackHandler
Callback handler that returns an async iterator.
Methods
__init__()
aiter()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run on retriever end.
on_retriever_error(error, *, run_id[, ...])
Run on retriever error.
on_retriever_start(query, *, run_id[, ...])
Run on retriever start.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running. | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html |
f6424eae9f32-1 | Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
Attributes
always_verbose
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
raise_error
run_inline
queue
done
async aiter() → AsyncIterator[str][source]¶
async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on agent action.
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on agent end.
async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when chain ends running.
async on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when chain errors.
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when chain starts running. | [6869, 994, 5507, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html |
f6424eae9f32-2 | Run when chain starts running.
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
async on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None[source]¶
Run when LLM errors.
async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running.
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever end.
async on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever error.
async on_retriever_start(query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever start.
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on arbitrary text. | [6869, 994, 8957, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html |
f6424eae9f32-3 | Run on arbitrary text.
async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when tool ends running.
async on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool starts running.
property always_verbose: bool¶
done: asyncio.locks.Event¶
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
queue: asyncio.queues.Queue[str]¶
raise_error: bool = False¶
run_inline: bool = False¶ | [6869, 389, 25142, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter.AsyncIteratorCallbackHandler.html |
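A sketch of the usual consumption pattern for the handler above: attach it to a streaming chat model, start generation as a background task, and read tokens from `aiter()`. The use of `ChatOpenAI` with `streaming=True` and an OpenAI API key in the environment is an assumption; any model that emits `on_llm_new_token` should work the same way.

```python
import asyncio

from langchain.callbacks.streaming_aiter import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage


async def main() -> None:
    handler = AsyncIteratorCallbackHandler()
    chat = ChatOpenAI(streaming=True, temperature=0, callbacks=[handler])

    # Run the generation in the background so tokens can be consumed as they
    # arrive; aiter() finishes once on_llm_end sets the handler's done event.
    task = asyncio.create_task(
        chat.agenerate([[HumanMessage(content="Count to five.")]])
    )
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task


asyncio.run(main())
```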
f398157a8e31-0 | langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler¶
class langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler(*, answer_prefix_tokens: Optional[List[str]] = None, strip_tokens: bool = True, stream_prefix: bool = False)[source]¶
Bases: AsyncIteratorCallbackHandler
Callback handler that returns an async iterator.
Only the final output of the agent will be iterated.
Instantiate AsyncFinalIteratorCallbackHandler.
Parameters
answer_prefix_tokens – Token sequence that prefixes the answer.
Default is [“Final”, “Answer”, “:”]
strip_tokens – Ignore whitespace and newlines when comparing
answer_prefix_tokens to the last tokens (to determine whether the answer has
been reached)
stream_prefix – Should answer prefix itself also be streamed?
Methods
__init__(*[, answer_prefix_tokens, ...])
Instantiate AsyncFinalIteratorCallbackHandler.
aiter()
append_to_last_tokens(token)
check_if_answer_reached()
on_agent_action(action, *, run_id[, ...])
Run on agent action.
on_agent_finish(finish, *, run_id[, ...])
Run on agent end.
on_chain_end(outputs, *, run_id[, parent_run_id])
Run when chain ends running.
on_chain_error(error, *, run_id[, parent_run_id])
Run when chain errors.
on_chain_start(serialized, inputs, *, run_id)
Run when chain starts running.
on_chat_model_start(serialized, messages, *, ...)
Run when a chat model starts running.
on_llm_end(response, **kwargs)
Run when LLM ends running.
on_llm_error(error, **kwargs)
Run when LLM errors. | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html |
f398157a8e31-1 | on_llm_error(error, **kwargs)
Run when LLM errors.
on_llm_new_token(token, **kwargs)
Run on new LLM token.
on_llm_start(serialized, prompts, **kwargs)
Run when LLM starts running.
on_retriever_end(documents, *, run_id[, ...])
Run on retriever end.
on_retriever_error(error, *, run_id[, ...])
Run on retriever error.
on_retriever_start(query, *, run_id[, ...])
Run on retriever start.
on_text(text, *, run_id[, parent_run_id])
Run on arbitrary text.
on_tool_end(output, *, run_id[, parent_run_id])
Run when tool ends running.
on_tool_error(error, *, run_id[, parent_run_id])
Run when tool errors.
on_tool_start(serialized, input_str, *, run_id)
Run when tool starts running.
Attributes
always_verbose
ignore_agent
Whether to ignore agent callbacks.
ignore_chain
Whether to ignore chain callbacks.
ignore_chat_model
Whether to ignore chat model callbacks.
ignore_llm
Whether to ignore LLM callbacks.
ignore_retriever
Whether to ignore retriever callbacks.
raise_error
run_inline
async aiter() → AsyncIterator[str]¶
append_to_last_tokens(token: str) → None[source]¶
check_if_answer_reached() → bool[source]¶
async on_agent_action(action: AgentAction, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on agent action. | [263, 44095, 76, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html |
f398157a8e31-2 | Run on agent action.
async on_agent_finish(finish: AgentFinish, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on agent end.
async on_chain_end(outputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when chain ends running.
async on_chain_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when chain errors.
async on_chain_start(serialized: Dict[str, Any], inputs: Dict[str, Any], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when chain starts running.
async on_chat_model_start(serialized: Dict[str, Any], messages: List[List[BaseMessage]], *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → Any¶
Run when a chat model starts running.
async on_llm_end(response: LLMResult, **kwargs: Any) → None[source]¶
Run when LLM ends running.
async on_llm_error(error: Union[Exception, KeyboardInterrupt], **kwargs: Any) → None¶
Run when LLM errors.
async on_llm_new_token(token: str, **kwargs: Any) → None[source]¶
Run on new LLM token. Only available when streaming is enabled.
async on_llm_start(serialized: Dict[str, Any], prompts: List[str], **kwargs: Any) → None[source]¶
Run when LLM starts running. | [6869, 389, 8479, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html |
f398157a8e31-3 | Run when LLM starts running.
async on_retriever_end(documents: Sequence[Document], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever end.
async on_retriever_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever error.
async on_retriever_start(query: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on retriever start.
async on_text(text: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run on arbitrary text.
async on_tool_end(output: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when tool ends running.
async on_tool_error(error: Union[Exception, KeyboardInterrupt], *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any) → None¶
Run when tool errors.
async on_tool_start(serialized: Dict[str, Any], input_str: str, *, run_id: UUID, parent_run_id: Optional[UUID] = None, tags: Optional[List[str]] = None, **kwargs: Any) → None¶
Run when tool starts running.
property always_verbose: bool¶
done: asyncio.Event¶
property ignore_agent: bool¶
Whether to ignore agent callbacks.
property ignore_chain: bool¶
Whether to ignore chain callbacks.
property ignore_chat_model: bool¶
Whether to ignore chat model callbacks.
property ignore_llm: bool¶
Whether to ignore LLM callbacks. | [6869, 994, 445, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html |
f398157a8e31-4 | property ignore_llm: bool¶
Whether to ignore LLM callbacks.
property ignore_retriever: bool¶
Whether to ignore retriever callbacks.
queue: asyncio.Queue[str]¶
raise_error: bool = False¶
run_inline: bool = False¶ | [3784, 10240, 44095, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.streaming_aiter_final_only.AsyncFinalIteratorCallbackHandler.html |
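Construction follows the parameters documented above; only tokens that arrive after the agent's answer prefix (by default "Final", "Answer", ":") are yielded by `aiter()`. A minimal sketch of building the handler; attaching it to a streaming agent and consuming `aiter()` works the same way as in the `AsyncIteratorCallbackHandler` sketch earlier.

```python
from langchain.callbacks.streaming_aiter_final_only import (
    AsyncFinalIteratorCallbackHandler,
)

handler = AsyncFinalIteratorCallbackHandler(
    answer_prefix_tokens=["Final", "Answer", ":"],  # the documented default
    strip_tokens=True,    # ignore whitespace/newlines when matching the prefix
    stream_prefix=False,  # do not stream the prefix tokens themselves
)
```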
4db315356bee-0 | langchain.callbacks.manager.get_openai_callback¶
langchain.callbacks.manager.get_openai_callback() → Generator[OpenAICallbackHandler, None, None][source]¶
Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information.
Returns
The OpenAI callback handler.
Return type
OpenAICallbackHandler
Example
>>> with get_openai_callback() as cb:
... # Use the OpenAI callback handler | [5317, 8995, 72134, ...] | https://langchain.readthedocs.io/en/latest/callbacks/langchain.callbacks.manager.get_openai_callback.html |
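A fuller version of the docstring's example above. The `OpenAI` LLM and the `total_tokens` / `total_cost` attributes of the yielded `OpenAICallbackHandler` are assumptions not spelled out in the entry, and the call needs an OpenAI API key in the environment.

```python
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

with get_openai_callback() as cb:
    llm("Tell me a one-line joke.")
    # The handler accumulates usage for every OpenAI call made inside the block.
    print(cb.total_tokens, cb.total_cost)
```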
5d4b61660450-0 | langchain.input.print_text¶
langchain.input.print_text(text: str, color: Optional[str] = None, end: str = '', file: Optional[TextIO] = None) → None[source]¶
Print text with highlighting and no end characters. | [5317, 8995, 10252, ...] | https://langchain.readthedocs.io/en/latest/input/langchain.input.print_text.html |
80bbe587ec3d-0 | langchain.input.get_bolded_text¶
langchain.input.get_bolded_text(text: str) → str[source]¶
Get bolded text. | [5317, 8995, 10252, ...] | https://langchain.readthedocs.io/en/latest/input/langchain.input.get_bolded_text.html |
a7719f72c6af-0 | langchain.input.get_colored_text¶
langchain.input.get_colored_text(text: str, color: str) → str[source]¶
Get colored text. | [5317, 8995, 10252, ...] | https://langchain.readthedocs.io/en/latest/input/langchain.input.get_colored_text.html |
497b29c90450-0 | langchain.input.get_color_mapping¶
langchain.input.get_color_mapping(items: List[str], excluded_colors: Optional[List] = None) → Dict[str, str][source]¶
Get mapping for items to a support color. | [5317, 8995, 10252, ...] | https://langchain.readthedocs.io/en/latest/input/langchain.input.get_color_mapping.html |
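The four `langchain.input` helpers above compose naturally; a small sketch using the signatures documented in the entries (the item names are illustrative):

```python
from langchain.input import (
    get_bolded_text,
    get_color_mapping,
    get_colored_text,
    print_text,
)

tools = ["search", "calculator", "wikipedia"]
# Assign each item one of the supported colors, e.g. {"search": "blue", ...}.
mapping = get_color_mapping(tools)

for name in tools:
    print_text(get_bolded_text(name), end=" -> ")
    print(get_colored_text("ready", color=mapping[name]))
```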
f5d6671d53ad-0 | langchain.chat_models.base.BaseChatModel¶
class langchain.chat_models.base.BaseChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseLanguageModel, ABC
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage[source]¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult[source]¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶ | [5317, 8995, 27215, ...] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.BaseChatModel.html |
f5d6671d53ad-1 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str[source]¶
dict(**kwargs: Any) → Dict[source]¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult[source]¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult[source]¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str[source]¶
Predict text from text. | [18293, 304, 264, ...] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.BaseChatModel.html |
f5d6671d53ad-2 | Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage[source]¶
Predict message from messages.
validator raise_deprecation » all fields[source]¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [54644, 1495, 505, ...] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.BaseChatModel.html |
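The interface above is abstract, so it is exercised through a concrete subclass. A sketch; `ChatOpenAI` and an OpenAI API key in the environment are assumptions here, but any `BaseChatModel` exposes the same `__call__` and `predict_messages` methods.

```python
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(temperature=0)

# __call__ takes a list of messages and returns a single BaseMessage.
reply = chat(
    [
        SystemMessage(content="You answer in one short sentence."),
        HumanMessage(content="What does a chat model return?"),
    ]
)
print(reply.content)

# predict_messages is the convenience method documented above.
reply = chat.predict_messages([HumanMessage(content="And what does generate return?")])
print(reply.content)
```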
5993e072b269-0 | langchain.chat_models.google_palm.ChatGooglePalmError¶
class langchain.chat_models.google_palm.ChatGooglePalmError[source]¶
Bases: Exception
Error raised when there is an issue with the Google PaLM API.
add_note()¶
Exception.add_note(note) –
add a note to the exception
with_traceback()¶
Exception.with_traceback(tb) –
set self.__traceback__ to tb and return self.
args¶ | [5317, 8995, 27215, ...] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalmError.html |
416bd8b4079c-0 | langchain.chat_models.fake.FakeListChatModel¶
class langchain.chat_models.fake.FakeListChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, responses: List, i: int = 0)[source]¶
Bases: SimpleChatModel
Fake ChatModel for testing purposes.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
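A minimal sketch of using this fake model in a test; the canned responses are returned in order instead of calling any API (message content below is illustrative):
from langchain.chat_models.fake import FakeListChatModel
from langchain.schema import HumanMessage

# No API key or network access needed
fake_chat = FakeListChatModel(responses=["first canned reply", "second canned reply"])

reply = fake_chat([HumanMessage(content="anything")])
print(reply.content)  # expected: the first canned reply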
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param i: int = 0¶
param responses: List [Required]¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
langchain.chat_models.azure_openai.AzureChatOpenAI¶
class langchain.chat_models.azure_openai.AzureChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: str = '', openai_api_base: str = '', openai_organization: str = '', openai_proxy: str = '', request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None, deployment_name: str = '', openai_api_type: str = 'azure', openai_api_version: str = '')[source]¶
Bases: ChatOpenAI
Wrapper around Azure OpenAI Chat Completion API. To use this class you
must have a deployed model on Azure OpenAI. Use deployment_name in the
constructor to refer to the “Model deployment name” in the Azure portal.
In addition, you should have the openai python package installed, and the
following environment variables set, or passed to the constructor in lower case:
- OPENAI_API_TYPE (default: azure)
- OPENAI_API_KEY
- OPENAI_API_BASE
- OPENAI_API_VERSION
- OPENAI_PROXY
For example, if you have gpt-35-turbo deployed, with the deployment name
35-turbo-dev, the constructor should look like:
AzureChatOpenAI(
deployment_name="35-turbo-dev",
openai_api_version="2023-03-15-preview",
)
Be aware the API version may change.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
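Building on the constructor example above, a hedged sketch of sending a chat message to the deployed model (deployment name, API version and message are placeholders; assumes OPENAI_API_TYPE, OPENAI_API_BASE and OPENAI_API_KEY are set in the environment):
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

chat = AzureChatOpenAI(
    deployment_name="35-turbo-dev",            # your Azure deployment name
    openai_api_version="2023-03-15-preview",   # check the current API version
)

reply = chat([HumanMessage(content="Hello from Azure OpenAI")])
print(reply.content)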
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param deployment_name: str = ''¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo' (alias 'model')¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt.
param openai_api_base: str = ''¶
param openai_api_key: str = ''¶
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
param openai_api_type: str = 'azure'¶
param openai_api_version: str = ''¶
param openai_organization: str = ''¶
param openai_proxy: str = ''¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
completion_with_retry(**kwargs: Any) → Any¶
Use tenacity to retry the completion call.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int]¶
Get the tokens present in the text with tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
langchain.chat_models.google_palm.chat_with_retry¶
langchain.chat_models.google_palm.chat_with_retry(llm: ChatGooglePalm, **kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
langchain.chat_models.openai.ChatOpenAI¶
class langchain.chat_models.openai.ChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None)[source]¶
Bases: BaseChatModel
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.chat_models import ChatOpenAI
openai = ChatOpenAI(model_name="gpt-3.5-turbo")
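Extending the example above, a minimal sketch of sending system and user messages (assumes OPENAI_API_KEY is set in the environment):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming."),
]
ai_message = chat(messages)  # returns an AIMessage
print(ai_message.content)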
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo' (alias 'model')¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param streaming: bool = False¶
Whether to stream the results or not.
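A hedged sketch of token streaming with a stdout callback handler (callback import path assumed for this version of langchain):
from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

# Tokens are printed to stdout as they arrive instead of waiting for the full reply
streaming_chat = ChatOpenAI(
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()],
    temperature=0,
)
streaming_chat([HumanMessage(content="Write a short haiku about the sea.")])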
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
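For example (a sketch; the endpoint and model names below are only assumptions), a provider exposing an OpenAI-compatible API can be pointed at an encoding that tiktoken does know about:
from langchain.chat_models import ChatOpenAI

chat = ChatOpenAI(
    openai_api_base="https://example.com/v1",   # hypothetical OpenAI-compatible endpoint
    openai_api_key="placeholder",
    model_name="my-custom-model",               # unknown to tiktoken
    tiktoken_model_name="gpt-3.5-turbo",        # count tokens with this encoding instead
)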
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields[source]¶
Build extra kwargs from additional params that were passed in.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
completion_with_retry(**kwargs: Any) → Any[source]¶
Use tenacity to retry the completion call.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int[source]¶
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
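A brief sketch of counting message tokens locally with this method (requires the tiktoken package; no API call is made, so a placeholder key suffices):
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = ChatOpenAI(openai_api_key="placeholder")  # key is not used for token counting

num_tokens = chat.get_num_tokens_from_messages([
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="How many tokens is this?"),
])
print(num_tokens)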
get_token_ids(text: str) → List[int][source]¶
Get the tokens present in the text with tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶
langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI¶
class langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model: str = 'gpt-3.5-turbo', temperature: float = 0.7, model_kwargs: Dict[str, Any] = None, openai_api_key: Optional[str] = None, openai_api_base: Optional[str] = None, openai_organization: Optional[str] = None, openai_proxy: Optional[str] = None, request_timeout: Optional[Union[float, Tuple[float, float]]] = None, max_retries: int = 6, streaming: bool = False, n: int = 1, max_tokens: Optional[int] = None, tiktoken_model_name: Optional[str] = None, pl_tags: Optional[List[str]] = None, return_pl_id: Optional[bool] = False)[source]¶
Bases: ChatOpenAI
Wrapper around OpenAI Chat large language models and PromptLayer.
To use, you should have the openai and promptlayer python
package installed, and the environment variable OPENAI_API_KEY
and PROMPTLAYER_API_KEY set with your openAI API key and
promptlayer key respectively.
All parameters that can be passed to the OpenAI LLM can also
be passed here. The PromptLayerChatOpenAI adds two optional
Parameters
pl_tags – List of strings to tag the request with.
return_pl_id – If True, the PromptLayer request ID will be
returned in the generation_info field of the
Generation object.
Example
from langchain.chat_models import PromptLayerChatOpenAI
openai = PromptLayerChatOpenAI(model_name="gpt-3.5-turbo")
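A hedged sketch of the two PromptLayer-specific options (assumes OPENAI_API_KEY and PROMPTLAYER_API_KEY are set in the environment; the generation_info key name is an assumption based on the description above):
from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

chat = PromptLayerChatOpenAI(pl_tags=["langchain", "demo"], return_pl_id=True)

result = chat.generate([[HumanMessage(content="Hello")]])
generation = result.generations[0][0]
# With return_pl_id=True the PromptLayer request ID is attached to generation_info
print(generation.generation_info.get("pl_request_id"))  # key name assumed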
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param max_retries: int = 6¶
Maximum number of retries to make when generating.
param max_tokens: Optional[int] = None¶
Maximum number of tokens to generate.
param model_kwargs: Dict[str, Any] [Optional]¶
Holds any model parameters valid for create call not explicitly specified.
param model_name: str = 'gpt-3.5-turbo' (alias 'model')¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt.
param openai_api_base: Optional[str] = None¶
param openai_api_key: Optional[str] = None¶
Base URL path for API requests,
leave blank if not using a proxy or service emulator.
param openai_organization: Optional[str] = None¶
param openai_proxy: Optional[str] = None¶
param pl_tags: Optional[List[str]] = None¶
param request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to OpenAI completion API. Default is 600 seconds.
param return_pl_id: Optional[bool] = False¶
param streaming: bool = False¶
Whether to stream the results or not.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.7¶
What sampling temperature to use.
param tiktoken_model_name: Optional[str] = None¶
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult. | [
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
22,
55609,
198,
3923,
25936,
9499,
311,
1005,
627,
913,
87272,
5963,
5156,
1292,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1646,
836,
311,
1522,
311,
87272,
5963,
994,
1701,
420,
538,
627,
51,
1609,
5963,
374,
1511,
311,
1797,
279,
1396,
315,
11460,
304,
9477,
311,
80799,
198,
49818,
311,
387,
1234,
264,
3738,
4017,
13,
3296,
1670,
11,
994,
743,
311,
2290,
11,
420,
690,
198,
1395,
279,
1890,
439,
279,
40188,
1646,
836,
13,
4452,
11,
1070,
527,
1063,
5157,
198,
2940,
499,
1253,
1390,
311,
1005,
420,
38168,
7113,
538,
449,
264,
1646,
836,
539,
198,
18717,
555,
87272,
5963,
13,
1115,
649,
2997,
994,
1701,
35219,
71647,
477,
198,
9493,
1701,
832,
315,
279,
1690,
1646,
12850,
430,
29241,
459,
5377,
15836,
12970,
198,
7227,
719,
449,
2204,
4211,
13,
763,
1884,
5157,
11,
304,
2015,
311,
5766,
1493,
287,
198,
9493,
87272,
5963,
374,
2663,
11,
499,
649,
14158,
264,
1646,
836,
311,
1005,
1618,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
16727,
25,
1796,
58,
4066,
2097,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
7368,
659,
439,
264,
734,
627,
7847,
945,
13523,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
13
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI.html |
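The parameters documented in the row above (temperature, tiktoken_model_name, tags) are all plain keyword arguments to the constructor. The short sketch below is illustrative only and is not part of the reference page: it assumes PromptLayerChatOpenAI is importable from langchain.chat_models (consistent with the source URL) and that the OpenAI and PromptLayer API keys are supplied through environment variables.

# Minimal sketch (ASSUMPTION: OPENAI_API_KEY and PROMPTLAYER_API_KEY are set
# in the environment and the promptlayer package is installed).
from langchain.chat_models import PromptLayerChatOpenAI

chat = PromptLayerChatOpenAI(
    temperature=0.7,                      # documented default is 0.7
    tiktoken_model_name="gpt-3.5-turbo",  # override token counting for OpenAI-like providers
    tags=["docs-example"],                # tags are added to the run trace
)

Setting tiktoken_model_name explicitly is only needed when the wrapped model name is not recognized by tiktoken, as the field description above explains.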
0838031a8a6e-3 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator build_extra » all fields¶
Build extra kwargs from additional params that were passed in.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
completion_with_retry(**kwargs: Any) → Any¶
Use tenacity to retry the completion call.
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Calculate num tokens for gpt-3.5-turbo and gpt-4 with tiktoken package.
Official documentation: https://github.com/openai/openai-cookbook/blob/ | [
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
1977,
32958,
4194,
8345,
4194,
682,
5151,
55609,
198,
11313,
5066,
16901,
505,
5217,
3712,
430,
1051,
5946,
304,
627,
6797,
12162,
44095,
76,
7483,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
44412,
6753,
63845,
22551,
9872,
25,
5884,
8,
11651,
5884,
55609,
198,
10464,
5899,
4107,
311,
23515,
279,
9954,
1650,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
48966,
1661,
11460,
369,
342,
418,
12,
18,
13,
20,
2442,
324,
754,
323,
342,
418,
12,
19,
449,
87272,
5963,
6462,
627,
34996,
9904,
25,
3788,
1129,
5316,
916,
38744,
2192,
38744,
2192,
23283,
564,
2239,
35927,
14
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI.html |
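get_num_tokens and get_num_tokens_from_messages, listed in the row above, can be used to check prompt size before a request is sent. A hedged usage sketch follows; it assumes HumanMessage and SystemMessage are importable from langchain.schema and that the environment keys mentioned earlier are set.

from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage, SystemMessage

chat = PromptLayerChatOpenAI()  # ASSUMPTION: API keys provided via environment variables

messages = [
    SystemMessage(content="You are a terse assistant."),
    HumanMessage(content="Summarize tiktoken in one sentence."),
]

# Plain-text token count via tiktoken.
print(chat.get_num_tokens("Summarize tiktoken in one sentence."))
# Per-message count for gpt-3.5-turbo / gpt-4 style chat formatting.
print(chat.get_num_tokens_from_messages(messages))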
0838031a8a6e-4 | Official documentation: https://github.com/openai/openai-cookbook/blob/
main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
get_token_ids(text: str) → List[int]¶
Get the tokens present in the text with tiktoken package.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
allow_population_by_field_name = True¶ | [
34996,
9904,
25,
3788,
1129,
5316,
916,
38744,
2192,
38744,
2192,
23283,
564,
2239,
35927,
6018,
3902,
68120,
14,
4438,
2401,
9132,
29657,
2401,
932,
9379,
38,
2898,
31892,
24046,
1910,
65,
198,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
11460,
3118,
304,
279,
1495,
449,
87272,
5963,
6462,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
7331,
75672,
3795,
5121,
1292,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.promptlayer_openai.PromptLayerChatOpenAI.html |
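The predict / predict_messages pair documented above gives a string-in/string-out and a message-in/message-out view of the same model. The sketch below is illustrative only; it assumes the HumanMessage class from langchain.schema and the same environment-based credentials as before.

from langchain.chat_models import PromptLayerChatOpenAI
from langchain.schema import HumanMessage

chat = PromptLayerChatOpenAI()  # ASSUMPTION: API keys provided via environment variables

# String interface: returns a plain str.
answer_text = chat.predict("Name one use of the stop parameter.", stop=["\n\n"])

# Message interface: returns a BaseMessage (typically an AIMessage).
answer_msg = chat.predict_messages(
    [HumanMessage(content="Name one use of the stop parameter.")]
)

print(answer_text)
print(answer_msg.content)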
b18de779b8cd-0 | langchain.chat_models.base.SimpleChatModel¶
class langchain.chat_models.base.SimpleChatModel(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseChatModel
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult. | [
5317,
8995,
27215,
31892,
9105,
25236,
16047,
1747,
55609,
198,
1058,
8859,
8995,
27215,
31892,
9105,
25236,
16047,
1747,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
16047,
1747,
198,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
5317,
8995,
72134,
9105,
13316,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
12536,
58,
33758,
53094,
58,
5317,
8995,
72134,
9105,
13316,
7646,
3126,
1145,
8859,
8995,
72134,
9105,
13316,
7646,
2087,
5163,
284,
2290,
55609,
198,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
16727,
25,
1796,
58,
4066,
2097,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
7368,
659,
439,
264,
734,
627,
7847,
945,
13523,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
13
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html |
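SimpleChatModel is intended to be subclassed rather than used directly. The sketch below is an assumption-heavy illustration: it assumes the subclass only needs to supply a _call method returning a string and an _llm_type property, hook names that are not shown on this page, so verify them against the installed langchain version before relying on this.

# Illustrative subclass sketch. ASSUMPTION: SimpleChatModel dispatches to a
# `_call(...) -> str` hook and requires an `_llm_type` property.
from typing import Any, List, Optional

from langchain.chat_models.base import SimpleChatModel
from langchain.schema import BaseMessage


class EchoChatModel(SimpleChatModel):
    """Toy model that echoes the last message it receives."""

    @property
    def _llm_type(self) -> str:
        return "echo-chat"

    def _call(
        self,
        messages: List[BaseMessage],
        stop: Optional[List[str]] = None,
        run_manager: Any = None,
        **kwargs: Any,
    ) -> str:
        # Return a plain string; the base class wraps it into a chat message.
        return messages[-1].content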
b18de779b8cd-1 | Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶ | [
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
6797,
12162,
44095,
76,
7483,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html |
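generate accepts a List[List[BaseMessage]] and returns a single LLMResult covering every conversation in the batch. A hedged sketch follows, reusing the EchoChatModel toy subclass from the earlier sketch; it also assumes LLMResult exposes a generations list with one inner list per input conversation.

from langchain.schema import HumanMessage

# `EchoChatModel` is the toy subclass sketched after the SimpleChatModel row above.
chat = EchoChatModel()

batch = [
    [HumanMessage(content="Translate 'hello' to French.")],
    [HumanMessage(content="Translate 'hello' to German.")],
]

result = chat.generate(batch)  # returns an LLMResult

# ASSUMPTION: result.generations[i][0].text holds the first completion
# for the i-th conversation in the batch.
for generations in result.generations:
    print(generations[0].text)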
b18de779b8cd-2 | Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.base.SimpleChatModel.html |
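The lc_serializable, lc_secrets and to_json members listed above describe how a configured model can be serialized without leaking credentials. A small illustrative check, assuming chat is an already-constructed chat model instance (any of the wrappers documented on these pages):

# Sketch: inspect serialization metadata of a configured chat model.
# ASSUMPTION: `chat` is an already-constructed chat model instance.
if chat.lc_serializable:
    print(chat.to_json())   # SerializedConstructor (or SerializedNotImplemented)
    print(chat.lc_secrets)  # e.g. {"openai_api_key": "OPENAI_API_KEY"}
else:
    print(chat.to_json_not_implemented())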
17fc0767ec00-0 | langchain.chat_models.google_palm.ChatGooglePalm¶
class langchain.chat_models.google_palm.ChatGooglePalm(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: Any = None, model_name: str = 'models/chat-bison-001', google_api_key: Optional[str] = None, temperature: Optional[float] = None, top_p: Optional[float] = None, top_k: Optional[int] = None, n: int = 1)[source]¶
Bases: BaseChatModel, BaseModel
Wrapper around Google’s PaLM Chat API.
To use you must have the google.generativeai Python package installed and
either:
The GOOGLE_API_KEY environment variable set with your API key, or
Pass your API key using the google_api_key kwarg to the ChatGooglePalm
constructor.
Example
from langchain.chat_models import ChatGooglePalm
chat = ChatGooglePalm()
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param google_api_key: Optional[str] = None¶
param model_name: str = 'models/chat-bison-001'¶
Model name to use.
param n: int = 1¶
Number of chat completions to generate for each prompt. Note that the API may
not return the full n completions if duplicates are generated.
param tags: Optional[List[str]] = None¶ | [
5317,
8995,
27215,
31892,
5831,
623,
7828,
59944,
14783,
47,
7828,
55609,
198,
1058,
8859,
8995,
27215,
31892,
5831,
623,
7828,
59944,
14783,
47,
7828,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
6644,
72435,
1481,
3416,
12,
4119,
518,
11819,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
9499,
25,
12536,
96481,
60,
284,
2290,
11,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
11,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
11,
308,
25,
528,
284,
220,
16,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
16047,
1747,
11,
65705,
198,
11803,
2212,
5195,
753,
16056,
11237,
13149,
5446,
627,
1271,
1005,
499,
2011,
617,
279,
11819,
1326,
75989,
2192,
13325,
6462,
10487,
323,
198,
50998,
512,
791,
91316,
11669,
6738,
63,
4676,
63901,
1260,
743,
449,
701,
5446,
1401,
11,
477,
198,
12465,
701,
5446,
1401,
1701,
279,
11819,
11959,
3173,
30625,
867,
311,
279,
13149,
14783,
198,
22602,
627,
13617,
198,
1527,
8859,
8995,
27215,
31892,
1179,
13149,
14783,
47,
7828,
198,
9884,
284,
13149,
14783,
47,
7828,
746,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
11819,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
1646,
1292,
25,
610,
284,
364,
6644,
72435,
1481,
3416,
12,
4119,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
308,
25,
528,
284,
220,
16,
55609,
198,
2903,
315,
6369,
3543,
919,
311,
7068,
369,
1855,
10137,
13,
7181,
430,
279,
5446,
1253,
198,
1962,
471,
279,
2539,
308,
3543,
919,
422,
43428,
527,
8066,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html |
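Expanding slightly on the docstring example, the constructor parameters listed above can all be set explicitly. This is a hedged sketch: the parameter names come from this page, but the key value and prompt are placeholders, and the google.generativeai package is assumed to be installed.

from langchain.chat_models import ChatGooglePalm
from langchain.schema import HumanMessage

chat = ChatGooglePalm(
    google_api_key="YOUR_API_KEY",        # placeholder; or rely on the GOOGLE_API_KEY env var
    model_name="models/chat-bison-001",   # documented default
    temperature=0.2,                      # must lie in the closed interval [0.0, 1.0]
    top_p=0.95,
    top_k=40,
    n=1,                                  # chat completions generated per prompt
)

reply = chat([HumanMessage(content="Say hello in Spanish.")])
print(reply.content)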
17fc0767ec00-1 | param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: Optional[float] = None¶
Run inference with this temperature. Must be in the closed
interval [0.0, 1.0].
param top_k: Optional[int] = None¶
Decode using top-k sampling: consider the set of top_k most probable tokens.
Must be positive.
param top_p: Optional[float] = None¶
Decode using nucleus sampling: consider the smallest set of tokens whose
probability sum is at least top_p. Must be in the closed interval [0.0, 1.0].
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text. | [
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
6869,
45478,
449,
420,
9499,
13,
15832,
555,
304,
279,
8036,
198,
29004,
510,
15,
13,
15,
11,
220,
16,
13,
15,
27218,
913,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
33664,
1701,
1948,
12934,
25936,
25,
2980,
279,
743,
315,
1948,
4803,
1455,
35977,
11460,
627,
32876,
387,
6928,
627,
913,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
33664,
1701,
62607,
25936,
25,
2980,
279,
25655,
743,
315,
11460,
6832,
198,
88540,
2694,
374,
520,
3325,
1948,
623,
13,
15832,
387,
304,
279,
8036,
10074,
510,
15,
13,
15,
11,
220,
16,
13,
15,
27218,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
16727,
25,
1796,
58,
4066,
2097,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
7368,
659,
439,
264,
734,
627,
7847,
945,
13523,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
13
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html |
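The temperature, top_p and top_k descriptions above impose simple range constraints (closed interval [0.0, 1.0] for the first two, strictly positive for top_k). The helper below is an illustrative restatement of those documented constraints, not the library's own validator; the real checks live in the validate_environment validator of ChatGooglePalm.

# Illustrative re-statement of the documented parameter constraints.
from typing import Optional


def check_palm_params(
    temperature: Optional[float],
    top_p: Optional[float],
    top_k: Optional[int],
) -> None:
    if temperature is not None and not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be in the closed interval [0.0, 1.0]")
    if top_p is not None and not 0.0 <= top_p <= 1.0:
        raise ValueError("top_p must be in the closed interval [0.0, 1.0]")
    if top_k is not None and top_k <= 0:
        raise ValueError("top_k must be positive")


check_palm_params(temperature=0.2, top_p=0.95, top_k=40)  # passes silently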
17fc0767ec00-2 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶ | [
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
6797,
12162,
44095,
76,
7483,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html |
17fc0767ec00-3 | to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate api key, python package exists, temperature, top_p, and top_k.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
6464,
1401,
11,
10344,
6462,
6866,
11,
9499,
11,
1948,
623,
11,
323,
1948,
4803,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.google_palm.ChatGooglePalm.html |
ece72636962d-0 | langchain.chat_models.anthropic.ChatAnthropic¶
class langchain.chat_models.anthropic.ChatAnthropic(*, client: Any = None, model: str = 'claude-v1', max_tokens_to_sample: int = 256, temperature: Optional[float] = None, top_k: Optional[int] = None, top_p: Optional[float] = None, streaming: bool = False, default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None, anthropic_api_url: Optional[str] = None, anthropic_api_key: Optional[str] = None, HUMAN_PROMPT: Optional[str] = None, AI_PROMPT: Optional[str] = None, count_tokens: Optional[Callable[[str], int]] = None, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None)[source]¶
Bases: BaseChatModel, _AnthropicCommon
Wrapper around Anthropic’s large language model.
To use, you should have the anthropic python package installed, and the
environment variable ANTHROPIC_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
import anthropic
from langchain.chat_models import ChatAnthropic
model = ChatAnthropic(model="<model_name>", anthropic_api_key="my-api-key")
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param AI_PROMPT: Optional[str] = None¶
param HUMAN_PROMPT: Optional[str] = None¶
param anthropic_api_key: Optional[str] = None¶ | [
5317,
8995,
27215,
31892,
10985,
339,
45036,
59944,
62804,
45036,
55609,
198,
1058,
8859,
8995,
27215,
31892,
10985,
339,
45036,
59944,
62804,
45036,
4163,
11,
3016,
25,
5884,
284,
2290,
11,
1646,
25,
610,
284,
364,
54761,
799,
8437,
16,
518,
1973,
29938,
2401,
17949,
25,
528,
284,
220,
4146,
11,
9499,
25,
12536,
96481,
60,
284,
2290,
11,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
11,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
11,
17265,
25,
1845,
284,
3641,
11,
1670,
8052,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
11,
41416,
292,
11959,
2975,
25,
12536,
17752,
60,
284,
2290,
11,
41416,
292,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
11,
473,
68665,
72446,
2898,
25,
12536,
17752,
60,
284,
2290,
11,
15592,
72446,
2898,
25,
12536,
17752,
60,
284,
2290,
11,
1797,
29938,
25,
12536,
58,
41510,
15873,
496,
1145,
528,
5163,
284,
2290,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
16047,
1747,
11,
721,
62804,
45036,
11076,
198,
11803,
2212,
16989,
45036,
753,
3544,
4221,
1646,
627,
1271,
1005,
11,
499,
1288,
617,
279,
41416,
292,
10344,
6462,
10487,
11,
323,
279,
198,
24175,
3977,
2147,
3701,
19828,
1341,
11669,
6738,
743,
449,
701,
5446,
1401,
11,
477,
1522,
198,
275,
439,
264,
7086,
5852,
311,
279,
4797,
627,
13617,
198,
475,
41416,
292,
198,
1527,
8859,
8995,
60098,
1026,
1179,
16989,
45036,
198,
2590,
284,
13149,
62804,
45036,
7790,
35576,
2590,
1292,
21841,
41416,
292,
11959,
3173,
429,
2465,
24851,
16569,
1158,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
15592,
72446,
2898,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
473,
68665,
72446,
2898,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
41416,
292,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html |
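Beyond the docstring example, the documented parameters combine naturally with the __call__ interface. A hedged sketch, assuming HumanMessage from langchain.schema, the anthropic package, and a valid key in the ANTHROPIC_API_KEY environment variable:

from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

chat = ChatAnthropic(
    model="claude-v1",          # documented default
    max_tokens_to_sample=256,   # tokens predicted per generation
    temperature=0.3,            # non-negative; tunes randomness of generation
)

reply = chat([HumanMessage(content="Give me a two-line haiku about tokens.")])
print(reply.content)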
ece72636962d-1 | param anthropic_api_key: Optional[str] = None¶
param anthropic_api_url: Optional[str] = None¶
param cache: Optional[bool] = None¶
param callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None¶
param callbacks: Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]] = None¶
param count_tokens: Optional[Callable[[str], int]] = None¶
param default_request_timeout: Optional[Union[float, Tuple[float, float]]] = None¶
Timeout for requests to Anthropic Completion API. Default is 600 seconds.
param max_tokens_to_sample: int = 256¶
Denotes the number of tokens to predict per generation.
param model: str = 'claude-v1'¶
Model name to use.
param streaming: bool = False¶
Whether to stream the results.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: Optional[float] = None¶
A non-negative float that tunes the degree of randomness in generation.
param top_k: Optional[int] = None¶
Number of most likely tokens to consider at each step.
param top_p: Optional[float] = None¶
Total probability mass of tokens to consider at each step.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function. | [
913,
41416,
292,
11959,
3173,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
41416,
292,
11959,
2975,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
5317,
8995,
72134,
9105,
13316,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
12536,
58,
33758,
53094,
58,
5317,
8995,
72134,
9105,
13316,
7646,
3126,
1145,
8859,
8995,
72134,
9105,
13316,
7646,
2087,
5163,
284,
2290,
55609,
198,
913,
1797,
29938,
25,
12536,
58,
41510,
15873,
496,
1145,
528,
5163,
284,
2290,
55609,
198,
913,
1670,
8052,
21179,
25,
12536,
58,
33758,
96481,
11,
25645,
96481,
11,
2273,
5163,
60,
284,
2290,
55609,
198,
7791,
369,
7540,
311,
16989,
45036,
57350,
5446,
13,
8058,
374,
220,
5067,
6622,
627,
913,
1973,
29938,
2401,
17949,
25,
528,
284,
220,
4146,
55609,
198,
24539,
6429,
279,
1396,
315,
11460,
311,
7168,
824,
9659,
627,
913,
1646,
25,
610,
284,
364,
54761,
799,
8437,
16,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
17265,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
4365,
279,
3135,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
32,
2536,
62035,
2273,
430,
55090,
279,
8547,
315,
87790,
304,
9659,
627,
913,
1948,
4803,
25,
12536,
19155,
60,
284,
2290,
55609,
198,
2903,
315,
1455,
4461,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
1948,
623,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
7749,
19463,
3148,
315,
11460,
311,
2980,
520,
1855,
3094,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
16727,
25,
1796,
58,
4066,
2097,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
7368,
659,
439,
264,
734,
13
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html |
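The streaming and default_request_timeout parameters above control delivery and transport behaviour. The sketch below is illustrative; it assumes StreamingStdOutCallbackHandler is available at langchain.callbacks.streaming_stdout and is an acceptable callback handler for printing streamed tokens.

# Sketch: stream tokens to stdout and cap request time.
# ASSUMPTION: StreamingStdOutCallbackHandler exists at this import path.
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chat_models import ChatAnthropic
from langchain.schema import HumanMessage

chat = ChatAnthropic(
    streaming=True,                              # stream results token by token
    callbacks=[StreamingStdOutCallbackHandler()],
    default_request_timeout=120,                 # seconds; documented default is 600
    max_tokens_to_sample=256,
)

chat([HumanMessage(content="Stream a short limerick about retries.")])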
ece72636962d-2 | Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int[source]¶ | [
7368,
659,
439,
264,
734,
627,
7847,
945,
13523,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
6797,
12162,
44095,
76,
7483,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
76747,
60,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html |
ece72636962d-3 | get_num_tokens(text: str) → int[source]¶
Calculate number of tokens.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields¶
Validate that api key and python package exists in environment.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. [“langchain”, “llms”, “openai”]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {“openai_api_key”: “OPENAI_API_KEY”}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
76747,
60,
55609,
198,
48966,
1396,
315,
11460,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
6464,
1401,
323,
10344,
6462,
6866,
304,
4676,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.anthropic.ChatAnthropic.html |
445cabfa85f2-0 | langchain.chat_models.vertexai.ChatVertexAI¶
class langchain.chat_models.vertexai.ChatVertexAI(*, cache: Optional[bool] = None, verbose: bool = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, tags: Optional[List[str]] = None, client: _LanguageModel = None, model_name: str = 'chat-bison', temperature: float = 0.0, max_output_tokens: int = 128, top_p: float = 0.95, top_k: int = 40, stop: Optional[List[str]] = None, project: Optional[str] = None, location: str = 'us-central1', credentials: Any = None, request_parallelism: int = 5)[source]¶
Bases: _VertexAICommon, BaseChatModel
Wrapper around Vertex AI large language models.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param cache: Optional[bool] = None¶
param callback_manager: Optional[BaseCallbackManager] = None¶
param callbacks: Callbacks = None¶
param credentials: Any = None¶
The default custom credentials (google.auth.credentials.Credentials) to use when making API calls.
param location: str = 'us-central1'¶
The default location to use when making API calls.
param max_output_tokens: int = 128¶
Token limit determines the maximum amount of text output from one prompt.
param model_name: str = 'chat-bison'¶
Model name to use.
param project: Optional[str] = None¶
The default GCP project to use when making Vertex API calls.
param request_parallelism: int = 5¶ | [
5317,
8995,
27215,
31892,
48375,
2192,
59944,
8484,
15836,
55609,
198,
1058,
8859,
8995,
27215,
31892,
48375,
2192,
59944,
8484,
15836,
4163,
11,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3016,
25,
721,
14126,
1747,
284,
2290,
11,
1646,
1292,
25,
610,
284,
364,
9884,
1481,
3416,
518,
9499,
25,
2273,
284,
220,
15,
13,
15,
11,
1973,
7800,
29938,
25,
528,
284,
220,
4386,
11,
1948,
623,
25,
2273,
284,
220,
15,
13,
2721,
11,
1948,
4803,
25,
528,
284,
220,
1272,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
2447,
25,
12536,
17752,
60,
284,
2290,
11,
3813,
25,
610,
284,
364,
355,
85181,
16,
518,
16792,
25,
5884,
284,
2290,
11,
1715,
61725,
2191,
25,
528,
284,
220,
20,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
721,
8484,
15836,
11076,
11,
5464,
16047,
1747,
198,
11803,
2212,
24103,
15592,
3544,
4221,
4211,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
6636,
25,
12536,
58,
2707,
60,
284,
2290,
55609,
198,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
913,
16792,
25,
5884,
284,
2290,
55609,
198,
791,
1670,
2587,
16792,
320,
17943,
9144,
75854,
732,
16112,
8,
311,
1005,
198,
913,
3813,
25,
610,
284,
364,
355,
85181,
16,
6,
55609,
198,
791,
1670,
3813,
311,
1005,
994,
3339,
5446,
6880,
627,
913,
1973,
7800,
29938,
25,
528,
284,
220,
4386,
55609,
198,
3404,
4017,
27667,
279,
7340,
3392,
315,
1495,
2612,
505,
832,
10137,
627,
913,
1646,
1292,
25,
610,
284,
364,
9884,
1481,
3416,
6,
55609,
198,
1747,
836,
311,
1005,
627,
913,
2447,
25,
12536,
17752,
60,
284,
2290,
55609,
198,
791,
1670,
480,
7269,
2447,
311,
1005,
994,
3339,
24103,
5446,
6880,
627,
913,
1715,
61725,
2191,
25,
528,
284,
220,
20,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html |
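All of the constructor parameters listed above map directly onto Vertex AI request settings. A hedged construction sketch follows; the project ID is a placeholder and Google application-default credentials are assumed to be available in the environment.

from langchain.chat_models import ChatVertexAI
from langchain.schema import HumanMessage

chat = ChatVertexAI(
    project="my-gcp-project",     # placeholder GCP project ID
    location="us-central1",       # documented default region
    model_name="chat-bison",      # documented default model
    temperature=0.0,
    max_output_tokens=128,        # caps text produced per prompt
    top_p=0.95,
    top_k=40,
    request_parallelism=5,        # concurrent requests allowed
)

print(chat([HumanMessage(content="What is Vertex AI?")]).content)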
445cabfa85f2-1 | param request_parallelism: int = 5¶
The amount of parallelism allowed for requests issued to VertexAI models.
param stop: Optional[List[str]] = None¶
Optional list of stop words to use when generating.
param tags: Optional[List[str]] = None¶
Tags to add to the run trace.
param temperature: float = 0.0¶
Sampling temperature; it controls the degree of randomness in token selection.
param top_k: int = 40¶
How the model selects tokens for output: the next token is selected from among the top_k most probable tokens.
param top_p: float = 0.95¶
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.
param verbose: bool [Optional]¶
Whether to print out response text.
__call__(messages: List[BaseMessage], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → BaseMessage¶
Call self as a function.
async agenerate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
async agenerate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
classmethod all_required_field_names() → Set¶
async apredict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text. | [
913,
1715,
61725,
2191,
25,
528,
284,
220,
20,
55609,
198,
791,
3392,
315,
15638,
2191,
5535,
369,
7540,
11136,
311,
24103,
15836,
4211,
627,
913,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
15669,
1160,
315,
3009,
4339,
311,
1005,
994,
24038,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
16309,
311,
923,
311,
279,
1629,
11917,
627,
913,
9499,
25,
2273,
284,
220,
15,
13,
15,
55609,
198,
99722,
9499,
11,
433,
11835,
279,
8547,
315,
87790,
304,
4037,
6727,
627,
913,
1948,
4803,
25,
528,
284,
220,
1272,
55609,
198,
4438,
279,
1646,
50243,
11460,
369,
2612,
11,
279,
1828,
4037,
374,
4183,
505,
198,
913,
1948,
623,
25,
2273,
284,
220,
15,
13,
2721,
55609,
198,
30400,
527,
4183,
505,
1455,
35977,
311,
3325,
3156,
279,
2694,
315,
872,
198,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
311,
1194,
704,
2077,
1495,
627,
565,
6797,
3889,
16727,
25,
1796,
58,
4066,
2097,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
7368,
659,
439,
264,
734,
627,
7847,
945,
13523,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
7847,
945,
13523,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
27853,
682,
19265,
5121,
9366,
368,
11651,
2638,
55609,
198,
7847,
1469,
9037,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
13
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html |
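The async variants listed above (agenerate, apredict, apredict_messages) mirror their synchronous counterparts. A hedged sketch of batch use with asyncio follows, reusing the ChatVertexAI instance named chat from the earlier construction sketch and assuming LLMResult exposes a generations list.

import asyncio

from langchain.schema import HumanMessage


async def main() -> None:
    batch = [
        [HumanMessage(content="Define top_k in one sentence.")],
        [HumanMessage(content="Define top_p in one sentence.")],
    ]
    # agenerate returns an LLMResult, same shape as the synchronous generate().
    result = await chat.agenerate(batch)
    for generations in result.generations:
        print(generations[0].text)


asyncio.run(main())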
445cabfa85f2-2 | Predict text from text.
async apredict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
call_as_llm(message: str, stop: Optional[List[str]] = None, **kwargs: Any) → str¶
dict(**kwargs: Any) → Dict¶
Return a dictionary of the LLM.
generate(messages: List[List[BaseMessage]], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, **kwargs: Any) → LLMResult¶
Top Level call
generate_prompt(prompts: List[PromptValue], stop: Optional[List[str]] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → LLMResult¶
Take in a list of prompt values and return an LLMResult.
get_num_tokens(text: str) → int¶
Get the number of tokens present in the text.
get_num_tokens_from_messages(messages: List[BaseMessage]) → int¶
Get the number of tokens in the message.
get_token_ids(text: str) → List[int]¶
Get the token IDs present in the text.
predict(text: str, *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → str¶
Predict text from text.
predict_messages(messages: List[BaseMessage], *, stop: Optional[Sequence[str]] = None, **kwargs: Any) → BaseMessage¶
Predict message from messages.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶ | [
54644,
1495,
505,
1495,
627,
7847,
1469,
9037,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
6797,
12162,
44095,
76,
7483,
25,
610,
11,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
264,
11240,
315,
279,
445,
11237,
627,
19927,
56805,
25,
1796,
53094,
58,
4066,
2097,
21128,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
5479,
9580,
1650,
198,
19927,
62521,
84432,
13044,
25,
1796,
43447,
15091,
1150,
1145,
3009,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
445,
11237,
2122,
55609,
198,
18293,
304,
264,
1160,
315,
10137,
2819,
323,
471,
459,
445,
11237,
2122,
627,
456,
4369,
29938,
7383,
25,
610,
8,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
3118,
304,
279,
1495,
627,
456,
4369,
29938,
5791,
24321,
56805,
25,
1796,
58,
4066,
2097,
2526,
11651,
528,
55609,
198,
1991,
279,
1396,
315,
11460,
304,
279,
1984,
627,
456,
6594,
8237,
7383,
25,
610,
8,
11651,
1796,
19155,
60,
55609,
198,
1991,
279,
4037,
3118,
304,
279,
1495,
627,
35798,
7383,
25,
610,
11,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
54644,
1495,
505,
1495,
627,
35798,
24321,
56805,
25,
1796,
58,
4066,
2097,
1145,
12039,
3009,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
2097,
55609,
198,
54644,
1984,
505,
6743,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html |
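A short usage sketch for the ChatVertexAI methods listed above. It assumes the Vertex AI SDK (google-cloud-aiplatform) is installed and application-default credentials are configured; the messages and model settings are illustrative only.
.. code-block:: python

    from langchain.chat_models import ChatVertexAI
    from langchain.schema import HumanMessage, SystemMessage

    # Construct the chat model; other sampling parameters keep their defaults.
    chat = ChatVertexAI(temperature=0.0)

    messages = [
        SystemMessage(content="You translate English to French."),
        HumanMessage(content="I love programming."),
    ]
    # predict_messages: List[BaseMessage] in, BaseMessage out.
    reply = chat.predict_messages(messages)
    print(reply.content)

    # predict: plain text in, plain text out.
    print(chat.predict("Say hello in one short sentence."))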
445cabfa85f2-3 | to_json_not_implemented() → SerializedNotImplemented¶
validator validate_environment » all fields[source]¶
Validate that the python package exists in environment.
property is_codey_model: bool¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
task_executor: ClassVar[Optional[Executor]] = None¶
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
52874,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
279,
10344,
6462,
6866,
304,
4676,
627,
3784,
374,
4229,
88,
5156,
25,
1845,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
8366,
82307,
25,
3308,
4050,
58,
15669,
58,
26321,
5163,
284,
2290,
55609,
198,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/chat_models/langchain.chat_models.vertexai.ChatVertexAI.html |
853ccbb4ae77-0 | langchain.example_generator.generate_example¶
langchain.example_generator.generate_example(examples: List[dict], llm: BaseLanguageModel, prompt_template: PromptTemplate) → str[source]¶
Return another example given a list of examples for a prompt. | [
5317,
8995,
7880,
26898,
22793,
40404,
55609,
198,
5317,
8995,
7880,
26898,
22793,
40404,
5580,
4112,
25,
1796,
58,
8644,
1145,
9507,
76,
25,
5464,
14126,
1747,
11,
10137,
8864,
25,
60601,
7423,
8,
11651,
610,
76747,
60,
55609,
198,
5715,
2500,
3187,
2728,
264,
1160,
315,
10507,
369,
264,
10137,
13
] | https://langchain.readthedocs.io/en/latest/example_generator/langchain.example_generator.generate_example.html |
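A minimal sketch of calling generate_example; the word/antonym examples and the OpenAI LLM are placeholders for whatever examples and BaseLanguageModel you already have.
.. code-block:: python

    from langchain.example_generator import generate_example
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    examples = [
        {"word": "happy", "antonym": "sad"},
        {"word": "tall", "antonym": "short"},
    ]
    prompt_template = PromptTemplate(
        input_variables=["word", "antonym"],
        template="Word: {word}\nAntonym: {antonym}",
    )

    # Returns one more example in the same format, written by the LLM.
    new_example = generate_example(examples, OpenAI(temperature=0.7), prompt_template)
    print(new_example)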
f614788bd594-0 | langchain.sql_database.truncate_word¶
langchain.sql_database.truncate_word(content: Any, *, length: int, suffix: str = '...') → str[source]¶
Truncate a string to a certain number of words, based on the max string
length. | [
5317,
8995,
10251,
28441,
5543,
27998,
13843,
55609,
198,
5317,
8995,
10251,
28441,
5543,
27998,
13843,
15413,
25,
5884,
11,
12039,
3160,
25,
528,
11,
21166,
25,
610,
284,
364,
1131,
873,
11651,
610,
76747,
60,
55609,
198,
1305,
27998,
264,
925,
311,
264,
3738,
1396,
315,
4339,
11,
3196,
389,
279,
1973,
925,
198,
4222,
13
] | https://langchain.readthedocs.io/en/latest/sql_database/langchain.sql_database.truncate_word.html |
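A quick illustration of truncate_word; the sample sentence is arbitrary. The cut happens on a word boundary so that the result, including the suffix, stays within the given length.
.. code-block:: python

    from langchain.sql_database import truncate_word

    text = "one two three four five six seven eight nine ten"

    # Keep at most 20 characters, cutting at a word boundary and appending '...'.
    print(truncate_word(text, length=20))
    # A custom suffix works the same way.
    print(truncate_word(text, length=20, suffix=" [cut]"))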
38f37c7b94c6-0 | langchain.math_utils.cosine_similarity_top_k¶
langchain.math_utils.cosine_similarity_top_k(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray], top_k: Optional[int] = 5, score_threshold: Optional[float] = None) → Tuple[List[Tuple[int, int]], List[float]][source]¶
Row-wise cosine similarity with optional top-k and score threshold filtering.
Parameters
X – Matrix.
Y – Matrix, same width as X.
top_k – Max number of results to return.
score_threshold – Minimum cosine similarity of results.
Returns
Tuple of two lists. First contains two-tuples of indices (X_idx, Y_idx), second contains corresponding cosine similarities. | [
5317,
8995,
22346,
17758,
21832,
483,
77336,
10643,
4803,
55609,
198,
5317,
8995,
22346,
17758,
21832,
483,
77336,
10643,
4803,
7799,
25,
9323,
53094,
53094,
96481,
21128,
1796,
58,
303,
1686,
1145,
67983,
1145,
816,
25,
9323,
53094,
53094,
96481,
21128,
1796,
58,
303,
1686,
1145,
67983,
1145,
1948,
4803,
25,
12536,
19155,
60,
284,
220,
20,
11,
5573,
22616,
25,
12536,
96481,
60,
284,
2290,
8,
11651,
25645,
53094,
20961,
6189,
19155,
11,
528,
21128,
1796,
96481,
28819,
2484,
60,
55609,
198,
3179,
45539,
76359,
38723,
449,
10309,
1948,
12934,
323,
5573,
12447,
30770,
627,
9905,
198,
55,
1389,
11892,
627,
56,
1389,
11892,
11,
1890,
2430,
439,
1630,
627,
3565,
4803,
1389,
7639,
1396,
315,
3135,
311,
471,
627,
12618,
22616,
1389,
32025,
76359,
38723,
315,
3135,
627,
16851,
198,
29781,
315,
1403,
11725,
13,
5629,
5727,
1403,
2442,
29423,
315,
15285,
320,
55,
7406,
11,
816,
7406,
705,
5686,
5727,
12435,
76359,
43874,
13
] | https://langchain.readthedocs.io/en/latest/math_utils/langchain.math_utils.cosine_similarity_top_k.html |
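A small sketch with toy NumPy matrices; the values are arbitrary. X and Y must have the same width, and the returned pairs are ordered from most to least similar.
.. code-block:: python

    import numpy as np

    from langchain.math_utils import cosine_similarity_top_k

    X = np.array([[1.0, 0.0], [0.0, 1.0]])
    Y = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, -1.0]])

    idxs, scores = cosine_similarity_top_k(X, Y, top_k=2, score_threshold=0.1)
    for (x_idx, y_idx), score in zip(idxs, scores):
        print(f"X[{x_idx}] ~ Y[{y_idx}]: {score:.3f}")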
bb9fb4931c7c-0 | langchain.math_utils.cosine_similarity¶
langchain.math_utils.cosine_similarity(X: Union[List[List[float]], List[ndarray], ndarray], Y: Union[List[List[float]], List[ndarray], ndarray]) → ndarray[source]¶
Row-wise cosine similarity between two equal-width matrices. | [
5317,
8995,
22346,
17758,
21832,
483,
77336,
55609,
198,
5317,
8995,
22346,
17758,
21832,
483,
77336,
7799,
25,
9323,
53094,
53094,
96481,
21128,
1796,
58,
303,
1686,
1145,
67983,
1145,
816,
25,
9323,
53094,
53094,
96481,
21128,
1796,
58,
303,
1686,
1145,
67983,
2526,
11651,
67983,
76747,
60,
55609,
198,
3179,
45539,
76359,
38723,
1990,
1403,
6273,
9531,
36295,
13
] | https://langchain.readthedocs.io/en/latest/math_utils/langchain.math_utils.cosine_similarity.html |
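For reference, entry (i, j) of the returned matrix is the cosine of the angle between row x_i of X and row y_j of Y:
.. math::

    \operatorname{sim}(x_i, y_j) = \frac{x_i \cdot y_j}{\lVert x_i \rVert \, \lVert y_j \rVert}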
476efbd68ed5-0 | langchain.agents.agent_toolkits.openapi.spec.dereference_refs¶
langchain.agents.agent_toolkits.openapi.spec.dereference_refs(spec_obj: dict, full_spec: dict) → Union[dict, list][source]¶
Try to substitute $refs.
The goal is to get the complete docs for each endpoint in context for now.
In the few OpenAPI specs I studied, $refs referenced models
(or in OpenAPI terms, components) and could be nested. This code most
likely misses lots of cases. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
29426,
962,
486,
2251,
61738,
55609,
198,
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
29426,
962,
486,
2251,
61738,
39309,
7478,
25,
6587,
11,
2539,
13908,
25,
6587,
8,
11651,
9323,
58,
8644,
11,
1160,
1483,
2484,
60,
55609,
198,
22170,
311,
28779,
400,
16541,
627,
791,
5915,
374,
311,
636,
279,
4686,
27437,
369,
1855,
15233,
304,
2317,
369,
1457,
627,
644,
279,
2478,
5377,
7227,
33347,
358,
20041,
11,
400,
16541,
25819,
4211,
198,
81020,
304,
5377,
7227,
3878,
11,
6956,
8,
323,
1436,
387,
24997,
13,
1115,
2082,
1455,
198,
14617,
43394,
10283,
315,
5157,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.dereference_refs.html |
7606fe751adb-0 | langchain.agents.agent.LLMSingleActionAgent¶
class langchain.agents.agent.LLMSingleActionAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser, stop: List[str])[source]¶
Bases: BaseSingleActionAgent
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Required]¶
param stop: List[str] [Required]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decided what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
dict(**kwargs: Any) → Dict[source]¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → BaseSingleActionAgent¶
get_allowed_tools() → Optional[List[str]]¶
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decided what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations | [
5317,
8995,
29192,
812,
45249,
1236,
43,
4931,
2222,
2573,
17230,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
1236,
43,
4931,
2222,
2573,
17230,
4163,
11,
9507,
76,
31683,
25,
445,
11237,
19368,
11,
2612,
19024,
25,
21372,
5207,
6707,
11,
3009,
25,
1796,
17752,
41105,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
11126,
2573,
17230,
198,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
9507,
76,
31683,
25,
8859,
8995,
5442,
1771,
60098,
76,
1236,
11237,
19368,
510,
8327,
60,
55609,
198,
913,
2612,
19024,
25,
8859,
8995,
29192,
812,
45249,
89969,
5207,
6707,
510,
8327,
60,
55609,
198,
913,
3009,
25,
1796,
17752,
60,
510,
8327,
60,
55609,
198,
7847,
264,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
1483,
2484,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
76747,
60,
55609,
198,
5715,
11240,
13340,
315,
8479,
627,
27853,
505,
44095,
76,
8543,
40823,
36621,
76,
25,
5464,
14126,
1747,
11,
7526,
25,
29971,
58,
4066,
7896,
1145,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
11126,
2573,
17230,
55609,
198,
456,
43255,
40823,
368,
11651,
12536,
53094,
17752,
5163,
55609,
198,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
1483,
2484,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.LLMSingleActionAgent.html |
7606fe751adb-1 | Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict[source]¶
property return_values: List[str]¶
Return values of the agent. | [
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
693,
1284,
18033,
9852,
7,
22928,
1284,
7153,
9209,
25,
610,
11,
29539,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
21372,
26748,
55609,
198,
5715,
2077,
994,
8479,
706,
1027,
10717,
4245,
311,
1973,
26771,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
8479,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
8479,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
2,
1442,
3318,
449,
8479,
32658,
198,
8252,
45249,
5799,
4971,
2703,
45221,
2398,
14,
8252,
34506,
863,
340,
14506,
14334,
61082,
37335,
368,
11651,
30226,
76747,
60,
55609,
198,
3784,
471,
9324,
25,
1796,
17752,
60,
55609,
198,
5715,
2819,
315,
279,
8479,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.LLMSingleActionAgent.html |
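A hedged construction sketch for LLMSingleActionAgent. The prompt, the toy output parser and the stop sequence below are illustrative only; a real agent needs a prompt that renders the scratchpad and a parser that matches the prompt's output format.
.. code-block:: python

    from typing import Union

    from langchain.agents import AgentExecutor, AgentOutputParser, LLMSingleActionAgent
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate
    from langchain.schema import AgentAction, AgentFinish


    class FinalAnswerParser(AgentOutputParser):
        """Toy parser: treat whatever the model says as the final answer."""

        def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
            return AgentFinish(return_values={"output": text.strip()}, log=text)


    llm_chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=PromptTemplate(input_variables=["input"], template="Answer briefly: {input}"),
    )
    agent = LLMSingleActionAgent(
        llm_chain=llm_chain,
        output_parser=FinalAnswerParser(),
        stop=["\nObservation:"],
    )
    executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=[], verbose=True)
    print(executor.run("What is the capital of France?"))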
06bb02124275-0 | langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent¶
langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent(llm: BaseLanguageModel, toolkit: VectorStoreToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶
Construct a vectorstore agent from an LLM and tools. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
48203,
4412,
9105,
2581,
12526,
4412,
26814,
55609,
198,
5317,
8995,
29192,
812,
45249,
23627,
90517,
48203,
4412,
9105,
2581,
12526,
4412,
26814,
36621,
76,
25,
5464,
14126,
1747,
11,
66994,
25,
4290,
6221,
63044,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9436,
25,
610,
284,
364,
2675,
527,
459,
8479,
6319,
311,
4320,
4860,
922,
7437,
315,
9477,
7255,
77,
2675,
617,
2680,
311,
7526,
369,
45830,
449,
279,
9477,
11,
323,
279,
11374,
311,
279,
7526,
527,
4860,
7255,
77,
32148,
11,
499,
690,
387,
4691,
311,
3493,
8336,
369,
701,
4860,
11,
304,
902,
1162,
499,
1288,
1005,
279,
8475,
5507,
311,
656,
779,
7255,
77,
2746,
279,
3488,
1587,
539,
2873,
9959,
311,
904,
315,
279,
7526,
3984,
11,
1120,
471,
330,
40,
1541,
10379,
83,
1440,
1,
439,
279,
4320,
7255,
77,
518,
14008,
25,
1845,
284,
3641,
11,
8479,
82307,
37335,
25,
12536,
58,
13755,
17752,
11,
5884,
5163,
284,
2290,
11,
3146,
9872,
25,
30226,
17752,
11,
5884,
2526,
11651,
21372,
26321,
76747,
60,
55609,
198,
29568,
264,
4724,
4412,
8479,
505,
459,
445,
11237,
323,
7526,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.vectorstore.base.create_vectorstore_agent.html |
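A hedged sketch of wiring up create_vectorstore_agent. The FAISS store, the OpenAI models and the single document are stand-ins; any vector store wrapped in a VectorStoreInfo works the same way.
.. code-block:: python

    from langchain.agents.agent_toolkits import (
        VectorStoreInfo,
        VectorStoreToolkit,
        create_vectorstore_agent,
    )
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.llms import OpenAI
    from langchain.vectorstores import FAISS

    store = FAISS.from_texts(
        ["LangChain lets you compose LLM-powered applications."],
        OpenAIEmbeddings(),
    )
    vectorstore_info = VectorStoreInfo(
        name="project_notes",
        description="notes about the project",
        vectorstore=store,
    )
    toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)

    agent_executor = create_vectorstore_agent(
        llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True
    )
    print(agent_executor.run("What does LangChain let you do?"))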
22b93a983fec-0 | langchain.agents.react.base.ReActTextWorldAgent¶
class langchain.agents.react.base.ReActTextWorldAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: ReActDocstoreAgent
Agent for the ReAct TextWorld chain.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decided what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool]) → BasePromptTemplate[source]¶
Return default prompt.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, **kwargs: Any) → Agent¶
Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶ | [
5317,
8995,
29192,
812,
55782,
9105,
2887,
2471,
1199,
10343,
17230,
55609,
198,
1058,
8859,
8995,
29192,
812,
55782,
9105,
2887,
2471,
1199,
10343,
17230,
4163,
11,
9507,
76,
31683,
25,
445,
11237,
19368,
11,
2612,
19024,
25,
21372,
5207,
6707,
284,
2290,
11,
5535,
40823,
25,
12536,
53094,
17752,
5163,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
1050,
2471,
9743,
4412,
17230,
198,
17230,
369,
279,
1050,
2471,
2991,
10343,
8957,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
40823,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
913,
9507,
76,
31683,
25,
445,
11237,
19368,
510,
8327,
60,
55609,
198,
913,
2612,
19024,
25,
8859,
8995,
29192,
812,
45249,
89969,
5207,
6707,
510,
15669,
60,
55609,
198,
7847,
264,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
27853,
1893,
62521,
12464,
3145,
25,
29971,
58,
4066,
7896,
2526,
11651,
5464,
55715,
7423,
76747,
60,
55609,
198,
5715,
1670,
10137,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
8479,
627,
27853,
505,
44095,
76,
8543,
40823,
36621,
76,
25,
5464,
14126,
1747,
11,
7526,
25,
29971,
58,
4066,
7896,
1145,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
2612,
19024,
25,
12536,
58,
17230,
5207,
6707,
60,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
21372,
55609,
198,
29568,
459,
8479,
505,
459,
445,
11237,
323,
7526,
627,
456,
43255,
40823,
368,
11651,
12536,
53094,
17752,
5163,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.react.base.ReActTextWorldAgent.html |
22b93a983fec-1 | get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decided what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the LLM call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent. | [
456,
43255,
40823,
368,
11651,
12536,
53094,
17752,
5163,
55609,
198,
456,
16776,
29657,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
4110,
279,
2539,
11374,
369,
279,
445,
11237,
19368,
505,
29539,
7504,
627,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
693,
1284,
18033,
9852,
7,
22928,
1284,
7153,
9209,
25,
610,
11,
29539,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
21372,
26748,
55609,
198,
5715,
2077,
994,
8479,
706,
1027,
10717,
4245,
311,
1973,
26771,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
8479,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
8479,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
2,
1442,
3318,
449,
8479,
32658,
198,
8252,
45249,
5799,
4971,
2703,
45221,
2398,
14,
8252,
34506,
863,
340,
14506,
14334,
61082,
37335,
368,
11651,
30226,
55609,
198,
16503,
9788,
62521,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
10137,
9248,
3645,
627,
3784,
9507,
76,
14301,
25,
610,
55609,
198,
14672,
311,
8911,
279,
445,
11237,
1650,
449,
627,
3784,
22695,
14301,
25,
610,
55609,
198,
14672,
311,
8911,
279,
22695,
449,
627,
3784,
471,
9324,
25,
1796,
17752,
60,
55609,
198,
5715,
2819,
315,
279,
8479,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.react.base.ReActTextWorldAgent.html |
4f0eb99b7edd-0 | langchain.agents.initialize.initialize_agent¶
langchain.agents.initialize.initialize_agent(tools: Sequence[BaseTool], llm: BaseLanguageModel, agent: Optional[AgentType] = None, callback_manager: Optional[BaseCallbackManager] = None, agent_path: Optional[str] = None, agent_kwargs: Optional[dict] = None, *, tags: Optional[Sequence[str]] = None, **kwargs: Any) → AgentExecutor[source]¶
Load an agent executor given tools and LLM.
Parameters
tools – List of tools this agent has access to.
llm – Language model to use as the agent.
agent – Agent type to use. If None and agent_path is also None, will default to
AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager – CallbackManager to use. Global callback manager is used if
not provided. Defaults to None.
agent_path – Path to serialized agent to use.
agent_kwargs – Additional key word arguments to pass to the underlying agent
tags – Tags to apply to the traced runs.
**kwargs – Additional key word arguments passed to the agent executor
Returns
An agent executor | [
5317,
8995,
29192,
812,
27060,
27060,
26814,
55609,
198,
5317,
8995,
29192,
812,
27060,
27060,
26814,
12464,
3145,
25,
29971,
58,
4066,
7896,
1145,
9507,
76,
25,
5464,
14126,
1747,
11,
8479,
25,
12536,
58,
17230,
941,
60,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
8479,
2703,
25,
12536,
17752,
60,
284,
2290,
11,
8479,
37335,
25,
12536,
58,
8644,
60,
284,
2290,
11,
12039,
9681,
25,
12536,
58,
14405,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
21372,
26321,
76747,
60,
55609,
198,
6003,
459,
8479,
32658,
2728,
7526,
323,
445,
11237,
627,
9905,
198,
16297,
1389,
1796,
315,
7526,
420,
8479,
706,
2680,
311,
627,
657,
76,
1389,
11688,
1646,
311,
1005,
439,
279,
8479,
627,
8252,
1389,
21372,
955,
311,
1005,
13,
1442,
2290,
323,
8479,
2703,
374,
1101,
2290,
11,
690,
1670,
311,
198,
17230,
941,
70948,
6977,
1831,
2241,
6966,
39268,
627,
13802,
12418,
1389,
23499,
2087,
311,
1005,
13,
8121,
4927,
6783,
374,
1511,
422,
198,
1962,
3984,
13,
37090,
311,
2290,
627,
8252,
2703,
1389,
8092,
311,
34016,
8479,
311,
1005,
627,
8252,
37335,
1389,
24086,
1401,
3492,
6105,
311,
1522,
311,
279,
16940,
8479,
198,
14412,
1389,
28783,
311,
3881,
311,
279,
51400,
8640,
627,
334,
9872,
1389,
24086,
1401,
3492,
6105,
5946,
311,
279,
8479,
32658,
198,
16851,
198,
2127,
8479,
32658
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.initialize.initialize_agent.html |
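A typical call looks like the sketch below; the llm-math tool and the OpenAI model are just convenient examples.
.. code-block:: python

    from langchain.agents import AgentType, initialize_agent, load_tools
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    tools = load_tools(["llm-math"], llm=llm)

    agent_executor = initialize_agent(
        tools,
        llm,
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    print(agent_executor.run("What is 7 raised to the 0.43 power?"))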
15dfc113cb19-0 | langchain.agents.agent_toolkits.sql.base.create_sql_agent¶ | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
10251,
9105,
2581,
18554,
26814,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html |
15dfc113cb19-1 | langchain.agents.agent_toolkits.sql.base.create_sql_agent(llm: BaseLanguageModel, toolkit: SQLDatabaseToolkit, agent_type: AgentType = AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix: Optional[str] = None, format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
10251,
9105,
2581,
18554,
26814,
36621,
76,
25,
5464,
14126,
1747,
11,
66994,
25,
8029,
6116,
63044,
11,
8479,
1857,
25,
21372,
941,
284,
21372,
941,
70948,
6977,
1831,
2241,
6966,
39268,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9436,
25,
610,
284,
364,
2675,
527,
459,
8479,
6319,
311,
16681,
449,
264,
8029,
4729,
7255,
77,
22818,
459,
1988,
3488,
11,
1893,
264,
84287,
533,
2740,
4495,
314,
32106,
772,
92,
3319,
311,
1629,
11,
1243,
1427,
520,
279,
3135,
315,
279,
3319,
323,
471,
279,
4320,
7255,
77,
36687,
279,
1217,
30202,
264,
3230,
1396,
315,
10507,
814,
6562,
311,
6994,
11,
2744,
4017,
701,
3319,
311,
520,
1455,
314,
3565,
4803,
92,
3135,
7255,
77,
2675,
649,
2015,
279,
3135,
555,
264,
9959,
3330,
311,
471,
279,
1455,
7185,
10507,
304,
279,
4729,
7255,
77,
27247,
3319,
369,
682,
279,
8310,
505,
264,
3230,
2007,
11,
1193,
2610,
369,
279,
9959,
8310,
2728,
279,
3488,
7255,
77,
2675,
617,
2680,
311,
7526,
369,
45830,
449,
279,
4729,
7255,
77,
7456,
1005,
279,
3770,
7526,
13,
8442,
1005,
279,
2038,
6052,
555,
279,
3770,
7526,
311,
9429,
701,
1620,
4320,
7255,
77,
2675,
28832,
2033,
1817,
701,
3319,
1603,
31320,
433,
13,
1442,
499,
636,
459,
1493,
1418,
31320,
264,
3319,
11,
18622,
279,
3319,
323,
1456,
1578,
7255,
77,
1734,
5989,
4276,
1304,
904,
423,
2735,
12518,
320,
12987,
11,
23743,
11,
17640,
11,
58042,
5099,
6266,
311,
279,
4729,
7255,
77,
1734,
2746,
279,
3488,
1587,
539,
2873,
5552,
311,
279,
4729,
11,
1120,
471,
330,
40,
1541,
10379,
83,
1440,
1,
439,
279,
4320,
7255,
77,
518,
21166,
25,
12536,
17752,
60,
284,
2290,
11,
3645,
83527,
25,
610,
284,
364,
10464,
279,
2768,
3645,
7338,
77,
1734,
14924,
25,
279,
1988,
3488,
499,
2011,
4320,
1734,
85269,
25,
499,
1288,
2744,
1781,
922,
1148,
311,
656,
1734,
2573,
25,
279,
1957,
311,
1935,
11,
1288,
387,
832,
315,
18973,
14506,
9366,
92,
18444,
77,
2573,
5688,
25,
279,
1988,
311,
279,
1957,
1734,
38863,
367,
25,
279,
1121,
315,
279,
1957,
1734,
1131,
320,
576,
36287,
14,
2573,
14,
2573,
5688,
17991,
4945,
367,
649,
13454,
452,
3115,
10929,
77,
85269,
25,
358,
1457,
1440,
279,
1620,
4320,
1734,
19918,
22559,
25,
279,
1620,
4320,
311,
279,
4113,
1988
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html |
15dfc113cb19-2 | I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, top_k: int = 10, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶ | [
40,
1457,
1440,
279,
1620,
4320,
1734,
19918,
22559,
25,
279,
1620,
4320,
311,
279,
4113,
1988,
3488,
518,
1988,
29282,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
1948,
4803,
25,
528,
284,
220,
605,
11,
1973,
56707,
25,
12536,
19155,
60,
284,
220,
868,
11,
1973,
62048,
3084,
25,
12536,
96481,
60,
284,
2290,
11,
4216,
1284,
7153,
9209,
25,
610,
284,
364,
9009,
518,
14008,
25,
1845,
284,
3641,
11,
8479,
82307,
37335,
25,
12536,
58,
13755,
17752,
11,
5884,
5163,
284,
2290,
11,
3146,
9872,
25,
30226,
17752,
11,
5884,
2526,
11651,
21372,
26321,
76747,
60,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html |
15dfc113cb19-3 | Construct a sql agent from an LLM and tools. | [
29568,
264,
5822,
8479,
505,
459,
445,
11237,
323,
7526,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.sql.base.create_sql_agent.html |
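A hedged usage sketch for create_sql_agent; the SQLite URI is a placeholder for any SQLAlchemy-compatible database.
.. code-block:: python

    from langchain.agents import create_sql_agent
    from langchain.agents.agent_toolkits import SQLDatabaseToolkit
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    db = SQLDatabase.from_uri("sqlite:///example.db")
    llm = OpenAI(temperature=0)

    toolkit = SQLDatabaseToolkit(db=db, llm=llm)
    agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
    print(agent_executor.run("How many tables does the database contain?"))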
35cd6906baa8-0 | langchain.agents.utils.validate_tools_single_input¶
langchain.agents.utils.validate_tools_single_input(class_name: str, tools: Sequence[BaseTool]) → None[source]¶
Validate tools for single input. | [
5317,
8995,
29192,
812,
8576,
20090,
40823,
20052,
6022,
55609,
198,
5317,
8995,
29192,
812,
8576,
20090,
40823,
20052,
6022,
22723,
1292,
25,
610,
11,
7526,
25,
29971,
58,
4066,
7896,
2526,
11651,
2290,
76747,
60,
55609,
198,
18409,
7526,
369,
3254,
1988,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.utils.validate_tools_single_input.html |
a026082eea4d-0 | langchain.agents.load_tools.load_huggingface_tool¶
langchain.agents.load_tools.load_huggingface_tool(task_or_repo_id: str, model_repo_id: Optional[str] = None, token: Optional[str] = None, remote: bool = False, **kwargs: Any) → BaseTool[source]¶
Loads a tool from the HuggingFace Hub.
Parameters
task_or_repo_id – Task or model repo id.
model_repo_id – Optional model repo id.
token – Optional token.
remote – Optional remote. Defaults to False.
**kwargs –
Returns
A tool. | [
5317,
8995,
29192,
812,
5214,
40823,
5214,
1552,
36368,
1594,
23627,
55609,
198,
5317,
8995,
29192,
812,
5214,
40823,
5214,
1552,
36368,
1594,
23627,
17941,
8908,
38884,
851,
25,
610,
11,
1646,
38884,
851,
25,
12536,
17752,
60,
284,
2290,
11,
4037,
25,
12536,
17752,
60,
284,
2290,
11,
8870,
25,
1845,
284,
3641,
11,
3146,
9872,
25,
5884,
8,
11651,
5464,
7896,
76747,
60,
55609,
198,
79617,
264,
5507,
505,
279,
473,
36368,
16680,
27636,
627,
9905,
198,
8366,
8908,
38884,
851,
1389,
5546,
477,
1646,
16246,
887,
627,
2590,
38884,
851,
1389,
12536,
1646,
16246,
887,
627,
5963,
1389,
12536,
4037,
627,
18643,
1389,
12536,
8870,
13,
37090,
311,
3641,
627,
334,
9872,
1389,
720,
16851,
198,
32,
5507,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.load_tools.load_huggingface_tool.html |
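A short sketch; the repo id below comes from the Hugging Face Hub examples and may change, and both transformers and huggingface_hub need to be installed.
.. code-block:: python

    from langchain.agents.load_tools import load_huggingface_tool

    # Load a Hub-defined tool by its repo id (illustrative).
    tool = load_huggingface_tool("lysandre/hf-model-downloads")
    print(tool.name, "-", tool.description)
    print(tool.run("text-classification"))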
ca9032fd78a7-0 | langchain.agents.agent_toolkits.json.base.create_json_agent¶ | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
4421,
9105,
2581,
9643,
26814,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html |
ca9032fd78a7-1 | langchain.agents.agent_toolkits.json.base.create_json_agent(llm: BaseLanguageModel, toolkit: JsonToolkit, callback_manager: Optional[BaseCallbackManager] = None, prefix: str = 'You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
4421,
9105,
2581,
9643,
26814,
36621,
76,
25,
5464,
14126,
1747,
11,
66994,
25,
8472,
63044,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
9436,
25,
610,
284,
364,
2675,
527,
459,
8479,
6319,
311,
16681,
449,
4823,
7255,
77,
7927,
5915,
374,
311,
471,
264,
1620,
4320,
555,
45830,
449,
279,
4823,
7255,
77,
2675,
617,
2680,
311,
279,
2768,
7526,
902,
1520,
499,
4048,
810,
922,
279,
4823,
499,
527,
45830,
449,
7255,
77,
7456,
1005,
279,
3770,
7526,
13,
8442,
1005,
279,
2038,
6052,
555,
279,
3770,
7526,
311,
9429,
701,
1620,
4320,
7255,
77,
5519,
539,
1304,
709,
904,
2038,
430,
374,
539,
13282,
304,
279,
4823,
7255,
77,
7927,
1988,
311,
279,
7526,
1288,
387,
304,
279,
1376,
315,
1595,
695,
1204,
798,
18613,
15,
60,
63,
1405,
1595,
695,
63,
374,
279,
4823,
24295,
499,
527,
45830,
449,
11,
323,
279,
20047,
1511,
374,
13325,
13,
1144,
77,
2675,
1288,
1193,
1005,
7039,
430,
499,
1440,
369,
264,
2144,
3073,
13,
1472,
2011,
9788,
430,
264,
1401,
6866,
555,
9298,
433,
8767,
994,
8260,
1595,
2285,
13908,
2062,
12919,
29687,
1144,
77,
2746,
499,
617,
539,
3970,
264,
1401,
304,
832,
315,
1884,
14847,
11,
499,
4250,
1005,
433,
7255,
77,
2675,
1288,
1193,
923,
832,
1401,
520,
264,
892,
311,
279,
1853,
13,
1472,
4250,
923,
5361,
7039,
520,
3131,
7255,
77,
2746,
499,
13123,
264,
330,
1622,
1480,
498,
733,
1203,
311,
279,
3766,
1401,
11,
1427,
520,
279,
2561,
7039,
11,
323,
1456,
1578,
7255,
77,
1734,
2746,
279,
3488,
1587,
539,
2873,
311,
387,
5552,
311,
279,
4823,
11,
1120,
471,
330,
40,
1541,
10379,
83,
1440,
1,
439,
279,
4320,
7255,
77,
38195,
3240,
701,
16628,
449,
279,
1595,
2285,
13908,
2062,
12919,
63,
5507,
449,
1988,
330,
695,
1,
311,
1518,
1148,
7039,
3073,
304,
279,
4823,
7255,
77,
1734,
9290,
430,
7170,
279,
907,
520,
264,
2728,
1853,
374,
3544,
13,
763,
420,
1162,
11,
499,
690,
636,
459,
1493,
330,
1150,
374,
264,
3544,
11240,
11,
1288,
13488,
1202,
7039,
6089,
3343,
59,
77,
644,
420,
1162,
11,
499,
1288,
68514,
1833,
709,
555,
1701,
279,
1595,
2285,
13908,
2062,
12919,
63,
5507,
311,
1518,
1148,
7039,
3073,
520,
430,
1853,
7255,
77,
5519,
539,
5042,
8464,
279,
1217,
311,
279,
4823,
477
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html |
ca9032fd78a7-2 | to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix: str = 'Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions: str = 'Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, verbose: bool = False, agent_executor_kwargs: Optional[Dict[str, Any]] = None, **kwargs: Dict[str, Any]) → AgentExecutor[source]¶ | [
998,
1518,
1148,
7039,
3073,
520,
430,
1853,
7255,
77,
5519,
539,
5042,
8464,
279,
1217,
311,
279,
4823,
477,
264,
3857,
315,
279,
4823,
11,
439,
420,
374,
539,
264,
2764,
4320,
13,
13969,
42200,
3156,
499,
1505,
279,
4320,
323,
21650,
471,
433,
7255,
77,
518,
21166,
25,
610,
284,
364,
11382,
9135,
59,
77,
1734,
14924,
25,
314,
1379,
11281,
77,
85269,
25,
358,
1288,
1427,
520,
279,
7039,
430,
3073,
304,
828,
311,
1518,
1148,
358,
617,
2680,
311,
1734,
90,
8252,
60828,
759,
13545,
17266,
3645,
83527,
25,
610,
284,
364,
10464,
279,
2768,
3645,
7338,
77,
1734,
14924,
25,
279,
1988,
3488,
499,
2011,
4320,
1734,
85269,
25,
499,
1288,
2744,
1781,
922,
1148,
311,
656,
1734,
2573,
25,
279,
1957,
311,
1935,
11,
1288,
387,
832,
315,
18973,
14506,
9366,
92,
18444,
77,
2573,
5688,
25,
279,
1988,
311,
279,
1957,
1734,
38863,
367,
25,
279,
1121,
315,
279,
1957,
1734,
1131,
320,
576,
36287,
14,
2573,
14,
2573,
5688,
17991,
4945,
367,
649,
13454,
452,
3115,
10929,
77,
85269,
25,
358,
1457,
1440,
279,
1620,
4320,
1734,
19918,
22559,
25,
279,
1620,
4320,
311,
279,
4113,
1988,
3488,
518,
1988,
29282,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
14008,
25,
1845,
284,
3641,
11,
8479,
82307,
37335,
25,
12536,
58,
13755,
17752,
11,
5884,
5163,
284,
2290,
11,
3146,
9872,
25,
30226,
17752,
11,
5884,
2526,
11651,
21372,
26321,
76747,
60,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html |
ca9032fd78a7-3 | Construct a json agent from an LLM and tools. | [
29568,
264,
3024,
8479,
505,
459,
445,
11237,
323,
7526,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.json.base.create_json_agent.html |
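A hedged sketch of building the JSON agent; the spec file name is a placeholder for any large nested dict (an OpenAPI spec is the typical case).
.. code-block:: python

    import yaml

    from langchain.agents import create_json_agent
    from langchain.agents.agent_toolkits import JsonToolkit
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    with open("openapi.yml") as f:
        data = yaml.safe_load(f)

    toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
    agent_executor = create_json_agent(
        llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True
    )
    print(agent_executor.run("What are the required parameters of the /pets endpoint?"))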
9dca02d8cb40-0 | langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit¶
class langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit(*, tools: List[BaseTool] = [])[source]¶
Bases: BaseToolkit
Zapier Toolkit.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param tools: List[langchain.tools.base.BaseTool] = []¶
async classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) → ZapierToolkit[source]¶
Create a toolkit from a ZapierNLAWrapper.
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper: ZapierNLAWrapper) → ZapierToolkit[source]¶
Create a toolkit from a ZapierNLAWrapper.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
4025,
391,
1291,
21966,
8390,
13784,
391,
1291,
63044,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
23627,
90517,
4025,
391,
1291,
21966,
8390,
13784,
391,
1291,
63044,
4163,
11,
7526,
25,
1796,
58,
4066,
7896,
60,
284,
510,
41105,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
63044,
198,
57,
391,
1291,
55876,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
7526,
25,
1796,
58,
5317,
8995,
24029,
9105,
13316,
7896,
60,
284,
3132,
55609,
198,
7847,
538,
4492,
3393,
5791,
6551,
391,
1291,
1107,
4355,
24474,
13476,
391,
1291,
1107,
4355,
24474,
25,
70331,
1291,
45,
18326,
11803,
8,
11651,
70331,
1291,
63044,
76747,
60,
55609,
198,
4110,
264,
66994,
505,
264,
70331,
1291,
45,
18326,
11803,
627,
27853,
505,
6551,
391,
1291,
1107,
4355,
24474,
13476,
391,
1291,
1107,
4355,
24474,
25,
70331,
1291,
45,
18326,
11803,
8,
11651,
70331,
1291,
63044,
76747,
60,
55609,
198,
4110,
264,
66994,
505,
264,
70331,
1291,
45,
18326,
11803,
627,
456,
40823,
368,
11651,
1796,
58,
4066,
7896,
1483,
2484,
60,
55609,
198,
1991,
279,
7526,
304,
279,
66994,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit.html |
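A hedged end-to-end sketch: it assumes a Zapier NLA API key is exposed as the ZAPIER_NLA_API_KEY environment variable, and the actions the agent can take depend on what is enabled in your Zapier account.
.. code-block:: python

    from langchain.agents import AgentType, initialize_agent
    from langchain.agents.agent_toolkits import ZapierToolkit
    from langchain.llms import OpenAI
    from langchain.utilities.zapier import ZapierNLAWrapper

    toolkit = ZapierToolkit.from_zapier_nla_wrapper(ZapierNLAWrapper())
    agent = initialize_agent(
        toolkit.get_tools(),
        OpenAI(temperature=0),
        agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
        verbose=True,
    )
    agent.run("Summarize the last email I received and send the summary to Slack.")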
5b263afa7f93-0 | langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit¶
class langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit[source]¶
Bases: BaseToolkit
Toolkit for Azure Cognitive Services.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
71340,
669,
51549,
40946,
21966,
8390,
58927,
34,
51549,
11271,
63044,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
23627,
90517,
71340,
669,
51549,
40946,
21966,
8390,
58927,
34,
51549,
11271,
63044,
76747,
60,
55609,
198,
33,
2315,
25,
5464,
63044,
198,
63044,
369,
35219,
73235,
8471,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
456,
40823,
368,
11651,
1796,
58,
4066,
7896,
1483,
2484,
60,
55609,
198,
1991,
279,
7526,
304,
279,
66994,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.azure_cognitive_services.toolkit.AzureCognitiveServicesToolkit.html |
b5ca525a1670-0 | langchain.agents.conversational_chat.output_parser.ConvoOutputParser¶
class langchain.agents.conversational_chat.output_parser.ConvoOutputParser[source]¶
Bases: AgentOutputParser
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of output parser.
get_format_instructions() → str[source]¶
Instructions on how the LLM output should be formatted.
parse(text: str) → Union[AgentAction, AgentFinish][source]¶
Parse text into agent action/finish.
parse_result(result: List[Generation]) → T¶
Parse LLM Result.
parse_with_prompt(completion: str, prompt: PromptValue) → Any¶
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion – output of language model
prompt – prompt value
Returns
structured output
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶ | [
5317,
8995,
29192,
812,
2932,
3078,
1697,
36153,
13718,
19024,
4906,
3415,
5207,
6707,
55609,
198,
1058,
8859,
8995,
29192,
812,
2932,
3078,
1697,
36153,
13718,
19024,
4906,
3415,
5207,
6707,
76747,
60,
55609,
198,
33,
2315,
25,
21372,
5207,
6707,
198,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
2612,
6871,
627,
456,
9132,
83527,
368,
11651,
610,
76747,
60,
55609,
198,
56391,
389,
1268,
279,
445,
11237,
2612,
1288,
387,
24001,
627,
6534,
7383,
25,
610,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
1483,
2484,
60,
55609,
198,
14802,
1495,
1139,
8479,
1957,
14,
31250,
627,
6534,
5400,
4556,
25,
1796,
58,
38238,
2526,
11651,
350,
55609,
198,
14802,
445,
11237,
5832,
627,
6534,
6753,
62521,
91868,
25,
610,
11,
10137,
25,
60601,
1150,
8,
11651,
5884,
55609,
198,
15669,
1749,
311,
4820,
279,
2612,
315,
459,
445,
11237,
1650,
449,
264,
10137,
627,
791,
10137,
374,
14090,
3984,
304,
279,
1567,
279,
9442,
6707,
6944,
198,
998,
23515,
477,
5155,
279,
2612,
304,
1063,
1648,
11,
323,
3966,
2038,
505,
198,
1820,
10137,
311,
656,
779,
627,
9905,
198,
44412,
1389,
2612,
315,
4221,
1646,
198,
41681,
1389,
10137,
907,
198,
16851,
198,
52243,
2612,
198,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html |
b5ca525a1670-1 | property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
extra = 'ignore'¶ | [
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
15824,
284,
364,
13431,
6,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.conversational_chat.output_parser.ConvoOutputParser.html |
36aa8b7f50bd-0 | langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit¶
class langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit(*, requests_wrapper: TextRequestsWrapper)[source]¶
Bases: BaseToolkit
Toolkit for making requests.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param requests_wrapper: langchain.requests.TextRequestsWrapper [Required]¶
get_tools() → List[BaseTool][source]¶
Return a list of tools. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
21966,
8390,
9856,
82,
63044,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
23627,
90517,
59920,
21966,
8390,
9856,
82,
63044,
4163,
11,
7540,
24474,
25,
2991,
36395,
11803,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
63044,
198,
63044,
369,
3339,
7540,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
7540,
24474,
25,
8859,
8995,
68771,
2021,
36395,
11803,
510,
8327,
60,
55609,
198,
456,
40823,
368,
11651,
1796,
58,
4066,
7896,
1483,
2484,
60,
55609,
198,
5715,
264,
1160,
315,
7526,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.openapi.toolkit.RequestsToolkit.html |
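A minimal sketch; note that the tools returned here issue real HTTP requests, so only point them at endpoints you trust.
.. code-block:: python

    from langchain.agents.agent_toolkits.openapi.toolkit import RequestsToolkit
    from langchain.requests import TextRequestsWrapper

    toolkit = RequestsToolkit(requests_wrapper=TextRequestsWrapper(headers={}))
    for tool in toolkit.get_tools():
        print(tool.name, "-", tool.description)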
823b5a3f0c9d-0 | langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit¶
class langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit(*, sync_browser: Optional['SyncBrowser'] = None, async_browser: Optional['AsyncBrowser'] = None)[source]¶
Bases: BaseToolkit
Toolkit for web browser tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param async_browser: Optional['AsyncBrowser'] = None¶
param sync_browser: Optional['SyncBrowser'] = None¶
classmethod from_browser(sync_browser: Optional[SyncBrowser] = None, async_browser: Optional[AsyncBrowser] = None) → PlayWrightBrowserToolkit[source]¶
Instantiate the toolkit.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit.
validator validate_imports_and_browser_provided » all fields[source]¶
Check that the arguments are valid.
model Config[source]¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶ | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
13269,
53852,
21966,
8390,
25469,
54,
1315,
18360,
63044,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
23627,
90517,
13269,
53852,
21966,
8390,
25469,
54,
1315,
18360,
63044,
4163,
11,
13105,
54514,
25,
12536,
681,
12430,
18360,
663,
284,
2290,
11,
3393,
54514,
25,
12536,
681,
6662,
18360,
663,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
63044,
198,
63044,
369,
3566,
7074,
7526,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
3393,
54514,
25,
12536,
681,
6662,
18360,
663,
284,
2290,
55609,
198,
913,
13105,
54514,
25,
12536,
681,
12430,
18360,
663,
284,
2290,
55609,
198,
27853,
505,
54514,
98333,
54514,
25,
12536,
58,
12430,
18360,
60,
284,
2290,
11,
3393,
54514,
25,
12536,
58,
6662,
18360,
60,
284,
2290,
8,
11651,
7199,
54,
1315,
18360,
63044,
76747,
60,
55609,
198,
81651,
279,
66994,
627,
456,
40823,
368,
11651,
1796,
58,
4066,
7896,
1483,
2484,
60,
55609,
198,
1991,
279,
7526,
304,
279,
66994,
627,
16503,
9788,
18941,
82,
8543,
54514,
2602,
44057,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
4061,
430,
279,
6105,
527,
2764,
627,
2590,
5649,
76747,
60,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609,
198,
15824,
284,
364,
2000,
21301,
6,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.playwright.toolkit.PlayWrightBrowserToolkit.html |
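A hedged sketch using the synchronous browser path. It assumes playwright is installed (pip install playwright, then playwright install); the exact tool names printed at the end depend on the LangChain version.
.. code-block:: python

    from playwright.sync_api import sync_playwright

    from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit

    playwright = sync_playwright().start()
    browser = playwright.chromium.launch(headless=True)

    toolkit = PlayWrightBrowserToolkit.from_browser(sync_browser=browser)
    for tool in toolkit.get_tools():
        print(tool.name, "-", tool.description)

    browser.close()
    playwright.stop()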
728a0f326f19-0 | langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec¶
langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec(spec: dict, dereference: bool = True) → ReducedOpenAPISpec[source]¶
Simplify/distill/minify a spec somehow.
I want a smaller target for retrieval and (more importantly)
I want smaller results from retrieval.
I was hoping https://openapi.tools/ would have some useful bits
to this end, but doesn’t seem so. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
29426,
24726,
11563,
2113,
13908,
55609,
198,
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
29426,
24726,
11563,
2113,
13908,
39309,
25,
6587,
11,
26344,
2251,
25,
1845,
284,
3082,
8,
11651,
80569,
5109,
2599,
1669,
1007,
76747,
60,
55609,
198,
50,
71306,
20107,
484,
45273,
1463,
264,
1424,
17354,
627,
40,
1390,
264,
9333,
2218,
369,
57470,
323,
320,
6518,
23659,
340,
40,
1390,
9333,
3135,
505,
57470,
627,
40,
574,
16026,
3788,
1129,
2569,
2113,
24029,
14,
1053,
617,
1063,
5505,
9660,
198,
998,
420,
842,
11,
719,
3250,
1431,
2873,
779,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.openapi.spec.reduce_openapi_spec.html |
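A short hedged sketch; the spec file name is a placeholder. The reduced object keeps just the pieces the OpenAPI agent needs (servers, description and per-endpoint docs).
.. code-block:: python

    import yaml

    from langchain.agents.agent_toolkits.openapi.spec import reduce_openapi_spec

    with open("openapi.yml") as f:
        raw_spec = yaml.safe_load(f)

    reduced = reduce_openapi_spec(raw_spec, dereference=True)
    print(f"kept {len(reduced.endpoints)} endpoints")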
ad1265b9c988-0 | langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit¶
class langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit(*, json_agent: AgentExecutor, requests_wrapper: TextRequestsWrapper)[source]¶
Bases: BaseToolkit
Toolkit for interacting with an OpenAPI API.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param json_agent: langchain.agents.agent.AgentExecutor [Required]¶
param requests_wrapper: langchain.requests.TextRequestsWrapper [Required]¶
classmethod from_llm(llm: BaseLanguageModel, json_spec: JsonSpec, requests_wrapper: TextRequestsWrapper, **kwargs: Any) → OpenAPIToolkit[source]¶
Create json agent from llm, then initialize.
get_tools() → List[BaseTool][source]¶
Get the tools in the toolkit. | [
5317,
8995,
29192,
812,
45249,
23627,
90517,
59920,
21966,
8390,
13250,
2599,
964,
1786,
8390,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
23627,
90517,
59920,
21966,
8390,
13250,
2599,
964,
1786,
8390,
4163,
11,
3024,
26814,
25,
21372,
26321,
11,
7540,
24474,
25,
2991,
36395,
11803,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
63044,
198,
63044,
369,
45830,
449,
264,
5377,
7227,
6464,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
3024,
26814,
25,
8859,
8995,
29192,
812,
45249,
89969,
26321,
510,
8327,
60,
55609,
198,
913,
7540,
24474,
25,
8859,
8995,
68771,
2021,
36395,
11803,
510,
8327,
60,
55609,
198,
27853,
505,
44095,
76,
36621,
76,
25,
5464,
14126,
1747,
11,
3024,
13908,
25,
8472,
8491,
11,
7540,
24474,
25,
2991,
36395,
11803,
11,
3146,
9872,
25,
5884,
8,
11651,
5377,
2599,
964,
1786,
8390,
76747,
60,
55609,
198,
4110,
3024,
8479,
505,
9507,
76,
11,
1243,
9656,
627,
456,
40823,
368,
11651,
1796,
58,
4066,
7896,
1483,
2484,
60,
55609,
198,
1991,
279,
7526,
304,
279,
66994,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit.html |
8628b9088772-0 | langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent¶
class langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent(*, llm: BaseLanguageModel, tools: Sequence[BaseTool], prompt: BasePromptTemplate)[source]¶
Bases: BaseSingleActionAgent
An Agent driven by OpenAI’s function-powered API.
Parameters
llm – This should be an instance of ChatOpenAI, specifically a model
that supports using functions.
tools – The tools this agent has access to.
prompt – The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
OpenAIFunctionsAgent.create_prompt(…)
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param llm: langchain.base_language.BaseLanguageModel [Required]¶
param prompt: langchain.prompts.base.BasePromptTemplate [Required]¶
param tools: Sequence[langchain.tools.base.BaseTool] [Required]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None) → BasePromptTemplate[source]¶
Create prompt for this agent.
Parameters | [
5317,
8995,
29192,
812,
5949,
2192,
32808,
26814,
9105,
13250,
32,
2843,
600,
82,
17230,
55609,
198,
1058,
8859,
8995,
29192,
812,
5949,
2192,
32808,
26814,
9105,
13250,
32,
2843,
600,
82,
17230,
4163,
11,
9507,
76,
25,
5464,
14126,
1747,
11,
7526,
25,
29971,
58,
4066,
7896,
1145,
10137,
25,
5464,
55715,
7423,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
5464,
11126,
2573,
17230,
198,
2127,
21372,
16625,
555,
5377,
32,
3957,
734,
23134,
5446,
627,
9905,
198,
657,
76,
1389,
1115,
1288,
387,
459,
2937,
315,
13149,
5109,
15836,
11,
11951,
264,
1646,
198,
9210,
11815,
1701,
5865,
627,
16297,
1389,
578,
7526,
420,
8479,
706,
2680,
311,
627,
41681,
1389,
578,
10137,
369,
420,
8479,
11,
1288,
1862,
8479,
60828,
759,
13545,
439,
832,
198,
1073,
279,
7482,
13,
1789,
459,
4228,
1648,
311,
9429,
420,
10137,
11,
1005,
198,
5109,
32,
2843,
600,
82,
17230,
2581,
62521,
7,
1981,
340,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
9507,
76,
25,
8859,
8995,
9105,
30121,
13316,
14126,
1747,
510,
8327,
60,
55609,
198,
913,
10137,
25,
8859,
8995,
61848,
13044,
9105,
13316,
55715,
7423,
510,
8327,
60,
55609,
198,
913,
7526,
25,
29971,
58,
5317,
8995,
24029,
9105,
13316,
7896,
60,
510,
8327,
60,
55609,
198,
7847,
264,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
1483,
2484,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
27853,
1893,
62521,
47106,
6598,
25,
12536,
86126,
2097,
60,
284,
744,
2097,
15413,
1151,
2675,
527,
264,
11190,
15592,
18328,
16045,
5217,
37335,
1185,
39942,
5066,
62521,
24321,
25,
12536,
53094,
58,
4066,
2097,
55715,
7423,
5163,
284,
2290,
8,
11651,
5464,
55715,
7423,
76747,
60,
55609,
198,
4110,
10137,
369,
420,
8479,
627,
9905
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html |
8628b9088772-1 | Create prompt for this agent.
Parameters
system_message – Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages – Prompt messages that will be placed between the
system message and the new human input.
Returns
A prompt template to pass into this agent.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, extra_prompt_messages: Optional[List[BaseMessagePromptTemplate]] = None, system_message: Optional[SystemMessage] = SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs: Any) → BaseSingleActionAgent[source]¶
Construct an agent from an LLM and tools.
get_allowed_tools() → List[str][source]¶
Get allowed tools.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish][source]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date, along with observations
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
4110,
10137,
369,
420,
8479,
627,
9905,
198,
9125,
6598,
1389,
4961,
311,
1005,
439,
279,
1887,
1984,
430,
690,
387,
279,
198,
3983,
304,
279,
10137,
627,
15824,
62521,
24321,
1389,
60601,
6743,
430,
690,
387,
9277,
1990,
279,
198,
9125,
1984,
323,
279,
502,
3823,
1988,
627,
16851,
198,
32,
10137,
3896,
311,
1522,
1139,
420,
8479,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
8479,
627,
27853,
505,
44095,
76,
8543,
40823,
36621,
76,
25,
5464,
14126,
1747,
11,
7526,
25,
29971,
58,
4066,
7896,
1145,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
5066,
62521,
24321,
25,
12536,
53094,
58,
4066,
2097,
55715,
7423,
5163,
284,
2290,
11,
1887,
6598,
25,
12536,
86126,
2097,
60,
284,
744,
2097,
15413,
1151,
2675,
527,
264,
11190,
15592,
18328,
16045,
5217,
37335,
1185,
39942,
3146,
9872,
25,
5884,
8,
11651,
5464,
11126,
2573,
17230,
76747,
60,
55609,
198,
29568,
459,
8479,
505,
459,
445,
11237,
323,
7526,
627,
456,
43255,
40823,
368,
11651,
1796,
17752,
1483,
2484,
60,
55609,
198,
1991,
5535,
7526,
627,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
1483,
2484,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
11,
3235,
449,
24654,
198,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
693,
1284,
18033,
9852,
7,
22928,
1284,
7153,
9209,
25,
610,
11,
29539,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
21372,
26748,
55609,
198,
5715,
2077,
994,
8479,
706,
1027,
10717,
4245,
311,
1973,
26771,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
8479,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
8479,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
2,
1442,
3318,
449,
8479,
32658,
198,
8252,
45249,
5799,
4971,
2703,
45221,
2398,
14,
8252,
34506,
33611
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html |
8628b9088772-2 | # If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_llm » all fields[source]¶
validator validate_prompt » all fields[source]¶
property functions: List[dict]¶
property input_keys: List[str]¶
Get input keys. Input refers to user input here.
property return_values: List[str]¶
Return values of the agent. | [
2,
1442,
3318,
449,
8479,
32658,
198,
8252,
45249,
5799,
4971,
2703,
45221,
2398,
14,
8252,
34506,
863,
340,
14506,
14334,
61082,
37335,
368,
11651,
30226,
55609,
198,
16503,
9788,
44095,
76,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
16503,
9788,
62521,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
3784,
5865,
25,
1796,
58,
8644,
60,
55609,
198,
3784,
1988,
12919,
25,
1796,
17752,
60,
55609,
198,
1991,
1988,
7039,
13,
5688,
19813,
311,
1217,
1988,
1618,
627,
3784,
471,
9324,
25,
1796,
17752,
60,
55609,
198,
5715,
2819,
315,
279,
8479,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.openai_functions_agent.base.OpenAIFunctionsAgent.html |
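A hedged sketch that wires the pieces documented above into a runnable agent. The toy word_count tool and the model name are assumptions; any ChatOpenAI model that supports functions should work.
.. code-block:: python
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

def word_count(text: str) -> str:
    """Toy tool used only for illustration."""
    return str(len(text.split()))

tools = [Tool(name="word_count", func=word_count,
              description="Count the words in a string.")]
# The model name is an assumption; pick any function-calling-capable model.
llm = ChatOpenAI(model_name="gpt-3.5-turbo-0613", temperature=0)
prompt = OpenAIFunctionsAgent.create_prompt()  # default system message
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# executor.run("How many words are in 'the quick brown fox'?")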
501cf6f746f3-0 | langchain.agents.chat.base.ChatAgent¶
class langchain.agents.chat.base.ChatAgent(*, llm_chain: LLMChain, output_parser: AgentOutputParser = None, allowed_tools: Optional[List[str]] = None)[source]¶
Bases: Agent
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param allowed_tools: Optional[List[str]] = None¶
param llm_chain: langchain.chains.llm.LLMChain [Required]¶
param output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
async aplan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use. | [
5317,
8995,
29192,
812,
27215,
9105,
59944,
17230,
55609,
198,
1058,
8859,
8995,
29192,
812,
27215,
9105,
59944,
17230,
4163,
11,
9507,
76,
31683,
25,
445,
11237,
19368,
11,
2612,
19024,
25,
21372,
5207,
6707,
284,
2290,
11,
5535,
40823,
25,
12536,
53094,
17752,
5163,
284,
2290,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
21372,
198,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
5535,
40823,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
913,
9507,
76,
31683,
25,
8859,
8995,
5442,
1771,
60098,
76,
1236,
11237,
19368,
510,
8327,
60,
55609,
198,
913,
2612,
19024,
25,
8859,
8995,
29192,
812,
45249,
89969,
5207,
6707,
510,
15669,
60,
55609,
198,
7847,
264,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.chat.base.ChatAgent.html |
501cf6f746f3-1 | **kwargs – User inputs.
Returns
Action specifying what tool to use.
classmethod create_prompt(tools: Sequence[BaseTool], system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None) → BasePromptTemplate[source]¶
Create a prompt for this class.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent. | [
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
27853,
1893,
62521,
12464,
3145,
25,
29971,
58,
4066,
7896,
1145,
1887,
6598,
14301,
25,
610,
284,
364,
16533,
279,
2768,
4860,
439,
1888,
499,
649,
13,
1472,
617,
2680,
311,
279,
2768,
7526,
17898,
1887,
6598,
38251,
25,
610,
284,
364,
11382,
0,
97002,
311,
2744,
1005,
279,
4839,
5885,
1595,
19918,
22559,
63,
994,
30438,
16045,
3823,
6598,
25,
610,
284,
11834,
1379,
11281,
77,
1734,
90,
8252,
60828,
759,
13545,
17266,
3645,
83527,
25,
610,
284,
364,
791,
1648,
499,
1005,
279,
7526,
374,
555,
38938,
264,
3024,
24295,
7255,
77,
48614,
750,
11,
420,
3024,
1288,
617,
264,
1595,
1335,
63,
1401,
320,
4291,
279,
836,
315,
279,
5507,
311,
1005,
8,
323,
264,
1595,
1335,
6022,
63,
1401,
320,
4291,
279,
1988,
311,
279,
5507,
2133,
1618,
73441,
77,
1734,
791,
1193,
2819,
430,
1288,
387,
304,
279,
330,
1335,
1,
2115,
527,
25,
314,
14506,
9366,
11281,
77,
1734,
791,
400,
5483,
1702,
10911,
1288,
1193,
6782,
264,
67859,
1957,
11,
656,
4276,
471,
264,
1160,
315,
5361,
6299,
13,
5810,
374,
459,
3187,
315,
264,
2764,
400,
5483,
1702,
10911,
7338,
77,
1734,
14196,
62169,
77,
3052,
3052,
59,
77,
4194,
330,
1335,
794,
400,
5319,
1971,
4813,
27362,
77,
4194,
330,
1335,
6022,
794,
400,
30521,
1734,
3500,
3500,
59,
77,
14196,
62169,
77,
1734,
984,
37641,
1005,
279,
2768,
3645,
7338,
77,
1734,
14924,
25,
279,
1988,
3488,
499,
2011,
4320,
1734,
85269,
25,
499,
1288,
2744,
1781,
922,
1148,
311,
656,
1734,
2573,
7338,
77,
14196,
62169,
77,
3,
5483,
1702,
10911,
1734,
14196,
62169,
77,
38863,
367,
25,
279,
1121,
315,
279,
1957,
1734,
1131,
320,
576,
36287,
14,
2573,
17991,
4945,
367,
649,
13454,
452,
3115,
10929,
77,
85269,
25,
358,
1457,
1440,
279,
1620,
4320,
1734,
19918,
22559,
25,
279,
1620,
4320,
311,
279,
4113,
1988,
3488,
518,
1988,
29282,
25,
12536,
53094,
17752,
5163,
284,
2290,
8,
11651,
5464,
55715,
7423,
76747,
60,
55609,
198,
4110,
264,
10137,
369,
420,
538,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
8479,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.chat.base.ChatAgent.html |
501cf6f746f3-2 | dict(**kwargs: Any) → Dict¶
Return dictionary representation of agent.
classmethod from_llm_and_tools(llm: BaseLanguageModel, tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, output_parser: Optional[AgentOutputParser] = None, system_message_prefix: str = 'Answer the following questions as best you can. You have access to the following tools:', system_message_suffix: str = 'Begin! Reminder to always use the exact characters `Final Answer` when responding.', human_message: str = '{input}\n\n{agent_scratchpad}', format_instructions: str = 'The way you use the tools is by specifying a json blob.\nSpecifically, this json should have a `action` key (with the name of the tool to use) and a `action_input` key (with the input to the tool going here).\n\nThe only values that should be in the "action" field are: {tool_names}\n\nThe $JSON_BLOB should only contain a SINGLE action, do NOT return a list of multiple actions. Here is an example of a valid $JSON_BLOB:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nALWAYS use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction:\n```\n$JSON_BLOB\n```\nObservation: the result of the action\n... (this Thought/Action/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables: Optional[List[str]] = None, **kwargs: Any) → Agent[source]¶
Construct an agent from an LLM and tools. | [
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
8479,
627,
27853,
505,
44095,
76,
8543,
40823,
36621,
76,
25,
5464,
14126,
1747,
11,
7526,
25,
29971,
58,
4066,
7896,
1145,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
2612,
19024,
25,
12536,
58,
17230,
5207,
6707,
60,
284,
2290,
11,
1887,
6598,
14301,
25,
610,
284,
364,
16533,
279,
2768,
4860,
439,
1888,
499,
649,
13,
1472,
617,
2680,
311,
279,
2768,
7526,
17898,
1887,
6598,
38251,
25,
610,
284,
364,
11382,
0,
97002,
311,
2744,
1005,
279,
4839,
5885,
1595,
19918,
22559,
63,
994,
30438,
16045,
3823,
6598,
25,
610,
284,
11834,
1379,
11281,
77,
1734,
90,
8252,
60828,
759,
13545,
17266,
3645,
83527,
25,
610,
284,
364,
791,
1648,
499,
1005,
279,
7526,
374,
555,
38938,
264,
3024,
24295,
7255,
77,
48614,
750,
11,
420,
3024,
1288,
617,
264,
1595,
1335,
63,
1401,
320,
4291,
279,
836,
315,
279,
5507,
311,
1005,
8,
323,
264,
1595,
1335,
6022,
63,
1401,
320,
4291,
279,
1988,
311,
279,
5507,
2133,
1618,
73441,
77,
1734,
791,
1193,
2819,
430,
1288,
387,
304,
279,
330,
1335,
1,
2115,
527,
25,
314,
14506,
9366,
11281,
77,
1734,
791,
400,
5483,
1702,
10911,
1288,
1193,
6782,
264,
67859,
1957,
11,
656,
4276,
471,
264,
1160,
315,
5361,
6299,
13,
5810,
374,
459,
3187,
315,
264,
2764,
400,
5483,
1702,
10911,
7338,
77,
1734,
14196,
62169,
77,
3052,
3052,
59,
77,
4194,
330,
1335,
794,
400,
5319,
1971,
4813,
27362,
77,
4194,
330,
1335,
6022,
794,
400,
30521,
1734,
3500,
3500,
59,
77,
14196,
62169,
77,
1734,
984,
37641,
1005,
279,
2768,
3645,
7338,
77,
1734,
14924,
25,
279,
1988,
3488,
499,
2011,
4320,
1734,
85269,
25,
499,
1288,
2744,
1781,
922,
1148,
311,
656,
1734,
2573,
7338,
77,
14196,
62169,
77,
3,
5483,
1702,
10911,
1734,
14196,
62169,
77,
38863,
367,
25,
279,
1121,
315,
279,
1957,
1734,
1131,
320,
576,
36287,
14,
2573,
17991,
4945,
367,
649,
13454,
452,
3115,
10929,
77,
85269,
25,
358,
1457,
1440,
279,
1620,
4320,
1734,
19918,
22559,
25,
279,
1620,
4320,
311,
279,
4113,
1988,
3488,
518,
1988,
29282,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
21372,
76747,
60,
55609,
198,
29568,
459,
8479,
505,
459,
445,
11237,
323,
7526,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.chat.base.ChatAgent.html |
501cf6f746f3-3 | Construct an agent from an LLM and tools.
get_allowed_tools() → Optional[List[str]]¶
get_full_inputs(intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → Dict[str, Any]¶
Create the full inputs for the LLMChain from intermediate steps.
plan(intermediate_steps: List[Tuple[AgentAction, str]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[AgentAction, AgentFinish]¶
Given input, decide what to do.
Parameters
intermediate_steps – Steps the LLM has taken to date,
along with observations
callbacks – Callbacks to run.
**kwargs – User inputs.
Returns
Action specifying what tool to use.
return_stopped_response(early_stopping_method: str, intermediate_steps: List[Tuple[AgentAction, str]], **kwargs: Any) → AgentFinish¶
Return response when agent has been stopped due to max iterations.
save(file_path: Union[Path, str]) → None¶
Save the agent.
Parameters
file_path – Path to file to save the agent to.
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs() → Dict¶
validator validate_prompt » all fields¶
Validate that prompt matches format.
property llm_prefix: str¶
Prefix to append the llm call with.
property observation_prefix: str¶
Prefix to append the observation with.
property return_values: List[str]¶
Return values of the agent. | [
29568,
459,
8479,
505,
459,
445,
11237,
323,
7526,
627,
456,
43255,
40823,
368,
11651,
12536,
53094,
17752,
5163,
55609,
198,
456,
16776,
29657,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
4110,
279,
2539,
11374,
369,
279,
445,
11237,
19368,
505,
29539,
7504,
627,
10609,
33724,
14978,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
9323,
58,
17230,
2573,
11,
21372,
26748,
60,
55609,
198,
22818,
1988,
11,
6773,
1148,
311,
656,
627,
9905,
198,
2295,
14978,
23566,
1389,
40961,
279,
445,
11237,
706,
4529,
311,
2457,
345,
39393,
449,
24654,
198,
69411,
1389,
23499,
82,
311,
1629,
627,
334,
9872,
1389,
2724,
11374,
627,
16851,
198,
2573,
38938,
1148,
5507,
311,
1005,
627,
693,
1284,
18033,
9852,
7,
22928,
1284,
7153,
9209,
25,
610,
11,
29539,
23566,
25,
1796,
20961,
6189,
58,
17230,
2573,
11,
610,
21128,
3146,
9872,
25,
5884,
8,
11651,
21372,
26748,
55609,
198,
5715,
2077,
994,
8479,
706,
1027,
10717,
4245,
311,
1973,
26771,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
55609,
198,
8960,
279,
8479,
627,
9905,
198,
1213,
2703,
1389,
8092,
311,
1052,
311,
3665,
279,
8479,
311,
627,
13617,
512,
497,
2082,
9612,
487,
10344,
198,
2,
1442,
3318,
449,
8479,
32658,
198,
8252,
45249,
5799,
4971,
2703,
45221,
2398,
14,
8252,
34506,
863,
340,
14506,
14334,
61082,
37335,
368,
11651,
30226,
55609,
198,
16503,
9788,
62521,
4194,
8345,
4194,
682,
5151,
55609,
198,
18409,
430,
10137,
9248,
3645,
627,
3784,
9507,
76,
14301,
25,
610,
55609,
198,
14672,
311,
8911,
279,
9507,
76,
1650,
449,
627,
3784,
22695,
14301,
25,
610,
55609,
198,
14672,
311,
8911,
279,
22695,
449,
627,
3784,
471,
9324,
25,
1796,
17752,
60,
55609,
198,
5715,
2819,
315,
279,
8479,
13
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.chat.base.ChatAgent.html |
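A hedged sketch of the from_llm_and_tools constructor documented above, wrapped in an AgentExecutor. The echo tool is a made-up placeholder used only to show the shape of the call.
.. code-block:: python
from langchain.agents import AgentExecutor
from langchain.agents.chat.base import ChatAgent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

def echo(text: str) -> str:
    """Toy tool used only for illustration."""
    return text

tools = [Tool(name="echo", func=echo, description="Echo the input back unchanged.")]
agent = ChatAgent.from_llm_and_tools(llm=ChatOpenAI(temperature=0), tools=tools)
executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=tools, verbose=True)
# executor.run("Use the echo tool on the word 'hello'.")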
c26c63ee5178-0 | langchain.agents.agent.AgentExecutor¶
class langchain.agents.agent.AgentExecutor(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], return_intermediate_steps: bool = False, max_iterations: Optional[int] = 15, max_execution_time: Optional[float] = None, early_stopping_method: str = 'force', handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False)[source]¶
Bases: Chain
Consists of an agent using tools.
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]¶
The agent to run for creating a plan and determining actions
to take at each step of the execution loop.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain,
starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs
for full details.
param early_stopping_method: str = 'force'¶
The method to use for early stopping if the agent never
returns AgentFinish. Either ‘force’ or ‘generate’. | [
5317,
8995,
29192,
812,
45249,
89969,
26321,
55609,
198,
1058,
8859,
8995,
29192,
812,
45249,
89969,
26321,
4163,
11,
5044,
25,
12536,
58,
4066,
10869,
60,
284,
2290,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
14008,
25,
1845,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
8479,
25,
9323,
58,
4066,
11126,
2573,
17230,
11,
5464,
20981,
2573,
17230,
1145,
7526,
25,
29971,
58,
4066,
7896,
1145,
471,
15678,
14978,
23566,
25,
1845,
284,
3641,
11,
1973,
56707,
25,
12536,
19155,
60,
284,
220,
868,
11,
1973,
62048,
3084,
25,
12536,
96481,
60,
284,
2290,
11,
4216,
1284,
7153,
9209,
25,
610,
284,
364,
9009,
518,
3790,
623,
29698,
20808,
25,
9323,
58,
2707,
11,
610,
11,
54223,
15873,
5207,
6707,
1378,
1145,
610,
5163,
284,
3641,
6758,
2484,
60,
55609,
198,
33,
2315,
25,
29625,
198,
15577,
1705,
315,
459,
8479,
1701,
7526,
627,
4110,
264,
502,
1646,
555,
23115,
323,
69772,
1988,
828,
505,
16570,
6105,
627,
36120,
54129,
422,
279,
1988,
828,
4250,
387,
16051,
311,
1376,
264,
2764,
1646,
627,
913,
8479,
25,
9323,
58,
4066,
11126,
2573,
17230,
11,
5464,
20981,
2573,
17230,
60,
510,
8327,
60,
55609,
198,
791,
8479,
311,
1629,
369,
6968,
264,
3197,
323,
26679,
6299,
198,
998,
1935,
520,
1855,
3094,
315,
279,
11572,
6471,
627,
913,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
55609,
198,
52444,
11,
1005,
27777,
4619,
627,
913,
27777,
25,
23499,
82,
284,
2290,
55609,
198,
15669,
1160,
315,
4927,
25050,
320,
269,
4927,
6783,
570,
37090,
311,
2290,
627,
7646,
25050,
527,
2663,
6957,
279,
48608,
315,
264,
1650,
311,
264,
8957,
345,
40389,
449,
389,
31683,
5011,
11,
13696,
449,
389,
31683,
6345,
477,
389,
31683,
4188,
627,
4959,
2587,
8957,
649,
46624,
1650,
5217,
4927,
5528,
11,
1518,
23499,
27437,
198,
2000,
2539,
3649,
627,
913,
4216,
1284,
7153,
9209,
25,
610,
284,
364,
9009,
6,
55609,
198,
791,
1749,
311,
1005,
369,
4216,
23351,
422,
279,
8479,
2646,
198,
4310,
21372,
26748,
13,
21663,
3451,
9009,
529,
477,
3451,
19927,
24535
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.AgentExecutor.html |
c26c63ee5178-1 | returns AgentFinish. Either ‘force’ or ‘generate’.
“force” returns a string saying that it stopped because it met a time or iteration limit.
“generate” calls the agent’s LLM Chain one final time to generate a final answer based on the previous steps.
param handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False¶
How to handle errors raised by the agent’s output parser. Defaults to False, which raises the error.
If true, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent as an observation.
param max_execution_time: Optional[float] = None¶
The maximum amount of wall clock time to spend in the execution
loop.
param max_iterations: Optional[int] = 15¶
The maximum number of steps to take before ending the execution
loop.
Setting to ‘None’ could lead to an infinite loop.
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start
and at the end of every chain. At the start, memory loads variables and passes
them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs
for the full catalog.
param return_intermediate_steps: bool = False¶
Whether to return the agent’s trajectory of intermediate steps
at the end in addition to the final output.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None | [
4310,
21372,
26748,
13,
21663,
3451,
9009,
529,
477,
3451,
19927,
529,
627,
2118,
9009,
863,
4780,
264,
925,
5605,
430,
433,
10717,
1606,
433,
2322,
520,
547,
477,
20140,
4017,
627,
2118,
19927,
863,
6880,
279,
8479,
753,
445,
11237,
29625,
832,
1620,
892,
311,
7068,
64,
1620,
4320,
3196,
389,
279,
3766,
7504,
627,
913,
3790,
623,
29698,
20808,
25,
9323,
58,
2707,
11,
610,
11,
54223,
15873,
5207,
6707,
1378,
1145,
610,
5163,
284,
3641,
55609,
198,
4438,
311,
3790,
6103,
9408,
555,
279,
8479,
753,
2612,
6871,
13578,
82,
311,
3641,
11,
902,
25930,
279,
1493,
627,
82,
2746,
837,
11,
279,
1493,
690,
387,
3288,
1203,
311,
279,
445,
11237,
439,
459,
22695,
627,
2746,
264,
925,
11,
279,
925,
5196,
690,
387,
3288,
311,
279,
445,
11237,
439,
459,
22695,
627,
2746,
264,
42022,
734,
11,
279,
734,
690,
387,
2663,
449,
279,
4788,
198,
300,
459,
5811,
11,
323,
279,
1121,
315,
430,
734,
690,
387,
5946,
311,
279,
8479,
300,
459,
22695,
627,
913,
1973,
62048,
3084,
25,
12536,
96481,
60,
284,
2290,
55609,
198,
791,
7340,
3392,
315,
7147,
9042,
892,
311,
8493,
304,
279,
11572,
198,
10719,
627,
913,
1973,
56707,
25,
12536,
19155,
60,
284,
220,
868,
55609,
198,
791,
7340,
1396,
315,
7504,
311,
1935,
1603,
13696,
279,
11572,
198,
10719,
627,
15762,
311,
3451,
4155,
529,
1436,
3063,
311,
459,
24746,
6471,
627,
913,
5044,
25,
12536,
58,
4066,
10869,
60,
284,
2290,
55609,
198,
15669,
5044,
1665,
13,
37090,
311,
2290,
627,
10869,
374,
264,
538,
430,
5334,
2663,
520,
279,
1212,
198,
438,
520,
279,
842,
315,
1475,
8957,
13,
2468,
279,
1212,
11,
5044,
21577,
7482,
323,
16609,
198,
49818,
3235,
304,
279,
8957,
13,
2468,
279,
842,
11,
433,
27024,
904,
6052,
7482,
627,
3947,
527,
1690,
2204,
4595,
315,
5044,
482,
4587,
1518,
5044,
27437,
198,
2000,
279,
2539,
16808,
627,
913,
471,
15678,
14978,
23566,
25,
1845,
284,
3641,
55609,
198,
25729,
311,
471,
279,
8479,
753,
35782,
315,
29539,
7504,
198,
266,
279,
842,
304,
5369,
311,
279,
1620,
2612,
627,
913,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
55609,
198,
15669,
1160,
315,
9681,
5938,
449,
279,
8957,
13,
37090,
311,
2290
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.AgentExecutor.html |
c26c63ee5178-2 | Optional list of tags associated with the chain. Defaults to None
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to eg identify a specific instance of a chain with its use case.
param tools: Sequence[BaseTool] [Required]¶
The valid tools the agent can call.
param verbose: bool [Optional]¶
Whether or not run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters | [
15669,
1160,
315,
9681,
5938,
449,
279,
8957,
13,
37090,
311,
2290,
198,
9673,
9681,
690,
387,
5938,
449,
1855,
1650,
311,
420,
8957,
345,
438,
5946,
439,
6105,
311,
279,
25050,
4613,
304,
27777,
627,
2675,
649,
1005,
1521,
311,
8866,
10765,
264,
3230,
2937,
315,
264,
8957,
449,
1202,
1005,
1162,
627,
913,
7526,
25,
29971,
58,
4066,
7896,
60,
510,
8327,
60,
55609,
198,
791,
2764,
7526,
279,
8479,
649,
1650,
627,
913,
14008,
25,
1845,
510,
15669,
60,
55609,
198,
25729,
477,
539,
1629,
304,
14008,
3941,
13,
763,
14008,
3941,
11,
1063,
29539,
18929,
198,
14724,
387,
17124,
311,
279,
2393,
13,
37090,
311,
8859,
8995,
45749,
907,
627,
565,
6797,
3889,
25986,
25,
9323,
58,
13755,
17752,
11,
5884,
1145,
5884,
1145,
471,
18917,
36289,
25,
1845,
284,
3641,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
2997,
14334,
3186,
25,
1845,
284,
3641,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
6869,
279,
12496,
315,
420,
8957,
323,
923,
311,
2612,
422,
12974,
627,
9905,
198,
25986,
1389,
10685,
315,
11374,
11,
477,
3254,
1988,
422,
8957,
25283,
198,
3323,
832,
1719,
627,
693,
18917,
36289,
1389,
2777,
369,
3508,
311,
471,
1193,
16674,
304,
279,
198,
2376,
13,
1442,
3082,
11,
1193,
502,
7039,
8066,
555,
420,
8957,
690,
387,
198,
78691,
13,
1442,
3641,
11,
2225,
1988,
7039,
323,
502,
7039,
8066,
555,
420,
198,
8995,
690,
387,
6052,
13,
37090,
311,
3641,
627,
69411,
1389,
23499,
82,
311,
1005,
369,
420,
8957,
1629,
13,
1442,
539,
3984,
11,
690,
198,
817,
279,
27777,
3984,
311,
279,
8957,
627,
1012,
14334,
3186,
1389,
13440,
311,
2997,
1629,
3630,
304,
279,
2077,
13,
37090,
198,
998,
3641,
627,
7847,
1645,
543,
35099,
25,
9323,
58,
13755,
17752,
11,
5884,
1145,
5884,
1145,
471,
18917,
36289,
25,
1845,
284,
3641,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
12039,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
2997,
14334,
3186,
25,
1845,
284,
3641,
8,
11651,
30226,
17752,
11,
5884,
60,
55609,
198,
6869,
279,
12496,
315,
420,
8957,
323,
923,
311,
2612,
422,
12974,
627,
9905
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.AgentExecutor.html |
c26c63ee5178-3 | Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
classmethod from_agent_and_tools(agent: Union[BaseSingleActionAgent, BaseMultiActionAgent], tools: Sequence[BaseTool], callback_manager: Optional[BaseCallbackManager] = None, **kwargs: Any) → AgentExecutor[source]¶
Create from agent and tools.
lookup_tool(name: str) → BaseTool[source]¶
Lookup tool by name.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ | [
6869,
279,
12496,
315,
420,
8957,
323,
923,
311,
2612,
422,
12974,
627,
9905,
198,
25986,
1389,
10685,
315,
11374,
11,
477,
3254,
1988,
422,
8957,
25283,
198,
3323,
832,
1719,
627,
693,
18917,
36289,
1389,
2777,
369,
3508,
311,
471,
1193,
16674,
304,
279,
198,
2376,
13,
1442,
3082,
11,
1193,
502,
7039,
8066,
555,
420,
8957,
690,
387,
198,
78691,
13,
1442,
3641,
11,
2225,
1988,
7039,
323,
502,
7039,
8066,
555,
420,
198,
8995,
690,
387,
6052,
13,
37090,
311,
3641,
627,
69411,
1389,
23499,
82,
311,
1005,
369,
420,
8957,
1629,
13,
1442,
539,
3984,
11,
690,
198,
817,
279,
27777,
3984,
311,
279,
8957,
627,
1012,
14334,
3186,
1389,
13440,
311,
2997,
1629,
3630,
304,
279,
2077,
13,
37090,
198,
998,
3641,
627,
10492,
5498,
2062,
25,
1796,
58,
13755,
17752,
11,
5884,
21128,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
8,
11651,
1796,
58,
13755,
17752,
11,
610,
5163,
55609,
198,
7368,
279,
8957,
389,
682,
11374,
304,
279,
1160,
627,
7847,
802,
359,
4163,
2164,
25,
5884,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
6869,
279,
8957,
439,
1495,
304,
11,
1495,
704,
477,
5361,
7482,
11,
1495,
704,
627,
8644,
22551,
9872,
25,
5884,
8,
11651,
30226,
55609,
198,
5715,
11240,
13340,
315,
8957,
627,
27853,
505,
26814,
8543,
40823,
56514,
25,
9323,
58,
4066,
11126,
2573,
17230,
11,
5464,
20981,
2573,
17230,
1145,
7526,
25,
29971,
58,
4066,
7896,
1145,
4927,
12418,
25,
12536,
58,
4066,
7646,
2087,
60,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
21372,
26321,
76747,
60,
55609,
198,
4110,
505,
8479,
323,
7526,
627,
21696,
23627,
3232,
25,
610,
8,
11651,
5464,
7896,
76747,
60,
55609,
198,
35347,
5507,
555,
836,
627,
72874,
29657,
35099,
25,
9323,
58,
13755,
17752,
11,
5884,
1145,
5884,
2526,
11651,
30226,
17752,
11,
610,
60,
55609,
198,
18409,
323,
22033,
11374,
627,
72874,
36289,
35099,
25,
30226,
17752,
11,
610,
1145,
16674,
25,
30226,
17752,
11,
610,
1145,
471,
18917,
36289,
25,
1845,
284,
3641,
8,
11651,
30226,
17752,
11,
610,
60,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.AgentExecutor.html |
c26c63ee5178-4 | Validate and prep outputs.
validator raise_deprecation » all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None[source]¶
Raise error - saving not supported for Agent Executors.
save_agent(file_path: Union[Path, str]) → None[source]¶
Save the underlying agent.
validator set_verbose » verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
validator validate_return_direct_tool » all fields[source]¶
Validate that tools are compatible with agent.
validator validate_tools » all fields[source]¶
Validate that tools are compatible with agent.
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶ | [
18409,
323,
22033,
16674,
627,
16503,
4933,
2310,
70693,
4194,
8345,
4194,
682,
5151,
55609,
198,
94201,
409,
70693,
10163,
422,
4927,
12418,
374,
1511,
627,
6236,
4163,
2164,
25,
5884,
11,
27777,
25,
12536,
58,
33758,
53094,
58,
4066,
7646,
3126,
1145,
5464,
7646,
2087,
5163,
284,
2290,
11,
9681,
25,
12536,
53094,
17752,
5163,
284,
2290,
11,
3146,
9872,
25,
5884,
8,
11651,
610,
55609,
198,
6869,
279,
8957,
439,
1495,
304,
11,
1495,
704,
477,
5361,
7482,
11,
1495,
704,
627,
6766,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
76747,
60,
55609,
198,
94201,
1493,
482,
14324,
539,
7396,
369,
21372,
96193,
627,
6766,
26814,
4971,
2703,
25,
9323,
58,
1858,
11,
610,
2526,
11651,
2290,
76747,
60,
55609,
198,
8960,
279,
16940,
8479,
627,
16503,
743,
69021,
4194,
8345,
4194,
14008,
55609,
198,
2746,
14008,
374,
2290,
11,
743,
433,
627,
2028,
6276,
3932,
311,
1522,
304,
2290,
439,
14008,
311,
2680,
279,
3728,
6376,
627,
998,
9643,
368,
11651,
9323,
58,
78621,
13591,
11,
92572,
2688,
18804,
60,
55609,
198,
998,
9643,
8072,
18377,
14565,
368,
11651,
92572,
2688,
18804,
55609,
198,
16503,
9788,
12794,
33971,
23627,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
7526,
527,
18641,
449,
8479,
627,
16503,
9788,
40823,
4194,
8345,
4194,
682,
5151,
76747,
60,
55609,
198,
18409,
430,
7526,
527,
18641,
449,
8479,
627,
3784,
37313,
18741,
25,
30226,
55609,
198,
5715,
264,
1160,
315,
7180,
5144,
430,
1288,
387,
5343,
304,
279,
198,
76377,
16901,
13,
4314,
8365,
2011,
387,
11928,
555,
279,
198,
22602,
627,
3784,
37313,
42671,
25,
1796,
17752,
60,
55609,
198,
5715,
279,
4573,
315,
279,
8859,
8995,
1665,
627,
797,
13,
510,
2118,
5317,
8995,
9520,
1054,
657,
1026,
9520,
1054,
2569,
2192,
863,
933,
3784,
37313,
3537,
53810,
25,
30226,
17752,
11,
610,
60,
55609,
198,
5715,
264,
2472,
315,
4797,
5811,
5144,
311,
6367,
14483,
627,
797,
13,
314,
2118,
2569,
2192,
11959,
3173,
57633,
1054,
32033,
15836,
11669,
6738,
863,
534,
3784,
37313,
26684,
8499,
25,
1845,
55609,
198,
5715,
3508,
477,
539,
279,
538,
374,
6275,
8499,
627,
2590,
5649,
55609,
198,
33,
2315,
25,
1665,
198,
7843,
369,
420,
4611,
67,
8322,
1665,
627,
277,
88951,
9962,
43255,
284,
3082,
55609
] | https://langchain.readthedocs.io/en/latest/agents/langchain.agents.agent.AgentExecutor.html |
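A hedged sketch of configuring the executor options documented above (max_iterations, early_stopping_method, handle_parsing_errors, return_intermediate_steps) via initialize_agent, which forwards extra keyword arguments to AgentExecutor.from_agent_and_tools. The echo tool is a placeholder.
.. code-block:: python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.tools import Tool

def echo(text: str) -> str:
    """Toy tool used only for illustration."""
    return text

tools = [Tool(name="echo", func=echo, description="Echo the input back unchanged.")]
executor = initialize_agent(
    tools,
    ChatOpenAI(temperature=0),
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    max_iterations=5,                  # cap the plan/act loop
    early_stopping_method="generate",  # one last LLM call when the limit is hit
    handle_parsing_errors=True,        # send parse errors back as observations
    return_intermediate_steps=True,
    verbose=True,
)
result = executor({"input": "Echo the word 'hello'."})
# result["output"] is the final answer; result["intermediate_steps"] holds
# the (AgentAction, observation) trajectory.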