serialized (Dict[str, Any]) –
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing when a new token is generated.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Log the latency, error, token usage, and response to Infino.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Set the error flag.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Do nothing when LLM chain starts.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing when LLM chain ends.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Log the error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing when tool starts.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]
Do nothing when agent takes a specific action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing when tool ends.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing when tool outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Do nothing.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
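The Infino handler methods above follow a common callback pattern: most hooks are deliberate no-ops, and only a few (such as on_llm_end, which logs latency and token usage) do real work. Below is a minimal plain-Python sketch of that pattern. The class names are hypothetical and no LangChain or Infino dependency is used; it only illustrates the "no-op base, override what you need" structure.

```python
import time
from typing import Any, Dict, List, Optional


class NoOpCallbackHandler:
    """Every hook is a no-op, so subclasses override only the events
    they care about -- the same shape as BaseCallbackHandler."""

    def on_llm_start(self, serialized: Dict[str, Any],
                     prompts: List[str], **kwargs: Any) -> None:
        pass

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        pass

    def on_llm_end(self, response: Any, **kwargs: Any) -> None:
        pass

    def on_llm_error(self, error: BaseException, **kwargs: Any) -> None:
        pass


class LatencyLoggingHandler(NoOpCallbackHandler):
    """Overrides just two hooks to measure LLM call latency, mirroring
    how a handler like Infino's logs latency in on_llm_end."""

    def __init__(self) -> None:
        self.latencies: List[float] = []
        self._start: Optional[float] = None

    def on_llm_start(self, serialized, prompts, **kwargs) -> None:
        # Remember when the call began.
        self._start = time.monotonic()

    def on_llm_end(self, response, **kwargs) -> None:
        # Record elapsed wall-clock time for this LLM call.
        if self._start is not None:
            self.latencies.append(time.monotonic() - self._start)
            self._start = None
```

A subclass only pays for the hooks it implements; every other event falls through to the no-op base.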
class langchain.callbacks.MlflowCallbackHandler(name='langchainrun-%', experiment='langchain', tags={}, tracking_uri=None)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs metrics and artifacts to mlflow server.
Parameters
name (str) β Name of the run.
experiment (str) β Name of the experiment.
tags (dict) β Tags to be attached for the run.
tracking_uri (str) β MLflow tracking server uri.
Return type
None
This handler formats the input of each callback method with metadata about the
state of the LLM run, and appends the result to the list of records for both
the {method}_records and action records. It then logs the records to the
MLflow server.
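The `{method}_records` bookkeeping described above can be sketched in plain Python as follows. The class and key names here are illustrative assumptions, not the handler's actual internals: each callback builds a metadata-enriched record and files it both under its own `{method}_records` key and under a shared action-record list before anything is shipped to the tracking server.

```python
from collections import defaultdict
from typing import Any, Dict, List


class RecordingHandler:
    """Accumulates per-method and global action records, in the spirit
    of BaseMetadataCallbackHandler's {method}_records bookkeeping."""

    def __init__(self) -> None:
        self.records: Dict[str, List[Dict[str, Any]]] = defaultdict(list)
        self.step = 0

    def _record(self, method: str, payload: Dict[str, Any]) -> None:
        # Enrich the payload with run-state metadata (here just a step
        # counter and the callback name), then file it twice.
        self.step += 1
        record = {"step": self.step, "action": method, **payload}
        self.records[f"{method}_records"].append(record)
        self.records["action_records"].append(record)

    def on_llm_start(self, serialized: Dict[str, Any],
                     prompts: List[str], **kwargs: Any) -> None:
        for prompt in prompts:
            self._record("on_llm_start", {"prompt": prompt})
```

A real handler would then periodically flush `self.records` to the tracker (MLflow here, W&B for the analogous handler below).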
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, finish=False)[source]ο
Parameters
langchain_asset (Any) β
finish (bool) β
Return type
None
class langchain.callbacks.OpenAICallbackHandler[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that tracks OpenAI info.
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
total_cost: float = 0.0
property always_verbose: boolο
Whether to call verbose callbacks even if verbose is False.
on_llm_start(serialized, prompts, **kwargs)[source]ο
Print out the prompts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Print out the token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Collect token usage.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
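The counter attributes above are updated in on_llm_end from the token-usage dict the LLM returns. A plain-Python sketch of that accumulation logic follows; the per-1K-token prices are placeholder assumptions for illustration, not real OpenAI pricing, and the class is a stand-in rather than the actual OpenAICallbackHandler implementation.

```python
from typing import Dict


class TokenUsageTracker:
    """Accumulates token counts and an estimated cost across LLM calls,
    mirroring the OpenAICallbackHandler counters."""

    PROMPT_COST_PER_1K = 0.0015      # placeholder rate, not real pricing
    COMPLETION_COST_PER_1K = 0.002   # placeholder rate, not real pricing

    def __init__(self) -> None:
        self.total_tokens = 0
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.successful_requests = 0
        self.total_cost = 0.0

    def on_llm_end(self, token_usage: Dict[str, int]) -> None:
        # token_usage is the usage dict a chat-completion response carries.
        self.successful_requests += 1
        p = token_usage.get("prompt_tokens", 0)
        c = token_usage.get("completion_tokens", 0)
        self.prompt_tokens += p
        self.completion_tokens += c
        self.total_tokens += p + c
        self.total_cost += (p / 1000) * self.PROMPT_COST_PER_1K
        self.total_cost += (c / 1000) * self.COMPLETION_COST_PER_1K
```

Because the counters live on the handler instance, reading them after a run (as `get_openai_callback()` exposes below) gives cumulative totals across every request made while the handler was attached.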
class langchain.callbacks.StdOutCallbackHandler(color=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that prints to std out.
Parameters
color (Optional[str]) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Print out the prompts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Do nothing.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Print out that we are entering a chain.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Print out that we finished a chain.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
If not the final action, print out observation.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, color=None, end='', **kwargs)[source]ο
Run when agent ends.
Parameters
text (str) β
color (Optional[str]) β
end (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
class langchain.callbacks.StreamingStdOutCallbackHandler[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback handler for streaming. Only works with LLMs that support streaming.
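The essence of this handler is that on_llm_new_token writes each token to the output stream the moment it arrives and flushes immediately, so text appears live instead of after the full completion. A minimal stand-in (hypothetical class name, no LangChain dependency) showing that pattern:

```python
import sys
from typing import Any, TextIO


class StreamingWriter:
    """Writes each token to a stream as it arrives, flushing so output
    shows up immediately -- the StreamingStdOutCallbackHandler pattern."""

    def __init__(self, stream: TextIO = sys.stdout) -> None:
        self.stream = stream

    def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self.stream.write(token)
        self.stream.flush()  # no buffering: the user sees tokens live
```

Injecting a different stream (a file, an in-memory buffer) is also a convenient way to capture streamed output in tests.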
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run on arbitrary text.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
langchain.callbacks.StreamlitCallbackHandler(parent_container, *, max_thought_containers=4, expand_new_thoughts=True, collapse_completed_thoughts=True, thought_labeler=None)[source]ο
Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards
use with a LangChain Agent; it displays the Agentβs LLM and tool-usage βthoughtsβ
inside a series of Streamlit expanders.
Parameters
parent_container (DeltaGenerator) β The st.container that will contain all the Streamlit elements that the
Handler creates.
max_thought_containers (int) β The max number of completed LLM thought containers to show at once. When this
threshold is reached, a new thought will cause the oldest thoughts to be
collapsed into a βHistoryβ expander. Defaults to 4.
expand_new_thoughts (bool) β Each LLM βthoughtβ gets its own st.expander. This param controls whether that
expander is expanded by default. Defaults to True.
collapse_completed_thoughts (bool) β If True, LLM thought expanders will be collapsed when completed.
Defaults to True.
thought_labeler (Optional[LLMThoughtLabeler]) β An optional custom LLMThoughtLabeler instance. If unspecified, the handler
will use the default thought labeling logic. Defaults to None.
Returns
A new StreamlitCallbackHandler instance.
Note that this is an "auto-updating" API: if the installed version of Streamlit
has a more recent StreamlitCallbackHandler implementation, an instance of that
class will be used.
Return type
BaseCallbackHandler
class langchain.callbacks.LLMThoughtLabeler[source]ο
Bases: object
Generates markdown labels for LLMThought containers. Pass a custom
subclass of this to StreamlitCallbackHandler to override its default
labeling logic.
get_initial_label()[source]ο
Return the markdown label for a new LLMThought that doesnβt have
an associated tool yet.
Return type
str
get_tool_label(tool, is_complete)[source]ο
Return the label for an LLMThought that has an associated
tool.
Parameters
tool (langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord) β The toolβs ToolRecord
is_complete (bool) β True if the thought is complete; False if the thought
is still receiving input.
Return type
The markdown label for the thoughtβs container.
get_history_label()[source]ο
Return a markdown label for the special βhistoryβ container
that contains overflow thoughts.
Return type
str
get_final_agent_thought_label()[source]ο
Return the markdown label for the agentβs final thought -
the βNow I have the answerβ thought, that doesnβt involve
a tool.
Return type
str
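A custom labeler only needs to return markdown strings from the four methods above. The sketch below is a standalone class with the same method surface; it is not a real subclass (the actual base is `langchain.callbacks.LLMThoughtLabeler`, and the real `get_tool_label` receives a ToolRecord rather than a plain name, which is a simplification assumed here). The label strings themselves are arbitrary examples.

```python
class StatusThoughtLabeler:
    """Illustrative labeler with the same methods as LLMThoughtLabeler.
    A real implementation would subclass LLMThoughtLabeler and be passed
    via StreamlitCallbackHandler(..., thought_labeler=...)."""

    def get_initial_label(self) -> str:
        # Shown before the thought has picked a tool.
        return "**Thinking...**"

    def get_tool_label(self, tool_name: str, is_complete: bool) -> str:
        # Simplified: takes a name instead of a ToolRecord.
        status = "done" if is_complete else "running"
        return f"**{tool_name}** ({status})"

    def get_history_label(self) -> str:
        # Label for the overflow container of collapsed thoughts.
        return "**History**"

    def get_final_agent_thought_label(self) -> str:
        # Label for the tool-less "I have the answer" thought.
        return "**Complete!**"
```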
class langchain.callbacks.WandbCallbackHandler(job_type=None, project='langchain_callback_demo', entity=None, tags=None, group=None, name=None, notes=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Weights and Biases.
Parameters
job_type (str) β The type of job.
project (str) – The project to log to.
entity (str) β The entity to log to.
tags (list) β The tags to log.
group (str) β The group to log to.
name (str) β The name of the run.
notes (str) β The notes to log.
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics.
stream_logs (bool) β Whether to stream callback actions to W&B
Return type
None
This handler formats the input of each callback method with metadata about the
state of the LLM run, and appends the result to the list of records for both
the {method}_records and action records. It then logs the records to Weights &
Biases via the run.log() method.
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None)[source]ο
Flush the tracker and reset the session.
Parameters
langchain_asset (Any) β The langchain asset to save.
reset (bool) β Whether to reset the session.
finish (bool) β Whether to finish the run.
job_type (Optional[str]) β The job type.
project (Optional[str]) β The project.
entity (Optional[str]) β The entity.
tags (Optional[Sequence]) β The tags.
group (Optional[str]) β The group.
name (Optional[str]) β The name.
notes (Optional[str]) β The notes.
visualize (Optional[bool]) β Whether to visualize.
complexity_metrics (Optional[bool]) β Whether to compute complexity metrics.
Returns – None
Return type
None
class langchain.callbacks.WhyLabsCallbackHandler(logger)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
WhyLabs CallbackHandler.
Parameters
logger (Logger) β
on_llm_start(serialized, prompts, **kwargs)[source]ο
Pass the input prompts to the logger.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Pass the generated response to the logger.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Do nothing.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β | https://api.python.langchain.com/en/stable/modules/callbacks.html |
af8b1c7c6518-31 | color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
flush()[source]ο
Return type
None
close()[source]ο
Return type
None
classmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False)[source]ο
Instantiate whylogs Logger from params.
Parameters
api_key (Optional[str]) β WhyLabs API key. Optional because the preferred
way to specify the API key is with environment variable
WHYLABS_API_KEY.
org_id (Optional[str]) β WhyLabs organization id to write profiles to.
If not set must be specified in environment variable
WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]) β The model or dataset this callback is gathering
telemetry for. If not set must be specified in environment variable
WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool) β If True will initialize a model to perform
sentiment analysis compound score. Defaults to False and will not gather
this metric.
toxicity (bool) β If True will initialize a model to score
toxicity. Defaults to False and will not gather this metric.
themes (bool) – If True will initialize a model to calculate
distance to configured themes. Defaults to False and will not gather this
metric.
Return type
Logger
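The from_params precedence described above (explicit argument wins, otherwise fall back to an environment variable, otherwise fail) is a common configuration pattern. A small plain-Python sketch of that resolution logic, with a hypothetical helper name:

```python
import os
from typing import Optional


def resolve_setting(value: Optional[str], env_var: str) -> str:
    """Return the explicit value if given, else the named environment
    variable -- mirroring how WHYLABS_API_KEY / WHYLABS_DEFAULT_ORG_ID /
    WHYLABS_DEFAULT_DATASET_ID back the from_params arguments."""
    if value is not None:
        return value
    env_value = os.environ.get(env_var)
    if env_value is None:
        raise ValueError(f"Set {env_var} or pass the value explicitly.")
    return env_value
```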
langchain.callbacks.get_openai_callback()[source]ο
Get the OpenAI callback handler in a context manager,
which conveniently exposes token and cost information.
Returns
The OpenAI callback handler.
Return type
OpenAICallbackHandler
Example
>>> with get_openai_callback() as cb:
... # Use the OpenAI callback handler
langchain.callbacks.tracing_enabled(session_name='default')[source]ο
Get the Deprecated LangChainTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to βdefaultβ.
Returns
The LangChainTracer session.
Return type
TracerSessionV1
Example
>>> with tracing_enabled() as session:
... # Use the LangChainTracer session
langchain.callbacks.wandb_tracing_enabled(session_name='default')[source]ο
Get the WandbTracer in a context manager.
Parameters
session_name (str, optional) β The name of the session.
Defaults to βdefaultβ.
Returns
None
Return type
Generator[None, None, None]
Example
>>> with wandb_tracing_enabled() as session:
... # Use the WandbTracer session
Agents
Interface for agents.
class langchain.agents.Agent(*, llm_chain, output_parser, allowed_tools=None)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
Class responsible for calling the language model and deciding the action.
This is driven by an LLMChain. The prompt in the LLMChain MUST include
a variable called βagent_scratchpadβ where the agent can put its
intermediary work.
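The agent_scratchpad variable mentioned above is where the agent's prior (action, observation) pairs get rendered as text before the next LLM call, so the model can see its own intermediary work. A plain-Python sketch of that construction (the prefix strings are assumptions for illustration; the real values come from each Agent subclass's observation_prefix and llm_prefix properties):

```python
from typing import List, Tuple

OBSERVATION_PREFIX = "Observation: "  # assumed; provided by the subclass
LLM_PREFIX = "Thought: "              # assumed; provided by the subclass


def construct_scratchpad(intermediate_steps: List[Tuple[str, str]]) -> str:
    """Render prior (action_log, observation) pairs into the text that
    fills the agent_scratchpad prompt variable."""
    thoughts = ""
    for action_log, observation in intermediate_steps:
        thoughts += action_log  # the LLM's own prior output
        thoughts += f"\n{OBSERVATION_PREFIX}{observation}\n{LLM_PREFIX}"
    return thoughts
```

The trailing LLM prefix cues the model to continue with its next thought.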
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
attribute allowed_tools: Optional[List[str]] = Noneο
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
abstract classmethod create_prompt(tools)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
Return type
langchain.prompts.base.BasePromptTemplate
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
get_full_inputs(intermediate_steps, **kwargs)[source]ο
Create the full inputs for the LLMChain from intermediate steps.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
Dict[str, Any]
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
tool_run_logging_kwargs()[source]ο
Return type
Dict
abstract property llm_prefix: strο
Prefix to append the LLM call with.
abstract property observation_prefix: strο
Prefix to append the observation with.
property return_values: List[str]ο
Return values of the agent.
class langchain.agents.AgentExecutor(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.chains.base.Chain
Consists of an agent using tools.
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
attribute agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]ο
The agent to run for creating a plan and determining actions
to take at each step of the execution loop.
attribute early_stopping_method: str = 'force'ο
The method to use for early stopping if the agent never
returns AgentFinish. Either "force" or "generate".
"force" returns a string saying that it stopped because it met a time or iteration limit.
"generate" calls the agent's LLM chain one final time to generate a final answer based on the previous steps.
attribute handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = Falseο
How to handle errors raised by the agent's output parser. Defaults to False,
which raises the error.
If True, the error will be sent back to the LLM as an observation.
If a string, the string itself will be sent to the LLM as an observation.
If a callable function, the function will be called with the exception
as an argument, and the result of that function will be passed to the agent
as an observation.
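That four-way dispatch on the handle_parsing_errors value can be sketched directly in plain Python (hypothetical function name; the real logic lives inside AgentExecutor's step handling):

```python
from typing import Callable, Union


def observation_for_parse_error(
    handle: Union[bool, str, Callable[[Exception], str]],
    error: Exception,
) -> str:
    """Turn an output-parser exception into an observation string,
    following the handle_parsing_errors semantics: False re-raises,
    True uses the error text, a string is used verbatim, and a
    callable is invoked with the exception."""
    if handle is False:
        raise error
    if handle is True:
        return str(error)
    if isinstance(handle, str):
        return handle
    return handle(error)
```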
attribute max_execution_time: Optional[float] = Noneο
The maximum amount of wall clock time to spend in the execution
loop.
attribute max_iterations: Optional[int] = 15ο
The maximum number of steps to take before ending the execution
loop.
Setting to None could lead to an infinite loop.
attribute return_intermediate_steps: bool = Falseο
Whether to return the agentβs trajectory of intermediate steps
at the end in addition to the final output.
attribute tools: Sequence[BaseTool] [Required]ο
The valid tools the agent can call.
classmethod from_agent_and_tools(agent, tools, callback_manager=None, **kwargs)[source]ο
Create from agent and tools.
Parameters
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
kwargs (Any) β
Return type
langchain.agents.agent.AgentExecutor
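Conceptually, the executor repeats plan → run tool → record observation until the agent signals it is finished or a limit is hit. The plain-Python sketch below shows that loop shape; the function name, the ("finish", answer) convention for plan results, and the canned stop message are illustrative assumptions, not the actual AgentExecutor internals.

```python
from typing import Callable, Dict, List, Optional, Tuple


def run_agent_loop(
    plan: Callable[[List[Tuple[str, str]]], Tuple[str, str]],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: Optional[int] = 15,
) -> str:
    """Toy executor loop: plan() returns either ("finish", answer) or
    (tool_name, tool_input). Each tool result is appended to the
    intermediate steps that feed the next plan() call."""
    intermediate_steps: List[Tuple[str, str]] = []
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        iterations += 1
        name, payload = plan(intermediate_steps)
        if name == "finish":
            return payload
        observation = tools[name](payload)
        intermediate_steps.append((f"{name}({payload})", observation))
    # "force"-style early stopping: give up with a canned answer.
    return "Agent stopped due to iteration limit or time limit."
```

Note how max_iterations=None removes the bound, which is why the docs above warn that it could lead to an infinite loop.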
lookup_tool(name)[source]ο
Lookup tool by name.
Parameters
name (str) β
Return type
langchain.tools.base.BaseTool
save(file_path)[source]ο
Raise error - saving not supported for Agent Executors.
Parameters
file_path (Union[pathlib.Path, str]) β
Return type
None
save_agent(file_path)[source]ο
Save the underlying agent.
Parameters
file_path (Union[pathlib.Path, str]) β
Return type
None
class langchain.agents.AgentOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser
Return type
None
abstract parse(text)[source]ο
Parse text into agent action/finish.
Parameters
text (str) β
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
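A concrete parse() typically scans the LLM text for either a final answer or a tool invocation in the conventional ReAct format. The sketch below is a standalone function returning plain tuples instead of AgentAction/AgentFinish objects; the "Final Answer:" / "Action:" / "Action Input:" markers are the usual ReAct conventions, assumed here for illustration.

```python
import re
from typing import Tuple


def parse_agent_output(text: str) -> Tuple[str, str, str]:
    """Return ("finish", "", answer) for a final answer, or
    ("action", tool, tool_input) for a tool call; raise on anything
    else, as an AgentOutputParser would raise OutputParserException."""
    if "Final Answer:" in text:
        return ("finish", "", text.split("Final Answer:")[-1].strip())
    match = re.search(r"Action:\s*(.+?)\s*\nAction Input:\s*(.+)",
                      text, re.DOTALL)
    if not match:
        raise ValueError(f"Could not parse agent output: {text!r}")
    return ("action", match.group(1).strip(), match.group(2).strip())
```

Raising on unparseable text is what the handle_parsing_errors option on AgentExecutor exists to intercept.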
class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]ο
Bases: str, enum.Enum
Enumeration of the agent types.
ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'ο
REACT_DOCSTORE = 'react-docstore'ο
SELF_ASK_WITH_SEARCH = 'self-ask-with-search'
CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'ο
CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'ο
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'ο
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'ο
OPENAI_FUNCTIONS = 'openai-functions'ο
OPENAI_MULTI_FUNCTIONS = 'openai-multi-functions'ο
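Because AgentType mixes in str, each member compares equal to its string value, so APIs can accept either the enum member or the raw string. A standalone sketch of that pattern (hypothetical class name, only two members shown):

```python
from enum import Enum


class AgentKind(str, Enum):
    """str+Enum mixin: members ARE strings, so they compare equal to
    their values and can be looked up from a raw string."""

    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"
    SELF_ASK_WITH_SEARCH = "self-ask-with-search"
```

This is why passing e.g. the plain string 'zero-shot-react-description' where an AgentType is expected works interchangeably with the enum member itself.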
class langchain.agents.BaseMultiActionAgent[source]ο
Bases: pydantic.main.BaseModel
Base Agent class.
Return type
None
abstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Actions specifying what tool to use.
Return type
Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
abstract plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Actions specifying what tool to use.
Return type
Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
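When the executor hits its iteration or time limit, the agent must still hand back an AgentFinish. A sketch of the "force" strategy, under the assumption (hedged) that it ignores the steps taken and returns a canned stopped message; the NamedTuple is a simplified stand-in for langchain.schema.AgentFinish:

```python
from typing import NamedTuple

class AgentFinish(NamedTuple):
    return_values: dict
    log: str

def return_stopped_response(early_stopping_method, intermediate_steps, **kwargs):
    """Produce a final response for an agent stopped before finishing."""
    if early_stopping_method == "force":
        # "force": do not consult the steps, just report that we stopped.
        return AgentFinish(
            {"output": "Agent stopped due to iteration limit or time limit."}, ""
        )
    raise ValueError(f"Got unsupported early_stopping_method: {early_stopping_method}")

stopped = return_stopped_response("force", intermediate_steps=[])
```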
save(file_path)[source]ο
Save the agent.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the agent to.
Return type
None
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs()[source]ο
Return type
Dict
property return_values: List[str]ο
Return values of the agent.
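The plan/observe contract above is what drives the executor's main loop: call plan, execute the returned action's tool, append the (action, observation) pair to intermediate_steps, and repeat until an AgentFinish comes back. A stdlib sketch of that loop with a scripted agent (every class and function here is a simplified stand-in, not LangChain code):

```python
from typing import NamedTuple

class AgentAction(NamedTuple):
    tool: str
    tool_input: str

class AgentFinish(NamedTuple):
    return_values: dict

def scripted_plan(intermediate_steps, **kwargs):
    # Stand-in for plan(): use the calculator once, then finish.
    if not intermediate_steps:
        return AgentAction("Calculator", "2 + 2")
    last_observation = intermediate_steps[-1][1]
    return AgentFinish({"output": last_observation})

def run(plan, tools, max_iterations=15, **inputs):
    """Minimal executor loop: plan, act, observe, until AgentFinish."""
    intermediate_steps = []
    for _ in range(max_iterations):
        decision = plan(intermediate_steps, **inputs)
        if isinstance(decision, AgentFinish):
            return decision.return_values
        observation = tools[decision.tool](decision.tool_input)
        intermediate_steps.append((decision, observation))
    raise RuntimeError("hit max_iterations")

tools = {"Calculator": lambda expr: str(eval(expr))}  # toy tool for the sketch
result = run(scripted_plan, tools, input="what is 2 + 2?")
```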
class langchain.agents.BaseSingleActionAgent[source]ο
Bases: pydantic.main.BaseModel
Base Agent class.
Return type
None
abstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_llm_and_tools(llm, tools, callback_manager=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
kwargs (Any) β
Return type
langchain.agents.agent.BaseSingleActionAgent
get_allowed_tools()[source]ο
Return type
Optional[List[str]]
abstract plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source]ο
Return response when agent has been stopped due to max iterations.
Parameters
early_stopping_method (str) β
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β
kwargs (Any) β
Return type
langchain.schema.AgentFinish
save(file_path)[source]ο
Save the agent.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the agent to.
Return type
None
Example:
.. code-block:: python
# If working with agent executor
agent.agent.save(file_path="path/agent.yaml")
tool_run_logging_kwargs()[source]ο
Return type
Dict
property return_values: List[str]ο
Return values of the agent.
class langchain.agents.ConversationalAgent(*, llm_chain, output_parser=None, allowed_tools=None, ai_prefix='AI')[source]ο
Bases: langchain.agents.agent.Agent
An agent designed to hold a conversation in addition to using tools.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
ai_prefix (str) β
Return type
None
attribute ai_prefix: str = 'AI'ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
classmethod create_prompt(tools, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None)[source]ο
Create prompt in the style of the zero shot agent.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β List of tools the agent will have access to, used to format the
prompt.
prefix (str) β String to put before the list of tools.
suffix (str) β String to put after the list of tools.
ai_prefix (str) β String to use before AI output.
human_prefix (str) β String to use before human output.
input_variables (Optional[List[str]]) β List of input variables the final prompt will expect.
format_instructions (str) β
Returns
A PromptTemplate with the template assembled from the pieces here.
Return type
langchain.prompts.prompt.PromptTemplate
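create_prompt stitches these pieces together around a rendered tool list. A sketch of the assembly (the exact joining and whitespace are assumptions for illustration; only the ordering prefix, tool descriptions, format instructions, suffix is taken from the parameters above):

```python
def create_prompt_text(tools, prefix, suffix, format_instructions, ai_prefix="AI"):
    """Assemble a prompt template string from the documented pieces."""
    tool_strings = "\n".join(f"{name}: {description}" for name, description in tools)
    tool_names = ", ".join(name for name, _ in tools)
    # Substitute the tool names and AI prefix into the format instructions.
    format_instructions = format_instructions.format(
        tool_names=tool_names, ai_prefix=ai_prefix
    )
    return "\n\n".join([prefix, tool_strings, format_instructions, suffix])

template = create_prompt_text(
    tools=[("Search", "useful for current events")],
    prefix="Assistant has access to the following tools:",
    suffix="Begin!\n\nNew input: {input}\n{agent_scratchpad}",
    format_instructions="Use one of [{tool_names}] or answer as {ai_prefix}.",
)
```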
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
ai_prefix (str) β
human_prefix (str) β
input_variables (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to append the llm call with.
property observation_prefix: strο
Prefix to append the observation with.
class langchain.agents.ConversationalChatAgent(*, llm_chain, output_parser=None, allowed_tools=None, template_tool_response="TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.")[source]ο
Bases: langchain.agents.agent.Agent
An agent designed to hold a conversation in addition to using tools.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
template_tool_response (str) β
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
attribute template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."ο | https://api.python.langchain.com/en/stable/modules/agents.html |
classmethod create_prompt(tools, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, output_parser=None)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
system_message (str) β
human_message (str) β
input_variables (Optional[List[str]]) β
output_parser (Optional[langchain.schema.BaseOutputParser]) β
Return type
langchain.prompts.base.BasePromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
system_message (str) β
human_message (str) β
input_variables (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.agents.agent.Agent
property llm_prefix: strο
Prefix to append the llm call with.
property observation_prefix: strο
Prefix to append the observation with.
class langchain.agents.LLMSingleActionAgent(*, llm_chain, output_parser, stop)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
stop (List[str]) β
Return type
None
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
attribute output_parser: langchain.agents.agent.AgentOutputParser [Required]ο
attribute stop: List[str] [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
dict(**kwargs)[source]ο
Return dictionary representation of agent.
Parameters
kwargs (Any) β
Return type
Dict
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Callbacks to run.
**kwargs β User inputs.
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
tool_run_logging_kwargs()[source]ο
Return type
Dict
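The stop list is what keeps the LLM from hallucinating its own observation: generation is cut at the first stop sequence before the output parser runs. A stdlib sketch of that truncation (the `\nObservation:` stop string follows the ReAct convention and is an assumption for illustration):

```python
def truncate_at_stop(text, stop):
    """Cut generated text at the earliest occurrence of any stop sequence."""
    cut = len(text)
    for sequence in stop:
        index = text.find(sequence)
        if index != -1:
            cut = min(cut, index)
    return text[:cut]

generation = "Action: Search\nAction Input: langchain\nObservation: fabricated result"
truncated = truncate_at_stop(generation, stop=["\nObservation:"])
```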
class langchain.agents.MRKLChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that implements the MRKL system.
Example
from langchain import OpenAI, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
chains = [...]
mrkl = MRKLChain.from_chains(llm=llm, chains=chains)
Parameters
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
classmethod from_chains(llm, chains, **kwargs)[source]ο
User-friendly way to initialize the MRKL chain.
This is intended to be an easy way to get up and running with the
MRKL chain.
Parameters
llm (langchain.base_language.BaseLanguageModel) β The LLM to use as the agent LLM.
chains (List[langchain.agents.mrkl.base.ChainConfig]) β The chains the MRKL system has access to.
**kwargs β parameters to be passed to initialization.
kwargs (Any) β
Returns
An initialized MRKL chain.
Return type
langchain.agents.agent.AgentExecutor
Example
from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain
from langchain.chains.mrkl.base import ChainConfig
llm = OpenAI(temperature=0)
search = SerpAPIWrapper()
llm_math_chain = LLMMathChain(llm=llm)
chains = [
ChainConfig(
action_name = "Search",
action=search.search,
action_description="useful for searching"
),
ChainConfig(
action_name="Calculator",
action=llm_math_chain.run,
action_description="useful for doing math"
)
]
mrkl = MRKLChain.from_chains(llm, chains)
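from_chains is convenience sugar: each ChainConfig row becomes a named tool. A sketch of that conversion using namedtuples in place of langchain's ChainConfig and Tool (both records are simplified stand-ins):

```python
from typing import Callable, NamedTuple

class ChainConfig(NamedTuple):
    action_name: str
    action: Callable[[str], str]
    action_description: str

class Tool(NamedTuple):
    name: str
    func: Callable[[str], str]
    description: str

def tools_from_chains(chains):
    # One tool per chain config: name, callable, and description carry over.
    return [
        Tool(name=c.action_name, func=c.action, description=c.action_description)
        for c in chains
    ]

chains = [
    ChainConfig("Calculator", lambda expr: str(eval(expr)), "useful for doing math"),
]
tools = tools_from_chains(chains)
```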
class langchain.agents.OpenAIFunctionsAgent(*, llm, tools, prompt)[source]ο
Bases: langchain.agents.agent.BaseSingleActionAgent
An agent driven by OpenAI's function-calling API.
Parameters
llm (langchain.base_language.BaseLanguageModel) β This should be an instance of ChatOpenAI, specifically a model
that supports using functions.
tools (Sequence[langchain.tools.base.BaseTool]) β The tools this agent has access to.
prompt (langchain.prompts.base.BasePromptTemplate) β The prompt for this agent, should support agent_scratchpad as one
of the variables. For an easy way to construct this prompt, use
OpenAIFunctionsAgent.create_prompt(...)
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
attribute prompt: langchain.prompts.base.BasePromptTemplate [Required]ο
attribute tools: Sequence[langchain.tools.base.BaseTool] [Required]ο
async aplan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date,
along with observations
**kwargs β User inputs.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
classmethod create_prompt(system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages=None)[source]ο
Create prompt for this agent.
Parameters
system_message (Optional[langchain.schema.SystemMessage]) β Message to use as the system message that will be the
first in the prompt.
extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) β Prompt messages that will be placed between the
system message and the new human input.
Returns
A prompt template to pass into this agent.
Return type
langchain.prompts.base.BasePromptTemplate
classmethod from_llm_and_tools(llm, tools, callback_manager=None, extra_prompt_messages=None, system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tools (Sequence[langchain.tools.base.BaseTool]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) β
system_message (Optional[langchain.schema.SystemMessage]) β
kwargs (Any) β
Return type
langchain.agents.agent.BaseSingleActionAgent
get_allowed_tools()[source]ο
Get allowed tools.
Return type
List[str]
plan(intermediate_steps, callbacks=None, **kwargs)[source]ο
Given input, decide what to do.
Parameters
intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) β Steps the LLM has taken to date, along with observations
**kwargs β User inputs.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Returns
Action specifying what tool to use.
Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish]
property functions: List[dict]ο
property input_keys: List[str]ο
Get input keys. Input refers to user input here.
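With the functions API, action selection moves out of free text: the model either returns a function_call payload (run that tool) or plain content (finish). A sketch of that branch on a response dict shaped like the OpenAI chat format (the message shape and tool name are assumptions for illustration):

```python
import json

def decide(message):
    """Map a chat-completion message to an action or finish decision."""
    function_call = message.get("function_call")
    if function_call is not None:
        # Tool call: the arguments arrive as a JSON string.
        arguments = json.loads(function_call["arguments"])
        return ("action", function_call["name"], arguments)
    # No function call: the content is the final answer.
    return ("finish", message["content"])

tool_turn = decide({
    "role": "assistant",
    "content": None,
    "function_call": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
})
final_turn = decide({"role": "assistant", "content": "It is sunny."})
```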
class langchain.agents.ReActChain(llm, docstore, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that implements the ReAct paper.
Example
from langchain import ReActChain, OpenAI
from langchain.docstore import Wikipedia
react = ReActChain(llm=OpenAI(), docstore=Wikipedia())
Parameters
llm (langchain.base_language.BaseLanguageModel) β
docstore (langchain.docstore.base.Docstore) β
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
class langchain.agents.ReActTextWorldAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]ο
Bases: langchain.agents.react.base.ReActDocstoreAgent
Agent for the ReAct TextWorld chain.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
classmethod create_prompt(tools)[source]ο
Return default prompt.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
Return type
langchain.prompts.base.BasePromptTemplate
class langchain.agents.SelfAskWithSearchChain(llm, search_chain, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source]ο
Bases: langchain.agents.agent.AgentExecutor
Chain that implements self-ask with search.
Example
from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper
search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain)
Parameters
llm (langchain.base_language.BaseLanguageModel) β
search_chain (Union[langchain.utilities.google_serper.GoogleSerperAPIWrapper, langchain.utilities.serpapi.SerpAPIWrapper]) β
memory (Optional[langchain.schema.BaseMemory]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
tags (Optional[List[str]]) β
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) β
tools (Sequence[langchain.tools.base.BaseTool]) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) β
Return type
None
class langchain.agents.StructuredChatAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]ο
Bases: langchain.agents.agent.Agent
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
output_parser (langchain.agents.agent.AgentOutputParser) β
allowed_tools (Optional[List[str]]) β
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]ο
classmethod create_prompt(tools, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None)[source]ο
Create a prompt for this class.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β
prefix (str) β
suffix (str) β
human_message_template (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) β
Return type
langchain.prompts.base.BasePromptTemplate
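The format above means the output parser has to pull a JSON blob out of a fenced snippet and branch on its "action" key. A regex sketch of that step (a simplified stand-in, not langchain's actual structured-chat parser):

```python
import json
import re

def parse_json_blob(text):
    """Extract the fenced $JSON_BLOB and turn it into a tool or finish decision."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    if match is None:
        raise ValueError(f"Could not find a JSON action blob in: {text!r}")
    blob = json.loads(match.group(1))
    if blob["action"] == "Final Answer":
        return ("finish", blob["action_input"])
    return ("tool", blob["action"], blob["action_input"])

reply = 'Action:\n```\n{"action": "Search", "action_input": "latest news"}\n```'
parsed = parse_json_blob(reply)
```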
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\n "action": $TOOL_NAME,\n "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\n "action": "Final Answer",\n "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
tools (Sequence[langchain.tools.base.BaseTool]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) –
prefix (str) –
suffix (str) –
human_message_template (str) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) –
kwargs (Any) –
Return type
langchain.agents.agent.Agent
property llm_prefix: str¶
Prefix to append the llm call with.
property observation_prefix: str¶
Prefix to append the observation with.
class langchain.agents.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source]¶
Bases: langchain.tools.base.BaseTool
Tool that takes in a function or coroutine directly.
Parameters
name (str) –
func (Callable[[...], str]) –
description (str) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
return_direct (bool) –
verbose (bool) –
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) –
coroutine (Optional[Callable[[...], Awaitable[str]]]) –
Return type
None
attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = None¶
The asynchronous version of the function.
attribute description: str = ''¶
Used to tell the model how/when/why to use the tool.
You can provide few-shot examples as a part of the description.
attribute func: Callable[[...], str] [Required]¶
The function to run when the tool is called.
classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source]¶
Initialize a tool from a function.
Parameters
func (Callable) –
name (str) –
description (str) –
return_direct (bool) –
args_schema (Optional[Type[pydantic.main.BaseModel]]) –
kwargs (Any) –
Return type
langchain.tools.base.Tool
property args: dict¶
The tool's input arguments.
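To illustrate the shape of this interface, here is a minimal, self-contained sketch of a Tool-like wrapper (plain Python, not the actual LangChain class — callbacks, schemas, and error handling are omitted, and `MiniTool` is a hypothetical name):

```python
from typing import Awaitable, Callable, Optional


class MiniTool:
    """Illustrative stand-in for langchain.agents.Tool."""

    def __init__(self, name: str, func: Callable[..., str], description: str,
                 coroutine: Optional[Callable[..., Awaitable[str]]] = None,
                 return_direct: bool = False) -> None:
        self.name = name
        self.func = func                # synchronous implementation
        self.coroutine = coroutine      # optional async counterpart
        self.description = description  # tells the model how/when/why to use the tool
        self.return_direct = return_direct

    def run(self, tool_input: str) -> str:
        return self.func(tool_input)

    @classmethod
    def from_function(cls, func: Callable[..., str], name: str,
                      description: str, **kwargs) -> "MiniTool":
        return cls(name=name, func=func, description=description, **kwargs)


search = MiniTool.from_function(
    func=lambda q: f"results for {q!r}",
    name="search",
    description="Useful for answering questions about current events.",
)
print(search.run("weather"))  # results for 'weather'
```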
class langchain.agents.ZeroShotAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source]¶
Bases: langchain.agents.agent.Agent
Agent for the MRKL chain.
Parameters
llm_chain (langchain.chains.llm.LLMChain) –
output_parser (langchain.agents.agent.AgentOutputParser) –
allowed_tools (Optional[List[str]]) –
Return type
None
attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]¶
classmethod create_prompt(tools, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None)[source]ο
Create a prompt in the style of the zero-shot agent.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) – List of tools the agent will have access to, used to format the prompt.
prefix (str) – String to put before the list of tools.
suffix (str) – String to put after the list of tools.
input_variables (Optional[List[str]]) – List of input variables the final prompt will expect.
format_instructions (str) –
Returns
A PromptTemplate with the template assembled from the pieces here.
Return type
langchain.prompts.prompt.PromptTemplate
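The create_prompt classmethod assembles its template from the pieces documented above: the prefix, one `name: description` line per tool, the format instructions (with `{tool_names}` filled in), and the suffix. A simplified sketch of that assembly (not the actual LangChain code; `create_prompt_template` is a hypothetical helper):

```python
from typing import List, Tuple


def create_prompt_template(
    tools: List[Tuple[str, str]],  # (name, description) pairs
    prefix: str,
    suffix: str,
    format_instructions: str,
) -> str:
    """Assemble a zero-shot-style prompt template from its pieces."""
    tool_strings = "\n".join(f"{name}: {desc}" for name, desc in tools)
    tool_names = ", ".join(name for name, _ in tools)
    instructions = format_instructions.format(tool_names=tool_names)
    # The remaining {input} / {agent_scratchpad} slots are filled at runtime.
    return "\n\n".join([prefix, tool_strings, instructions, suffix])


template = create_prompt_template(
    tools=[("search", "look things up"), ("calculator", "do math")],
    prefix="Answer the following questions as best you can. You have access to the following tools:",
    suffix="Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}",
    format_instructions="Action: the action to take, should be one of [{tool_names}]",
)
print(template)
```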
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, **kwargs)[source]ο
Construct an agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
tools (Sequence[langchain.tools.base.BaseTool]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) –
prefix (str) –
suffix (str) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
kwargs (Any) –
Return type
langchain.agents.agent.Agent
property llm_prefix: str¶
Prefix to append the llm call with.
property observation_prefix: str¶
Prefix to append the observation with.
langchain.agents.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]¶
Create a CSV agent by loading the file into a dataframe and using the pandas agent.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
path (Union[str, List[str]]) –
pandas_kwargs (Optional[dict]) –
kwargs (Any) –
Return type
langchain.agents.agent.AgentExecutor
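create_csv_agent is a thin wrapper: it reads the file(s) into a dataframe and delegates to create_pandas_dataframe_agent. The delegation pattern looks roughly like this (an illustrative stdlib sketch, not the real implementation — the real one uses pandas.read_csv, and both function names here are hypothetical stand-ins):

```python
import csv
from typing import Any, Callable, List, Union


def create_table_agent(table: List[dict]) -> Callable[[str], str]:
    """Hypothetical stand-in for create_pandas_dataframe_agent."""
    def agent(question: str) -> str:
        return f"{len(table)} rows available to answer: {question}"
    return agent


def create_csv_agent_sketch(path_or_paths: Union[str, List[str]],
                            **csv_kwargs: Any) -> Callable[[str], str]:
    """Read one or more CSV files, then delegate to the table agent."""
    paths = [path_or_paths] if isinstance(path_or_paths, str) else path_or_paths
    rows: List[dict] = []
    for p in paths:
        with open(p, newline="") as f:
            rows.extend(csv.DictReader(f, **csv_kwargs))
    return create_table_agent(rows)
```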
langchain.agents.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a JSON agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (str) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct an OpenAPI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (str) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
max_iterations (Optional[int]) –
max_execution_time (Optional[float]) –
early_stopping_method (str) –
verbose (bool) –
return_intermediate_steps (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]¶
Construct a pandas agent from an LLM and dataframe.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
df (Any) –
agent_type (langchain.agents.agent_types.AgentType) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (Optional[str]) –
suffix (Optional[str]) –
input_variables (Optional[List[str]]) –
verbose (bool) –
return_intermediate_steps (bool) –
max_iterations (Optional[int]) –
max_execution_time (Optional[float]) –
early_stopping_method (str) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
include_df_in_prompt (Optional[bool]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a Power BI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) –
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (str) –
format_instructions (str) –
examples (Optional[str]) –
input_variables (Optional[List[str]]) –
top_k (int) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from a chat LLM and tools.
If you supply only a toolkit and no Power BI dataset, the same LLM is used for both.
Parameters
llm (langchain.chat_models.base.BaseChatModel) –
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) –
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) –
prefix (str) –
suffix (str) –
examples (Optional[str]) –
input_variables (Optional[List[str]]) –
memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) –
top_k (int) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]¶
Construct a Spark agent from an LLM and dataframe.
Parameters
llm (langchain.llms.base.BaseLLM) –
df (Any) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (str) –
input_variables (Optional[List[str]]) –
verbose (bool) –
return_intermediate_steps (bool) –
max_iterations (Optional[int]) –
max_execution_time (Optional[float]) –
early_stopping_method (str) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a Spark SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (str) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
top_k (int) –
max_iterations (Optional[int]) –
max_execution_time (Optional[float]) –
early_stopping_method (str) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) –
agent_type (langchain.agents.agent_types.AgentType) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
suffix (Optional[str]) –
format_instructions (str) –
input_variables (Optional[List[str]]) –
top_k (int) –
max_iterations (Optional[int]) –
max_execution_time (Optional[float]) –
early_stopping_method (str) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a vectorstore agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]¶
Construct a vectorstore router agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) –
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) –
prefix (str) –
verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) –
kwargs (Dict[str, Any]) –
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.get_all_tool_names()[source]¶
Get a list of all possible tool names.
Return type
List[str]
langchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, **kwargs)[source]¶
Load an agent executor given tools and an LLM.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) – List of tools this agent has access to.
llm (langchain.base_language.BaseLanguageModel) – Language model to use as the agent.
agent (Optional[langchain.agents.agent_types.AgentType]) – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – CallbackManager to use. The global callback manager is used if not provided. Defaults to None.
agent_path (Optional[str]) – Path to serialized agent to use.
agent_kwargs (Optional[dict]) – Additional keyword arguments to pass to the underlying agent.
tags (Optional[Sequence[str]]) – Tags to apply to the traced runs.
**kwargs – Additional keyword arguments passed to the agent executor.
kwargs (Any) –
Returns
An agent executor
Return type
langchain.agents.agent.AgentExecutor
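initialize_agent wires tools and an LLM into an AgentExecutor, which repeatedly asks the model for the next step, parses it, runs the chosen tool, and feeds the observation back until a final answer appears. A stripped-down sketch of that loop (illustrative only — the real executor also handles callbacks, output parsers, and richer early stopping; `parse_step` and `run_agent` are hypothetical names):

```python
import re
from typing import Callable, Dict, Tuple


def parse_step(llm_output: str) -> Tuple[str, str]:
    """Extract (action, action_input), or ('Final Answer', text), from ReAct-style output."""
    if "Final Answer:" in llm_output:
        return "Final Answer", llm_output.split("Final Answer:")[-1].strip()
    match = re.search(r"Action: (.*?)[\n]*Action Input: (.*)", llm_output, re.DOTALL)
    if not match:
        raise ValueError(f"Could not parse LLM output: {llm_output!r}")
    return match.group(1).strip(), match.group(2).strip()


def run_agent(llm: Callable[[str], str], tools: Dict[str, Callable[[str], str]],
              question: str, max_iterations: int = 15) -> str:
    scratchpad = ""
    for _ in range(max_iterations):
        output = llm(f"Question: {question}\n{scratchpad}")
        action, action_input = parse_step(output)
        if action == "Final Answer":
            return action_input
        observation = tools[action](action_input)  # run the chosen tool
        scratchpad += f"{output}\nObservation: {observation}\n"
    return "Agent stopped due to iteration limit."  # early_stopping_method='force'


# A scripted fake LLM: first it picks the tool, then it answers.
responses = iter([
    "Thought: I should search.\nAction: search\nAction Input: weather",
    "Thought: I know now.\nFinal Answer: sunny",
])
answer = run_agent(lambda prompt: next(responses),
                   {"search": lambda q: "it is sunny"}, "What is the weather?")
print(answer)  # sunny
```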
langchain.agents.load_agent(path, **kwargs)[source]¶
Unified method for loading an agent from LangChainHub or the local filesystem.
Parameters
path (Union[str, pathlib.Path]) –
kwargs (Any) –
Return type
Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]
langchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source]¶
Loads a tool from the HuggingFace Hub.
Parameters
task_or_repo_id (str) – Task or model repo id.
model_repo_id (Optional[str]) – Optional model repo id.
token (Optional[str]) – Optional token.
remote (bool) – Optional remote. Defaults to False.
**kwargs –
kwargs (Any) –
Returns
A tool.
Return type
langchain.tools.base.BaseTool
langchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source]¶
Load tools based on their name.
Parameters
tool_names (List[str]) – name of tools to load.
llm (Optional[langchain.base_language.BaseLanguageModel]) – Optional language model, may be needed to initialize certain tools.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Optional callback manager or list of callback handlers. If not provided, the default global callback manager will be used.
kwargs (Any) –
Returns
List of tools.
Return type
List[langchain.tools.base.BaseTool]
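load_tools resolves each name through an internal registry of tool factories, and names that need an LLM raise if none is supplied. A hypothetical miniature of that lookup (the real registries and tool names live inside langchain.agents; everything here is an illustrative stand-in):

```python
from typing import Callable, Dict, List, Optional

# Hypothetical registries; the real ones map names like "serpapi" or "llm-math".
_BASE_TOOLS: Dict[str, Callable[[], str]] = {"echo": lambda: "echo-tool"}
_LLM_TOOLS: Dict[str, Callable[[object], str]] = {
    "llm-math": lambda llm: f"math-tool({llm})",
}


def load_tools_sketch(tool_names: List[str],
                      llm: Optional[object] = None) -> List[str]:
    tools = []
    for name in tool_names:
        if name in _BASE_TOOLS:
            tools.append(_BASE_TOOLS[name]())
        elif name in _LLM_TOOLS:
            if llm is None:
                raise ValueError(f"Tool {name} requires an LLM to be provided")
            tools.append(_LLM_TOOLS[name](llm))
        else:
            raise ValueError(f"Got unknown tool {name}")
    return tools


print(load_tools_sketch(["echo", "llm-math"], llm="fake-llm"))
```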
langchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]¶
Make tools out of functions; can be used with or without arguments.
Parameters
*args – The arguments to the tool.
return_direct (bool) – Whether to return directly from the tool rather than continuing the agent loop.
args_schema (Optional[Type[pydantic.main.BaseModel]]) – Optional argument schema for the user to specify.
infer_schema (bool) – Whether to infer the schema of the arguments from the function's signature. This also makes the resultant tool accept a dictionary input to its run() function.
args (Union[str, Callable]) –
Return type
Callable
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return
Document Loadersο
All different types of document loaders.
class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
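The lazy_load()/load() pair documented for AcreomLoader (and most loaders below) follows one pattern: lazy_load yields documents one at a time, and load materializes the iterator into a list. A dependency-free sketch of that relationship, where SimpleDoc is an assumed stand-in for langchain.schema.Document:

```python
# Sketch of the lazy_load()/load() relationship shared by loaders such as
# AcreomLoader. SimpleDoc is an assumption of this sketch, standing in for
# langchain.schema.Document.
from dataclasses import dataclass, field
from typing import Iterator, List

@dataclass
class SimpleDoc:
    page_content: str
    metadata: dict = field(default_factory=dict)

class LineLoader:
    def __init__(self, text: str):
        self.text = text

    def lazy_load(self) -> Iterator[SimpleDoc]:
        # Yield one document per non-empty line, one at a time.
        for line in self.text.splitlines():
            if line.strip():
                yield SimpleDoc(page_content=line)

    def load(self) -> List[SimpleDoc]:
        # Eager variant: exhaust the lazy iterator into a list.
        return list(self.lazy_load())

docs = LineLoader("first\n\nsecond").load()
```

The lazy form matters for large corpora: a caller can stop iterating early without paying for documents it never reads.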
class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads AZLyrics webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirbyteJSONLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads local airbyte json files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for Airtable tables.
Parameters | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
api_token (str) β
table_id (str) β
base_id (str) β
lazy_load()[source]ο
Lazy load records from table.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load Table.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Logic for loading documents from Apify datasets.
Parameters
dataset_id (str) β
dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) β
Return type
None
attribute apify_client: Any = Noneο
attribute dataset_id: str [Required]ο
The ID of the dataset on the Apify platform.
attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]ο
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each document represents one Document.
The loader converts the original PDF format into the text.
Parameters
query (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document] | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
blob_name (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
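BSHTMLLoader parses with BeautifulSoup; as a dependency-free illustration of the same idea, the sketch below extracts visible text from HTML using the stdlib html.parser and joins chunks with a separator (cf. the get_text_separator parameter). It is deliberately simplified (it does not skip script/style contents) and is not the loader's actual implementation:

```python
# Simplified, stdlib-only illustration of what BSHTMLLoader does with
# BeautifulSoup: pull text nodes out of HTML and join them with a separator.
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        # Collect non-whitespace text nodes.
        if data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str, separator: str = " ") -> str:
    parser = TextExtractor()
    parser.feed(html)
    return separator.join(parser.chunks)

text = html_to_text("<html><body><h1>Title</h1><p>Hello</p></body></html>")
```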
class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a bibtex file into a list of Documents. | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Parameters
file_path (str) β
parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) β
max_docs (Optional[int]) β
max_content_chars (Optional[int]) β
load_extra_metadata (bool) β
file_pattern (str) β
lazy_load()[source]ο
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
query (str) β
project (Optional[str]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
credentials (Optional[Credentials]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
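The column split that BigQueryLoader documents (page_content_columns into the text, metadata_columns into metadata, everything into page_content by default) can be sketched with plain dicts standing in for result rows. This is an illustration of the documented behavior, not the loader's real code:

```python
# Sketch of how row-based loaders such as BigQueryLoader split columns
# between page_content and metadata. Dicts stand in for query-result rows.
def row_to_document(row, page_content_columns=None, metadata_columns=None):
    # Default: every column lands in page_content, none in metadata.
    content_cols = page_content_columns or list(row)
    meta_cols = metadata_columns or []
    page_content = "\n".join(f"{k}: {row[k]}" for k in content_cols)
    metadata = {k: row[k] for k in meta_cols}
    return {"page_content": page_content, "metadata": metadata}

doc = row_to_document(
    {"title": "A", "year": 2020},
    page_content_columns=["title"],
    metadata_columns=["year"],
)
```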
class langchain.document_loaders.BiliBiliLoader(video_urls)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads bilibili transcripts.
Parameters
video_urls (List[str]) β
load()[source]ο
Load from bilibili url.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browser's developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Parameters
blackboard_course_url (str) β
bbrouter (str) β | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
load_all_recursively (bool) β
basic_auth (Optional[Tuple[str, str]]) β
cookies (Optional[dict]) β
folder_path: strο
base_url: strο
load_all_recursively: boolο
check_bs4()[source]ο
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
Return type
None
load()[source]ο
Load data into document objects.
Returns
List of documents.
Return type
List[langchain.schema.Document]
download(path)[source]ο
Download a file from a url.
Parameters
path (str) β Path to the file.
Return type
None
parse_filename(url)[source]ο
Parse the filename from a url.
Parameters
url (str) β Url to parse the filename from.
Returns
The filename.
Return type
str
class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]ο
Bases: pydantic.main.BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Parameters
data (Optional[Union[bytes, str]]) β
mimetype (Optional[str]) β
encoding (str) β
path (Optional[Union[str, pathlib.PurePath]]) β
Return type
None
attribute data: Optional[Union[bytes, str]] = Noneο | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
attribute encoding: str = 'utf-8'ο
attribute mimetype: Optional[str] = Noneο
attribute path: Optional[Union[str, pathlib.PurePath]] = Noneο
as_bytes()[source]ο
Read data as bytes.
Return type
bytes
as_bytes_io()[source]ο
Read data as a byte stream.
Return type
Generator[Union[_io.BytesIO, _io.BufferedReader], None, None]
as_string()[source]ο
Read data as a string.
Return type
str
classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]ο
Initialize the blob from in-memory data.
Parameters
data (Union[str, bytes]) β the in-memory data associated with the blob
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
path (Optional[str]) β if provided, will be set as the source from which the data came
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]ο
Load the blob from a path like object.
Parameters
path (Union[str, pathlib.PurePath]) β path like object to file to be read
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
guess_type (bool) β If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
Return type | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
langchain.document_loaders.blob_loaders.schema.Blob
property source: Optional[str]ο
The source location of the blob as string if known otherwise none.
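The value/reference duality of Blob (data held in memory, or referenced by path, with as_string() materializing it either way) can be mimicked with a small stdlib dataclass. MiniBlob is an assumption of this sketch, not langchain's class, and implements only the subset needed to show the idea:

```python
# Stdlib mimic of Blob's value/reference duality: data may be in memory or
# on disk, and as_string() materializes it either way. MiniBlob is a sketch,
# not langchain.document_loaders.Blob.
from dataclasses import dataclass
from pathlib import Path
from typing import Optional, Union

@dataclass
class MiniBlob:
    data: Optional[Union[bytes, str]] = None
    encoding: str = "utf-8"
    path: Optional[str] = None

    @classmethod
    def from_data(cls, data, *, encoding="utf-8", path=None):
        # In-memory construction, mirroring Blob.from_data.
        return cls(data=data, encoding=encoding, path=path)

    def as_string(self) -> str:
        if isinstance(self.data, str):
            return self.data
        if isinstance(self.data, bytes):
            return self.data.decode(self.encoding)
        if self.path is not None:
            return Path(self.path).read_text(encoding=self.encoding)
        raise ValueError("Unable to get string for blob")

s = MiniBlob.from_data(b"caf\xc3\xa9").as_string()
```

Decoupling "where the bytes live" from "how they are parsed" is exactly what lets the same parser run over files, network payloads, or in-memory buffers.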
class langchain.document_loaders.BlobLoader[source]ο
Bases: abc.ABC
Abstract interface for blob loaders implementation.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
abstract yield_blobs()[source]ο
A lazy loader for raw data represented by LangChain's Blob object.
Returns
A generator over blobs
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
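A concrete implementer of the BlobLoader interface streams raw content lazily according to some criteria, e.g. files matching a glob under a directory. In the sketch below, plain (name, bytes) tuples stand in for Blob objects so the example stays dependency-free; the class name and tuple shape are assumptions of this sketch:

```python
# Sketch of a concrete BlobLoader: yield_blobs lazily streams matching files
# from a directory. (name, bytes) tuples stand in for langchain Blob objects.
import tempfile
from pathlib import Path
from typing import Iterable, Tuple

class FileSystemBlobLoaderSketch:
    def __init__(self, root: str, glob: str = "*.txt"):
        self.root = Path(root)
        self.glob = glob

    def yield_blobs(self) -> Iterable[Tuple[str, bytes]]:
        # Lazily read each matching file; callers can stop iterating early.
        for p in sorted(self.root.glob(self.glob)):
            yield (p.name, p.read_bytes())

with tempfile.TemporaryDirectory() as d:
    Path(d, "a.txt").write_text("alpha")
    Path(d, "b.txt").write_text("beta")
    blobs = list(FileSystemBlobLoaderSketch(d).yield_blobs())
```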
class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason. | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address (str) β
blockchainType (langchain.document_loaders.blockchain.BlockchainType) β
api_key (str) β
startToken (str) β
get_all_tokens (bool) β
max_execution_time (Optional[int]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and output to a new line in the document's page_content.
The source for each document loaded from CSV is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
Parameters
file_path (str) β
source_column (Optional[str]) β
csv_args (Optional[Dict]) β
encoding (Optional[str]) β
load()[source]ο
Load data into document objects.
Return type | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
List[langchain.schema.Document]
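The CSVLoader mapping described above (each row becomes "column: value" lines, with source taken from source_column when given, else from the file path) can be sketched with the stdlib csv module. This illustrates only the documented mapping; the real loader also attaches other metadata not shown here:

```python
# Sketch of CSVLoader's documented row-to-page_content mapping. The helper
# name and dict-shaped documents are assumptions of this sketch.
import csv
import io

def csv_rows_to_documents(csv_text, file_path="data.csv", source_column=None):
    docs = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        # Each row becomes "column: value" lines in page_content.
        page_content = "\n".join(f"{k}: {v}" for k, v in row.items())
        # source comes from source_column if set, else the file path.
        source = row[source_column] if source_column else file_path
        docs.append({"page_content": page_content, "metadata": {"source": source}})
    return docs

docs = csv_rows_to_documents("id,name\n1,Ada\n2,Bob\n", source_column="id")
```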
class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=-1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads conversations from exported ChatGPT data.
Parameters
log_file (str) β
num_logs (int) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CoNLLULoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load CoNLL-U files.
Parameters
file_path (str) β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads College Confidential webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is
the raw XML representation used for storage. The view format is the HTML representation
for viewing, with macros rendered as though the page were viewed by a user. You can pass
an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) β _description_
api_key (str, optional) β _description_, defaults to None
username (str, optional) β _description_, defaults to None
oauth2 (dict, optional) β _description_, defaults to {}
token (str, optional) β _description_, defaults to None | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
cloud (bool, optional) β _description_, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]ο
Validates proper combinations of init arguments
Parameters
url (Optional[str]) β
api_key (Optional[str]) β
username (Optional[str]) β
oauth2 (Optional[dict]) β
token (Optional[str]) β
Return type
Optional[List]
load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]ο
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
include_comments (bool, optional) β defaults to False
content_format (ContentFormat) β Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) β The languages to use for the Tesseract agent. To use a
language, youβll first need to install the appropriate
Tesseract language pack.
Raises
ValueError β _description_
ImportError β _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method, **kwargs)[source]ο
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesn't match the limit value. If limit is >100, Confluence
seems to cap the response at 100. Also, due to the Atlassian Python
package, we don't get the 'next' values from the '_links' key because
it only returns the value from the results key. So here, the pagination
starts from 0 and goes until max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response as this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
kwargs (Any) β
Returns
List of documents
Return type
List
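The pagination strategy described above can be sketched as a loop that fetches limit items per call and stops when a call returns fewer than limit (or max_pages is reached), since no reliable "next" link is available. The helper names and the fake backend below are assumptions of this sketch, not Confluence's actual API:

```python
# Sketch of length-based pagination as described for paginate_request:
# start at 0, step by limit, stop when a batch comes back short.
def paginate_sketch(retrieval_method, limit=50, max_pages=1000):
    docs = []
    while len(docs) < max_pages:
        batch = retrieval_method(start=len(docs), limit=limit)
        docs.extend(batch)
        if len(batch) < limit:
            # A short batch means the backend has no more items.
            break
    return docs[:max_pages]

def fake_api(start, limit):
    # Hypothetical backend holding 120 items in total.
    items = list(range(120))
    return items[start:start + limit]

result = paginate_sketch(fake_api, limit=50, max_pages=1000)
```

With 120 items and limit=50 the loop makes three calls (50, 50, 20) and stops on the short third batch.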
is_public_page(page)[source]ο | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
Check if a page is publicly accessible.
Parameters
page (dict) β
Return type
bool
process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Process a list of pages into a list of documents.
Parameters
pages (List[dict]) β
include_restricted_content (bool) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
List[langchain.schema.Document]
process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Parameters
page (dict) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
langchain.schema.Document
process_attachment(page_id, ocr_languages=None)[source]ο
Parameters
page_id (str) β
ocr_languages (Optional[str]) β
Return type
List[str]
process_pdf(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_image(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_doc(link)[source]ο
Parameters
link (str) β
Return type
str
process_xls(link)[source]ο
Parameters
link (str) β
Return type
str | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
process_svg(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Pandas DataFrames.
Parameters
data_frame (Any) β
page_content_column (str) β
lazy_load()[source]ο
Lazy load records from dataframe.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load full dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Diffbot file json.
Parameters
api_token (str) β
urls (List[str]) β
continue_on_failure (bool) β
load()[source]ο
Extract text from Diffbot on all the URLs and return Document instances
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from a directory.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) β
loader_kwargs (Optional[dict]) β
recursive (bool) β
show_progress (bool) β
use_multithreading (bool) β
max_concurrency (int) β
load_file(item, path, docs, pbar)[source]ο
Parameters
item (pathlib.Path) β
path (pathlib.Path) β
docs (List[langchain.schema.Document]) β
pbar (Optional[Any]) β
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
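DirectoryLoader's default glob, '**/[!.]*', matches non-hidden files at any depth; the stdlib pathlib implements the same pattern syntax, so the discovery step can be sketched directly. The helper name is an assumption of this sketch:

```python
# Sketch of DirectoryLoader's file discovery using its documented default
# glob '**/[!.]*' (any depth, names not starting with a dot).
import tempfile
from pathlib import Path

def find_files(root, glob="**/[!.]*"):
    return sorted(p.name for p in Path(root).glob(glob) if p.is_file())

with tempfile.TemporaryDirectory() as d:
    Path(d, "sub").mkdir()
    Path(d, "a.md").write_text("x")
    Path(d, "sub", "b.md").write_text("y")
    Path(d, ".hidden").write_text("z")  # excluded by [!.]
    names = find_files(d)
```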
class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Discord chat logs.
Parameters
chat_log (pd.DataFrame) β
user_id_col (str) β
load()[source]ο
Load all chat messages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Parameters
api (str) β
access_token (Optional[str]) β | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
docset_id (Optional[str]) β
document_ids (Optional[Sequence[str]]) β
file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) β
min_chunk_size (int) β
Return type
None
attribute access_token: Optional[str] = Noneο
attribute api: str = 'https://api.docugami.com/v1preview1'ο
attribute docset_id: Optional[str] = Noneο
attribute document_ids: Optional[Sequence[str]] = Noneο
attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = Noneο
attribute min_chunk_size: int = 32ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.Docx2txtLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, abc.ABC
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will
download it to a temporary file, use that, and then clean up the temporary file after completion.
Parameters
file_path (str) β
load()[source]ο
Load given path as single page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
database (str) β
read_only (bool) β
config (Optional[Dict[str, str]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
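DuckDBLoader turns each query-result row into a document; the same pattern works with the stdlib sqlite3 module, used here as a stand-in for duckdb (which exposes a similar cursor API). The helper name and string documents are assumptions of this sketch:

```python
# Query-result rows to documents, DuckDBLoader-style, shown with the stdlib
# sqlite3 module as a stand-in for duckdb.
import sqlite3

def query_to_documents(conn, query, page_content_columns=None):
    cur = conn.execute(query)
    cols = [c[0] for c in cur.description]
    # Default: all columns go into page_content.
    keep = page_content_columns or cols
    docs = []
    for row in cur.fetchall():
        as_dict = dict(zip(cols, row))
        docs.append("\n".join(f"{k}: {as_dict[k]}" for k in keep))
    return docs

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)", [(1, "Ada"), (2, "Bob")])
docs = query_to_documents(conn, "SELECT * FROM t")
```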
class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser
Wrapper around embaas's document byte loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
) | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
Return type
None
lazy_parse(blob)[source]ο
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob (langchain.document_loaders.blob_loaders.schema.Blob) β Blob instance
Returns
Generator of documents
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader
Wrapper around embaas's document loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256, | https://api.python.langchain.com/en/stable/modules/document_loaders.html |
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
file_path (str) β
blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) β
Return type
None
attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = Noneο
The blob loader to use. If not provided, a default one will be created.
attribute file_path: str [Required]ο
The path to the file to load.
lazy_load()[source]ο
Load the documents from the file path lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
load_and_split(text_splitter=None)[source]ο
Load documents and split into chunks.
Parameters
text_splitter (Optional[langchain.text_splitter.TextSplitter]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document. Any non-content metadata (e.g. 'author', 'created', 'updated' etc.,
but not 'content-raw' or 'resource') tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata in
the document will be the 'source', which contains the file name of the export.
load()[source]¶
Load documents from EverNote export file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FacebookChatLoader(path)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Facebook messages json directory dump.
Parameters
path (str) –
load()[source]¶
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
FaunaDB Loader.
Parameters
query (str) –
page_content_field (str) –
secret (str) –
metadata_fields (Optional[Sequence[str]]) –
query¶
The FQL query string to execute.
Type
str
page_content_field¶
The field that contains the content of each page.
Type
str
secret¶
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields¶
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
load()[source]¶
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]¶
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Figma file json.
Parameters
access_token (str) –
ids (str) –
key (str) –
load()[source]¶
Load file
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]¶
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
    print(blob)
Parameters
path (Union[str, pathlib.Path]) –
glob (str) –
suffixes (Optional[Sequence[str]]) –
show_progress (bool) –
Return type
None
yield_blobs()[source]¶
Yield blobs that match the requested pattern.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
count_matching_files()[source]¶
Count files that match the pattern without loading them.
Return type
int
class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) –
bucket (str) –
prefix (str) –
load()[source]¶
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) –
bucket (str) –
blob (str) –
load()[source]¶
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source]¶
Bases: langchain.document_loaders.github.BaseGitHubLoader
Parameters
repo (str) –
access_token (str) –
include_prs (bool) –
milestone (Optional[Union[int, Literal['*', 'none']]]) –
state (Optional[Literal['open', 'closed', 'all']]) –
assignee (Optional[str]) –
creator (Optional[str]) –
mentioned (Optional[str]) –
labels (Optional[List[str]]) –
sort (Optional[Literal['created', 'updated', 'comments']]) –
direction (Optional[Literal['asc', 'desc']]) –
since (Optional[str]) –
Return type
None
attribute assignee: Optional[str] = None¶
Filter on assigned user. Pass 'none' for no user and '*' for any user.
attribute creator: Optional[str] = None¶
Filter on the user that created the issue.
attribute direction: Optional[Literal['asc', 'desc']] = None¶
The direction to sort the results by. Can be one of: 'asc', 'desc'.
attribute include_prs: bool = True¶
If True include Pull Requests in results, otherwise ignore them.
attribute labels: Optional[List[str]] = None¶
Label names to filter on. Example: bug,ui,@high.
attribute mentioned: Optional[str] = None¶
Filter on a user that's mentioned in the issue.
attribute milestone: Optional[Union[int, Literal['*', 'none']]] = None¶
If an integer is passed, it should be a milestone's number field.
If the string '*' is passed, issues with any milestone are accepted.
If the string 'none' is passed, issues without milestones are returned.
attribute since: Optional[str] = None¶
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
attribute sort: Optional[Literal['created', 'updated', 'comments']] = None¶
What to sort results by. Can be one of: 'created', 'updated', 'comments'.
Default is 'created'.
attribute state: Optional[Literal['open', 'closed', 'all']] = None¶
Filter on issue state. Can be one of: 'open', 'closed', 'all'.
lazy_load()[source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load()[source]¶
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue)[source]¶
Create Document objects from a list of GitHub issues.
Parameters
issue (dict) –
Return type
langchain.schema.Document
property query_params: str¶
property url: str¶
class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loads files from a Git repository into a list of documents.
The repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently, only text files are supported.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
Parameters
repo_path (str) –
clone_url (Optional[str]) –
branch (Optional[str]) –
file_filter (Optional[Callable[[str], bool]]) –
load()[source]¶
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source]¶
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load GitBook data.
1. load from either a single page, or
2. load all (relative) paths in the navbar.
Parameters
web_page (str) –
load_all_paths (bool) –
base_url (Optional[str]) –
content_selector (str) –
load()[source]¶
Fetch text from one single GitBook page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source]¶
Bases: object
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib, youtube_transcript_api and google
python packages installed.
As the google api expects credentials you need to set up a google account and
register your Service: https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
    service_account_path=Path("path_to_your_sec_file.json")
)
Parameters
credentials_path (pathlib.Path) –
service_account_path (pathlib.Path) –
token_path (pathlib.Path) –
Return type
None
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')¶
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')¶
classmethod validate_channel_or_videoIds_is_set(values)[source]¶
Validate that either channel_name or video_ids is set, but not both.
Parameters
values (Dict[str, Any]) –
Return type
Dict[str, Any]
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source]¶
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all Videos from a Channel.
To use, you should have the googleapiclient and youtube_transcript_api
python packages installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of video ids:
https://developers.google.com/docs/api/quickstart/python
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
    service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
    google_api_client=google_api_client,
    channel_name="CodeAesthetic"
)
loader.load()
Parameters
google_api_client (langchain.document_loaders.youtube.GoogleApiClient) –
channel_name (Optional[str]) –
video_ids (Optional[List[str]]) –
add_video_info (bool) –
captions_language (str) –
continue_on_failure (bool) –
Return type
None
google_api_client: langchain.document_loaders.youtube.GoogleApiClient¶
channel_name: Optional[str] = None¶
video_ids: Optional[List[str]] = None¶
add_video_info: bool = True¶
captions_language: str = 'en'¶
continue_on_failure: bool = False¶
classmethod validate_channel_or_videoIds_is_set(values)[source]¶
Validate that either channel_name or video_ids is set, but not both.
Parameters
values (Dict[str, Any]) –
Return type
Dict[str, Any]
load()[source]¶
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source]¶
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads Google Docs from Google Drive.
Parameters
service_account_key (pathlib.Path) –
credentials_path (pathlib.Path) –
token_path (pathlib.Path) –
folder_id (Optional[str]) –
document_ids (Optional[List[str]]) –
file_ids (Optional[List[str]]) –
recursive (bool) –
file_types (Optional[Sequence[str]]) –
load_trashed_files (bool) –
file_loader_cls (Any) –
file_loader_kwargs (Dict[str, Any]) –
Return type
None