Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Do nothing when a new token is generated. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Log the latency, error, token usage, and response to Infino. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_error(error, **kwargs): Set the error flag. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Do nothing when LLM chain starts. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Do nothing when LLM chain ends. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Log the error. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Do nothing when tool starts. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
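Handlers like this override only the hooks they need and leave the rest as no-ops. The shape of such a handler can be sketched in plain Python (a simplified stand-in for illustration, not the real langchain classes; the records list stands in for an Infino client):

```python
import time

class BaseCallbackHandler:
    """Simplified stand-in: every hook is a no-op unless overridden."""
    def on_llm_start(self, serialized, prompts, **kwargs): pass
    def on_llm_end(self, response, **kwargs): pass
    def on_llm_error(self, error, **kwargs): pass

class LatencyLoggingHandler(BaseCallbackHandler):
    """Logs latency and errors, mirroring the Infino handler's behavior."""
    def __init__(self):
        self.records = []        # stand-in for an external metrics store
        self.error_flag = False
        self._start = None

    def on_llm_start(self, serialized, prompts, **kwargs):
        self._start = time.monotonic()

    def on_llm_end(self, response, **kwargs):
        latency = time.monotonic() - self._start
        self.records.append({"latency_s": latency, "response": response})

    def on_llm_error(self, error, **kwargs):
        self.error_flag = True   # "Set the error flag."

handler = LatencyLoggingHandler()
handler.on_llm_start({}, ["Hello"])
handler.on_llm_end("Hi there!")
```

The same dispatch pattern underlies all of the handlers documented in this section; only which hooks do real work differs.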
https://api.python.langchain.com/en/stable/modules/callbacks.html
on_agent_action(action, **kwargs): Do nothing when agent takes a specific action. Parameters: action (langchain.schema.AgentAction), **kwargs (Any). Return type: Any.
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs): Do nothing when tool ends. Parameters: output (str), observation_prefix (Optional[str]), llm_prefix (Optional[str]), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Do nothing when tool outputs an error. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, **kwargs): Do nothing. Parameters: text (str), **kwargs (Any). Return type: None.
on_agent_finish(finish, **kwargs): Do nothing. Parameters: finish (langchain.schema.AgentFinish), **kwargs (Any). Return type: None.

class langchain.callbacks.MlflowCallbackHandler(name='langchainrun-%', experiment='langchain', tags={}, tracking_uri=None)
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback handler that logs metrics and artifacts to an MLflow server.
Parameters: name (str): name of the run; experiment (str): name of the experiment; tags (dict): tags to attach to the run; tracking_uri (str): MLflow tracking server URI. Return type: None.
For each callback method invoked, this handler formats the callback's input with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records and action. It then logs the response to the MLflow server.
on_llm_start(serialized, prompts, **kwargs): Run when LLM starts. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Run when LLM generates a new token. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Run when LLM ends running. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_error(error, **kwargs): Run when LLM errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Run when chain starts running. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Run when chain ends running. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Run when chain errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Run when tool starts running. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
on_tool_end(output, **kwargs): Run when tool ends running. Parameters: output (str), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Run when tool errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, **kwargs): Run when agent is ending. Parameters: text (str), **kwargs (Any). Return type: None.
on_agent_finish(finish, **kwargs): Run when agent ends running. Parameters: finish (langchain.schema.AgentFinish), **kwargs (Any). Return type: None.
on_agent_action(action, **kwargs): Run on agent action. Parameters: action (langchain.schema.AgentAction), **kwargs (Any). Return type: Any.
flush_tracker(langchain_asset=None, finish=False): Parameters: langchain_asset (Any), finish (bool). Return type: None.

class langchain.callbacks.OpenAICallbackHandler
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback handler that tracks OpenAI info.
total_tokens: int = 0
prompt_tokens: int = 0
completion_tokens: int = 0
successful_requests: int = 0
total_cost: float = 0.0
property always_verbose: bool: Whether to call verbose callbacks even if verbose is False.
on_llm_start(serialized, prompts, **kwargs): Print out the prompts. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Print out the token. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Collect token usage. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.

class langchain.callbacks.StdOutCallbackHandler(color=None)
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback handler that prints to stdout.
Parameters: color (Optional[str]). Return type: None.
on_llm_start(serialized, prompts, **kwargs): Print out the prompts. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Do nothing. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Do nothing. Parameters: token (str), **kwargs (Any). Return type: None.
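How a handler can accumulate token counts and cost in on_llm_end can be sketched in plain Python (a stand-in, not the real OpenAICallbackHandler; the token_usage dict shape and the per-1k cost rate are illustrative assumptions):

```python
class TokenUsageHandler:
    """Accumulates token usage and an estimated cost across LLM calls."""
    def __init__(self, cost_per_1k_tokens=0.002):  # assumed illustrative rate
        self.total_tokens = 0
        self.prompt_tokens = 0
        self.completion_tokens = 0
        self.successful_requests = 0
        self.total_cost = 0.0
        self.cost_per_1k = cost_per_1k_tokens

    def on_llm_end(self, llm_output, **kwargs):
        # llm_output mimics the usage dict an OpenAI-style response reports.
        usage = llm_output.get("token_usage", {})
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)
        self.total_tokens += usage.get("total_tokens", 0)
        self.total_cost += usage.get("total_tokens", 0) / 1000 * self.cost_per_1k
        self.successful_requests += 1

h = TokenUsageHandler()
h.on_llm_end({"token_usage": {"prompt_tokens": 10,
                              "completion_tokens": 5,
                              "total_tokens": 15}})
```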
on_llm_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Print out that we are entering a chain. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Print out that we finished a chain. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Do nothing. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
on_agent_action(action, color=None, **kwargs): Run on agent action. Parameters: action (langchain.schema.AgentAction), color (Optional[str]), **kwargs (Any). Return type: Any.
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs): If not the final action, print out the observation. Parameters: output (str), color (Optional[str]), observation_prefix (Optional[str]), llm_prefix (Optional[str]), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, color=None, end='', **kwargs): Run when agent ends. Parameters: text (str), color (Optional[str]), end (str), **kwargs (Any). Return type: None.
on_agent_finish(finish, color=None, **kwargs): Run on agent end. Parameters: finish (langchain.schema.AgentFinish), color (Optional[str]), **kwargs (Any). Return type: None.

class langchain.callbacks.StreamingStdOutCallbackHandler
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback handler for streaming. Only works with LLMs that support streaming.
on_llm_start(serialized, prompts, **kwargs): Run when LLM starts running. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Run on a new LLM token. Only available when streaming is enabled. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Run when LLM ends running. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_error(error, **kwargs): Run when LLM errors. Parameters:
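The streaming handler's core behavior, writing each token to stdout the moment it is generated, can be sketched as follows (a plain-Python stand-in, not the real StreamingStdOutCallbackHandler; it also keeps the accumulated text for inspection):

```python
import sys

class StreamingStdOutHandler:
    """Write each token to stdout as it arrives; other hooks would be no-ops."""
    def __init__(self):
        self.text = ""  # accumulated completion, kept for inspection

    def on_llm_new_token(self, token, **kwargs):
        self.text += token
        sys.stdout.write(token)
        sys.stdout.flush()  # flush so tokens appear immediately

# Simulate an LLM streaming its completion token by token:
handler = StreamingStdOutHandler()
for token in ["Hello", ", ", "world", "!"]:
    handler.on_llm_new_token(token)
```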
error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Run when chain starts running. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Run when chain ends running. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Run when chain errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Run when tool starts running. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
on_agent_action(action, **kwargs): Run on agent action. Parameters: action (langchain.schema.AgentAction), **kwargs (Any). Return type: Any.
on_tool_end(output, **kwargs): Run when tool ends running. Parameters: output (str), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Run when tool errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, **kwargs): Run on arbitrary text. Parameters: text (str), **kwargs (Any). Return type:
None.
on_agent_finish(finish, **kwargs): Run on agent end. Parameters: finish (langchain.schema.AgentFinish), **kwargs (Any). Return type: None.

langchain.callbacks.StreamlitCallbackHandler(parent_container, *, max_thought_containers=4, expand_new_thoughts=True, collapse_completed_thoughts=True, thought_labeler=None)
Construct a new StreamlitCallbackHandler. This CallbackHandler is geared towards use with a LangChain Agent; it displays the Agent's LLM and tool-usage "thoughts" inside a series of Streamlit expanders.
Parameters:
parent_container (DeltaGenerator): the st.container that will contain all the Streamlit elements that the handler creates.
max_thought_containers (int): the maximum number of completed LLM thought containers to show at once. When this threshold is reached, a new thought causes the oldest thoughts to be collapsed into a "History" expander. Defaults to 4.
expand_new_thoughts (bool): each LLM "thought" gets its own st.expander; this parameter controls whether that expander is expanded by default. Defaults to True.
collapse_completed_thoughts (bool): if True, LLM thought expanders are collapsed when completed. Defaults to True.
thought_labeler (Optional[LLMThoughtLabeler]): an optional custom LLMThoughtLabeler instance. If unspecified, the handler uses the default thought-labeling logic. Defaults to None.
Returns: a new StreamlitCallbackHandler instance.
Note that this is an "auto-updating" API: if the installed version of Streamlit
has a more recent StreamlitCallbackHandler implementation, an instance of that class will be used. Return type: BaseCallbackHandler.

class langchain.callbacks.LLMThoughtLabeler
Bases: object
Generates markdown labels for LLMThought containers. Pass a custom subclass of this to StreamlitCallbackHandler to override its default labeling logic.
get_initial_label(): Return the markdown label for a new LLMThought that doesn't have an associated tool yet. Return type: str.
get_tool_label(tool, is_complete): Return the label for an LLMThought that has an associated tool. Parameters: tool (langchain.callbacks.streamlit.streamlit_callback_handler.ToolRecord): the tool's ToolRecord; is_complete (bool): True if the thought is complete, False if the thought is still receiving input. Return type: the markdown label for the thought's container.
get_history_label(): Return a markdown label for the special 'history' container that holds overflow thoughts. Return type: str.
get_final_agent_thought_label(): Return the markdown label for the agent's final thought, the "Now I have the answer" thought that doesn't involve a tool. Return type: str.

class langchain.callbacks.WandbCallbackHandler(job_type=None, project='langchain_callback_demo', entity=None, tags=None, group=None, name=None, notes=None, visualize=False, complexity_metrics=False, stream_logs=False)
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback handler that logs to Weights and Biases.
Parameters: job_type (str): the type of job; project (str): the project to log to;
entity (str): the entity to log to; tags (list): the tags to log; group (str): the group to log to; name (str): the name of the run; notes (str): the notes to log; visualize (bool): whether to visualize the run; complexity_metrics (bool): whether to log complexity metrics; stream_logs (bool): whether to stream callback actions to W&B. Return type: None.
For each callback method invoked, this handler formats the callback's input with metadata regarding the state of the LLM run, adds the response to the list of records for both the {method}_records and action, and then logs the response to Weights and Biases using the run.log() method.
on_llm_start(serialized, prompts, **kwargs): Run when LLM starts. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Run when LLM generates a new token. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Run when LLM ends running. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_error(error, **kwargs): Run when LLM errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Run when chain starts running. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Run when chain ends running. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Run when chain errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Run when tool starts running. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
on_tool_end(output, **kwargs): Run when tool ends running. Parameters: output (str), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Run when tool errors. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, **kwargs): Run when agent is ending. Parameters: text (str), **kwargs (Any). Return type: None.
on_agent_finish(finish, **kwargs): Run when agent ends running. Parameters: finish (langchain.schema.AgentFinish), **kwargs (Any). Return type: None.
on_agent_action(action, **kwargs): Run on agent action. Parameters:
action (langchain.schema.AgentAction), **kwargs (Any). Return type: Any.
flush_tracker(langchain_asset=None, reset=True, finish=False, job_type=None, project=None, entity=None, tags=None, group=None, name=None, notes=None, visualize=None, complexity_metrics=None): Flush the tracker and reset the session. Parameters: langchain_asset (Any): the langchain asset to save; reset (bool): whether to reset the session; finish (bool): whether to finish the run; job_type (Optional[str]): the job type; project (Optional[str]): the project; entity (Optional[str]): the entity; tags (Optional[Sequence]): the tags; group (Optional[str]): the group; name (Optional[str]): the name; notes (Optional[str]): the notes; visualize (Optional[bool]): whether to visualize; complexity_metrics (Optional[bool]): whether to compute complexity metrics. Return type: None.

class langchain.callbacks.WhyLabsCallbackHandler(logger)
Bases: langchain.callbacks.base.BaseCallbackHandler
WhyLabs CallbackHandler.
Parameters: logger (Logger).
on_llm_start(serialized, prompts, **kwargs): Pass the input prompts to the logger. Parameters: serialized (Dict[str, Any]), prompts (List[str]), **kwargs (Any). Return type: None.
on_llm_end(response, **kwargs): Pass the generated response to the logger. Parameters: response (langchain.schema.LLMResult), **kwargs (Any). Return type: None.
on_llm_new_token(token, **kwargs): Do nothing. Parameters: token (str), **kwargs (Any). Return type: None.
on_llm_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_chain_start(serialized, inputs, **kwargs): Do nothing. Parameters: serialized (Dict[str, Any]), inputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_end(outputs, **kwargs): Do nothing. Parameters: outputs (Dict[str, Any]), **kwargs (Any). Return type: None.
on_chain_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_tool_start(serialized, input_str, **kwargs): Do nothing. Parameters: serialized (Dict[str, Any]), input_str (str), **kwargs (Any). Return type: None.
on_agent_action(action, color=None, **kwargs): Do nothing. Parameters: action (langchain.schema.AgentAction), color (Optional[str]), **kwargs (Any). Return type: Any.
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs): Do nothing. Parameters: output (str),
color (Optional[str]), observation_prefix (Optional[str]), llm_prefix (Optional[str]), **kwargs (Any). Return type: None.
on_tool_error(error, **kwargs): Do nothing. Parameters: error (Union[Exception, KeyboardInterrupt]), **kwargs (Any). Return type: None.
on_text(text, **kwargs): Do nothing. Parameters: text (str), **kwargs (Any). Return type: None.
on_agent_finish(finish, color=None, **kwargs): Run on agent end. Parameters: finish (langchain.schema.AgentFinish), color (Optional[str]), **kwargs (Any). Return type: None.
flush(): Return type: None.
close(): Return type: None.
classmethod from_params(*, api_key=None, org_id=None, dataset_id=None, sentiment=False, toxicity=False, themes=False): Instantiate a whylogs Logger from params.
Parameters:
api_key (Optional[str]): WhyLabs API key. Optional because the preferred way to specify the API key is with the environment variable WHYLABS_API_KEY.
org_id (Optional[str]): WhyLabs organization id to write profiles to. If not set, it must be specified in the environment variable WHYLABS_DEFAULT_ORG_ID.
dataset_id (Optional[str]): the model or dataset this callback is gathering telemetry for. If not set, it must be specified in the environment variable WHYLABS_DEFAULT_DATASET_ID.
sentiment (bool): if True, initialize a model to compute a sentiment-analysis compound score. Defaults to False and will not gather this metric.
toxicity (bool): if True, initialize a model to score toxicity. Defaults to False and will not gather this metric.
themes (bool): if True, initialize a model to calculate distance to configured themes. Defaults to False and will not gather this metric.
Return type: Logger.

langchain.callbacks.get_openai_callback()
Get the OpenAI callback handler in a context manager, which conveniently exposes token and cost information.
Returns: the OpenAI callback handler. Return type: OpenAICallbackHandler.
Example:
>>> with get_openai_callback() as cb:
...     # Use the OpenAI callback handler

langchain.callbacks.tracing_enabled(session_name='default')
Get the deprecated LangChainTracer in a context manager.
Parameters: session_name (str, optional): the name of the session. Defaults to "default".
Returns: the LangChainTracer session. Return type: TracerSessionV1.
Example:
>>> with tracing_enabled() as session:
...     # Use the LangChainTracer session

langchain.callbacks.wandb_tracing_enabled(session_name='default')
Get the WandbTracer in a context manager.
Parameters: session_name (str, optional): the name of the session. Defaults to "default".
Returns: None. Return type: Generator[None, None, None].
Example:
>>> with wandb_tracing_enabled() as session:
...     # Use the WandbTracer session
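The context-manager pattern these helpers follow can be sketched in plain Python (a stand-in counter, not the real get_openai_callback; a real implementation registers the handler before the yield and unregisters it after):

```python
from contextlib import contextmanager

class UsageCounter:
    """Stand-in for the handler object the context manager exposes."""
    def __init__(self):
        self.total_tokens = 0
        self.total_cost = 0.0

@contextmanager
def usage_callback():
    """Yield a counter that code inside the with-block can update,
    mirroring how get_openai_callback() exposes token and cost info."""
    cb = UsageCounter()
    try:
        yield cb
    finally:
        pass  # the real helper would unregister the handler here

with usage_callback() as cb:
    cb.total_tokens += 42  # stand-in for an LLM call being recorded
print(cb.total_tokens)     # prints 42
```

Because the handler object outlives the with-block, its totals can still be read after the block exits, which is how these helpers are typically used.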
Agents

Interface for agents.

class langchain.agents.Agent(*, llm_chain, output_parser, allowed_tools=None)
Bases: langchain.agents.agent.BaseSingleActionAgent
Class responsible for calling the language model and deciding the action. This is driven by an LLMChain. The prompt in the LLMChain MUST include a variable called "agent_scratchpad" where the agent can put its intermediary work.
Parameters: llm_chain (langchain.chains.llm.LLMChain), output_parser (langchain.agents.agent.AgentOutputParser), allowed_tools (Optional[List[str]]). Return type: None.
attribute allowed_tools: Optional[List[str]] = None
attribute llm_chain: langchain.chains.llm.LLMChain [Required]
attribute output_parser: langchain.agents.agent.AgentOutputParser [Required]
async aplan(intermediate_steps, callbacks=None, **kwargs): Given input, decide what to do. Parameters: intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]): steps the LLM has taken to date, along with observations; callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]): callbacks to run; **kwargs (Any): user inputs. Returns: Action specifying what tool to use. Return type: Union[langchain.schema.AgentAction, langchain.schema.AgentFinish].
abstract classmethod create_prompt(tools): Create a prompt for this class. Parameters: tools (Sequence[langchain.tools.base.BaseTool]). Return type: langchain.prompts.base.BasePromptTemplate.
https://api.python.langchain.com/en/stable/modules/agents.html
dict(**kwargs): Return a dictionary representation of the agent. Parameters: **kwargs (Any). Return type: Dict.
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, **kwargs): Construct an agent from an LLM and tools. Parameters: llm (langchain.base_language.BaseLanguageModel), tools (Sequence[langchain.tools.base.BaseTool]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), output_parser (Optional[langchain.agents.agent.AgentOutputParser]), **kwargs (Any). Return type: langchain.agents.agent.Agent.
get_allowed_tools(): Return type: Optional[List[str]].
get_full_inputs(intermediate_steps, **kwargs): Create the full inputs for the LLMChain from intermediate steps. Parameters: intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]), **kwargs (Any). Return type: Dict[str, Any].
plan(intermediate_steps, callbacks=None, **kwargs): Given input, decide what to do. Parameters: intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]): steps the LLM has taken to date, along with observations; callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]): callbacks to run; **kwargs (Any): user inputs. Returns: Action specifying what tool to use. Return type: Union[langchain.schema.AgentAction, langchain.schema.AgentFinish].
return_stopped_response(early_stopping_method, intermediate_steps, **kwargs): Return a response when the agent has been stopped due to max iterations. Parameters:
early_stopping_method (str), intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]), **kwargs (Any). Return type: langchain.schema.AgentFinish.
tool_run_logging_kwargs(): Return type: Dict.
abstract property llm_prefix: str: Prefix to append the LLM call with.
abstract property observation_prefix: str: Prefix to append the observation with.
property return_values: List[str]: Return values of the agent.

class langchain.agents.AgentExecutor(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)
Bases: langchain.chains.base.Chain
Consists of an agent using tools.
Parameters: memory (Optional[langchain.schema.BaseMemory]), callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), verbose (bool), tags (Optional[List[str]]), agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]), tools (Sequence[langchain.tools.base.BaseTool]), return_intermediate_steps (bool), max_iterations (Optional[int]), max_execution_time (Optional[float]), early_stopping_method (str), handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]). Return type: None.
attribute agent: Union[BaseSingleActionAgent, BaseMultiActionAgent] [Required]: The agent to run for creating a plan and determining actions to take at each step of the execution loop.
attribute early_stopping_method: str = 'force': The method to use for early stopping if the agent never returns AgentFinish. Either 'force' or 'generate'. 'force' returns a string saying that it stopped because it met a time or iteration limit. 'generate' calls the agent's LLM chain one final time to generate a final answer based on the previous steps.
attribute handle_parsing_errors: Union[bool, str, Callable[[OutputParserException], str]] = False: How to handle errors raised by the agent's output parser. Defaults to False, which raises the error. If True, the error is sent back to the LLM as an observation. If a string, the string itself is sent to the LLM as an observation. If a callable, it is called with the exception as an argument, and the result is passed to the agent as an observation.
attribute max_execution_time: Optional[float] = None: The maximum amount of wall clock time to spend in the execution loop.
attribute max_iterations: Optional[int] = 15: The maximum number of steps to take before ending the execution loop. Setting this to None could lead to an infinite loop.
attribute return_intermediate_steps: bool = False: Whether to return the agent's trajectory of intermediate steps at the end, in addition to the final output.
attribute tools: Sequence[BaseTool] [Required]: The valid tools the agent can call.
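How max_iterations and early_stopping_method='force' interact can be sketched as a simplified execution loop (plain Python, not the real AgentExecutor; plan here is a stand-in for the agent's plan method, and the stop message paraphrases the documented behavior):

```python
def run_agent(plan, max_iterations=15, early_stopping_method="force"):
    """plan(steps) returns either ("finish", answer) or ("action", tool_call).
    Stop when the agent finishes or the iteration limit is hit."""
    steps = []
    for _ in range(max_iterations):
        kind, value = plan(steps)
        if kind == "finish":
            return value
        steps.append(value)  # a tool would run here and add its observation
    if early_stopping_method == "force":
        # "force": return a fixed string noting the limit was hit.
        return "Agent stopped due to iteration limit or time limit."
    raise ValueError(f"unsupported early_stopping_method: {early_stopping_method}")

# An agent that never finishes triggers the forced stop:
result = run_agent(lambda steps: ("action", "search"), max_iterations=3)
print(result)
```

The 'generate' variant would instead call the agent's LLM chain one final time to produce an answer from the accumulated steps.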
classmethod from_agent_and_tools(agent, tools, callback_manager=None, **kwargs): Create from agent and tools. Parameters: agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]), tools (Sequence[langchain.tools.base.BaseTool]), callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]), **kwargs (Any). Return type: langchain.agents.agent.AgentExecutor.
lookup_tool(name): Look up a tool by name. Parameters: name (str). Return type: langchain.tools.base.BaseTool.
save(file_path): Raise an error; saving is not supported for agent executors. Parameters: file_path (Union[pathlib.Path, str]). Return type: None.
save_agent(file_path): Save the underlying agent. Parameters: file_path (Union[pathlib.Path, str]). Return type: None.

class langchain.agents.AgentOutputParser
Bases: langchain.schema.BaseOutputParser
Return type: None.
abstract parse(text): Parse text into an agent action/finish. Parameters: text (str). Return type: Union[langchain.schema.AgentAction, langchain.schema.AgentFinish].

class langchain.agents.AgentType(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)
Bases: str, enum.Enum
Enumerator with the Agent types.
ZERO_SHOT_REACT_DESCRIPTION = 'zero-shot-react-description'
REACT_DOCSTORE = 'react-docstore'
SELF_ASK_WITH_SEARCH = 'self-ask-with-search'
CONVERSATIONAL_REACT_DESCRIPTION = 'conversational-react-description'
CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'chat-zero-shot-react-description'
CHAT_CONVERSATIONAL_REACT_DESCRIPTION = 'chat-conversational-react-description'
STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION = 'structured-chat-zero-shot-react-description'
OPENAI_FUNCTIONS = 'openai-functions'
OPENAI_MULTI_FUNCTIONS = 'openai-multi-functions'

class langchain.agents.BaseMultiActionAgent
Bases: pydantic.main.BaseModel
Base Agent class. Return type: None.
abstract async aplan(intermediate_steps, callbacks=None, **kwargs): Given input, decide what to do. Parameters: intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]): steps the LLM has taken to date, along with observations; callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]): callbacks to run; **kwargs (Any): user inputs. Returns: Actions specifying what tool to use. Return type: Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish].
dict(**kwargs): Return a dictionary representation of the agent. Parameters: **kwargs (Any). Return type: Dict.
get_allowed_tools(): Return type: Optional[List[str]].
abstract plan(intermediate_steps, callbacks=None, **kwargs): Given input, decide what to do. Parameters: intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]): steps the LLM has taken to date, along with observations;
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to run. **kwargs – User inputs. kwargs (Any) – Returns Actions specifying what tool to use. Return type Union[List[langchain.schema.AgentAction], langchain.schema.AgentFinish] return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source] Return response when agent has been stopped due to max iterations. Parameters early_stopping_method (str) – intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – kwargs (Any) – Return type langchain.schema.AgentFinish save(file_path)[source] Save the agent. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the agent to. Return type None Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs()[source] Return type Dict property return_values: List[str] Return values of the agent. class langchain.agents.BaseSingleActionAgent[source] Bases: pydantic.main.BaseModel Base Agent class. Return type None abstract async aplan(intermediate_steps, callbacks=None, **kwargs)[source] Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to run. **kwargs – User inputs. kwargs (Any) –
Returns Action specifying what tool to use. Return type Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] dict(**kwargs)[source] Return dictionary representation of agent. Parameters kwargs (Any) – Return type Dict classmethod from_llm_and_tools(llm, tools, callback_manager=None, **kwargs)[source] Parameters llm (langchain.base_language.BaseLanguageModel) – tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – kwargs (Any) – Return type langchain.agents.agent.BaseSingleActionAgent get_allowed_tools()[source] Return type Optional[List[str]] abstract plan(intermediate_steps, callbacks=None, **kwargs)[source] Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to run. **kwargs – User inputs. kwargs (Any) – Returns Action specifying what tool to use. Return type Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] return_stopped_response(early_stopping_method, intermediate_steps, **kwargs)[source] Return response when agent has been stopped due to max iterations. Parameters early_stopping_method (str) – intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – kwargs (Any) – Return type langchain.schema.AgentFinish save(file_path)[source]
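The plan/aplan contract documented above can be sketched with stand-in types. The dataclasses below are hypothetical substitutes for langchain.schema.AgentAction and AgentFinish, and the toy planner is not the real implementation; it only illustrates the shape of the loop: inspect the (action, observation) history, then return either another action or a finish.

```python
from dataclasses import dataclass
from typing import List, Tuple, Union

@dataclass
class AgentAction:   # stand-in for langchain.schema.AgentAction
    tool: str
    tool_input: str
    log: str = ""

@dataclass
class AgentFinish:   # stand-in for langchain.schema.AgentFinish
    return_values: dict
    log: str = ""

def plan(intermediate_steps: List[Tuple[AgentAction, str]],
         **kwargs) -> Union[AgentAction, AgentFinish]:
    """Toy single-action planner: call one tool, then finish with its observation."""
    if not intermediate_steps:
        return AgentAction(tool="search", tool_input=kwargs["input"])
    _, last_observation = intermediate_steps[-1]
    return AgentFinish(return_values={"output": last_observation})

step = plan([], input="capital of France?")
assert isinstance(step, AgentAction) and step.tool == "search"
done = plan([(step, "Paris")], input="capital of France?")
assert isinstance(done, AgentFinish) and done.return_values["output"] == "Paris"
```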
Save the agent. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the agent to. Return type None Example: .. code-block:: python # If working with agent executor agent.agent.save(file_path="path/agent.yaml") tool_run_logging_kwargs()[source] Return type Dict property return_values: List[str] Return values of the agent. class langchain.agents.ConversationalAgent(*, llm_chain, output_parser=None, allowed_tools=None, ai_prefix='AI')[source] Bases: langchain.agents.agent.Agent An agent designed to hold a conversation in addition to using tools. Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – allowed_tools (Optional[List[str]]) – ai_prefix (str) – Return type None attribute ai_prefix: str = 'AI' attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]
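save(file_path) above writes the agent's dict() representation to disk, choosing a serializer from the file extension (.json or .yaml in langchain). A rough sketch of that dispatch pattern, JSON-only so it stays stdlib; save_dict is a hypothetical helper, not the real method, and the dict contents are illustrative:

```python
import json
import pathlib
import tempfile

def save_dict(data: dict, file_path) -> None:
    """Sketch of the save() pattern: pick a serializer by file suffix."""
    path = pathlib.Path(file_path)
    if path.suffix == ".json":
        path.write_text(json.dumps(data, indent=2))
    else:
        # The real method also accepts .yaml; anything else is rejected.
        raise ValueError(f"Unsupported file extension: {path.suffix}")

with tempfile.TemporaryDirectory() as tmp:
    target = pathlib.Path(tmp) / "agent.json"
    save_dict({"_type": "conversational-react-description", "ai_prefix": "AI"}, target)
    assert json.loads(target.read_text())["ai_prefix"] == "AI"
```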
classmethod create_prompt(tools, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:\n\n```\nThought: Do I need to use a tool?
No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None)[source]
Create prompt in the style of the zero shot agent. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – List of tools the agent will have access to, used to format the prompt. prefix (str) – String to put before the list of tools. suffix (str) – String to put after the list of tools. ai_prefix (str) – String to use before AI output. human_prefix (str) – String to use before human output. input_variables (Optional[List[str]]) – List of input variables the final prompt will expect. format_instructions (str) – Returns A PromptTemplate with the template assembled from the pieces here. Return type langchain.prompts.prompt.PromptTemplate
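create_prompt above assembles the final template from four pieces: the prefix, a rendered tool list, the format instructions with {tool_names} substituted, and the suffix. A rough stand-in of that assembly using plain strings instead of the real PromptTemplate; build_prompt_template and the (name, description) pairs are illustrative, not the actual API:

```python
def build_prompt_template(tools, prefix, format_instructions, suffix):
    """Sketch: join prefix, tool list, filled-in instructions, and suffix."""
    # tools: (name, description) pairs standing in for BaseTool objects.
    tool_strings = "\n".join(f"> {name}: {desc}" for name, desc in tools)
    tool_names = ", ".join(name for name, _ in tools)
    instructions = format_instructions.format(tool_names=tool_names)
    return "\n\n".join([prefix, tool_strings, instructions, suffix])

template = build_prompt_template(
    [("Search", "useful for current events")],
    prefix="Assistant has access to the following tools:",
    format_instructions="Action must be one of [{tool_names}].",
    suffix="New input: {input}\n{agent_scratchpad}",
)
assert "> Search: useful for current events" in template
assert "one of [Search]" in template
```

Note that only the format-instructions piece is formatted here; the {input} and {agent_scratchpad} slots in the suffix are left for the template's own input variables.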
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.\n\nTOOLS:\n------\n\nAssistant has access to the following tools:', suffix='Begin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}', format_instructions='To use a tool, please use the following format:\n\n```\nThought: Do I need to use a tool? Yes\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n```\n\nWhen you have a response to say to the Human, or if you do not need to use a tool, you MUST use the
format:\n\n```\nThought: Do I need to use a tool? No\n{ai_prefix}: [your response here]\n```', ai_prefix='AI', human_prefix='Human', input_variables=None, **kwargs)[source]
Construct an agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – format_instructions (str) – ai_prefix (str) – human_prefix (str) – input_variables (Optional[List[str]]) – kwargs (Any) – Return type langchain.agents.agent.Agent property llm_prefix: str Prefix to append the llm call with. property observation_prefix: str Prefix to append the observation with. class langchain.agents.ConversationalChatAgent(*, llm_chain, output_parser=None, allowed_tools=None, template_tool_response="TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else.")[source] Bases: langchain.agents.agent.Agent An agent designed to hold a conversation in addition to using tools. Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – allowed_tools (Optional[List[str]]) – template_tool_response (str) – Return type None attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]
attribute template_tool_response: str = "TOOL RESPONSE: \n---------------------\n{observation}\n\nUSER'S INPUT\n--------------------\n\nOkay, so what is the response to my last comment? If using information obtained from the tools you must mention it explicitly without mentioning the tool names - I have forgotten all TOOL RESPONSES! Remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else."
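template_tool_response is an ordinary Python format string with a single {observation} slot; after each tool call the agent fills it in before handing the result back to the model. Illustrated with a shortened stand-in of the default template above:

```python
# Shortened stand-in for the template_tool_response default shown above;
# the real string is longer but has the same single {observation} slot.
template = (
    "TOOL RESPONSE:\n"
    "---------------------\n"
    "{observation}\n\n"
    "Okay, so what is the response to my last comment?"
)

message = template.format(observation="Sunny, 21 degrees.")
assert "Sunny, 21 degrees." in message
assert "{observation}" not in message
```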
classmethod create_prompt(tools, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, output_parser=None)[source] Create a prompt for this class. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – system_message (str) – human_message (str) – input_variables (Optional[List[str]]) –
output_parser (Optional[langchain.schema.BaseOutputParser]) – Return type langchain.prompts.base.BasePromptTemplate
Return type langchain.prompts.base.BasePromptTemplate classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, system_message='Assistant is a large language model trained by OpenAI.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.\n\nAssistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.\n\nOverall, Assistant is a powerful system that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.', human_message="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}", input_variables=None, **kwargs)[source] Construct an agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) –
tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – system_message (str) – human_message (str) – input_variables (Optional[List[str]]) – kwargs (Any) – Return type langchain.agents.agent.Agent property llm_prefix: str Prefix to append the llm call with. property observation_prefix: str Prefix to append the observation with. class langchain.agents.LLMSingleActionAgent(*, llm_chain, output_parser, stop)[source] Bases: langchain.agents.agent.BaseSingleActionAgent Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – stop (List[str]) – Return type None attribute llm_chain: langchain.chains.llm.LLMChain [Required] attribute output_parser: langchain.agents.agent.AgentOutputParser [Required] attribute stop: List[str] [Required] async aplan(intermediate_steps, callbacks=None, **kwargs)[source] Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to run. **kwargs – User inputs. kwargs (Any) – Returns Action specifying what tool to use. Return type
Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] dict(**kwargs)[source] Return dictionary representation of agent. Parameters kwargs (Any) – Return type Dict plan(intermediate_steps, callbacks=None, **kwargs)[source] Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Callbacks to run. **kwargs – User inputs. kwargs (Any) – Returns Action specifying what tool to use. Return type Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] tool_run_logging_kwargs()[source] Return type Dict class langchain.agents.MRKLChain(*, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source] Bases: langchain.agents.agent.AgentExecutor Chain that implements the MRKL system. Example from langchain import OpenAI, MRKLChain from langchain.chains.mrkl.base import ChainConfig llm = OpenAI(temperature=0) chains = [...] mrkl = MRKLChain.from_chains(llm=llm, chains=chains) Parameters memory (Optional[langchain.schema.BaseMemory]) –
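LLMSingleActionAgent.plan above is essentially two steps: run llm_chain on the inputs (passing stop as the stop sequences) and hand the raw completion to output_parser. A stand-in sketch with a fake LLM callable and a minimal regex parser; neither is the real class, and the tuple return shape is an illustrative simplification:

```python
import re

def parse(text: str):
    """Minimal stand-in output parser: extract a tool call or a final answer."""
    match = re.search(r"Action: (.*)\nAction Input: (.*)", text)
    if match:
        return ("action", match.group(1).strip(), match.group(2).strip())
    return ("finish", text.split("Final Answer:")[-1].strip())

def plan(fake_llm, stop, **inputs):
    # Mirrors the shape of plan(): llm_chain.run(..., stop=stop), then parse.
    completion = fake_llm(inputs, stop=stop)
    return parse(completion)

result = plan(lambda inputs, stop: "Action: Search\nAction Input: weather today",
              stop=["\nObservation:"])
assert result == ("action", "Search", "weather today")
```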
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) – tools (Sequence[langchain.tools.base.BaseTool]) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) – Return type None classmethod from_chains(llm, chains, **kwargs)[source] User-friendly way to initialize the MRKL chain. This is intended to be an easy way to get up and running with the MRKL chain. Parameters llm (langchain.base_language.BaseLanguageModel) – The LLM to use as the agent LLM. chains (List[langchain.agents.mrkl.base.ChainConfig]) – The chains the MRKL system has access to. **kwargs – parameters to be passed to initialization. kwargs (Any) – Returns An initialized MRKL chain. Return type langchain.agents.agent.AgentExecutor Example from langchain import LLMMathChain, OpenAI, SerpAPIWrapper, MRKLChain from langchain.chains.mrkl.base import ChainConfig llm = OpenAI(temperature=0) search = SerpAPIWrapper() llm_math_chain = LLMMathChain(llm=llm) chains = [
ChainConfig( action_name = "Search", action=search.search, action_description="useful for searching" ), ChainConfig( action_name="Calculator", action=llm_math_chain.run, action_description="useful for doing math" ) ] mrkl = MRKLChain.from_chains(llm, chains) class langchain.agents.OpenAIFunctionsAgent(*, llm, tools, prompt)[source] Bases: langchain.agents.agent.BaseSingleActionAgent An Agent driven by OpenAI's function-powered API. Parameters llm (langchain.base_language.BaseLanguageModel) – This should be an instance of ChatOpenAI, specifically a model that supports using functions. tools (Sequence[langchain.tools.base.BaseTool]) – The tools this agent has access to. prompt (langchain.prompts.base.BasePromptTemplate) – The prompt for this agent, should support agent_scratchpad as one of the variables. For an easy way to construct this prompt, use OpenAIFunctionsAgent.create_prompt(…) Return type None attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute prompt: langchain.prompts.base.BasePromptTemplate [Required] attribute tools: Sequence[langchain.tools.base.BaseTool] [Required] async aplan(intermediate_steps, callbacks=None, **kwargs)[source] Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations **kwargs – User inputs.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Returns Action specifying what tool to use. Return type Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] classmethod create_prompt(system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), extra_prompt_messages=None)[source] Create prompt for this agent. Parameters system_message (Optional[langchain.schema.SystemMessage]) – Message to use as the system message that will be the first in the prompt. extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) – Prompt messages that will be placed between the system message and the new human input. Returns A prompt template to pass into this agent. Return type langchain.prompts.base.BasePromptTemplate classmethod from_llm_and_tools(llm, tools, callback_manager=None, extra_prompt_messages=None, system_message=SystemMessage(content='You are a helpful AI assistant.', additional_kwargs={}), **kwargs)[source] Construct an agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – extra_prompt_messages (Optional[List[langchain.prompts.chat.BaseMessagePromptTemplate]]) – system_message (Optional[langchain.schema.SystemMessage]) – kwargs (Any) – Return type langchain.agents.agent.BaseSingleActionAgent get_allowed_tools()[source] Get allowed tools. Return type List[str] plan(intermediate_steps, callbacks=None, **kwargs)[source]
Given input, decide what to do. Parameters intermediate_steps (List[Tuple[langchain.schema.AgentAction, str]]) – Steps the LLM has taken to date, along with observations **kwargs – User inputs. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Returns Action specifying what tool to use. Return type Union[langchain.schema.AgentAction, langchain.schema.AgentFinish] property functions: List[dict] property input_keys: List[str] Get input keys. Input refers to user input here. class langchain.agents.ReActChain(llm, docstore, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source] Bases: langchain.agents.agent.AgentExecutor Chain that implements the ReAct paper. Example from langchain import ReActChain, OpenAI react = ReActChain(llm=OpenAI()) Parameters llm (langchain.base_language.BaseLanguageModel) – docstore (langchain.docstore.base.Docstore) – memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) –
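The functions property above renders each tool as an OpenAI function-calling schema: a dict with name, description, and a JSON-schema parameters object. A hedged sketch of that conversion for a plain single-string-input tool; the '__arg1' argument name mirrors what langchain uses for such tools, but treat the exact layout as an assumption:

```python
def tool_to_function(name: str, description: str) -> dict:
    """Sketch: render a single-string-input tool as an OpenAI function schema.

    Assumes the default single '__arg1' string argument; tools with an
    args_schema would instead contribute their pydantic-derived schema.
    """
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {"__arg1": {"type": "string"}},
            "required": ["__arg1"],
        },
    }

schema = tool_to_function("search", "useful for current events")
assert schema["name"] == "search"
assert schema["parameters"]["required"] == ["__arg1"]
```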
agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) – tools (Sequence[langchain.tools.base.BaseTool]) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) – Return type None class langchain.agents.ReActTextWorldAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source] Bases: langchain.agents.react.base.ReActDocstoreAgent Agent for the ReAct TextWorld chain. Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – allowed_tools (Optional[List[str]]) – Return type None classmethod create_prompt(tools)[source] Return default prompt. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – Return type langchain.prompts.base.BasePromptTemplate class langchain.agents.SelfAskWithSearchChain(llm, search_chain, *, memory=None, callbacks=None, callback_manager=None, verbose=None, tags=None, agent, tools, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', handle_parsing_errors=False)[source] Bases: langchain.agents.agent.AgentExecutor Chain that does self-ask with search. Example from langchain import SelfAskWithSearchChain, OpenAI, GoogleSerperAPIWrapper search_chain = GoogleSerperAPIWrapper()
self_ask = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search_chain) Parameters llm (langchain.base_language.BaseLanguageModel) – search_chain (Union[langchain.utilities.google_serper.GoogleSerperAPIWrapper, langchain.utilities.serpapi.SerpAPIWrapper]) – memory (Optional[langchain.schema.BaseMemory]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – tags (Optional[List[str]]) – agent (Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]) – tools (Sequence[langchain.tools.base.BaseTool]) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – handle_parsing_errors (Union[bool, str, Callable[[langchain.schema.OutputParserException], str]]) – Return type None class langchain.agents.StructuredChatAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source] Bases: langchain.agents.agent.Agent Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – allowed_tools (Optional[List[str]]) – Return type None attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]
None attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional] classmethod create_prompt(tools, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ  "action": $TOOL_NAME,\nΒ  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ  "action": "Final Answer",\nΒ  "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None)[source] Create a prompt for this class. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – prefix (str) – suffix (str) – human_message_template (str) – format_instructions (str) – input_variables (Optional[List[str]]) –
format_instructions (str) – input_variables (Optional[List[str]]) – memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) – Return type langchain.prompts.base.BasePromptTemplate classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Respond to the human as helpfully and accurately as possible. You have access to the following tools:', suffix='Begin! Reminder to ALWAYS respond with a valid json blob of a single action. Use tools if necessary. Respond directly if appropriate. Format is Action:```$JSON_BLOB```then Observation:.\nThought:', human_message_template='{input}\n\n{agent_scratchpad}', format_instructions='Use a json blob to specify a tool by providing an action key (tool name) and an action_input key (tool input).\n\nValid "action" values: "Final Answer" or {tool_names}\n\nProvide only ONE action per $JSON_BLOB, as shown:\n\n```\n{{{{\nΒ  "action": $TOOL_NAME,\nΒ  "action_input": $INPUT\n}}}}\n```\n\nFollow this format:\n\nQuestion: input question to answer\nThought: consider previous and subsequent steps\nAction:\n```\n$JSON_BLOB\n```\nObservation: action result\n... (repeat Thought/Action/Observation N times)\nThought: I know what to respond\nAction:\n```\n{{{{\nΒ  "action": "Final Answer",\nΒ  "action_input": "Final response to human"\n}}}}\n```', input_variables=None, memory_prompts=None, **kwargs)[source] Construct an agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) –
tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – human_message_template (str) – format_instructions (str) – input_variables (Optional[List[str]]) – memory_prompts (Optional[List[langchain.prompts.base.BasePromptTemplate]]) – kwargs (Any) – Return type langchain.agents.agent.Agent property llm_prefix: str Prefix to append the llm call with. property observation_prefix: str Prefix to append the observation with. class langchain.agents.Tool(name, func, description, *, args_schema=None, return_direct=False, verbose=False, callbacks=None, callback_manager=None, handle_tool_error=False, coroutine=None)[source] Bases: langchain.tools.base.BaseTool Tool that takes in function or coroutine directly. Parameters name (str) – func (Callable[[...], str]) – description (str) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – return_direct (bool) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – handle_tool_error (Optional[Union[bool, str, Callable[[langchain.tools.base.ToolException], str]]]) – coroutine (Optional[Callable[[...], Awaitable[str]]]) – Return type None
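The structured-chat format instructions above ask the model for a single JSON blob, inside a fenced code block, with action and action_input keys. Parsing such a reply comes down to locating the fenced blob and json.loads-ing it; a minimal sketch (parse_json_blob and its tuple return are illustrative, not the real output parser):

```python
import json
import re

def parse_json_blob(text: str):
    """Sketch: pull the fenced JSON blob out of a structured-chat reply."""
    match = re.search(r"```(?:json)?\s*(\{.*?\})\s*```", text, re.DOTALL)
    blob = json.loads(match.group(1))
    if blob["action"] == "Final Answer":
        return ("finish", blob["action_input"])
    return ("action", blob["action"], blob["action_input"])

reply = 'Action:\n```\n{"action": "Search", "action_input": "weather in SF"}\n```'
assert parse_json_blob(reply) == ("action", "Search", "weather in SF")
```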
attribute coroutine: Optional[Callable[[...], Awaitable[str]]] = None The asynchronous version of the function. attribute description: str = '' Used to tell the model how/when/why to use the tool. You can provide few-shot examples as a part of the description. attribute func: Callable[[...], str] [Required] The function to run when the tool is called. classmethod from_function(func, name, description, return_direct=False, args_schema=None, **kwargs)[source] Initialize tool from a function. Parameters func (Callable) – name (str) – description (str) – return_direct (bool) – args_schema (Optional[Type[pydantic.main.BaseModel]]) – kwargs (Any) – Return type langchain.tools.base.Tool property args: dict The tool’s input arguments. class langchain.agents.ZeroShotAgent(*, llm_chain, output_parser=None, allowed_tools=None)[source] Bases: langchain.agents.agent.Agent Agent for the MRKL chain. Parameters llm_chain (langchain.chains.llm.LLMChain) – output_parser (langchain.agents.agent.AgentOutputParser) – allowed_tools (Optional[List[str]]) – Return type None attribute output_parser: langchain.agents.agent.AgentOutputParser [Optional]
classmethod create_prompt(tools, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None)[source] Create prompt in the style of the zero shot agent. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – List of tools the agent will have access to, used to format the prompt. prefix (str) – String to put before the list of tools. suffix (str) – String to put after the list of tools. input_variables (Optional[List[str]]) – List of input variables the final prompt will expect. format_instructions (str) – Returns A PromptTemplate with the template assembled from the pieces here. Return type langchain.prompts.prompt.PromptTemplate
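Conceptually, create_prompt interpolates the prefix, a rendered tool list, the format instructions (with {tool_names} filled in), and the suffix. A self-contained sketch of that assembly in plain Python (illustrative only; the real method returns a PromptTemplate and computes input_variables):

```python
# Sketch of zero-shot prompt assembly. Tools are (name, description)
# pairs here; the real method takes BaseTool instances.

PREFIX = "Answer the following questions as best you can. You have access to the following tools:"
FORMAT_INSTRUCTIONS = (
    "Use the following format:\n\n"
    "Question: the input question you must answer\n"
    "Thought: you should always think about what to do\n"
    "Action: the action to take, should be one of [{tool_names}]\n"
    "Action Input: the input to the action\n"
    "Observation: the result of the action"
)
SUFFIX = "Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}"

def create_prompt(tools, prefix=PREFIX, format_instructions=FORMAT_INSTRUCTIONS, suffix=SUFFIX):
    # render each tool as "name: description", one per line
    tool_strings = "\n".join(f"{name}: {desc}" for name, desc in tools)
    tool_names = ", ".join(name for name, _ in tools)
    format_instructions = format_instructions.format(tool_names=tool_names)
    # the four pieces are joined with blank lines between them
    return "\n\n".join([prefix, tool_strings, format_instructions, suffix])

template = create_prompt([("search", "useful for looking things up"),
                          ("calculator", "useful for math")])
```

Note that {input} and {agent_scratchpad} are deliberately left unsubstituted: they become the template's input variables.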
classmethod from_llm_and_tools(llm, tools, callback_manager=None, output_parser=None, prefix='Answer the following questions as best you can. You have access to the following tools:', suffix='Begin!\n\nQuestion: {input}\nThought:{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, **kwargs)[source] Construct an agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – tools (Sequence[langchain.tools.base.BaseTool]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – kwargs (Any) – Return type langchain.agents.agent.Agent property llm_prefix: str Prefix to append the llm call with. property observation_prefix: str Prefix to append the observation with. langchain.agents.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source] Create a CSV agent by loading the file into a dataframe and using the pandas agent. Parameters llm (langchain.base_language.BaseLanguageModel) –
path (Union[str, List[str]]) – pandas_kwargs (Optional[dict]) – kwargs (Any) – Return type langchain.agents.agent.AgentExecutor
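create_csv_agent accepts a single path or a list of paths and loads them into one dataframe before delegating to the pandas agent. A dependency-free sketch of that loading-and-concatenation step using the stdlib csv module (the real implementation uses pandas.read_csv; load_tables is a hypothetical stand-in):

```python
import csv
import io

def load_tables(path_or_paths):
    """Read one or more CSV sources and concatenate their rows,
    mirroring create_csv_agent's handling of path: Union[str, List[str]]."""
    paths = path_or_paths if isinstance(path_or_paths, list) else [path_or_paths]
    rows = []
    for p in paths:
        # real code passes filenames to pandas.read_csv; accepting
        # file-like objects keeps this sketch self-contained
        handle = open(p) if isinstance(p, str) else p
        rows.extend(csv.DictReader(handle))
    return rows

table = load_tables([io.StringIO("a,b\n1,2\n"), io.StringIO("a,b\n3,4\n")])
```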
langchain.agents.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly
return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a json agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
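The JSON agent's tools resolve Python-style paths such as data["key"][0] and list the keys found there, which is why the prompt insists on exploring one key at a time. A self-contained sketch of such a key-listing tool (json_spec_list_keys here is an illustrative stand-in, not langchain's implementation):

```python
import json
import re

def json_spec_list_keys(data, path):
    """Resolve a path like data["key"][0] against `data` and list the
    keys available at that point, raising if the value is not a dict."""
    value = data
    # pull out each ["key"] or [index] segment after the leading `data`
    for key, index in re.findall(r'\[(?:"([^"]*)"|(\d+))\]', path):
        value = value[key] if key else value[int(index)]
    if isinstance(value, dict):
        return sorted(value.keys())
    raise ValueError("Value at path is not a dictionary")

spec = json.loads('{"servers": [{"url": "https://example.com"}], "info": {"title": "demo"}}')
keys = json_spec_list_keys(spec, 'data')                 # top-level keys
inner = json_spec_list_keys(spec, 'data["servers"][0]')  # keys one level down
```

A missing key raises KeyError, which is exactly the situation the prompt tells the model to recover from by stepping back and re-listing keys.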
langchain.agents.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal
Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]
Construct an OpenAPI agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – return_intermediate_steps (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source] Construct a pandas agent from an LLM and dataframe. Parameters llm (langchain.base_language.BaseLanguageModel) – df (Any) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (Optional[str]) – suffix (Optional[str]) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) –
early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – include_df_in_prompt (Optional[bool]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
langchain.agents.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None,
top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a pbi agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
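The Power BI prompt asks the model to render large numbers readably, "like 1M instead of 1000000". That convention is easy to express in plain Python (humanize is a hypothetical helper, not part of langchain):

```python
def humanize(n):
    """Format a number the way the PBI prompt asks for: 1M, 2.5K, ..."""
    for factor, suffix in ((1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")):
        if abs(n) >= factor:
            value = n / factor
            # drop a trailing .0 so 1000000 renders as "1M", not "1.0M"
            return f"{value:.1f}".rstrip("0").rstrip(".") + suffix
    return str(n)

print(humanize(1_000_000))  # prints "1M"
print(humanize(2500))       # prints "2.5K"
```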
langchain.agents.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a pbi agent from a chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both. Parameters llm (langchain.chat_models.base.BaseChatModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source] Construct a spark agent from an LLM and dataframe. Parameters llm (langchain.llms.base.BaseLLM) – df (Any) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
langchain.agents.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10,
max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Spark SQL agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
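The prompt above forbids DML statements (INSERT, UPDATE, DELETE, DROP, etc.), but only via instructions to the model. An executor-side guard in the same spirit could look like this sketch (is_read_only is illustrative; langchain does not enforce such a check here):

```python
import re

# keywords the prompt forbids (DML plus common DDL)
FORBIDDEN = {"insert", "update", "delete", "drop", "alter", "truncate"}

def is_read_only(query: str) -> bool:
    """True when no forbidden keyword appears as a whole word in the query."""
    # tokenize on identifier characters so column names like
    # `updated_at` do not trigger a false match on `update`
    tokens = re.findall(r"[a-z_]+", query.lower())
    return FORBIDDEN.isdisjoint(tokens)
```

In practice such a guard would wrap the query-execution tool and refuse to run anything that fails the check, complementing the prompt-level instruction.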
langchain.agents.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None,
**kwargs)[source]
Construct a sql agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (Optional[str]) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) –
agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore router agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.get_all_tool_names()[source] Get a list of all possible tool names. Return type List[str] langchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, **kwargs)[source] Load an agent executor given tools and LLM. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – List of tools this agent has access to.
llm (langchain.base_language.BaseLanguageModel) – Language model to use as the agent. agent (Optional[langchain.agents.agent_types.AgentType]) – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path (Optional[str]) – Path to serialized agent to use. agent_kwargs (Optional[dict]) – Additional keyword arguments to pass to the underlying agent. tags (Optional[Sequence[str]]) – Tags to apply to the traced runs. **kwargs – Additional keyword arguments passed to the agent executor kwargs (Any) – Returns An agent executor Return type langchain.agents.agent.AgentExecutor langchain.agents.load_agent(path, **kwargs)[source] Unified method for loading an agent from LangChainHub or local fs. Parameters path (Union[str, pathlib.Path]) – kwargs (Any) – Return type Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent] langchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source] Loads a tool from the HuggingFace Hub. Parameters task_or_repo_id (str) – Task or model repo id. model_repo_id (Optional[str]) – Optional model repo id. token (Optional[str]) – Optional token. remote (bool) – Optional remote. Defaults to False. **kwargs – kwargs (Any) – Returns A tool. Return type langchain.tools.base.BaseTool
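The agent/agent_path precedence described above can be sketched as follows (a simplified, hypothetical stand-in: the real function also deserializes the agent from agent_path and builds the executor):

```python
from enum import Enum

class AgentType(Enum):
    ZERO_SHOT_REACT_DESCRIPTION = "zero-shot-react-description"

def resolve_agent(agent=None, agent_path=None):
    """Simplified dispatch: an explicit type wins, then a serialized
    agent path, then the documented default."""
    if agent is not None and agent_path is not None:
        raise ValueError("Specify at most one of `agent` and `agent_path`.")
    if agent_path is not None:
        return ("load_agent", agent_path)   # placeholder for deserialization
    return agent if agent is not None else AgentType.ZERO_SHOT_REACT_DESCRIPTION
```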
langchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source] Load tools based on their name. Parameters tool_names (List[str]) – name of tools to load. llm (Optional[langchain.base_language.BaseLanguageModel]) – Optional language model, may be needed to initialize certain tools. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Optional callback manager or list of callback handlers. If not provided, the default global callback manager will be used. kwargs (Any) – Returns List of tools. Return type List[langchain.tools.base.BaseTool] langchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source] Make tools out of functions; can be used with or without arguments. Parameters *args – The arguments to the tool. return_direct (bool) – Whether to return directly from the tool rather than continuing the agent loop. args_schema (Optional[Type[pydantic.main.BaseModel]]) – optional argument schema for the user to specify. infer_schema (bool) – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function. args (Union[str, Callable]) – Return type Callable Requires: Function must be of type (str) -> str Function must have a docstring Examples

@tool
def search_api(query: str) -> str:
    # Searches the API for the query.
    return

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    # Searches the API for the query.
    return
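The Examples above show @tool used both bare and with arguments. That dual calling convention is the classic optional-argument decorator pattern; a minimal self-contained sketch of how such a decorator distinguishes the two forms (langchain's real tool() additionally builds a Tool instance and can infer an args schema):

```python
import functools

def tool(*args, return_direct=False):
    """A decorator usable bare (@tool) or with arguments (@tool("name"))."""
    def make(func, name):
        @functools.wraps(func)
        def wrapper(query):
            return func(query)
        wrapper.tool_name = name
        wrapper.return_direct = return_direct
        return wrapper
    if len(args) == 1 and callable(args[0]):
        # bare form: @tool was applied directly to the function
        return make(args[0], args[0].__name__)
    # argument form: @tool("name", ...) returns the real decorator
    name = args[0] if args else None
    return lambda func: make(func, name or func.__name__)

@tool
def search_api(query: str) -> str:
    return "result for " + query

@tool("search", return_direct=True)
def search_tool(query: str) -> str:
    return "result for " + query
```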
Document Loaders All different types of document loaders. class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads AZLyrics webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.AirbyteJSONLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads local airbyte json files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for Airtable tables. Parameters
https://api.python.langchain.com/en/stable/modules/document_loaders.html
api_token (str) – table_id (str) – base_id (str) – lazy_load()[source] Lazy load records from table. Return type Iterator[langchain.schema.Document] load()[source] Load Table. Return type List[langchain.schema.Document] class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Logic for loading documents from Apify datasets. Parameters dataset_id (str) – dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) – Return type None attribute apify_client: Any = None attribute dataset_id: str [Required] The ID of the dataset on the Apify platform. attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required] A custom function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from arxiv.org into a list of Documents. Each Document represents one arXiv result. The loader converts the original PDF into text. Parameters query (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
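The loaders above all follow the same small contract: lazy_load() yields Documents and load() materializes the list. A self-contained sketch of that interface with a minimal Document stand-in (not langchain's classes; InMemoryLoader is illustrative):

```python
from dataclasses import dataclass, field
from typing import Dict, Iterator, List

@dataclass
class Document:
    """Minimal stand-in for langchain.schema.Document."""
    page_content: str
    metadata: Dict = field(default_factory=dict)

class InMemoryLoader:
    """Toy BaseLoader-style loader over a list of strings."""
    def __init__(self, texts: List[str]):
        self.texts = texts

    def lazy_load(self) -> Iterator[Document]:
        # yield one Document per source item, tagging provenance in metadata
        for i, text in enumerate(self.texts):
            yield Document(page_content=text, metadata={"source": f"mem:{i}"})

    def load(self) -> List[Document]:
        # load() is conventionally just the materialized lazy_load()
        return list(self.lazy_load())

docs = InMemoryLoader(["hello", "world"]).load()
```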
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) –
container (str) –
prefix (str) –
load()[source]
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) –
container (str) –
blob_name (str) –
load()[source]
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) –
open_encoding (Optional[str]) –
bs_kwargs (Optional[dict]) –
get_text_separator (str) –
Return type
None
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source]
Bases: langchain.document_loaders.base.BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file. If a PDF file is present in the file bibtex field, the original PDF is loaded into the document text. If no such file entry is present, the abstract field is used instead.
Parameters
file_path (str) –
parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) –
max_docs (Optional[int]) –
max_content_chars (Optional[int]) –
load_extra_metadata (bool) –
file_pattern (str) –
lazy_load()[source]
Load bibtex file using bibtexparser and get the article texts plus the article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns a list of documents with the document.page_content in text format
Return type
Iterator[langchain.schema.Document]
load()[source]
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path – the path to the bibtex file
Returns a list of documents with the document.page_content in text format
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata.
Parameters
query (str) –
project (Optional[str]) –
page_content_columns (Optional[List[str]]) –
metadata_columns (Optional[List[str]]) –
credentials (Optional[Credentials]) –
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BiliBiliLoader(video_urls)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads bilibili transcripts.
Parameters
video_urls (List[str]) –
load()[source]
Load from bilibili url.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser's developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
    blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
    bbrouter="expires:12345...",
)
documents = loader.load()
Parameters
blackboard_course_url (str) –
bbrouter (str) –
load_all_recursively (bool) –
basic_auth (Optional[Tuple[str, str]]) –
cookies (Optional[dict]) –
folder_path: str
base_url: str
load_all_recursively: bool
check_bs4()[source]
Check if BeautifulSoup4 is installed.
Raises
ImportError – If BeautifulSoup4 is not installed.
Return type
None
load()[source]
Load data into document objects.
Returns
List of documents.
Return type
List[langchain.schema.Document]
download(path)[source]
Download a file from a url.
Parameters
path (str) – Path to the file.
Return type
None
parse_filename(url)[source]
Parse the filename from a url.
Parameters
url (str) – Url to parse the filename from.
Returns
The filename.
Return type
str
class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]
Bases: pydantic.main.BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and help to decouple the development of data loaders from the downstream parsing of the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Parameters
data (Optional[Union[bytes, str]]) –
mimetype (Optional[str]) –
encoding (str) –
path (Optional[Union[str, pathlib.PurePath]]) –
Return type
None
attribute data: Optional[Union[bytes, str]] = None
attribute encoding: str = 'utf-8'
attribute mimetype: Optional[str] = None
attribute path: Optional[Union[str, pathlib.PurePath]] = None
as_bytes()[source]
Read data as bytes.
Return type
bytes
as_bytes_io()[source]
Read data as a byte stream.
Return type
Generator[Union[_io.BytesIO, _io.BufferedReader], None, None]
as_string()[source]
Read data as a string.
Return type
str
classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]
Initialize the blob from in-memory data.
Parameters
data (Union[str, bytes]) – the in-memory data associated with the blob
encoding (str) – Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) – if provided, will be set as the mime-type of the data
path (Optional[str]) – if provided, will be set as the source from which the data came
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]
Load the blob from a path like object.
Parameters
path (Union[str, pathlib.PurePath]) – path like object to file to be read
encoding (str) – Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) – if provided, will be set as the mime-type of the data
guess_type (bool) – If True, the mimetype will be guessed from the file extension, if a mime-type was not provided
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
property source: Optional[str]
The source location of the blob as a string if known, otherwise None.
class langchain.document_loaders.BlobLoader[source]
Bases: abc.ABC
Abstract interface for blob loaders implementation.
Implementer should be able to load raw content from a storage system according to some criteria and return the raw content lazily as a stream of blobs.
abstract yield_blobs()[source]
A lazy loader for raw data represented by LangChain's Blob object.
Returns
A generator over blobs
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet, Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain. The ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens on the contract. Note that for contracts with a large number of tokens, this may take a long time (e.g. 10k tokens is 100 requests). Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address (str) –
blockchainType (langchain.document_loaders.blockchain.BlockchainType) –
api_key (str) –
startToken (str) –
get_all_tokens (bool) –
max_execution_time (Optional[int]) –
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a key/value pair and outputted to a new line in the document's page_content.
The source for each document loaded from csv is set to the value of the file_path argument for all documents by default. You can override this by setting the source_column argument to the name of a column in the CSV file. The source of each document will then be set to the value of the column with the name specified in source_column.
Output Example:
column1: value1
column2: value2
column3: value3
Parameters
file_path (str) –
source_column (Optional[str]) –
csv_args (Optional[Dict]) –
encoding (Optional[str]) –
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=-1)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads conversations from exported ChatGPT data.
Parameters
log_file (str) –
num_logs (int) –
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CoNLLULoader(file_path)[source]
Bases: langchain.document_loaders.base.BaseLoader
Load CoNLL-U files.
Parameters
file_path (str) –
load()[source]
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True, proxies=None)[source]
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads College Confidential webpages.
Parameters
web_path (Union[str, List[str]]) –
header_template (Optional[dict]) –
verify (Optional[bool]) –
proxies (Optional[dict]) –
load()[source]
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, OAuth2 login, or personal access token authentication.
Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments. This is set to False by default; if set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as they would appear to users. You can pass an enum content_format argument to load() to specify the content format; this is set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345"
)
documents = loader.load(space_key="SPACE", limit=50)
Parameters
url (str) – _description_
api_key (str, optional) – _description_, defaults to None
username (str, optional) – _description_, defaults to None
oauth2 (dict, optional) – _description_, defaults to {}
token (str, optional) – _description_, defaults to None
cloud (bool, optional) – _description_, defaults to True
number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) – defaults to 2
max_retry_seconds (Optional[int], optional) – defaults to 10
confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with
Raises
ValueError – Errors while validating input
ImportError – Required dependencies not installed.
static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]
Validates proper combinations of init arguments
Parameters
url (Optional[str]) –
api_key (Optional[str]) –
username (Optional[str]) –
oauth2 (Optional[dict]) –
token (Optional[str]) –
Return type
Optional[List]
load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]
Parameters
space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None
label (Optional[str], optional) – Get all pages with this label, defaults to None
cql (Optional[str], optional) – CQL Expression, defaults to None
include_restricted_content (bool, optional) – defaults to False
include_archived_content (bool, optional) – Whether to include archived content, defaults to False
include_attachments (bool, optional) – defaults to False
include_comments (bool, optional) – defaults to False
content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a language, you'll first need to install the appropriate Tesseract language pack.
Raises
ValueError – _description_
ImportError – _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method, **kwargs)[source]
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API doesn't match the limit value. If limit is >100, Confluence seems to cap the response at 100. Also, due to the Atlassian Python package, we don't get the "next" values from the "_links" key because they only return the value from the results key. So here, the pagination starts from 0 and goes until the max_pages, getting the limit number of pages with each request. We have to manually check if there are more docs based on the length of the returned list of pages, rather than just checking for the presence of a next key in the response like this page would have you do: https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) – Function used to retrieve docs
kwargs (Any) –
Returns
List of documents
Return type
List
is_public_page(page)[source]
Check if a page is publicly accessible.
Parameters
page (dict) –
Return type
bool
process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]
Process a list of pages into a list of documents.
Parameters
pages (List[dict]) –
include_restricted_content (bool) –
include_attachments (bool) –
include_comments (bool) –
content_format (langchain.document_loaders.confluence.ContentFormat) –
ocr_languages (Optional[str]) –
Return type
List[langchain.schema.Document]
process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]
Parameters
page (dict) –
include_attachments (bool) –
include_comments (bool) –
content_format (langchain.document_loaders.confluence.ContentFormat) –
ocr_languages (Optional[str]) –
Return type
langchain.schema.Document
process_attachment(page_id, ocr_languages=None)[source]
Parameters
page_id (str) –
ocr_languages (Optional[str]) –
Return type
List[str]
process_pdf(link, ocr_languages=None)[source]
Parameters
link (str) –
ocr_languages (Optional[str]) –
Return type
str
process_image(link, ocr_languages=None)[source]
Parameters
link (str) –
ocr_languages (Optional[str]) –
Return type
str
process_doc(link)[source]
Parameters
link (str) –
Return type
str
process_xls(link)[source]
Parameters
link (str) –
Return type
str
process_svg(link, ocr_languages=None)[source]
Parameters
link (str) –
ocr_languages (Optional[str]) –
Return type
str
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]
Bases: langchain.document_loaders.base.BaseLoader
Load Pandas DataFrames.
Parameters
data_frame (Any) –
page_content_column (str) –
lazy_load()[source]
Lazy load records from dataframe.
Return type
Iterator[langchain.schema.Document]
load()[source]
Load full dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Diffbot file json.
Parameters
api_token (str) –
urls (List[str]) –
continue_on_failure (bool) –
load()[source]
Extract text from Diffbot on all the URLs and return Document instances
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from a directory.
Parameters
path (str) –
glob (str) –
silent_errors (bool) –
load_hidden (bool) –
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) –
loader_kwargs (Optional[dict]) –
recursive (bool) –
show_progress (bool) –
use_multithreading (bool) –
max_concurrency (int) –
load_file(item, path, docs, pbar)[source]
Parameters
item (pathlib.Path) –
path (pathlib.Path) –
docs (List[langchain.schema.Document]) –
pbar (Optional[Any]) –
Return type
None
load()[source]
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]
Bases: langchain.document_loaders.base.BaseLoader
Load Discord chat logs.
Parameters
chat_log (pd.DataFrame) –
user_id_col (str) –
load()[source]
Load all chat messages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Parameters
api (str) –
access_token (Optional[str]) –
docset_id (Optional[str]) –
document_ids (Optional[Sequence[str]]) –
file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) –
min_chunk_size (int) –
Return type
None
attribute access_token: Optional[str] = None
attribute api: str = 'https://api.docugami.com/v1preview1'
attribute docset_id: Optional[str] = None
attribute document_ids: Optional[Sequence[str]] = None
attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None
attribute min_chunk_size: int = 32
load()[source]
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.Docx2txtLoader(file_path)[source]
Bases: langchain.document_loaders.base.BaseLoader, abc.ABC
Loads a DOCX with docx2txt and chunks at character level.
Checks for a local file by default, but if the file is a web path, it will download it to a temporary file, use that, and then clean up the temporary file after completion.
Parameters
file_path (str) –
load()[source]
Load given path as single page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata.
Parameters
query (str) –
database (str) –
read_only (bool) –
config (Optional[Dict[str, str]]) –
page_content_columns (Optional[List[str]]) –
metadata_columns (Optional[List[str]]) –
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser
Wrapper around embaas's document byte loader service.
To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter"
    }
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Parameters
embaas_api_key (Optional[str]) –
api_url (str) –
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) –
Return type
None
lazy_parse(blob)[source]
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob (langchain.document_loaders.blob_loaders.schema.Blob) – Blob instance
Returns
Generator of documents
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader
Wrapper around embaas's document loader service.
To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(
    file_path="example.pdf",
    params={
        "should_embed": True,
        "model": "e5-large-v2",
        "chunk_size": 256,
        "chunk_splitter": "CharacterTextSplitter"
    }
)
documents = loader.load()
Parameters
embaas_api_key (Optional[str]) –
api_url (str) –
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) –
file_path (str) –
blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) –
Return type
None
attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None
The blob loader to use. If not provided, a default one will be created.
attribute file_path: str [Required]
The path to the file to load.
lazy_load()[source]
Load the documents from the file path lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
load_and_split(text_splitter=None)[source]
Load documents and split into chunks.
Parameters
text_splitter (Optional[langchain.text_splitter.TextSplitter]) –
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]
Bases: langchain.document_loaders.base.BaseLoader
EverNote Loader.
Loads an EverNote notebook export file (e.g. my_notebook.enex) into Documents. Instructions on producing this file can be found at https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document, and any non-content metadata (e.g. 'author', 'created', 'updated' etc., but not 'content-raw' or 'resource') tags on the note will be extracted and stored as metadata on the Document.
Parameters
file_path (str) – The path to the notebook export with a .enex extension
load_single_document (bool) – Whether or not to concatenate the content of all notes into a single long Document. If this is set to True, the only metadata on the Document will be the 'source', which contains the file name of the export.
load()[source]
Load documents from EverNote export file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FacebookChatLoader(path)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Facebook messages json directory dump.
Parameters
path (str) –
load()[source]
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]
Bases: langchain.document_loaders.base.BaseLoader
FaunaDB Loader.
Parameters
query (str) –
page_content_field (str) –
secret (str) –
metadata_fields (Optional[Sequence[str]]) –
query
The FQL query string to execute.
Type
str
page_content_field
The field that contains the content of each page.
Type
str
secret
The secret key for authenticating to FaunaDB.
Type
str
metadata_fields
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
load()[source]
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Figma file json.
Parameters
access_token (str) –
ids (str) –
key (str) –
load()[source]
Load file
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
    print(blob)
Parameters
path (Union[str, pathlib.Path]) –
glob (str) –
suffixes (Optional[Sequence[str]]) –
show_progress (bool) –
Return type
None
yield_blobs()[source]
Yield blobs that match the requested pattern.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
count_matching_files()[source]
Count files that match the pattern without loading them.
Return type
int
class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]
Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – blob (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source] Bases: langchain.document_loaders.github.BaseGitHubLoader Parameters repo (str) – access_token (str) – include_prs (bool) – milestone (Optional[Union[int, Literal['*', 'none']]]) – state (Optional[Literal['open', 'closed', 'all']]) – assignee (Optional[str]) – creator (Optional[str]) – mentioned (Optional[str]) – labels (Optional[List[str]]) – sort (Optional[Literal['created', 'updated', 'comments']]) – direction (Optional[Literal['asc', 'desc']]) – since (Optional[str]) – Return type None attribute assignee: Optional[str] = None
Filter on assigned user. Pass 'none' for no user and '*' for any user. attribute creator: Optional[str] = None Filter on the user that created the issue. attribute direction: Optional[Literal['asc', 'desc']] = None The direction to sort the results by. Can be one of: 'asc', 'desc'. attribute include_prs: bool = True If True include Pull Requests in results, otherwise ignore them. attribute labels: Optional[List[str]] = None Label names to filter on. Example: bug,ui,@high. attribute mentioned: Optional[str] = None Filter on a user that's mentioned in the issue. attribute milestone: Optional[Union[int, Literal['*', 'none']]] = None If an integer is passed, it should be a milestone's number field. If the string '*' is passed, issues with any milestone are accepted. If the string 'none' is passed, issues without milestones are returned. attribute since: Optional[str] = None Only show notifications updated after the given time. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ. attribute sort: Optional[Literal['created', 'updated', 'comments']] = None What to sort results by. Can be one of: 'created', 'updated', 'comments'. Default is 'created'. attribute state: Optional[Literal['open', 'closed', 'all']] = None Filter on issue state. Can be one of: 'open', 'closed', 'all'. lazy_load()[source] Get issues of a GitHub repository.
Returns page_content metadata url title creator created_at last_update_time closed_time number of comments state labels assignee assignees milestone locked number is_pull_request Return type A list of Documents with attributes load()[source] Get issues of a GitHub repository. Returns page_content metadata url title creator created_at last_update_time closed_time number of comments state labels assignee assignees milestone locked number is_pull_request Return type A list of Documents with attributes parse_issue(issue)[source] Create Document objects from a list of GitHub issues. Parameters issue (dict) – Return type langchain.schema.Document property query_params: str property url: str class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads files from a Git repository into a list of documents. Repository can be local on disk available at repo_path, or remote at clone_url that will be cloned to repo_path. Currently supports only text files. Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch. Parameters repo_path (str) – clone_url (Optional[str]) – branch (Optional[str]) – file_filter (Optional[Callable[[str], bool]]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
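As a concrete illustration of the GitLoader parameters above, the sketch below clones a repository (the URL and path are placeholders) and keeps only Python files via file_filter. The langchain import is deferred inside the loading helper so the filter itself stays self-contained.

```python
def is_python_file(path):
    """file_filter callback: keep only .py files from the repository."""
    return path.endswith(".py")

def load_python_sources(repo_path="/tmp/example-repo"):
    """Clone the repo to repo_path if needed and load one Document per matching file."""
    from langchain.document_loaders import GitLoader

    loader = GitLoader(
        repo_path=repo_path,
        clone_url="https://github.com/your-org/your-repo",  # placeholder URL
        branch="main",               # branch to load files from (the default)
        file_filter=is_python_file,  # skip everything that is not Python source
    )
    return loader.load()

# docs = load_python_sources()
```

Passing clone_url is only needed when repo_path does not already contain a checkout; for a local repository, repo_path alone suffices.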
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load GitBook data. Load from either a single page, or load all (relative) paths in the navbar. Parameters web_page (str) – load_all_paths (bool) – base_url (Optional[str]) – content_selector (str) – load()[source] Fetch text from one single GitBook page. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source] Bases: object A generic Google API client. To use, you should have the google_auth_oauthlib, youtube_transcript_api, and google python packages installed. As the Google API expects credentials, you need to set up a Google account and register your service: https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) Parameters credentials_path (pathlib.Path) – service_account_path (pathlib.Path) – token_path (pathlib.Path) – Return type None credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either folder_id or document_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads all videos from a channel. To use, you should have the googleapiclient and youtube_transcript_api python packages installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally you have to either provide a channel name or a list of video ids: https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient from langchain.document_loaders import GoogleApiYoutubeLoader google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) loader = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="CodeAesthetic" ) loader.load() Parameters google_api_client (langchain.document_loaders.youtube.GoogleApiClient) – channel_name (Optional[str]) – video_ids (Optional[List[str]]) – add_video_info (bool) – captions_language (str) – continue_on_failure (bool) – Return type None google_api_client: langchain.document_loaders.youtube.GoogleApiClient
channel_name: Optional[str] = None video_ids: Optional[List[str]] = None add_video_info: bool = True captions_language: str = 'en' continue_on_failure: bool = False classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either folder_id or document_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Loader that loads Google Docs from Google Drive. Parameters service_account_key (pathlib.Path) – credentials_path (pathlib.Path) – token_path (pathlib.Path) – folder_id (Optional[str]) – document_ids (Optional[List[str]]) – file_ids (Optional[List[str]]) – recursive (bool) – file_types (Optional[Sequence[str]]) – load_trashed_files (bool) – file_loader_cls (Any) – file_loader_kwargs (Dict[str, Any]) – Return type None
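Since GoogleDriveLoader can be addressed either by a folder_id or by explicit document_ids, a small helper can make the mutually exclusive choice explicit. The folder id in the commented call is a placeholder, and the langchain import is deferred so the keyword-selection logic stands on its own.

```python
def drive_loader_kwargs(folder_id=None, document_ids=None):
    """Pick keyword arguments for one of the two addressing modes."""
    if folder_id is not None:
        # Folder mode: also descend into subfolders.
        return {"folder_id": folder_id, "recursive": True}
    # Explicit-document mode: load only the listed Google Docs.
    return {"document_ids": document_ids or []}

def build_drive_loader(**kwargs):
    from langchain.document_loaders import GoogleDriveLoader  # deferred import

    return GoogleDriveLoader(load_trashed_files=False, **kwargs)

# loader = build_drive_loader(**drive_loader_kwargs(folder_id="YOUR_FOLDER_ID"))
# docs = loader.load()
```

Note that credentials_path and token_path default to files under ~/.credentials/, so the sketch assumes OAuth credentials have already been set up there.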