Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str

predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None
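To make the predict/predict_messages pair concrete — a minimal sketch, assuming an already-constructed wrapper named llm (any of the LLM classes on this page):

.. code-block:: python

    from langchain.schema import HumanMessage

    # Plain string in, plain string out; generation halts at the first stop sequence.
    answer = llm.predict("Q: What is 2 + 2?\nA:", stop=["\n"])

    # Message-style interface: BaseMessages in, a single BaseMessage out.
    reply = llm.predict_messages([HumanMessage(content="What is 2 + 2?")])
    print(answer, reply.content)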
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.Replicate(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None, replicate_api_token=None)[source]
Bases: langchain.llms.base.LLM
Wrapper around Replicate models.
To use, you should have the replicate python package installed, and the environment variable REPLICATE_API_TOKEN set with your API token. You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also be passed in with the format input={model_param: value, ...}
Example:
.. code-block:: python

    from langchain.llms import Replicate
    replicate = Replicate(
        model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
        input={"image_dimensions": "512x512"},
    )

Parameters: cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – model (str) – input (Dict[str, Any]) – model_kwargs (Dict[str, Any]) – replicate_api_token (Optional[str])
Return type: None
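For a text model the wrapper is then called like any other LLM. A minimal sketch — the model slug below is a hypothetical placeholder; substitute any owner/model:version string from replicate.com:

.. code-block:: python

    import os
    from langchain.llms import Replicate

    os.environ["REPLICATE_API_TOKEN"] = "..."  # your token from replicate.com/account

    # Hypothetical text-generation model slug.
    llm = Replicate(model="replicate/vicuna-13b:<version-hash>")
    print(llm("What is the capital of France?"))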
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int

get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]
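To make the generate/generate_prompt return value concrete — a minimal sketch, assuming llm is any constructed wrapper from this page; an LLMResult holds one list of Generation objects per input prompt:

.. code-block:: python

    result = llm.generate(["Tell me a joke.", "Tell me a fact."])

    # result.generations is a List[List[Generation]], one inner list per prompt.
    for generations in result.generations:
        print(generations[0].text)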
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str

predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None
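A quick sketch of the dict()/json() serialization helpers documented above (inherited from pydantic), assuming llm is a constructed wrapper:

.. code-block:: python

    # Plain-dict form of the wrapper's configuration.
    config = llm.dict()

    # JSON string form; exclude_none drops fields left at None.
    print(llm.json(exclude_none=True))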
property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.SagemakerEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]
Bases: langchain.llms.base.LLM
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed Sagemaker model and the region where it is deployed.
To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Parameters: cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (Any) – endpoint_name (str) – region_name (str) – credentials_profile_name (Optional[str]) – content_handler (langchain.llms.sagemaker_endpoint.LLMContentHandler) – model_kwargs (Optional[Dict]) – endpoint_kwargs (Optional[Dict])
Return type: None
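The content_handler parameter is where most of the integration work happens: it translates between prompt strings and the endpoint's request/response payloads. A minimal sketch of a JSON handler, assuming a deployed model that accepts {"inputs": ...} and returns a list of {"generated_text": ...} records — the exact payload shape depends on your model, and the endpoint name below is hypothetical:

.. code-block:: python

    import json
    from typing import Dict
    from langchain.llms import SagemakerEndpoint
    from langchain.llms.sagemaker_endpoint import LLMContentHandler

    class ContentHandler(LLMContentHandler):
        content_type = "application/json"
        accepts = "application/json"

        def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
            # Serialize the prompt and parameters into the endpoint's request body.
            return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

        def transform_output(self, output: bytes) -> str:
            # Extract the generated text from the endpoint's response body.
            response = json.loads(output.read().decode("utf-8"))
            return response[0]["generated_text"]

    llm = SagemakerEndpoint(
        endpoint_name="my-llm-endpoint",  # hypothetical endpoint name
        region_name="us-west-2",
        content_handler=ContentHandler(),
    )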
attribute content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]
The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint.

attribute credentials_profile_name: Optional[str] = None
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html

attribute endpoint_kwargs: Optional[Dict] = None
Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html

attribute endpoint_name: str = ''
The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region.

attribute model_kwargs: Optional[Dict] = None
Keyword arguments to pass to the model.

attribute region_name: str = ''
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str
async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model

dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int

get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.SelfHostedHuggingFaceLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn=<function _load_transformer>, load_fn_kwargs=None, model_reqs=['./', 'transformers', 'torch'], model_id='gpt2', task='text-generation', device=0, model_kwargs=None)[source]
Bases: langchain.llms.self_hosted.SelfHostedPipeline
Wrapper around the HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed. Only text-generation, text2text-generation and summarization are supported for now.
Example using from_model_id:
.. code-block:: python

    from langchain.llms import SelfHostedHuggingFaceLLM
    import runhouse as rh

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    hf = SelfHostedHuggingFaceLLM(
        model_id="google/flan-t5-large",
        task="text2text-generation",
        hardware=gpu,
    )
model_id="google/flan-t5-large", task="text2text-generation", hardware=gpu ) Example passing fn that generates a pipeline (bc the pipeline is not serializable):from langchain.llms import SelfHostedHuggingFaceLLM from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh def get_pipeline(): model_id = "gpt2" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer ) return pipe hf = SelfHostedHuggingFaceLLM( model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – model_id (str) – task (str) – device (int) – model_kwargs (Optional[dict]) – Return type None attribute device: int = 0 Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc. attribute hardware: Any = None Remote hardware to send the inference function to.
https://api.python.langchain.com/en/stable/modules/llms.html
a44a60c1f5be-275
attribute hardware: Any = None
Remote hardware to send the inference function to.

attribute inference_fn: Callable = <function _generate_text>
Inference function to send to the remote hardware.

attribute load_fn_kwargs: Optional[dict] = None
Keyword arguments to pass to the model load function.

attribute model_id: str = 'gpt2'
Hugging Face model_id to load the model.

attribute model_kwargs: Optional[dict] = None
Keyword arguments to pass to the model.

attribute model_load_fn: Callable = <function _load_transformer>
Function to load the model remotely on the server.

attribute model_reqs: List[str] = ['./', 'transformers', 'torch']
Requirements to install on the hardware to run inference on the model.

attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute task: str = 'text-generation'
Hugging Face task ("text-generation", "text2text-generation" or "summarization").

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model

dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict

classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)
Init the SelfHostedPipeline from a pipeline object or string.
Parameters: pipeline (Any) – hardware (Any) – model_reqs (Optional[List[str]]) – device (int) – kwargs (Any)
Return type: langchain.llms.base.LLM

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult
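A short sketch of the pydantic-inherited construct/copy helpers documented above, assuming llm is any constructed wrapper — construct skips validation entirely, so it is only appropriate for pre-validated data:

.. code-block:: python

    # copy() duplicates the model; update= patches fields without re-validation,
    # so only pass values you trust.
    verbose_llm = llm.copy(update={"verbose": True})

    # construct() builds an instance straight from trusted field values;
    # defaults are filled in, but no validators run.
    unchecked = type(llm).construct(verbose=True)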
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int

get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str
predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.
property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.SelfHostedPipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'])[source]
Bases: langchain.llms.base.LLM
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
.. code-block:: python

    from langchain.llms import SelfHostedPipeline
    from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
    import runhouse as rh

    def load_pipeline():
        tokenizer = AutoTokenizer.from_pretrained("gpt2")
        model = AutoModelForCausalLM.from_pretrained("gpt2")
        return pipeline(
            "text-generation", model=model, tokenizer=tokenizer,
            max_new_tokens=10
        )

    def inference_fn(pipeline, prompt, stop=None):
        return pipeline(prompt)[0]["generated_text"]

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    llm = SelfHostedPipeline(
        model_load_fn=load_pipeline,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
        inference_fn=inference_fn,
    )
Example for a <2GB model (can be serialized and sent directly to the server):
.. code-block:: python

    from langchain.llms import SelfHostedPipeline
    import runhouse as rh

    gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
    my_model = ...
    llm = SelfHostedPipeline.from_pipeline(
        pipeline=my_model,
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Example passing a model path for larger models:
.. code-block:: python

    from langchain.llms import SelfHostedPipeline
    import runhouse as rh
    import pickle
    from transformers import pipeline

    generator = pipeline(model="gpt2")
    rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
    ).save().to(gpu, path="models")
    llm = SelfHostedPipeline.from_pipeline(
        pipeline="models/pipeline.pkl",
        hardware=gpu,
        model_reqs=["./", "torch", "transformers"],
    )

Parameters: cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str])
Return type: None

attribute hardware: Any = None
Remote hardware to send the inference function to.

attribute inference_fn: Callable = <function _generate_text>
Inference function to send to the remote hardware.

attribute load_fn_kwargs: Optional[dict] = None
Keyword arguments to pass to the model load function.
attribute model_load_fn: Callable [Required]
Function to load the model remotely on the server.

attribute model_reqs: List[str] = ['./', 'torch']
Requirements to install on the hardware to run inference on the model.

attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict

classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)[source]
Init the SelfHostedPipeline from a pipeline object or string.
Parameters: pipeline (Any) – hardware (Any) – model_reqs (Optional[List[str]]) – device (int) – kwargs (Any)
Return type: langchain.llms.base.LLM

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str

predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage
save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.StochasticAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url='', model_kwargs=None, stochasticai_api_key=None)[source]
Bases: langchain.llms.base.LLM
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY set with your API key.
Example:
.. code-block:: python

    from langchain.llms import StochasticAI
    stochasticai = StochasticAI(api_url="")
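Filling in the empty api_url from the example above — a minimal sketch; the URL below is a hypothetical placeholder for the endpoint of your own deployed model, copied from the StochasticAI console:

.. code-block:: python

    import os
    from langchain.llms import StochasticAI

    os.environ["STOCHASTICAI_API_KEY"] = "..."  # your API key

    # Hypothetical deployment URL.
    llm = StochasticAI(api_url="https://api.stochastic.ai/v1/modelApi/submit/gpt-j")
    print(llm("Name three prime numbers."))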
Parameters: cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – api_url (str) – model_kwargs (Dict[str, Any]) – stochasticai_api_key (Optional[str])
Return type: None

attribute api_url: str = ''
The API URL to use.

attribute model_kwargs: Dict[str, Any] [Optional]
Holds any model parameters valid for the create call not explicitly specified.

attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model
copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model

dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int
get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]

json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str

predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage
save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None

property lc_attributes: Dict
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.

property lc_namespace: List[str]
Return the namespace of the langchain object, e.g. ["langchain", "llms", "openai"].

property lc_secrets: Dict[str, str]
Return a map of constructor argument names to secret ids, e.g. {"openai_api_key": "OPENAI_API_KEY"}.

property lc_serializable: bool
Return whether or not the class is serializable.

class langchain.llms.VertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='text-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5, tuned_model_name=None)[source]
Bases: langchain.llms.vertexai._VertexAICommon, langchain.llms.base.LLM
Wrapper around Google Vertex AI large language models.
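A minimal usage sketch, assuming GCP application-default credentials are already configured and the Vertex AI API is enabled for your project (the project ID below is a placeholder):

.. code-block:: python

    from langchain.llms import VertexAI

    llm = VertexAI(
        model_name="text-bison",
        project="my-gcp-project",  # placeholder; falls back to the ambient project if unset
        temperature=0.2,
        max_output_tokens=256,
    )
    print(llm("Write a haiku about the ocean."))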
Parameters: cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – client (_LanguageModel) – model_name (str) – temperature (float) – max_output_tokens (int) – top_p (float) – top_k (int) – stop (Optional[List[str]]) – project (Optional[str]) – location (str) – credentials (Any) – request_parallelism (int) – tuned_model_name (Optional[str])
Return type: None

attribute credentials: Any = None
The default custom credentials (google.auth.credentials.Credentials) to use.

attribute location: str = 'us-central1'
The default location to use when making API calls.

attribute max_output_tokens: int = 128
Token limit determines the maximum amount of text output from one prompt.

attribute model_name: str = 'text-bison'
The name of the Vertex AI large language model.

attribute project: Optional[str] = None
The default GCP project to use when making Vertex API calls.

attribute request_parallelism: int = 5
The amount of parallelism allowed for requests issued to VertexAI models.

attribute stop: Optional[List[str]] = None
Optional list of stop words to use when generating.

attribute tags: Optional[List[str]] = None
Tags to add to the run trace.

attribute temperature: float = 0.0
Sampling temperature; it controls the degree of randomness in token selection.
attribute top_k: int = 40
How the model selects tokens for output: the next token is selected from among the top_k most probable tokens.

attribute top_p: float = 0.95
Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value.

attribute tuned_model_name: Optional[str] = None
The name of a tuned model. If provided, model_name is ignored.

attribute verbose: bool [Optional]
Whether to print out response text.

__call__(prompt, stop=None, callbacks=None, **kwargs)
Check Cache and run the LLM on the given prompt and input.
Parameters: prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: str

async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

async apredict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

classmethod construct(_fields_set=None, **values)
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = 'allow' was set, since it adds all passed values.
Parameters: _fields_set (Optional[SetStr]) – values (Any)
Return type: Model

copy(*, include=None, exclude=None, update=None, deep=False)
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters:
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model; you should trust this data
deep (bool) – set to True to make a deep copy of the model
self (Model)
Returns: new model instance
Return type: Model
dict(**kwargs)
Return a dictionary of the LLM.
Parameters: kwargs (Any)
Return type: Dict

generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input.
Parameters: prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

generate_prompt(prompts, stop=None, callbacks=None, **kwargs)
Take in a list of prompt values and return an LLMResult.
Parameters: prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any)
Return type: langchain.schema.LLMResult

get_num_tokens(text)
Get the number of tokens present in the text.
Parameters: text (str)
Return type: int

get_num_tokens_from_messages(messages)
Get the number of tokens in the messages.
Parameters: messages (List[langchain.schema.BaseMessage])
Return type: int

get_token_ids(text)
Get the token IDs present in the text.
Parameters: text (str)
Return type: List[int]
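A quick sketch of the token-counting helpers above, assuming llm is a constructed wrapper — the counts come from whatever tokenizer the wrapper uses, which is provider-specific, so treat them as estimates:

.. code-block:: python

    text = "The quick brown fox jumps over the lazy dog."

    ids = llm.get_token_ids(text)  # token IDs from the wrapper's tokenizer
    print(llm.get_num_tokens(text), ids[:5])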
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)
Generate a JSON representation of the model; include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(); other arguments as per json.dumps().
Parameters: include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any)
Return type: str

predict(text, *, stop=None, **kwargs)
Predict text from text.
Parameters: text (str) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: str

predict_messages(messages, *, stop=None, **kwargs)
Predict message from messages.
Parameters: messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any)
Return type: langchain.schema.BaseMessage

save(file_path)
Save the LLM.
Parameters: file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to.
Return type: None
Example:
.. code-block:: python

    llm.save(file_path="path/llm.yaml")

classmethod update_forward_refs(**localns)
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters: localns (Any)
Return type: None
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable. class langchain.llms.Writer(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, writer_org_id=None, model_id='palmyra-instruct', min_tokens=None, max_tokens=None, temperature=None, top_p=None, stop=None, presence_penalty=None, repetition_penalty=None, best_of=None, logprobs=False, n=None, writer_api_key=None, base_url=None)[source] Bases: langchain.llms.base.LLM Wrapper around Writer large language models. To use, you should have the environment variable WRITER_API_KEY and WRITER_ORG_ID set with your API key and organization ID respectively. Example from langchain import Writer writer = Writer(model_id="palmyra-base") Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – writer_org_id (Optional[str]) – model_id (str) – min_tokens (Optional[int]) – max_tokens (Optional[int]) – temperature (Optional[float]) –
top_p (Optional[float]) – stop (Optional[List[str]]) – presence_penalty (Optional[float]) – repetition_penalty (Optional[float]) – best_of (Optional[int]) – logprobs (bool) – n (Optional[int]) – writer_api_key (Optional[str]) – base_url (Optional[str]) – Return type None attribute base_url: Optional[str] = None Base URL to use; if None, it is chosen based on the model name. attribute best_of: Optional[int] = None Generates this many completions server-side and returns the β€œbest”. attribute logprobs: bool = False Whether to return log probabilities. attribute max_tokens: Optional[int] = None Maximum number of tokens to generate. attribute min_tokens: Optional[int] = None Minimum number of tokens to generate. attribute model_id: str = 'palmyra-instruct' Model name to use. attribute n: Optional[int] = None How many completions to generate. attribute presence_penalty: Optional[float] = None Penalizes repeated tokens regardless of frequency. attribute repetition_penalty: Optional[float] = None Penalizes repeated tokens according to frequency. attribute stop: Optional[List[str]] = None Sequences at which completion generation will stop. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute temperature: Optional[float] = None What sampling temperature to use. attribute top_p: Optional[float] = None Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional] Whether to print out response text. attribute writer_api_key: Optional[str] = None Writer API key. attribute writer_org_id: Optional[str] = None Writer organization ID. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) –
stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)
Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type str predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable.
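As a rough usage sketch for the Writer wrapper documented above (the model id, sampling settings, and prompts are illustrative assumptions; WRITER_API_KEY and WRITER_ORG_ID are assumed to be set in the environment):

.. code-block:: python

    from langchain.llms import Writer

    # Assumes WRITER_API_KEY and WRITER_ORG_ID are set in the environment.
    writer = Writer(model_id="palmyra-instruct", temperature=0.7, max_tokens=128)

    # __call__ checks the cache and runs the model on a single prompt.
    tagline = writer("Write a one-line tagline for a note-taking app.")

    # generate() accepts a batch of prompts and returns an LLMResult.
    result = writer.generate(["Summarize BM25 in one sentence.", "Summarize TF-IDF in one sentence."])
    print(tagline, result.generations)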
class langchain.llms.OctoAIEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url=None, model_kwargs=None, octoai_api_token=None)[source] Bases: langchain.llms.base.LLM Wrapper around OctoAI Inference Endpoints. OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints. To use, you should have the octoai python package installed, and the environment variable OCTOAI_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example

.. code-block:: python

    from langchain.llms.octoai_endpoint import OctoAIEndpoint
    OctoAIEndpoint(
        octoai_api_token="octoai-api-key",
        endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
        model_kwargs={
            "max_new_tokens": 200,
            "temperature": 0.75,
            "top_p": 0.95,
            "repetition_penalty": 1,
            "seed": None,
            "stop": [],
        },
    )

Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – endpoint_url (Optional[str]) – model_kwargs (Optional[dict]) – octoai_api_token (Optional[str]) – Return type None attribute endpoint_url: Optional[str] = None Endpoint URL to use. attribute model_kwargs: Optional[dict] = None Keyword arguments to pass to the model. attribute octoai_api_token: Optional[str] = None OctoAI API token. attribute tags: Optional[List[str]] = None Tags to add to the run trace. attribute verbose: bool [Optional] Whether to print out response text. __call__(prompt, stop=None, callbacks=None, **kwargs) Check Cache and run the LLM on the given prompt and input. Parameters prompt (str) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type str async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) –
tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult async apredict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str async apredict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage classmethod construct(_fields_set=None, **values) Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data. Default values are respected, but no other validation is performed. Behaves as if Config.extra = β€˜allow’ was set since it adds all passed values Parameters _fields_set (Optional[SetStr]) – values (Any) – Return type Model copy(*, include=None, exclude=None, update=None, deep=False) Duplicate a model, optionally choose which fields to include, exclude and change. Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – fields to exclude from new model, as with values this takes precedence over include update (Optional[DictStrAny]) – values to change/add in the new model. Note: the data is not validated before creating the new model: you should trust this data deep (bool) – set to True to make a deep copy of the model self (Model) – Returns new model instance Return type Model dict(**kwargs) Return a dictionary of the LLM. Parameters kwargs (Any) – Return type Dict generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs) Run the LLM on the given prompt and input. Parameters prompts (List[str]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – tags (Optional[List[str]]) – kwargs (Any) – Return type langchain.schema.LLMResult generate_prompt(prompts, stop=None, callbacks=None, **kwargs) Take in a list of prompt values and return an LLMResult. Parameters prompts (List[langchain.schema.PromptValue]) – stop (Optional[List[str]]) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – kwargs (Any) – Return type langchain.schema.LLMResult get_num_tokens(text) Get the number of tokens present in the text. Parameters text (str) – Return type int get_num_tokens_from_messages(messages) Get the number of tokens in the message. Parameters
messages (List[langchain.schema.BaseMessage]) – Return type int get_token_ids(text) Get the token IDs present in the text. Parameters text (str) – Return type List[int] json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs) Generate a JSON representation of the model, include and exclude arguments as per dict(). encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps(). Parameters include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) – by_alias (bool) – skip_defaults (Optional[bool]) – exclude_unset (bool) – exclude_defaults (bool) – exclude_none (bool) – encoder (Optional[Callable[[Any], Any]]) – models_as_dict (bool) – dumps_kwargs (Any) – Return type str predict(text, *, stop=None, **kwargs) Predict text from text. Parameters text (str) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type str predict_messages(messages, *, stop=None, **kwargs) Predict message from messages. Parameters messages (List[langchain.schema.BaseMessage]) – stop (Optional[Sequence[str]]) – kwargs (Any) – Return type langchain.schema.BaseMessage
save(file_path) Save the LLM. Parameters file_path (Union[pathlib.Path, str]) – Path to file to save the LLM to. Return type None Example: .. code-block:: python llm.save(file_path="path/llm.yaml") classmethod update_forward_refs(**localns) Try to update ForwardRefs on fields based on this Model, globalns and localns. Parameters localns (Any) – Return type None property lc_attributes: Dict Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str] Return the namespace of the langchain object. eg. [β€œlangchain”, β€œllms”, β€œopenai”] property lc_secrets: Dict[str, str] Return a map of constructor argument names to secret ids. eg. {β€œopenai_api_key”: β€œOPENAI_API_KEY”} property lc_serializable: bool Return whether or not the class is serializable.
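To round out the OctoAIEndpoint reference, a hedged sketch of invoking a configured endpoint (the token and URL reuse the placeholder values from the example above and are not working credentials):

.. code-block:: python

    from langchain.llms.octoai_endpoint import OctoAIEndpoint

    llm = OctoAIEndpoint(
        octoai_api_token="octoai-api-key",  # placeholder, not a real token
        endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
        model_kwargs={"max_new_tokens": 200, "temperature": 0.75},
    )
    # __call__ checks the cache and runs the LLM on the given prompt.
    print(llm("What is the capital of France?"))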
Retrievers class langchain.retrievers.AmazonKendraRetriever(index_id, region_name=None, credentials_profile_name=None, top_k=3, attribute_filter=None, client=None)[source] Bases: langchain.schema.BaseRetriever Retriever class to query documents from Amazon Kendra Index. Parameters index_id (str) – Kendra index id region_name (Optional[str]) – The aws region e.g., us-west-2. Fallsback to AWS_DEFAULT_REGION env variable or region specified in ~/.aws/config. credentials_profile_name (Optional[str]) – The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. top_k (int) – No of results to return attribute_filter (Optional[Dict]) – Additional filtering of results based on metadata See: https://docs.aws.amazon.com/kendra/latest/APIReference client (Optional[Any]) – boto3 client for Kendra Example retriever = AmazonKendraRetriever( index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03" ) get_relevant_documents(query)[source] Run search on Kendra index and get top k documents Example: .. code-block:: python docs = retriever.get_relevant_documents(β€˜This is my query’) Parameters query (str) – Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns
Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ArxivRetriever(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: langchain.schema.BaseRetriever, langchain.utilities.arxiv.ArxivAPIWrapper It is effectively a wrapper for ArxivAPIWrapper. It wraps load() to get_relevant_documents(). It uses all ArxivAPIWrapper arguments without any change. Parameters arxiv_search (Any) – arxiv_exceptions (Any) – top_k_results (int) – load_max_docs (int) – load_all_available_meta (bool) – doc_content_chars_max (Optional[int]) – ARXIV_MAX_QUERY_LENGTH (int) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.AzureCognitiveSearchRetriever(*, service_name='', index_name='', api_key='', api_version='2020-06-30', aiosession=None, content_key='content')[source]
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Wrapper around Azure Cognitive Search. Parameters service_name (str) – index_name (str) – api_key (str) – api_version (str) – aiosession (Optional[aiohttp.client.ClientSession]) – content_key (str) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None ClientSession, in case we want to reuse connection for better performance. attribute api_key: str = '' API Key. Both Admin and Query keys work, but for reading data it’s recommended to use a Query key. attribute api_version: str = '2020-06-30' API version attribute content_key: str = 'content' Key in a retrieved result to set as the Document page_content. attribute index_name: str = '' Name of Index inside Azure Cognitive Search service attribute service_name: str = '' Name of Azure Cognitive Search service async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
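A minimal construction sketch for AzureCognitiveSearchRetriever; the service name, index name, and key are assumed placeholders:

.. code-block:: python

    from langchain.retrievers import AzureCognitiveSearchRetriever

    retriever = AzureCognitiveSearchRetriever(
        service_name="my-search-service",  # assumed placeholder
        index_name="my-index",             # assumed placeholder
        api_key="<query-key>",             # a Query key is recommended for reads
        content_key="content",             # field to expose as page_content
    )
    docs = retriever.get_relevant_documents("what is langchain?")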
class langchain.retrievers.ChatGPTPluginRetriever(*, url, bearer_token, top_k=3, filter=None, aiosession=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters url (str) – bearer_token (str) – top_k (int) – filter (Optional[dict]) – aiosession (Optional[aiohttp.client.ClientSession]) – Return type None attribute aiosession: Optional[aiohttp.client.ClientSession] = None attribute bearer_token: str [Required] attribute filter: Optional[dict] = None attribute top_k: int = 3 attribute url: str [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ContextualCompressionRetriever(*, base_compressor, base_retriever)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever that wraps a base retriever and compresses the results. Parameters base_compressor (langchain.retrievers.document_compressors.base.BaseDocumentCompressor) – base_retriever (langchain.schema.BaseRetriever) – Return type None attribute base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required] Compressor for compressing retrieved documents. attribute base_retriever: langchain.schema.BaseRetriever [Required]
Base Retriever to use for getting relevant documents. async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns Sequence of relevant documents Return type List[langchain.schema.Document]
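A hedged sketch of the ContextualCompressionRetriever documented above, pairing a vector store retriever with the LLMChainExtractor compressor described under Document compressors below; the vectorstore object is an assumed, pre-built store:

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.retrievers import ContextualCompressionRetriever
    from langchain.retrievers.document_compressors import LLMChainExtractor

    # The compressor extracts only query-relevant passages from each document.
    compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
    retriever = ContextualCompressionRetriever(
        base_compressor=compressor,
        base_retriever=vectorstore.as_retriever(),  # assumed existing vector store
    )
    docs = retriever.get_relevant_documents("what did the president say about jobs?")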
class langchain.retrievers.DataberryRetriever(datastore_url, top_k=None, api_key=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Databerry API. Parameters datastore_url (str) – top_k (Optional[int]) – api_key (Optional[str]) – datastore_url: str api_key: Optional[str] top_k: Optional[int] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ElasticSearchBM25Retriever(client, index_name)[source] Bases: langchain.schema.BaseRetriever Wrapper around Elasticsearch using BM25 as a retrieval method. To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the β€œDeployments” page. To obtain your Elastic Cloud password for the default β€œelastic” user: Log in to the Elastic Cloud console at https://cloud.elastic.co Go to β€œSecurity” > β€œUsers” Locate the β€œelastic” user and click β€œEdit” Click β€œReset password” Follow the prompts to reset the password The format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243. Parameters client (Any) – index_name (str) – classmethod create(elasticsearch_url, index_name, k1=2.0, b=0.75)[source] Parameters elasticsearch_url (str) – index_name (str) – k1 (float) – b (float) – Return type langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever add_texts(texts, refresh_indices=True)[source] Run more texts through the embeddings and add to the retriever. Parameters texts (Iterable[str]) – Iterable of strings to add to the retriever.
refresh_indices (bool) – Whether to refresh Elasticsearch indices Returns List of ids from adding the texts into the retriever. Return type List[str] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.KNNRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel KNN Retriever. Parameters embeddings (langchain.embeddings.base.Embeddings) – index (Any) – texts (List[str]) – k (int) – relevancy_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute k: int = 4 attribute relevancy_threshold: Optional[float] = None attribute texts: List[str] [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_texts(texts, embeddings, **kwargs)[source]
Parameters texts (List[str]) – embeddings (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.retrievers.knn.KNNRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.LlamaIndexGraphRetriever(*, graph=None, query_configs=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Question-answering with sources over an LlamaIndex graph data structure. Parameters graph (Any) – query_configs (List[Dict]) – Return type None attribute graph: Any = None attribute query_configs: List[Dict] [Optional] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – Return type List[langchain.schema.Document] class langchain.retrievers.LlamaIndexRetriever(*, index=None, query_kwargs=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Question-answering with sources over an LlamaIndex data structure. Parameters index (Any) – query_kwargs (Dict) – Return type None attribute index: Any = None
attribute query_kwargs: Dict [Optional] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – Return type List[langchain.schema.Document] class langchain.retrievers.MergerRetriever(retrievers)[source] Bases: langchain.schema.BaseRetriever This class merges the results of multiple retrievers. Parameters retrievers (List[langchain.schema.BaseRetriever]) – A list of retrievers to merge. get_relevant_documents(query)[source] Get the relevant documents for a given query. Parameters query (str) – The query to search for. Returns A list of relevant documents. Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Asynchronously get the relevant documents for a given query. Parameters query (str) – The query to search for. Returns A list of relevant documents. Return type List[langchain.schema.Document] merge_documents(query)[source] Merge the results of the retrievers. Parameters query (str) – The query to search for. Returns A list of merged documents. Return type List[langchain.schema.Document] async amerge_documents(query)[source] Asynchronously merge the results of the retrievers. Parameters query (str) – The query to search for.
Returns A list of merged documents. Return type List[langchain.schema.Document] class langchain.retrievers.MetalRetriever(client, params=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Metal API. Parameters client (Any) – params (Optional[dict]) – get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.MilvusRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Milvus API. Parameters embedding_function (langchain.embeddings.base.Embeddings) – collection_name (str) – connection_args (Optional[Dict[str, Any]]) – consistency_level (str) – search_params (Optional[dict]) – add_texts(texts, metadatas=None)[source] Add text to the Milvus store Parameters texts (List[str]) – The text metadatas (List[dict]) – Metadata dicts, must line up with existing store Return type None get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns
List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.MultiQueryRetriever(retriever, llm_chain, verbose=True, parser_key='lines')[source] Bases: langchain.schema.BaseRetriever Given a user query, use an LLM to write a set of queries. Retrieve docs for each query. Take the unique union of all retrieved docs. Parameters retriever (langchain.schema.BaseRetriever) – llm_chain (langchain.chains.llm.LLMChain) – verbose (bool) – parser_key (str) – Return type None classmethod from_llm(retriever, llm, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='You are an AI language model assistant. Your task is \n    to generate 3 different versions of the given user \n    question to retrieve relevant documents from a vector database. \n    By generating multiple perspectives on the user question, \n    your goal is to help the user overcome some of the limitations \n    of distance-based similarity search. Provide these alternative \n    questions separated by newlines. Original question: {question}', template_format='f-string', validate_template=True), parser_key='lines')[source] Initialize from llm using default template. Parameters
retriever (langchain.schema.BaseRetriever) – retriever to query documents from llm (langchain.llms.base.BaseLLM) – llm for query generation using DEFAULT_QUERY_PROMPT prompt (langchain.prompts.prompt.PromptTemplate) – parser_key (str) – Returns MultiQueryRetriever Return type langchain.retrievers.multi_query.MultiQueryRetriever get_relevant_documents(question)[source] Get relevant documents given a user query. Parameters question (str) – user query Returns Unique union of relevant documents from all generated queries Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] generate_queries(question)[source] Generate queries based upon user input. Parameters question (str) – user query Returns List of LLM generated queries that are similar to the user input Return type List[str] retrieve_documents(queries)[source] Run all LLM generated queries. Parameters queries (List[str]) – query list Returns List of retrieved Documents Return type List[langchain.schema.Document] unique_union(documents)[source] Get unique Documents. Parameters documents (List[langchain.schema.Document]) – List of retrieved Documents Returns List of unique retrieved Documents Return type List[langchain.schema.Document]
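A rough from_llm() sketch for the MultiQueryRetriever above; the vector store is an assumed stand-in:

.. code-block:: python

    from langchain.llms import OpenAI
    from langchain.retrievers import MultiQueryRetriever

    retriever = MultiQueryRetriever.from_llm(
        retriever=vectorstore.as_retriever(),  # assumed existing vector store
        llm=OpenAI(temperature=0),
    )
    # Generates query variants, retrieves for each, and returns the unique union.
    docs = retriever.get_relevant_documents("How do transformers handle long inputs?")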
class langchain.retrievers.PineconeHybridSearchRetriever(*, embeddings, sparse_encoder=None, index=None, top_k=4, alpha=0.5)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters embeddings (langchain.embeddings.base.Embeddings) – sparse_encoder (Any) – index (Any) – top_k (int) – alpha (float) – Return type None attribute alpha: float = 0.5 attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute sparse_encoder: Any = None attribute top_k: int = 4 add_texts(texts, ids=None, metadatas=None)[source] Parameters texts (List[str]) – ids (Optional[List[str]]) – metadatas (Optional[List[dict]]) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
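A construction sketch for PineconeHybridSearchRetriever; the Pinecone index and sparse encoder are assumed to exist already (a pinecone_text BM25Encoder is one common choice for the sparse side):

.. code-block:: python

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers import PineconeHybridSearchRetriever

    retriever = PineconeHybridSearchRetriever(
        embeddings=OpenAIEmbeddings(),
        sparse_encoder=bm25_encoder,  # assumed, e.g. pinecone_text.sparse.BM25Encoder
        index=pinecone_index,         # assumed existing pinecone.Index
        top_k=4,
        alpha=0.5,  # blend between dense and sparse scores
    )
    retriever.add_texts(["hello world", "goodbye world"])
    docs = retriever.get_relevant_documents("hello")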
class langchain.retrievers.PubMedRetriever(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source] Bases: langchain.schema.BaseRetriever, langchain.utilities.pupmed.PubMedAPIWrapper It is effectively a wrapper for PubMedAPIWrapper. It wraps load() to get_relevant_documents(). It uses all PubMedAPIWrapper arguments without any change. Parameters top_k_results (int) – load_max_docs (int) – doc_content_chars_max (int) – load_all_available_meta (bool) – email (str) – base_url_esearch (str) – base_url_efetch (str) – max_retry (int) – sleep_time (float) – ARXIV_MAX_QUERY_LENGTH (int) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
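A short sketch of PubMedRetriever; the query is illustrative:

.. code-block:: python

    from langchain.retrievers import PubMedRetriever

    retriever = PubMedRetriever(top_k_results=3)
    docs = retriever.get_relevant_documents("covid vaccine efficacy")
    for doc in docs:
        print(doc.metadata, doc.page_content[:200])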
class langchain.retrievers.RemoteLangChainRetriever(*, url, headers=None, input_key='message', response_key='response', page_content_key='page_content', metadata_key='metadata')[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters url (str) – headers (Optional[dict]) – input_key (str) – response_key (str) – page_content_key (str) – metadata_key (str) – Return type None attribute headers: Optional[dict] = None attribute input_key: str = 'message' attribute metadata_key: str = 'metadata' attribute page_content_key: str = 'page_content' attribute response_key: str = 'response' attribute url: str [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.SVMRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel SVM Retriever. Parameters embeddings (langchain.embeddings.base.Embeddings) –
index (Any) – texts (List[str]) – k (int) – relevancy_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute index: Any = None attribute k: int = 4 attribute relevancy_threshold: Optional[float] = None attribute texts: List[str] [Required] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_texts(texts, embeddings, **kwargs)[source] Parameters texts (List[str]) – embeddings (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.retrievers.svm.SVMRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.SelfQueryRetriever(*, vectorstore, llm_chain, search_type='similarity', search_kwargs=None, structured_query_translator, verbose=False, use_original_query=False)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever that wraps around a vector store and uses an LLM to generate the vector store queries. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – llm_chain (langchain.chains.llm.LLMChain) – search_type (str) –
search_kwargs (dict) – structured_query_translator (langchain.chains.query_constructor.ir.Visitor) – verbose (bool) – use_original_query (bool) – Return type None attribute llm_chain: langchain.chains.llm.LLMChain [Required] The LLMChain for generating the vector store queries. attribute search_kwargs: dict [Optional] Keyword arguments to pass in to the vector store search. attribute search_type: str = 'similarity' The search type to perform on the vector store. attribute structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required] Translator for turning internal query language into vectorstore search params. attribute use_original_query: bool = False Use the original query instead of the revised new query from the LLM. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] The underlying vector store from which documents will be retrieved. attribute verbose: bool = False async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_llm(llm, vectorstore, document_contents, metadata_field_info, structured_query_translator=None, chain_kwargs=None, enable_limit=False, use_original_query=False, **kwargs)[source] Parameters llm (langchain.base_language.BaseLanguageModel) – vectorstore (langchain.vectorstores.base.VectorStore) – document_contents (str) –
metadata_field_info (List[langchain.chains.query_constructor.schema.AttributeInfo]) – structured_query_translator (Optional[langchain.chains.query_constructor.ir.Visitor]) – chain_kwargs (Optional[Dict]) – enable_limit (bool) – use_original_query (bool) – kwargs (Any) – Return type langchain.retrievers.self_query.base.SelfQueryRetriever get_relevant_documents(query, callbacks=None)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Returns List of relevant documents Return type List[langchain.schema.Document]
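A hedged from_llm() sketch for the SelfQueryRetriever above; the vector store, its documents, and the metadata schema are illustrative assumptions:

.. code-block:: python

    from langchain.chains.query_constructor.schema import AttributeInfo
    from langchain.llms import OpenAI
    from langchain.retrievers import SelfQueryRetriever

    metadata_field_info = [
        AttributeInfo(name="year", description="Year the movie was released", type="integer"),
        AttributeInfo(name="genre", description="Genre of the movie", type="string"),
    ]
    retriever = SelfQueryRetriever.from_llm(
        llm=OpenAI(temperature=0),
        vectorstore=vectorstore,  # assumed existing store of movie summaries
        document_contents="Brief summary of a movie",
        metadata_field_info=metadata_field_info,
    )
    # The LLM turns the natural-language question into a structured vector store query.
    docs = retriever.get_relevant_documents("movies about dinosaurs released after 1990")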
class langchain.retrievers.TFIDFRetriever(*, vectorizer=None, docs, tfidf_array=None, k=4)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Parameters vectorizer (Any) – docs (List[langchain.schema.Document]) – tfidf_array (Any) – k (int) – Return type None attribute docs: List[langchain.schema.Document] [Required] attribute k: int = 4 attribute tfidf_array: Any = None attribute vectorizer: Any = None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] classmethod from_documents(documents, *, tfidf_params=None, **kwargs)[source] Parameters documents (Iterable[langchain.schema.Document]) – tfidf_params (Optional[Dict[str, Any]]) – kwargs (Any) – Return type langchain.retrievers.tfidf.TFIDFRetriever classmethod from_texts(texts, metadatas=None, tfidf_params=None, **kwargs)[source] Parameters texts (Iterable[str]) – metadatas (Optional[Iterable[dict]]) – tfidf_params (Optional[Dict[str, Any]]) – kwargs (Any) – Return type langchain.retrievers.tfidf.TFIDFRetriever get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.TimeWeightedVectorStoreRetriever(*, vectorstore, search_kwargs=None, memory_stream=None, decay_rate=0.01, k=4, other_score_keys=[], default_salience=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever combining embedding similarity with recency. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – search_kwargs (dict) – memory_stream (List[langchain.schema.Document]) – decay_rate (float) – k (int) – other_score_keys (List[str]) – default_salience (Optional[float]) – Return type None attribute decay_rate: float = 0.01 The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
attribute default_salience: Optional[float] = None The salience to assign memories not retrieved from the vector store. None assigns no salience to documents not fetched from the vector store. attribute k: int = 4 The maximum number of documents to retrieve in a given call. attribute memory_stream: List[langchain.schema.Document] [Optional] The memory_stream of documents to search through. attribute other_score_keys: List[str] = [] Other keys in the metadata to factor into the score, e.g. β€˜importance’. attribute search_kwargs: dict [Optional] Keyword arguments to pass to the vectorstore similarity search. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] The vectorstore to store documents and determine salience. async aadd_documents(documents, **kwargs)[source] Add documents to vectorstore. Parameters documents (List[langchain.schema.Document]) – kwargs (Any) – Return type List[str] add_documents(documents, **kwargs)[source] Add documents to vectorstore. Parameters documents (List[langchain.schema.Document]) – kwargs (Any) – Return type List[str] async aget_relevant_documents(query)[source] Return documents that are relevant to the query. Parameters query (str) – Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Return documents that are relevant to the query. Parameters query (str) – Return type List[langchain.schema.Document] get_salient_docs(query)[source] Return documents that are salient to the query. Parameters query (str) – Return type Dict[int, Tuple[langchain.schema.Document, float]]
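A sketch of the time-weighted retriever with a FAISS store as an assumed backing vector store (1536 matches the dimensionality of OpenAI embeddings):

.. code-block:: python

    import faiss
    from langchain.docstore import InMemoryDocstore
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers import TimeWeightedVectorStoreRetriever
    from langchain.schema import Document
    from langchain.vectorstores import FAISS

    embeddings = OpenAIEmbeddings()
    index = faiss.IndexFlatL2(1536)
    vectorstore = FAISS(embeddings.embed_query, index, InMemoryDocstore({}), {})
    retriever = TimeWeightedVectorStoreRetriever(
        vectorstore=vectorstore, decay_rate=0.01, k=4
    )
    # Documents are tracked in the memory stream so recency can factor into the score.
    retriever.add_documents([Document(page_content="hello world")])
    docs = retriever.get_relevant_documents("hello")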
class langchain.retrievers.VespaRetriever(app, body, content_field, metadata_fields=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses Vespa. Parameters app (Vespa) – body (Dict) – content_field (str) – metadata_fields (Optional[Sequence[str]]) – get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents_with_filter(query, *, _filter=None)[source] Parameters query (str) – _filter (Optional[str]) – Return type List[langchain.schema.Document] classmethod from_params(url, content_field, *, k=None, metadata_fields=(), sources=None, _filter=None, yql=None, **kwargs)[source] Instantiate retriever from params. Parameters url (str) – Vespa app URL. content_field (str) – Field in results to return as Document page_content. k (Optional[int]) – Number of Documents to return. Defaults to None. metadata_fields (Sequence[str] or "*") – Fields in results to include in document metadata. Defaults to empty tuple (). sources (Sequence[str] or "*" or None) – Sources to retrieve from. Defaults to None.
_filter (Optional[str]) – Document filter condition expressed in YQL. Defaults to None. yql (Optional[str]) – Full YQL query to be used. Should not be specified if _filter or sources are specified. Defaults to None. kwargs (Any) – Keyword arguments added to query body. Return type langchain.retrievers.vespa_retriever.VespaRetriever class langchain.retrievers.WeaviateHybridSearchRetriever(client, index_name, text_key, alpha=0.5, k=4, attributes=None, create_schema_if_missing=True)[source] Bases: langchain.schema.BaseRetriever Parameters client (Any) – index_name (str) – text_key (str) – alpha (float) – k (int) – attributes (Optional[List[str]]) – create_schema_if_missing (bool) – class Config[source] Bases: object Configuration for this pydantic object. extra = 'forbid' arbitrary_types_allowed = True add_documents(docs, **kwargs)[source] Upload documents to Weaviate. Parameters docs (List[langchain.schema.Document]) – kwargs (Any) – Return type List[str] get_relevant_documents(query, where_filter=None)[source] Look up similar documents in Weaviate. Parameters query (str) – where_filter (Optional[Dict[str, object]]) – Return type List[langchain.schema.Document] async aget_relevant_documents(query, where_filter=None)[source] Get documents relevant for a query.
Parameters query (str) – string to find relevant documents for where_filter (Optional[Dict[str, object]]) – Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.WikipediaRetriever(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source] Bases: langchain.schema.BaseRetriever, langchain.utilities.wikipedia.WikipediaAPIWrapper It is effectively a wrapper for WikipediaAPIWrapper. It wraps load() to get_relevant_documents(). It uses all WikipediaAPIWrapper arguments without any change. Parameters wiki_client (Any) – top_k_results (int) – lang (str) – load_all_available_meta (bool) – doc_content_chars_max (int) – Return type None async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.ZepRetriever(session_id, url, top_k=None)[source] Bases: langchain.schema.BaseRetriever A Retriever implementation for the Zep long-term memory store. Search your user’s long-term chat history with Zep. Note: You will need to provide the user’s session_id to use this retriever. More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores, summarizes, embeds, indexes, and enriches conversational AI chat histories, and exposes them via simple, low-latency APIs. For server installation instructions, see: https://getzep.github.io/deployment/quickstart/ Parameters session_id (str) – url (str) – top_k (Optional[int]) – get_relevant_documents(query, metadata=None)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for metadata (Optional[Dict]) – Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query, metadata=None)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for metadata (Optional[Dict]) – Returns List of relevant documents Return type List[langchain.schema.Document]
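A construction sketch for ZepRetriever; the session id and server URL are assumed placeholders:

.. code-block:: python

    from langchain.retrievers import ZepRetriever

    retriever = ZepRetriever(
        session_id="user-123-session",  # assumed existing Zep session
        url="http://localhost:8000",    # assumed local Zep server
        top_k=5,
    )
    docs = retriever.get_relevant_documents("What did we decide about the budget?")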
class langchain.retrievers.ZillizRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source] Bases: langchain.schema.BaseRetriever Retriever that uses the Zilliz API. Parameters embedding_function (langchain.embeddings.base.Embeddings) – collection_name (str) – connection_args (Optional[Dict[str, Any]]) – consistency_level (str) – search_params (Optional[dict]) – add_texts(texts, metadatas=None)[source] Add text to the Zilliz store Parameters texts (List[str]) – The text metadatas (List[dict]) – Metadata dicts, must line up with existing store Return type None get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] class langchain.retrievers.DocArrayRetriever(*, index=None, embeddings, search_field, content_field, search_type=SearchType.similarity, top_k=1, filters=None)[source] Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel Retriever class for DocArray Document Indices. Currently supports 5 backends: InMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex, ElasticDocIndex, and WeaviateDocumentIndex. Parameters index (Any) – embeddings (langchain.embeddings.base.Embeddings) – search_field (str) – content_field (str) – search_type (langchain.retrievers.docarray.SearchType) – top_k (int) – filters (Optional[Any]) – Return type None index One of the above-mentioned index instances embeddings Embedding model to represent text as vectors search_field Field to consider for searching in the documents. Should be an embedding/vector/tensor. content_field
Field that represents the main content in your document schema. Will be used as a page_content. Everything else will go into metadata. search_type Type of search to perform (similarity / mmr) filters Filters applied for document retrieval. top_k Number of documents to return attribute content_field: str [Required] attribute embeddings: langchain.embeddings.base.Embeddings [Required] attribute filters: Optional[Any] = None attribute index: Any = None attribute search_field: str [Required] attribute search_type: langchain.retrievers.docarray.SearchType = SearchType.similarity attribute top_k: int = 1 async aget_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document] get_relevant_documents(query)[source] Get documents relevant for a query. Parameters query (str) – string to find relevant documents for Returns List of relevant documents Return type List[langchain.schema.Document]
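A hedged end-to-end sketch of DocArrayRetriever against an InMemoryExactNNIndex; the document schema, embedding size, and data are all illustrative assumptions:

.. code-block:: python

    from docarray import BaseDoc, DocList
    from docarray.index import InMemoryExactNNIndex
    from docarray.typing import NdArray
    from langchain.embeddings import FakeEmbeddings
    from langchain.retrievers import DocArrayRetriever

    embeddings = FakeEmbeddings(size=32)  # stand-in embedding model

    class MyDoc(BaseDoc):
        title: str
        title_embedding: NdArray[32]

    docs = DocList[MyDoc](
        MyDoc(title=t, title_embedding=embeddings.embed_query(t))
        for t in ["hello world", "goodbye world"]
    )
    index = InMemoryExactNNIndex[MyDoc]()
    index.index(docs)

    retriever = DocArrayRetriever(
        index=index,
        embeddings=embeddings,
        search_field="title_embedding",  # vector field to search over
        content_field="title",           # becomes Document.page_content
    )
    results = retriever.get_relevant_documents("hello")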
Document compressors class langchain.retrievers.document_compressors.DocumentCompressorPipeline(*, transformers)[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Document compressor that uses a pipeline of transformers. Parameters transformers (List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]]) – Return type None attribute transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required] List of document filters that are chained together and run in sequence. async acompress_documents(documents, query)[source] Compress retrieved documents given the query context. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] compress_documents(documents, query)[source] Transform a list of documents. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] class langchain.retrievers.document_compressors.EmbeddingsFilter(*, embeddings, similarity_fn=<function cosine_similarity>, k=20, similarity_threshold=None)[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Parameters embeddings (langchain.embeddings.base.Embeddings) – similarity_fn (Callable) – k (Optional[int]) – similarity_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] Embeddings to use for embedding document contents and queries. attribute k: Optional[int] = 20 The number of relevant documents to return. Can be set to None, in which case similarity_threshold must be specified. Defaults to 20. attribute similarity_fn: Callable = <function cosine_similarity> Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. attribute similarity_threshold: Optional[float] = None
class langchain.retrievers.document_compressors.EmbeddingsFilter(*, embeddings, similarity_fn=<function cosine_similarity>, k=20, similarity_threshold=None)[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Parameters embeddings (langchain.embeddings.base.Embeddings) – similarity_fn (Callable) – k (Optional[int]) – similarity_threshold (Optional[float]) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] Embeddings to use for embedding document contents and queries. attribute k: Optional[int] = 20 The number of relevant documents to return. Can be set to None, in which case similarity_threshold must be specified. Defaults to 20. attribute similarity_fn: Callable = <function cosine_similarity> Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. attribute similarity_threshold: Optional[float] = None Threshold for determining when two documents are similar enough to be considered redundant. Defaults to None; must be specified if k is set to None. async acompress_documents(documents, query)[source] Filter down documents. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] compress_documents(documents, query)[source] Filter documents based on similarity of their embeddings to the query. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document]
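Example (illustrative): documents whose embedding falls below the similarity threshold are dropped without any LLM call. The sample documents, query, and threshold are invented for the sketch.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.retrievers.document_compressors import EmbeddingsFilter
    from langchain.schema import Document

    # Keep only documents whose embedding is similar enough to the query embedding.
    embeddings_filter = EmbeddingsFilter(
        embeddings=OpenAIEmbeddings(), similarity_threshold=0.76
    )
    docs = [
        Document(page_content="The Eiffel Tower is located in Paris."),
        Document(page_content="A bread recipe calls for flour, water and salt."),
    ]
    relevant = embeddings_filter.compress_documents(docs, query="Where is the Eiffel Tower?")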
class langchain.retrievers.document_compressors.LLMChainExtractor(*, llm_chain, get_input=<function default_get_input>)[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Parameters llm_chain (langchain.chains.llm.LLMChain) – get_input (Callable[[str, langchain.schema.Document], dict]) – Return type None attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input> Callable for constructing the chain input from the query and a Document. attribute llm_chain: langchain.chains.llm.LLMChain [Required] LLM wrapper to use for compressing documents. async acompress_documents(documents, query)[source] Compress page content of raw documents asynchronously. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] compress_documents(documents, query)[source] Compress page content of raw documents. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] classmethod from_llm(llm, prompt=None, get_input=None, llm_chain_kwargs=None)[source] Initialize from LLM. Parameters llm (langchain.base_language.BaseLanguageModel) – prompt (Optional[langchain.prompts.prompt.PromptTemplate]) – get_input (Optional[Callable[[str, langchain.schema.Document], str]]) – llm_chain_kwargs (Optional[dict]) – Return type langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor
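Example (illustrative): from_llm builds the extraction chain from a plain LLM; each document's page_content is rewritten to keep only the parts relevant to the query. The sample document and query are invented; in practice the compressor is typically wrapped in a ContextualCompressionRetriever together with a base retriever.

    from langchain.llms import OpenAI
    from langchain.retrievers.document_compressors import LLMChainExtractor
    from langchain.schema import Document

    compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))  # requires OPENAI_API_KEY
    docs = [
        Document(
            page_content="The meeting is at 3pm. Remember to bring the report. The weather is sunny."
        )
    ]
    # Each document is rewritten to keep only the parts relevant to the query.
    extracted = compressor.compress_documents(docs, query="When is the meeting?")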
class langchain.retrievers.document_compressors.LLMChainFilter(*, llm_chain, get_input=<function default_get_input>)[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Filter that drops documents that aren't relevant to the query. Parameters llm_chain (langchain.chains.llm.LLMChain) – get_input (Callable[[str, langchain.schema.Document], dict]) – Return type None attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input> Callable for constructing the chain input from the query and a Document. attribute llm_chain: langchain.chains.llm.LLMChain [Required] LLM wrapper to use for filtering documents. The chain prompt is expected to have a BooleanOutputParser. async acompress_documents(documents, query)[source] Filter down documents. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] compress_documents(documents, query)[source] Filter down documents based on their relevance to the query. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] classmethod from_llm(llm, prompt=None, **kwargs)[source] Parameters llm (langchain.base_language.BaseLanguageModel) – prompt (Optional[langchain.prompts.base.BasePromptTemplate]) – kwargs (Any) – Return type langchain.retrievers.document_compressors.chain_filter.LLMChainFilter
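Example (illustrative): the filter asks the LLM a yes/no relevance question per document. The sample documents and query below are invented for the sketch.

    from langchain.llms import OpenAI
    from langchain.retrievers.document_compressors import LLMChainFilter
    from langchain.schema import Document

    llm_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))  # requires OPENAI_API_KEY
    docs = [
        Document(page_content="The capital of France is Paris."),
        Document(page_content="Photosynthesis converts light into chemical energy."),
    ]
    # Unlike LLMChainExtractor, documents are kept or dropped whole, never rewritten.
    kept = llm_filter.compress_documents(docs, query="What is the capital of France?")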
class langchain.retrievers.document_compressors.CohereRerank(*, client, top_n=3, model='rerank-english-v2.0')[source] Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor Parameters client (Client) – top_n (int) – model (str) – Return type None attribute client: Client [Required] attribute model: str = 'rerank-english-v2.0' attribute top_n: int = 3 async acompress_documents(documents, query)[source] Compress retrieved documents given the query context. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document] compress_documents(documents, query)[source] Compress retrieved documents given the query context. Parameters documents (Sequence[langchain.schema.Document]) – query (str) – Return type Sequence[langchain.schema.Document]
Example Selector Logic for selecting examples to include in prompts.
class langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel Select examples based on length. Parameters examples (List[dict]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – get_text_length (Callable[[str], int]) – max_length (int) – example_text_lengths (List[int]) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] Prompt template used to format the examples. attribute examples: List[dict] [Required] A list of the examples that the prompt template expects. attribute get_text_length: Callable[[str], int] = <function _get_length_based> Function to measure prompt length. Defaults to word count. attribute max_length: int = 2048 Max length for the prompt, beyond which examples are cut. add_example(example)[source] Add new example to list. Parameters example (Dict[str, str]) – Return type None select_examples(input_variables)[source] Select which examples to use based on the input lengths. Parameters input_variables (Dict[str, str]) – Return type List[dict]
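Example (illustrative): the antonym examples and max_length=25 below are arbitrary; with a short input more examples fit under the limit, and with a long input fewer do.

    from langchain.prompts import FewShotPromptTemplate, PromptTemplate
    from langchain.prompts.example_selector import LengthBasedExampleSelector

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "energetic", "output": "lethargic"},
    ]
    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    # With a small max_length, fewer examples fit when the user input is long.
    selector = LengthBasedExampleSelector(
        examples=examples, example_prompt=example_prompt, max_length=25
    )
    prompt = FewShotPromptTemplate(
        example_selector=selector,
        example_prompt=example_prompt,
        prefix="Give the antonym of every input.",
        suffix="Input: {adjective}\nOutput:",
        input_variables=["adjective"],
    )
    print(prompt.format(adjective="big"))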
class langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source] Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance. This was shown to improve performance in this paper: https://arxiv.org/pdf/2211.13892.pdf Parameters vectorstore (langchain.vectorstores.base.VectorStore) – k (int) – example_keys (Optional[List[str]]) – input_keys (Optional[List[str]]) – fetch_k (int) – Return type None attribute fetch_k: int = 20 Number of examples to fetch to rerank. classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source] Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples (List[dict]) – List of examples to use in the prompt. embeddings (langchain.embeddings.base.Embeddings) – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) – A vector store DB interface class, e.g. FAISS. k (int) – Number of examples to select input_keys (Optional[List[str]]) – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs (Any) – optional kwargs containing url for vector store fetch_k (int) – Returns The ExampleSelector instantiated, backed by a vector store. Return type langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector select_examples(input_variables)[source] Select which examples to use based on semantic similarity. Parameters input_variables (Dict[str, str]) –
Return type List[dict]
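Example (illustrative): MMR first fetches fetch_k candidates by similarity, then picks k diverse ones. FAISS is used here as the vector store class, so the faiss package must be installed; the example data is invented.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
    from langchain.vectorstores import FAISS

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "sunny", "output": "gloomy"},
        {"input": "windy", "output": "calm"},
    ]
    # Fetches fetch_k candidates by similarity, then picks k diverse ones via MMR.
    selector = MaxMarginalRelevanceExampleSelector.from_examples(
        examples, OpenAIEmbeddings(), FAISS, k=2, fetch_k=4
    )
    selected = selector.select_examples({"input": "worried"})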
class langchain.prompts.example_selector.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel Select and order examples based on ngram overlap score (sentence_bleu score). https://www.nltk.org/_modules/nltk/translate/bleu_score.html https://aclanthology.org/P02-1040.pdf Parameters examples (List[dict]) – example_prompt (langchain.prompts.prompt.PromptTemplate) – threshold (float) – Return type None attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required] Prompt template used to format the examples. attribute examples: List[dict] [Required] A list of the examples that the prompt template expects. attribute threshold: float = -1.0 Threshold at which algorithm stops. Set to -1.0 by default. For negative threshold: select_examples sorts examples by ngram_overlap_score, but excludes none. For threshold greater than 1.0: select_examples excludes all examples, and returns an empty list. For threshold equal to 0.0: select_examples sorts examples by ngram_overlap_score, and excludes examples with no ngram overlap with input. add_example(example)[source] Add new example to list. Parameters example (Dict[str, str]) – Return type None select_examples(input_variables)[source] Return list of examples sorted by ngram_overlap_score with input. Descending order. Excludes any examples with ngram_overlap_score less than or equal to threshold. Parameters input_variables (Dict[str, str]) – Return type List[dict]
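Example (illustrative): the scorer relies on NLTK's sentence_bleu, so nltk must be installed. The translation examples follow the pattern used in the LangChain guides; the data itself is invented.

    from langchain.prompts import PromptTemplate
    from langchain.prompts.example_selector import NGramOverlapExampleSelector

    examples = [
        {"input": "See Spot run.", "output": "Ver correr a Spot."},
        {"input": "My dog barks.", "output": "Mi perro ladra."},
        {"input": "Spot can run.", "output": "Spot puede correr."},
    ]
    example_prompt = PromptTemplate(
        input_variables=["input", "output"],
        template="Input: {input}\nOutput: {output}",
    )
    # threshold=0.0 sorts by overlap and drops examples with no ngram overlap.
    selector = NGramOverlapExampleSelector(
        examples=examples, example_prompt=example_prompt, threshold=0.0
    )
    selected = selector.select_examples({"sentence": "Spot can run fast."})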
class langchain.prompts.example_selector.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source] Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel Example selector that selects examples based on SemanticSimilarity. Parameters vectorstore (langchain.vectorstores.base.VectorStore) – k (int) – example_keys (Optional[List[str]]) – input_keys (Optional[List[str]]) – Return type None attribute example_keys: Optional[List[str]] = None Optional keys to filter examples to. attribute input_keys: Optional[List[str]] = None Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. attribute k: int = 4 Number of examples to select. attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] VectorStore that contains information about examples. add_example(example)[source] Add new example to vectorstore. Parameters example (Dict[str, str]) – Return type str classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source] Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples (List[dict]) – List of examples to use in the prompt. embeddings (langchain.embeddings.base.Embeddings) – An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) – A vector store DB interface class, e.g. FAISS. k (int) – Number of examples to select input_keys (Optional[List[str]]) – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs (Any) – optional kwargs containing url for vector store Returns The ExampleSelector instantiated, backed by a vector store. Return type langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector select_examples(input_variables)[source] Select which examples to use based on semantic similarity. Parameters input_variables (Dict[str, str]) – Return type List[dict]
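Example (illustrative): from_examples embeds the example list into a vector store and picks the k nearest at query time. Chroma is used here as the store class (chromadb must be installed); the example data is invented.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
    from langchain.vectorstores import Chroma

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
    ]
    # Embeds the examples into a Chroma vector store and picks the k nearest.
    selector = SemanticSimilarityExampleSelector.from_examples(
        examples, OpenAIEmbeddings(), Chroma, k=1
    )
    selected = selector.select_examples({"input": "joyful"})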
Callbacks Callback handlers that allow listening to events in LangChain.
class langchain.callbacks.AimCallbackHandler(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True)[source] Bases: langchain.callbacks.aim_callback.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs to Aim. Parameters repo (str, optional) – Aim repository path or Repo object to which Run object is bound. If skipped, default Repo is used. experiment_name (str, optional) – Sets Run's experiment property. 'default' if not specified. Can be used later to query runs/sequences. system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking. log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc. Return type None This handler will utilize the associated callback method, format the input of each callback function with metadata regarding the state of the LLM run, and then log the response to Aim. setup(**kwargs)[source] Parameters kwargs (Any) – Return type None on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Run when LLM ends running. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source]
Run when LLM generates a new token. Parameters token (str) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Run when LLM errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] Run when chain starts running. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Run when chain ends running. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Run when chain errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_tool_end(output, **kwargs)[source] Run when tool ends running. Parameters output (str) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Run when tool errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source]
Run when agent is ending. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, **kwargs)[source] Run when agent ends running. Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source] Run on agent action. Parameters action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any flush_tracker(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True, langchain_asset=None, reset=True, finish=False)[source] Flush the tracker and reset the session. Parameters repo (str, optional) – Aim repository path or Repo object to which Run object is bound. If skipped, default Repo is used. experiment_name (str, optional) – Sets Run's experiment property. 'default' if not specified. Can be used later to query runs/sequences. system_tracking_interval (int, optional) – Sets the tracking interval in seconds for system usage metrics (CPU, Memory, etc.). Set to None to disable system metrics tracking. log_system_params (bool, optional) – Enable/Disable logging of system params such as installed packages, git info, environment variables, etc. langchain_asset (Any) – The langchain asset to save. reset (bool) – Whether to reset the session. finish (bool) – Whether to finish the run. Returns – None Return type None
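Example (illustrative): attach the handler at construction time, then call flush_tracker to finalize the run. The repository path and experiment name below are placeholders.

    from langchain.callbacks import AimCallbackHandler, StdOutCallbackHandler
    from langchain.llms import OpenAI

    # Logs runs to a local Aim repository in the current directory.
    aim_callback = AimCallbackHandler(repo=".", experiment_name="llm demo")
    llm = OpenAI(temperature=0, callbacks=[StdOutCallbackHandler(), aim_callback])
    llm.generate(["Tell me a joke", "Tell me a poem"])
    aim_callback.flush_tracker(langchain_asset=llm, finish=True)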
class langchain.callbacks.ArgillaCallbackHandler(dataset_name, workspace_name=None, api_url=None, api_key=None)[source]
Bases: langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs into Argilla. Parameters dataset_name (str) – name of the FeedbackDataset in Argilla. Note that it must exist in advance. If you need help on how to create a FeedbackDataset in Argilla, please visit https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html. workspace_name (Optional[str]) – name of the workspace in Argilla where the specified FeedbackDataset lives. Defaults to None, which means that the default workspace will be used. api_url (Optional[str]) – URL of the Argilla Server that we want to use, and where the FeedbackDataset lives. Defaults to None, which means that either the ARGILLA_API_URL environment variable or the default http://localhost:6900 will be used. api_key (Optional[str]) – API Key to connect to the Argilla Server. Defaults to None, which means that either the ARGILLA_API_KEY environment variable or the default argilla.apikey will be used. Raises ImportError – if the argilla package is not installed. ConnectionError – if the connection to Argilla fails. FileNotFoundError – if the FeedbackDataset retrieval from Argilla fails. Return type None Examples >>> from langchain.llms import OpenAI >>> from langchain.callbacks import ArgillaCallbackHandler >>> argilla_callback = ArgillaCallbackHandler( ... dataset_name="my-dataset", ... workspace_name="my-workspace", ... api_url="http://localhost:6900", ... api_key="argilla.apikey", ... ) >>> llm = OpenAI( ... temperature=0, ... callbacks=[argilla_callback], ... verbose=True,
... openai_api_key="API_KEY_HERE", ... ) >>> llm.generate([ ... "What is the best NLP-annotation tool out there? (no bias at all)", ... ]) "Argilla, no doubt about it." on_llm_start(serialized, prompts, **kwargs)[source] Save the prompts in memory when an LLM starts. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source] Do nothing when a new token is generated. Parameters token (str) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Log records to Argilla when an LLM ends. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Do nothing when LLM outputs an error. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] If the key input is in inputs, then save it in self.prompts using either the parent_run_id or the run_id as the key. This is done so that we don't log the same input prompt twice, once when the LLM starts and once when the chain starts. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None
on_chain_end(outputs, **kwargs)[source] If either the parent_run_id or the run_id is in self.prompts, then log the outputs to Argilla, and pop the run from self.prompts. The behavior differs depending on whether the output is a list. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Do nothing when LLM chain outputs an error. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Do nothing when tool starts. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source] Do nothing when agent takes a specific action. Parameters action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source] Do nothing when tool ends. Parameters output (str) – observation_prefix (Optional[str]) – llm_prefix (Optional[str]) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Do nothing when tool outputs an error. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Do nothing Parameters text (str) – kwargs (Any) –
Return type None on_agent_finish(finish, **kwargs)[source] Do nothing Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None
class langchain.callbacks.ArizeCallbackHandler(model_id=None, model_version=None, SPACE_KEY=None, API_KEY=None)[source] Bases: langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs to Arize. Parameters model_id (Optional[str]) – model_version (Optional[str]) – SPACE_KEY (Optional[str]) – API_KEY (Optional[str]) – Return type None on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts running. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source] Do nothing. Parameters token (str) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Run when LLM ends running. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Do nothing. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] Run when chain starts running. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) –
Return type None on_chain_end(outputs, **kwargs)[source] Do nothing. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Do nothing. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source] Do nothing. Parameters action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source] Run when tool ends running. Parameters output (str) – observation_prefix (Optional[str]) – llm_prefix (Optional[str]) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Run when tool errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Run on arbitrary text. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, **kwargs)[source] Run on agent end. Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None
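Example (illustrative): the model name, version, and key placeholders below are invented; real Arize credentials are required for anything to be logged.

    from langchain.callbacks import ArizeCallbackHandler
    from langchain.llms import OpenAI

    arize_callback = ArizeCallbackHandler(
        model_id="llm-demo",          # illustrative model name
        model_version="1.0",
        SPACE_KEY="<ARIZE_SPACE_KEY>",
        API_KEY="<ARIZE_API_KEY>",
    )
    llm = OpenAI(temperature=0, callbacks=[arize_callback])
    llm.generate(["Tell me an interesting fact about whales."])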
class langchain.callbacks.AsyncIteratorCallbackHandler[source] Bases: langchain.callbacks.base.AsyncCallbackHandler Callback handler that returns an async iterator. Return type None property always_verbose: bool queue: asyncio.queues.Queue[str] done: asyncio.locks.Event async on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts running. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None async on_llm_new_token(token, **kwargs)[source] Run on new LLM token. Only available when streaming is enabled. Parameters token (str) – kwargs (Any) – Return type None async on_llm_end(response, **kwargs)[source] Run when LLM ends running. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None async on_llm_error(error, **kwargs)[source] Run when LLM errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None async aiter()[source] Return type AsyncIterator[str]
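Example (illustrative): the handler is paired with a streaming-capable LLM; generation runs as a background task while tokens are consumed from aiter(), which completes once the LLM run ends. The prompt is a placeholder and OPENAI_API_KEY is assumed to be set.

    import asyncio

    from langchain.callbacks import AsyncIteratorCallbackHandler
    from langchain.llms import OpenAI

    async def main():
        handler = AsyncIteratorCallbackHandler()
        llm = OpenAI(streaming=True, callbacks=[handler], temperature=0)
        # Run generation in the background and consume tokens as they arrive;
        # aiter() finishes once the LLM run ends.
        task = asyncio.create_task(llm.agenerate(["Tell me a joke."]))
        async for token in handler.aiter():
            print(token, end="", flush=True)
        await task

    asyncio.run(main())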
class langchain.callbacks.ClearMLCallbackHandler(task_type='inference', project_name='langchain_callback_demo', tags=None, task_name=None, visualize=False, complexity_metrics=False, stream_logs=False)[source] Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs to ClearML. Parameters job_type (str) – The type of clearml task such as "inference", "testing" or "qc" project_name (str) – The clearml project name tags (list) – Tags to add to the task task_name (str) – Name of the clearml task visualize (bool) – Whether to visualize the run. complexity_metrics (bool) – Whether to log complexity metrics stream_logs (bool) – Whether to stream callback actions to ClearML task_type (Optional[str]) – Return type None This handler will utilize the associated callback method and format the input of each callback function with metadata regarding the state of the LLM run, adding the response to the list of records for both the {method}_records and action. It then logs the response to the ClearML console. on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source] Run when LLM generates a new token. Parameters token (str) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Run when LLM ends running. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Run when LLM errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None
on_chain_start(serialized, inputs, **kwargs)[source] Run when chain starts running. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Run when chain ends running. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_error(error, **kwargs)[source] Run when chain errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_tool_end(output, **kwargs)[source] Run when tool ends running. Parameters output (str) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Run when tool errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Run when agent is ending. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, **kwargs)[source] Run when agent ends running. Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source]
Run on agent action. Parameters action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any analyze_text(text)[source] Analyze text using textstat and spacy. Parameters text (str) – The text to analyze. Returns A dictionary containing the complexity metrics. Return type (dict) flush_tracker(name=None, langchain_asset=None, finish=False)[source] Flush the tracker and set up the session. Everything after this will be a new table. Parameters name (Optional[str]) – Name of the performed session so far so it is identifiable langchain_asset (Any) – The langchain asset to save. finish (bool) – Whether to finish the run. Returns – None Return type None
class langchain.callbacks.CometCallbackHandler(task_type='inference', workspace=None, project_name=None, tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, stream_logs=True)[source] Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs to Comet. Parameters job_type (str) – The type of comet_ml task such as "inference", "testing" or "qc" project_name (str) – The comet_ml project name tags (list) – Tags to add to the task task_name (str) – Name of the comet_ml task visualize (bool) – Whether to visualize the run. complexity_metrics (bool) – Whether to log complexity metrics stream_logs (bool) – Whether to stream callback actions to Comet task_type (Optional[str]) – workspace (Optional[str]) – name (Optional[str]) –
visualizations (Optional[List[str]]) – custom_metrics (Optional[Callable]) – Return type None This handler will utilize the associated callback method and format the input of each callback function with metadata regarding the state of the LLM run, adding the response to the list of records for both the {method}_records and action. It then logs the response to Comet. on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source] Run when LLM generates a new token. Parameters token (str) – kwargs (Any) – Return type None on_llm_end(response, **kwargs)[source] Run when LLM ends running. Parameters response (langchain.schema.LLMResult) – kwargs (Any) – Return type None on_llm_error(error, **kwargs)[source] Run when LLM errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] Run when chain starts running. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Run when chain ends running. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None
on_chain_error(error, **kwargs)[source] Run when chain errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_tool_start(serialized, input_str, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – kwargs (Any) – Return type None on_tool_end(output, **kwargs)[source] Run when tool ends running. Parameters output (str) – kwargs (Any) – Return type None on_tool_error(error, **kwargs)[source] Run when tool errors. Parameters error (Union[Exception, KeyboardInterrupt]) – kwargs (Any) – Return type None on_text(text, **kwargs)[source] Run when agent is ending. Parameters text (str) – kwargs (Any) – Return type None on_agent_finish(finish, **kwargs)[source] Run when agent ends running. Parameters finish (langchain.schema.AgentFinish) – kwargs (Any) – Return type None on_agent_action(action, **kwargs)[source] Run on agent action. Parameters action (langchain.schema.AgentAction) – kwargs (Any) – Return type Any flush_tracker(langchain_asset=None, task_type='inference', workspace=None, project_name='comet-langchain-demo', tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, finish=False, reset=False)[source] Flush the tracker and set up the session. Everything after this will be a new table.
Parameters name (Optional[str]) – Name of the performed session so far so it is identifiable langchain_asset (Any) – The langchain asset to save. finish (bool) – Whether to finish the run. Returns – None task_type (Optional[str]) – workspace (Optional[str]) – project_name (Optional[str]) – tags (Optional[Sequence]) – visualizations (Optional[List[str]]) – complexity_metrics (bool) – custom_metrics (Optional[Callable]) – reset (bool) – Return type None
class langchain.callbacks.FileCallbackHandler(filename, mode='a', color=None)[source] Bases: langchain.callbacks.base.BaseCallbackHandler Callback Handler that writes to a file. Parameters filename (str) – mode (str) – color (Optional[str]) – Return type None on_chain_start(serialized, inputs, **kwargs)[source] Print out that we are entering a chain. Parameters serialized (Dict[str, Any]) – inputs (Dict[str, Any]) – kwargs (Any) – Return type None on_chain_end(outputs, **kwargs)[source] Print out that we finished a chain. Parameters outputs (Dict[str, Any]) – kwargs (Any) – Return type None on_agent_action(action, color=None, **kwargs)[source] Run on agent action. Parameters action (langchain.schema.AgentAction) – color (Optional[str]) – kwargs (Any) – Return type Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source] If not the final action, print out observation. Parameters output (str) – color (Optional[str]) – observation_prefix (Optional[str]) – llm_prefix (Optional[str]) – kwargs (Any) – Return type None on_text(text, color=None, end='', **kwargs)[source] Run when agent ends. Parameters text (str) – color (Optional[str]) – end (str) – kwargs (Any) – Return type None on_agent_finish(finish, color=None, **kwargs)[source] Run on agent end. Parameters finish (langchain.schema.AgentFinish) – color (Optional[str]) – kwargs (Any) – Return type None
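Example (illustrative): events received by the handler are appended to the given file (mode='a' by default). The file name and prompt below are invented, and OPENAI_API_KEY is assumed to be set.

    from langchain.callbacks import FileCallbackHandler
    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    handler = FileCallbackHandler("output.log")  # appends to output.log by default
    prompt = PromptTemplate.from_template("1 + {number} = ")
    chain = LLMChain(llm=OpenAI(), prompt=prompt, callbacks=[handler], verbose=True)
    answer = chain.run(number=2)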
class langchain.callbacks.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens=None, strip_tokens=True, stream_prefix=False)[source] Bases: langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler Callback handler for streaming in agents. Only works with agents using LLMs that support streaming. Only the final output of the agent will be streamed. Parameters answer_prefix_tokens (Optional[List[str]]) – strip_tokens (bool) – stream_prefix (bool) – Return type None append_to_last_tokens(token)[source] Parameters token (str) – Return type None check_if_answer_reached()[source] Return type bool on_llm_start(serialized, prompts, **kwargs)[source] Run when LLM starts running. Parameters serialized (Dict[str, Any]) – prompts (List[str]) – kwargs (Any) – Return type None on_llm_new_token(token, **kwargs)[source] Run on new LLM token. Only available when streaming is enabled. Parameters token (str) – kwargs (Any) – Return type None
class langchain.callbacks.HumanApprovalCallbackHandler(approve=<function _default_approve>, should_check=<function _default_true>)[source] Bases: langchain.callbacks.base.BaseCallbackHandler Callback for manually validating values. Parameters approve (Callable[[Any], bool]) – should_check (Callable[[Dict[str, Any]], bool]) – raise_error: bool = True on_tool_start(serialized, input_str, *, run_id, parent_run_id=None, **kwargs)[source] Run when tool starts running. Parameters serialized (Dict[str, Any]) – input_str (str) – run_id (uuid.UUID) – parent_run_id (Optional[uuid.UUID]) – kwargs (Any) – Return type Any
class langchain.callbacks.InfinoCallbackHandler(model_id=None, model_version=None, verbose=False)[source] Bases: langchain.callbacks.base.BaseCallbackHandler Callback Handler that logs to Infino. Parameters model_id (Optional[str]) – model_version (Optional[str]) – verbose (bool) – Return type None on_llm_start(serialized, prompts, **kwargs)[source] Log the prompts to Infino, and set start time and error flag. Parameters serialized (Dict[str, Any]) – prompts (List[str]) –