Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Replicate(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model, input=None, model_kwargs=None, replicate_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Replicate models.
To use, you should have the replicate python package installed,
and the environment variable REPLICATE_API_TOKEN set with your API token.
You can find your token here: https://replicate.com/account
The model param is required, but any other model parameters can also
be passed in with the format input={model_param: value, ...}
Example
from langchain.llms import Replicate
replicate = Replicate(
    model="stability-ai/stable-diffusion:27b93a2413e7f36cd83da926f3656280b2931564ff050bf9575f1fdf9bcd7478",
    input={"image_dimensions": "512x512"})
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model (str) β
input (Dict[str, Any]) β
model_kwargs (Dict[str, Any]) β
replicate_api_token (Optional[str]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
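A sketch of batch generation (the prompts are illustrative): generate() takes a list of prompt strings and returns an LLMResult whose generations field holds one list of Generation objects per input prompt:
result = llm.generate(["Tell me a joke.", "Tell me a poem."])
for prompt_generations in result.generations:
    # Each Generation carries its produced text in the .text attribute.
    print(prompt_generations[0].text)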
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SagemakerEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.llms.sagemaker_endpoint.LLMContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
attribute content_handler: langchain.llms.sagemaker_endpoint.LLMContentHandler [Required]ο
The content handler class that provides the input and output
transform functions to handle formats between the LLM
and the endpoint.
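A minimal content handler sketch, assuming a JSON-in/JSON-out endpoint (the request and response keys, text_inputs and generated_texts, depend on your model container and are assumptions here, as are the endpoint name, region, and credential profile):
from langchain.llms import SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
import json

class ContentHandler(LLMContentHandler):
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt: str, model_kwargs: dict) -> bytes:
        # Serialize the prompt plus any model parameters into the request body.
        return json.dumps({"text_inputs": prompt, **model_kwargs}).encode("utf-8")

    def transform_output(self, output: bytes) -> str:
        # The endpoint returns a streaming body; decode it and pull out the text.
        response_json = json.loads(output.read().decode("utf-8"))
        return response_json["generated_texts"][0]

llm = SagemakerEndpoint(
    endpoint_name="my-endpoint",          # assumed endpoint name
    region_name="us-west-2",              # assumed region
    credentials_profile_name="default",   # assumed profile
    content_handler=ContentHandler(),
)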
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SelfHostedHuggingFaceLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn=<function _load_transformer>, load_fn_kwargs=None, model_reqs=['./', 'transformers', 'torch'], model_id='gpt2', task='text-generation', device=0, model_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline
Wrapper around HuggingFace Pipeline API to run on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import SelfHostedHuggingFaceLLM
import runhouse as rh

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceLLM(
    model_id="google/flan-t5-large", task="text2text-generation",
    hardware=gpu
)
Example passing a function that generates a pipeline (because the pipeline is not serializable):
from langchain.llms import SelfHostedHuggingFaceLLM
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def get_pipeline():
    model_id = "gpt2"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    pipe = pipeline(
        "text-generation", model=model, tokenizer=tokenizer
    )
    return pipe

hf = SelfHostedHuggingFaceLLM(
    model_load_fn=get_pipeline, model_id="gpt2", hardware=gpu)
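A hedged call sketch once either construction path succeeds (inference runs on the remote cluster; the generated string is returned locally):
print(hf("What is the capital of France?"))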
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
model_id (str) β
task (str) β
device (int) β
model_kwargs (Optional[dict]) β
Return type
None
attribute device: int = 0ο
Device to use for inference. -1 for CPU, 0 for GPU, 1 for second GPU, etc.
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _generate_text>ο
Inference function to send to the remote hardware.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'gpt2'ο
Hugging Face model_id to load the model.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute model_load_fn: Callable = <function _load_transformer>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'transformers', 'torch']ο
Requirements to install on hardware to inference the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: str = 'text-generation'ο
Hugging Face task (βtext-generationβ, βtext2text-generationβ or
βsummarizationβ).
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)ο
Init the SelfHostedPipeline from a pipeline object or string.
Parameters
pipeline (Any) β
hardware (Any) β
model_reqs (Optional[List[str]]) β
device (int) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.SelfHostedPipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _generate_text>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'])[source]ο
Bases: langchain.llms.base.LLM
Run model inference on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example for custom pipeline and inference functions:
from langchain.llms import SelfHostedPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh

def load_pipeline():
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    return pipeline(
        "text-generation", model=model, tokenizer=tokenizer,
        max_new_tokens=10
    )

def inference_fn(pipeline, prompt, stop=None):
    return pipeline(prompt)[0]["generated_text"]

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
llm = SelfHostedPipeline(
    model_load_fn=load_pipeline,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"], inference_fn=inference_fn
)
Example for <2GB model (can be serialized and sent directly to the server):
from langchain.llms import SelfHostedPipeline
import runhouse as rh

gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
my_model = ...
llm = SelfHostedPipeline.from_pipeline(
    pipeline=my_model,
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Example passing model path for larger models:
from langchain.llms import SelfHostedPipeline
import runhouse as rh
import pickle
from transformers import pipeline

generator = pipeline(model="gpt2")
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
rh.blob(pickle.dumps(generator), path="models/pipeline.pkl"
        ).save().to(gpu, path="models")
llm = SelfHostedPipeline.from_pipeline(
    pipeline="models/pipeline.pkl",
    hardware=gpu,
    model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _generate_text>ο
Inference function to send to the remote hardware.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_load_fn: Callable [Required]ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'torch']ο
Requirements to install on hardware to inference the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_pipeline(pipeline, hardware, model_reqs=None, device=0, **kwargs)[source]ο
Init the SelfHostedPipeline from a pipeline object or string.
Parameters
pipeline (Any) β
hardware (Any) β
model_reqs (Optional[List[str]]) β
device (int) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.StochasticAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, api_url='', model_kwargs=None, stochasticai_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around StochasticAI large language models.
To use, you should have the environment variable STOCHASTICAI_API_KEY
set with your API key.
Example
from langchain.llms import StochasticAI
stochasticai = StochasticAI(api_url="")
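A hedged usage sketch (the api_url below is a placeholder; copy the real submit URL for your deployed model from the StochasticAI dashboard, and ensure STOCHASTICAI_API_KEY is set):
from langchain.llms import StochasticAI

# Placeholder deployment URL; substitute your own.
llm = StochasticAI(api_url="https://api.stochastic.ai/v1/modelApi/submit/your-model")
print(llm("What is the capital of France?"))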
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
api_url (str) β
model_kwargs (Dict[str, Any]) β
stochasticai_api_key (Optional[str]) β
Return type
None
attribute api_url: str = ''ο
The API URL of the deployed StochasticAI model to use.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.VertexAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='text-bison', temperature=0.0, max_output_tokens=128, top_p=0.95, top_k=40, stop=None, project=None, location='us-central1', credentials=None, request_parallelism=5, tuned_model_name=None)[source]ο
Bases: langchain.llms.vertexai._VertexAICommon, langchain.llms.base.LLM
Wrapper around Google Vertex AI large language models.
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (_LanguageModel) β
model_name (str) β
temperature (float) β
max_output_tokens (int) β
top_p (float) β
top_k (int) β
stop (Optional[List[str]]) β
project (Optional[str]) β
location (str) β
credentials (Any) β
request_parallelism (int) β
tuned_model_name (Optional[str]) β
Return type
None
attribute credentials: Any = Noneο
The default custom credentials (google.auth.credentials.Credentials) to use
attribute location: str = 'us-central1'ο
The default location to use when making API calls.
attribute max_output_tokens: int = 128ο
Token limit determines the maximum amount of text output from one prompt.
attribute model_name: str = 'text-bison'ο
The name of the Vertex AI large language model.
attribute project: Optional[str] = Noneο
The default GCP project to use when making Vertex API calls.
attribute request_parallelism: int = 5ο
The amount of parallelism allowed for requests issued to VertexAI models.
attribute stop: Optional[List[str]] = Noneο
Optional list of stop words to use when generating.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.0ο
Sampling temperature; it controls the degree of randomness in token selection.
attribute top_k: int = 40ο
How the model selects tokens for output: the next token is selected from among the top-k most probable tokens.
attribute top_p: float = 0.95ο
Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value.
attribute tuned_model_name: Optional[str] = Noneο
The name of a tuned model. If provided, model_name is ignored.
attribute verbose: bool [Optional]ο
Whether to print out response text.
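A hedged construction sketch tying the sampling attributes together (assumes the google-cloud-aiplatform package is installed and GCP credentials are configured; the parameter values are illustrative):
from langchain.llms import VertexAI

llm = VertexAI(
    model_name="text-bison",
    temperature=0.2,        # lower values make output more deterministic
    max_output_tokens=256,  # cap on generated text length
    top_k=40,               # sample from the 40 most probable tokens
    top_p=0.95,             # nucleus sampling threshold
)
print(llm("Explain the difference between top-k and top-p sampling."))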
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = "allow" was set, since it adds all passed values.
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Writer(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, writer_org_id=None, model_id='palmyra-instruct', min_tokens=None, max_tokens=None, temperature=None, top_p=None, stop=None, presence_penalty=None, repetition_penalty=None, best_of=None, logprobs=False, n=None, writer_api_key=None, base_url=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Writer large language models.
To use, you should have the environment variable WRITER_API_KEY and
WRITER_ORG_ID set with your API key and organization ID respectively.
Example
from langchain import Writer
writer = Writer(model_id="palmyra-base")
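A hedged generation sketch building on the example above (the sampling values are illustrative; WRITER_API_KEY and WRITER_ORG_ID must be set in the environment):
writer = Writer(
    model_id="palmyra-instruct",
    temperature=0.7,
    max_tokens=200,
    stop=["\n\n"],  # stop generation at the first blank line
)
print(writer("Draft a two-sentence product announcement."))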
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
writer_org_id (Optional[str]) β
model_id (str) β
min_tokens (Optional[int]) β
max_tokens (Optional[int]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
stop (Optional[List[str]]) β
presence_penalty (Optional[float]) β
repetition_penalty (Optional[float]) β
best_of (Optional[int]) β
logprobs (bool) β
n (Optional[int]) β
writer_api_key (Optional[str]) β
base_url (Optional[str]) β
Return type
None
attribute base_url: Optional[str] = Noneο
Base URL to use; if None, it is chosen based on the model name.
attribute best_of: Optional[int] = Noneο
Generates this many completions server-side and returns the βbestβ.
attribute logprobs: bool = Falseο
Whether to return log probabilities.
attribute max_tokens: Optional[int] = Noneο
Maximum number of tokens to generate.
attribute min_tokens: Optional[int] = Noneο
Minimum number of tokens to generate.
attribute model_id: str = 'palmyra-instruct'ο
Model name to use.
attribute n: Optional[int] = Noneο
How many completions to generate.
attribute presence_penalty: Optional[float] = Noneο
Penalizes repeated tokens regardless of frequency.
attribute repetition_penalty: Optional[float] = Noneο
Penalizes repeated tokens according to frequency.
attribute stop: Optional[List[str]] = Noneο
Sequences at which completion generation will stop.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = Noneο
What sampling temperature to use.
attribute top_p: Optional[float] = Noneο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
attribute writer_api_key: Optional[str] = Noneο
Writer API key.
attribute writer_org_id: Optional[str] = Noneο
Writer organization ID.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.OctoAIEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url=None, model_kwargs=None, octoai_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around OctoAI Inference Endpoints.
OctoAIEndpoint is a class to interact with OctoAI Compute Service large language model endpoints.
To use, you should have the octoai python package installed, and the
environment variable OCTOAI_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms.octoai_endpoint import OctoAIEndpoint
OctoAIEndpoint(
octoai_api_token="octoai-api-key",
endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
model_kwargs={
"max_new_tokens": 200,
"temperature": 0.75,
"top_p": 0.95,
"repetition_penalty": 1,
"seed": None,
"stop": [],
},
)
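Once constructed, the endpoint is called like any other LLM. A minimal invocation sketch (assumes OCTOAI_API_TOKEN is set; the endpoint URL reuses the illustrative one above and the prompt is a placeholder):
from langchain.llms.octoai_endpoint import OctoAIEndpoint

llm = OctoAIEndpoint(
    endpoint_url="https://mpt-7b-demo-kk0powt97tmb.octoai.cloud/generate",
    model_kwargs={"max_new_tokens": 200, "temperature": 0.75},
)
response = llm("Explain what an inference endpoint is in one sentence.")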
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (Optional[str]) β
model_kwargs (Optional[dict]) β
octoai_api_token (Optional[str]) β
Return type
None
attribute endpoint_url: Optional[str] = Noneο
Endpoint URL to use.
attribute model_kwargs: Optional[dict] = Noneο
Keyword arguments to pass to the model.
attribute octoai_api_token: Optional[str] = Noneο
OCTOAI API Token
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
Retrieversο
class langchain.retrievers.AmazonKendraRetriever(index_id, region_name=None, credentials_profile_name=None, top_k=3, attribute_filter=None, client=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever class to query documents from Amazon Kendra Index.
Parameters
index_id (str) β Kendra index id
region_name (Optional[str]) β The AWS region, e.g. us-west-2.
Falls back to the AWS_DEFAULT_REGION env variable
or region specified in ~/.aws/config.
credentials_profile_name (Optional[str]) β The name of the profile in the ~/.aws/credentials
or ~/.aws/config files, which has either access keys or role information
specified. If not specified, the default credential profile or, if on an
EC2 instance, credentials from IMDS will be used.
top_k (int) β Number of results to return
attribute_filter (Optional[Dict]) β Additional filtering of results based on metadata
See: https://docs.aws.amazon.com/kendra/latest/APIReference
client (Optional[Any]) β boto3 client for Kendra
Example
retriever = AmazonKendraRetriever(
index_id="c0806df7-e76b-4bce-9b5c-d5582f6b1a03"
)
get_relevant_documents(query)[source]ο
Run a search on the Kendra index and return the top k documents.
Example:
.. code-block:: python
docs = retriever.get_relevant_documents(βThis is my queryβ)
Parameters
query (str) β
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ArxivRetriever(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.arxiv.ArxivAPIWrapper
It is effectively a wrapper for ArxivAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all ArxivAPIWrapper arguments without any change.
Parameters
arxiv_search (Any) β
arxiv_exceptions (Any) β
top_k_results (int) β
load_max_docs (int) β
load_all_available_meta (bool) β
doc_content_chars_max (Optional[int]) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
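A minimal usage sketch (the query and top_k_results value are illustrative):
from langchain.retrievers import ArxivRetriever

retriever = ArxivRetriever(top_k_results=2)
docs = retriever.get_relevant_documents("quantum error correction surface codes")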
class langchain.retrievers.AzureCognitiveSearchRetriever(*, service_name='', index_name='', api_key='', api_version='2020-06-30', aiosession=None, content_key='content')[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Wrapper around Azure Cognitive Search.
Parameters
service_name (str) β
index_name (str) β
api_key (str) β
api_version (str) β
aiosession (Optional[aiohttp.client.ClientSession]) β
content_key (str) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
ClientSession, in case we want to reuse connection for better performance.
attribute api_key: str = ''ο
API Key. Both Admin and Query keys work, but for reading data itβs
recommended to use a Query key.
attribute api_version: str = '2020-06-30'ο
API version
attribute content_key: str = 'content'ο
Key in a retrieved result to set as the Document page_content.
attribute index_name: str = ''ο
Name of Index inside Azure Cognitive Search service
attribute service_name: str = ''ο
Name of Azure Cognitive Search service
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
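A construction sketch, assuming an existing Azure Cognitive Search service and index (all values are placeholders):
from langchain.retrievers import AzureCognitiveSearchRetriever

retriever = AzureCognitiveSearchRetriever(
    service_name="my-search-service",  # placeholder
    index_name="my-index",             # placeholder
    api_key="<query-or-admin-key>",    # placeholder
)
docs = retriever.get_relevant_documents("what is langchain")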
class langchain.retrievers.ChatGPTPluginRetriever(*, url, bearer_token, top_k=3, filter=None, aiosession=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
url (str) β
bearer_token (str) β
top_k (int) β
filter (Optional[dict]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute bearer_token: str [Required]ο
attribute filter: Optional[dict] = Noneο
attribute top_k: int = 3ο
attribute url: str [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ContextualCompressionRetriever(*, base_compressor, base_retriever)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever that wraps a base retriever and compresses the results.
Parameters
base_compressor (langchain.retrievers.document_compressors.base.BaseDocumentCompressor) β
base_retriever (langchain.schema.BaseRetriever) β
Return type
None
attribute base_compressor: langchain.retrievers.document_compressors.base.BaseDocumentCompressor [Required]ο
Compressor for compressing retrieved documents.
attribute base_retriever: langchain.schema.BaseRetriever [Required]ο
Base Retriever to use for getting relevant documents.
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
Sequence of relevant documents
Return type
List[langchain.schema.Document]
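A wiring sketch, assuming you already have a base retriever (base_retriever below is a placeholder for any existing BaseRetriever; the similarity threshold is illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import EmbeddingsFilter

compressor = EmbeddingsFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.76)
compression_retriever = ContextualCompressionRetriever(
    base_compressor=compressor,
    base_retriever=base_retriever,  # placeholder: any existing retriever
)
docs = compression_retriever.get_relevant_documents("my query")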
class langchain.retrievers.DataberryRetriever(datastore_url, top_k=None, api_key=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Databerry API.
Parameters
datastore_url (str) β
top_k (Optional[int]) β
api_key (Optional[str]) β
datastore_url: strο
api_key: Optional[str]ο
top_k: Optional[int]ο
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.ElasticSearchBM25Retriever(client, index_name)[source]ο
Bases: langchain.schema.BaseRetriever
Wrapper around Elasticsearch using BM25 as a retrieval method.
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the βDeploymentsβ page.
To obtain your Elastic Cloud password for the default βelasticβ user:
1. Log in to the Elastic Cloud console at https://cloud.elastic.co
2. Go to βSecurityβ > βUsersβ
3. Locate the βelasticβ user and click βEditβ
4. Click βReset passwordβ
5. Follow the prompts to reset the password
The format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Parameters
client (Any) β
index_name (str) β
classmethod create(elasticsearch_url, index_name, k1=2.0, b=0.75)[source]ο
Parameters
elasticsearch_url (str) β
index_name (str) β
k1 (float) β
b (float) β
Return type
langchain.retrievers.elastic_search_bm25.ElasticSearchBM25Retriever
add_texts(texts, refresh_indices=True)[source]ο
Run more texts through the embeddings and add to the retriever.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the retriever.
refresh_indices (bool) β bool to refresh ElasticSearch indices
Returns
List of ids from adding the texts into the retriever.
Return type
List[str]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
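A minimal sketch using the create() helper against a local cluster (URL, index name, and texts are placeholders):
from langchain.retrievers import ElasticSearchBM25Retriever

retriever = ElasticSearchBM25Retriever.create(
    elasticsearch_url="http://localhost:9200",  # placeholder
    index_name="langchain-bm25",                # placeholder
)
retriever.add_texts(["foo", "foo bar", "hello world"])
docs = retriever.get_relevant_documents("foo")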
class langchain.retrievers.KNNRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
KNN Retriever.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
index (Any) β
texts (List[str]) β
k (int) β
relevancy_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute k: int = 4ο
attribute relevancy_threshold: Optional[float] = Noneο
attribute texts: List[str] [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embeddings, **kwargs)[source]ο
Parameters
texts (List[str]) β
embeddings (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.retrievers.knn.KNNRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
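A minimal sketch using from_texts (the texts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import KNNRetriever

retriever = KNNRetriever.from_texts(["foo", "bar", "world"], OpenAIEmbeddings())
docs = retriever.get_relevant_documents("foo")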
class langchain.retrievers.LlamaIndexGraphRetriever(*, graph=None, query_configs=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Question-answering with sources over a LlamaIndex graph data structure.
Parameters
graph (Any) β
query_configs (List[Dict]) β
Return type
None
attribute graph: Any = Noneο
attribute query_configs: List[Dict] [Optional]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
class langchain.retrievers.LlamaIndexRetriever(*, index=None, query_kwargs=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Question-answering with sources over a LlamaIndex data structure.
Parameters
index (Any) β
query_kwargs (Dict) β
Return type
None
attribute index: Any = Noneο
attribute query_kwargs: Dict [Optional]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
class langchain.retrievers.MergerRetriever(retrievers)[source]ο
Bases: langchain.schema.BaseRetriever
This class merges the results of multiple retrievers.
Parameters
retrievers (List[langchain.schema.BaseRetriever]) β A list of retrievers to merge.
get_relevant_documents(query)[source]ο
Get the relevant documents for a given query.
Parameters
query (str) β The query to search for.
Returns
A list of relevant documents.
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Asynchronously get the relevant documents for a given query.
Parameters
query (str) β The query to search for.
Returns
A list of relevant documents.
Return type
List[langchain.schema.Document]
merge_documents(query)[source]ο
Merge the results of the retrievers.
Parameters
query (str) β The query to search for.
Returns
A list of merged documents.
Return type
List[langchain.schema.Document]
async amerge_documents(query)[source]ο
Asynchronously merge the results of the retrievers.
Parameters
query (str) β The query to search for.
Returns
A list of merged documents.
Return type
List[langchain.schema.Document]
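A wiring sketch (retriever_a and retriever_b are placeholders for any two BaseRetriever instances):
from langchain.retrievers import MergerRetriever

merger = MergerRetriever(retrievers=[retriever_a, retriever_b])  # placeholders
docs = merger.get_relevant_documents("my query")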
class langchain.retrievers.MetalRetriever(client, params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Metal API.
Parameters
client (Any) β
params (Optional[dict]) β
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.MilvusRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Milvus API.
Parameters
embedding_function (langchain.embeddings.base.Embeddings) β
collection_name (str) β
connection_args (Optional[Dict[str, Any]]) β
consistency_level (str) β
search_params (Optional[dict]) β
add_texts(texts, metadatas=None)[source]ο
Add text to the Milvus store
Parameters
texts (List[str]) β The text
metadatas (List[dict]) β Metadata dicts, must line up with existing store
Return type
None
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
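A construction sketch, assuming a Milvus instance reachable with the default connection args (the collection name is the documented default):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import MilvusRetriever

retriever = MilvusRetriever(
    embedding_function=OpenAIEmbeddings(),
    collection_name="LangChainCollection",
)
docs = retriever.get_relevant_documents("my query")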
class langchain.retrievers.MultiQueryRetriever(retriever, llm_chain, verbose=True, parser_key='lines')[source]ο
Bases: langchain.schema.BaseRetriever
Given a user query, use an LLM to write a set of queries.
Retrieve docs for each query. Return the unique union of all retrieved docs.
Parameters
retriever (langchain.schema.BaseRetriever) β
llm_chain (langchain.chains.llm.LLMChain) β
verbose (bool) β
parser_key (str) β
Return type
None
classmethod from_llm(retriever, llm, prompt=PromptTemplate(input_variables=['question'], output_parser=None, partial_variables={}, template='You are an AI language model assistant. Your task is \nto generate 3 different versions of the given user \nquestion to retrieve relevant documents from a vector database. \nBy generating multiple perspectives on the user question, \nyour goal is to help the user overcome some of the limitations \nof distance-based similarity search. Provide these alternative \nquestions seperated by newlines. Original question: {question}', template_format='f-string', validate_template=True), parser_key='lines')[source]ο
Initialize from llm using default template.
Parameters
retriever (langchain.schema.BaseRetriever) β retriever to query documents from
llm (langchain.llms.base.BaseLLM) β llm for query generation using DEFAULT_QUERY_PROMPT
prompt (langchain.prompts.prompt.PromptTemplate) β
parser_key (str) β
Returns
MultiQueryRetriever
Return type
langchain.retrievers.multi_query.MultiQueryRetriever
get_relevant_documents(question)[source]ο
Get relevant documents given a user query.
Parameters
question (str) β user query
Returns
Unique union of relevant documents from all generated queries
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
generate_queries(question)[source]ο
Generate queries based upon user input.
Parameters
question (str) β user query
Returns
List of LLM generated queries that are similar to the user input
Return type
List[str]
retrieve_documents(queries)[source]ο
Run all LLM generated queries.
Parameters
queries (List[str]) β query list
Returns
List of retrieved Documents
Return type
List[langchain.schema.Document]
unique_union(documents)[source]ο
Get unique Documents.
Parameters
documents (List[langchain.schema.Document]) β List of retrieved Documents
Returns
List of unique retrieved Documents
Return type
List[langchain.schema.Document]
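A sketch of from_llm (vectorstore is a placeholder for any vector store; as_retriever() yields its retriever):
from langchain.llms import OpenAI
from langchain.retrievers import MultiQueryRetriever

retriever = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(),  # placeholder vector store
    llm=OpenAI(),
)
docs = retriever.get_relevant_documents("What does the paper say about regularization?")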
class langchain.retrievers.PineconeHybridSearchRetriever(*, embeddings, sparse_encoder=None, index=None, top_k=4, alpha=0.5)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
sparse_encoder (Any) β
index (Any) β
top_k (int) β
alpha (float) β
Return type
None
attribute alpha: float = 0.5ο
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute sparse_encoder: Any = Noneο
attribute top_k: int = 4ο
add_texts(texts, ids=None, metadatas=None)[source]ο
Parameters
texts (List[str]) β
ids (Optional[List[str]]) β
metadatas (Optional[List[dict]]) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
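A sketch assuming a Pinecone index and a fitted sparse encoder already exist (index and bm25_encoder are placeholders):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import PineconeHybridSearchRetriever

retriever = PineconeHybridSearchRetriever(
    embeddings=OpenAIEmbeddings(),
    sparse_encoder=bm25_encoder,  # placeholder: a fitted sparse encoder
    index=index,                  # placeholder: an existing Pinecone index
)
retriever.add_texts(["hello world", "foo bar"])
docs = retriever.get_relevant_documents("hello")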
class langchain.retrievers.PubMedRetriever(*, top_k_results=3, load_max_docs=25, doc_content_chars_max=2000, load_all_available_meta=False, email='your_email@example.com', base_url_esearch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?', base_url_efetch='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?', max_retry=5, sleep_time=0.2, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.pupmed.PubMedAPIWrapper
It is effectively a wrapper for PubMedAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all PubMedAPIWrapper arguments without any change.
Parameters
top_k_results (int) β
load_max_docs (int) β
doc_content_chars_max (int) β
load_all_available_meta (bool) β
email (str) β
base_url_esearch (str) β
base_url_efetch (str) β
max_retry (int) β
sleep_time (float) β
ARXIV_MAX_QUERY_LENGTH (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
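A minimal usage sketch (the query is illustrative):
from langchain.retrievers import PubMedRetriever

retriever = PubMedRetriever(top_k_results=3)
docs = retriever.get_relevant_documents("mRNA vaccine efficacy")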
class langchain.retrievers.RemoteLangChainRetriever(*, url, headers=None, input_key='message', response_key='response', page_content_key='page_content', metadata_key='metadata')[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
url (str) β
headers (Optional[dict]) β
input_key (str) β
response_key (str) β
page_content_key (str) β
metadata_key (str) β
Return type
None
attribute headers: Optional[dict] = Noneο
attribute input_key: str = 'message'ο
attribute metadata_key: str = 'metadata'ο
attribute page_content_key: str = 'page_content'ο
attribute response_key: str = 'response'ο
attribute url: str [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.SVMRetriever(*, embeddings, index=None, texts, k=4, relevancy_threshold=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
SVM Retriever.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
index (Any) β
texts (List[str]) β
k (int) β
relevancy_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute index: Any = Noneο
attribute k: int = 4ο
attribute relevancy_threshold: Optional[float] = Noneο
attribute texts: List[str] [Required]ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_texts(texts, embeddings, **kwargs)[source]ο
Parameters
texts (List[str]) β
embeddings (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.retrievers.svm.SVMRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.SelfQueryRetriever(*, vectorstore, llm_chain, search_type='similarity', search_kwargs=None, structured_query_translator, verbose=False, use_original_query=False)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever that wraps around a vector store and uses an LLM to generate
the vector store queries.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
llm_chain (langchain.chains.llm.LLMChain) β
search_type (str) β
search_kwargs (dict) β
structured_query_translator (langchain.chains.query_constructor.ir.Visitor) β
verbose (bool) β
use_original_query (bool) β
Return type
None
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
The LLMChain for generating the vector store queries.
attribute search_kwargs: dict [Optional]ο
Keyword arguments to pass in to the vector store search.
attribute search_type: str = 'similarity'ο
The search type to perform on the vector store.
attribute structured_query_translator: langchain.chains.query_constructor.ir.Visitor [Required]ο
Translator for turning internal query language into vectorstore search params.
attribute use_original_query: bool = Falseο
Use the original query instead of the revised query from the LLM.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
The underlying vector store from which documents will be retrieved.
attribute verbose: bool = Falseο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_llm(llm, vectorstore, document_contents, metadata_field_info, structured_query_translator=None, chain_kwargs=None, enable_limit=False, use_original_query=False, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
vectorstore (langchain.vectorstores.base.VectorStore) β
document_contents (str) β
metadata_field_info (List[langchain.chains.query_constructor.schema.AttributeInfo]) β
structured_query_translator (Optional[langchain.chains.query_constructor.ir.Visitor]) β
chain_kwargs (Optional[Dict]) β
enable_limit (bool) β
use_original_query (bool) β
kwargs (Any) β
Return type
langchain.retrievers.self_query.base.SelfQueryRetriever
get_relevant_documents(query, callbacks=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
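A sketch of from_llm (the document description, metadata fields, and vectorstore are illustrative placeholders):
from langchain.chains.query_constructor.schema import AttributeInfo
from langchain.llms import OpenAI
from langchain.retrievers import SelfQueryRetriever

metadata_field_info = [
    AttributeInfo(name="year", description="Year the movie was released", type="integer"),
]
retriever = SelfQueryRetriever.from_llm(
    llm=OpenAI(),
    vectorstore=vectorstore,  # placeholder: any supported vector store
    document_contents="Brief summaries of movies",
    metadata_field_info=metadata_field_info,
)
docs = retriever.get_relevant_documents("movies released after 2000")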
class langchain.retrievers.TFIDFRetriever(*, vectorizer=None, docs, tfidf_array=None, k=4)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Parameters
vectorizer (Any) β
docs (List[langchain.schema.Document]) β
tfidf_array (Any) β
k (int) β
Return type
None
attribute docs: List[langchain.schema.Document] [Required]ο
attribute k: int = 4ο
attribute tfidf_array: Any = Noneο
attribute vectorizer: Any = Noneο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
classmethod from_documents(documents, *, tfidf_params=None, **kwargs)[source]ο
Parameters
documents (Iterable[langchain.schema.Document]) β
tfidf_params (Optional[Dict[str, Any]]) β
kwargs (Any) β
Return type
langchain.retrievers.tfidf.TFIDFRetriever
classmethod from_texts(texts, metadatas=None, tfidf_params=None, **kwargs)[source]ο
Parameters
texts (Iterable[str]) β
metadatas (Optional[Iterable[dict]]) β
tfidf_params (Optional[Dict[str, Any]]) β
kwargs (Any) β
Return type
langchain.retrievers.tfidf.TFIDFRetriever
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
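A minimal sketch using from_texts (the texts are illustrative):
from langchain.retrievers import TFIDFRetriever

retriever = TFIDFRetriever.from_texts(["foo", "bar", "world"])
docs = retriever.get_relevant_documents("foo")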
class langchain.retrievers.TimeWeightedVectorStoreRetriever(*, vectorstore, search_kwargs=None, memory_stream=None, decay_rate=0.01, k=4, other_score_keys=[], default_salience=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever combining embedding similarity with recency.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
search_kwargs (dict) β
memory_stream (List[langchain.schema.Document]) β
decay_rate (float) β
k (int) β
other_score_keys (List[str]) β
default_salience (Optional[float]) β
Return type
None
attribute decay_rate: float = 0.01ο
The exponential decay factor used as (1.0-decay_rate)**(hrs_passed).
attribute default_salience: Optional[float] = Noneο
The salience to assign memories not retrieved from the vector store.
None assigns no salience to documents not fetched from the vector store.
attribute k: int = 4ο
The maximum number of documents to retrieve in a given call.
attribute memory_stream: List[langchain.schema.Document] [Optional]ο
The memory_stream of documents to search through.
attribute other_score_keys: List[str] = []ο
Other keys in the metadata to factor into the score, e.g. βimportanceβ.
attribute search_kwargs: dict [Optional]ο
Keyword arguments to pass to the vectorstore similarity search.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
The vectorstore to store documents and determine salience.
async aadd_documents(documents, **kwargs)[source]ο
Add documents to vectorstore.
Parameters
documents (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
add_documents(documents, **kwargs)[source]ο
Add documents to vectorstore.
Parameters
documents (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
async aget_relevant_documents(query)[source]ο
Return documents that are relevant to the query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Return documents that are relevant to the query.
Parameters
query (str) β
Return type
List[langchain.schema.Document]
get_salient_docs(query)[source]ο
Return documents that are salient to the query.
Parameters
query (str) β
Return type
Dict[int, Tuple[langchain.schema.Document, float]]
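A wiring sketch (vectorstore is a placeholder for any vector store that returns relevance scores; the decay rate is illustrative):
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.schema import Document

retriever = TimeWeightedVectorStoreRetriever(
    vectorstore=vectorstore,  # placeholder
    decay_rate=0.01,
    k=4,
)
retriever.add_documents([Document(page_content="hello world", metadata={})])
docs = retriever.get_relevant_documents("hello")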
class langchain.retrievers.VespaRetriever(app, body, content_field, metadata_fields=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses Vespa.
Parameters
app (Vespa) β
body (Dict) β
content_field (str) β
metadata_fields (Optional[Sequence[str]]) β
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents_with_filter(query, *, _filter=None)[source]ο
Parameters
query (str) β
_filter (Optional[str]) β
Return type
List[langchain.schema.Document]
classmethod from_params(url, content_field, *, k=None, metadata_fields=(), sources=None, _filter=None, yql=None, **kwargs)[source]ο
Instantiate retriever from params.
Parameters
url (str) β Vespa app URL.
content_field (str) β Field in results to return as Document page_content.
k (Optional[int]) β Number of Documents to return. Defaults to None.
metadata_fields (Sequence[str] or "*") β Fields in results to include in
document metadata. Defaults to empty tuple ().
sources (Sequence[str] or "*" or None) β Sources to retrieve
from. Defaults to None.
_filter (Optional[str]) β Document filter condition expressed in YQL.
Defaults to None.
yql (Optional[str]) β Full YQL query to be used. Should not be specified
if _filter or sources are specified. Defaults to None.
kwargs (Any) β Keyword arguments added to query body.
Return type
langchain.retrievers.vespa_retriever.VespaRetriever
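A sketch of from_params (the URL and field names are placeholders):
from langchain.retrievers import VespaRetriever

retriever = VespaRetriever.from_params(
    url="https://my-vespa-app.example.com",  # placeholder
    content_field="body",                    # placeholder
    k=5,
)
docs = retriever.get_relevant_documents("my query")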
class langchain.retrievers.WeaviateHybridSearchRetriever(client, index_name, text_key, alpha=0.5, k=4, attributes=None, create_schema_if_missing=True)[source]ο
Bases: langchain.schema.BaseRetriever
Parameters
client (Any) β
index_name (str) β
text_key (str) β
alpha (float) β
k (int) β
attributes (Optional[List[str]]) β
create_schema_if_missing (bool) β
class Config[source]ο
Bases: object
Configuration for this pydantic object.
extra = 'forbid'ο
arbitrary_types_allowed = Trueο
add_documents(docs, **kwargs)[source]ο
Upload documents to Weaviate.
Parameters
docs (List[langchain.schema.Document]) β
kwargs (Any) β
Return type
List[str]
get_relevant_documents(query, where_filter=None)[source]ο
Look up similar documents in Weaviate.
Parameters
query (str) β
where_filter (Optional[Dict[str, object]]) β
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query, where_filter=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
where_filter (Optional[Dict[str, object]]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
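A construction sketch, assuming a running Weaviate instance (the client URL and schema names are placeholders):
import weaviate
from langchain.retrievers import WeaviateHybridSearchRetriever

client = weaviate.Client(url="http://localhost:8080")  # placeholder
retriever = WeaviateHybridSearchRetriever(
    client=client,
    index_name="LangChain",  # placeholder class name
    text_key="text",
)
docs = retriever.get_relevant_documents("my query")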
class langchain.retrievers.WikipediaRetriever(*, wiki_client=None, top_k_results=3, lang='en', load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: langchain.schema.BaseRetriever, langchain.utilities.wikipedia.WikipediaAPIWrapper
It is effectively a wrapper for WikipediaAPIWrapper.
It wraps load() to get_relevant_documents().
It uses all WikipediaAPIWrapper arguments without any change.
Parameters
wiki_client (Any) β
top_k_results (int) β
lang (str) β
load_all_available_meta (bool) β
doc_content_chars_max (int) β
Return type
None
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
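A minimal usage sketch (the query is illustrative):
from langchain.retrievers import WikipediaRetriever

retriever = WikipediaRetriever(top_k_results=2, lang="en")
docs = retriever.get_relevant_documents("Alan Turing")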
class langchain.retrievers.ZepRetriever(session_id, url, top_k=None)[source]ο
Bases: langchain.schema.BaseRetriever
A Retriever implementation for the Zep long-term memory store. Search your
userβs long-term chat history with Zep.
Note: You will need to provide the userβs session_id to use this retriever.
More on Zep:
Zep provides long-term conversation storage for LLM apps. The server stores,
summarizes, embeds, indexes, and enriches conversational AI chat
histories, and exposes them via simple, low-latency APIs.
For server installation instructions, see:
https://getzep.github.io/deployment/quickstart/
Parameters
session_id (str) β
url (str) β
top_k (Optional[int]) β
get_relevant_documents(query, metadata=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
metadata (Optional[Dict]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query, metadata=None)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
metadata (Optional[Dict]) β
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
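A construction sketch (the session id and server URL are placeholders):
from langchain.retrievers import ZepRetriever

retriever = ZepRetriever(
    session_id="user-123",        # placeholder: the user's chat session id
    url="http://localhost:8000",  # placeholder: your Zep server
    top_k=5,
)
docs = retriever.get_relevant_documents("What did we discuss about pricing?")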
class langchain.retrievers.ZillizRetriever(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', search_params=None)[source]ο
Bases: langchain.schema.BaseRetriever
Retriever that uses the Zilliz API.
Parameters
embedding_function (langchain.embeddings.base.Embeddings) β
collection_name (str) β
connection_args (Optional[Dict[str, Any]]) β
consistency_level (str) β
search_params (Optional[dict]) β
add_texts(texts, metadatas=None)[source]ο
Add text to the Zilliz store
Parameters
texts (List[str]) β The text
metadatas (List[dict]) β Metadata dicts, must line up with existing store
Return type
None
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
class langchain.retrievers.DocArrayRetriever(*, index=None, embeddings, search_field, content_field, search_type=SearchType.similarity, top_k=1, filters=None)[source]ο
Bases: langchain.schema.BaseRetriever, pydantic.main.BaseModel
Retriever class for DocArray Document Indices.
Currently supports 5 backends:
InMemoryExactNNIndex, HnswDocumentIndex, QdrantDocumentIndex,
ElasticDocIndex, and WeaviateDocumentIndex.
Parameters
index (Any) β
embeddings (langchain.embeddings.base.Embeddings) β
search_field (str) β
content_field (str) β
search_type (langchain.retrievers.docarray.SearchType) β
top_k (int) β
filters (Optional[Any]) β
Return type
None
indexο
One of the above-mentioned index instances
embeddingsο
Embedding model to represent text as vectors
search_fieldο
Field to consider for searching in the documents.
Should be an embedding/vector/tensor.
content_fieldο
Field that represents the main content in your document schema.
Will be used as a page_content. Everything else will go into metadata.
search_typeο
Type of search to perform (similarity / mmr)
filtersο
Filters applied for document retrieval.
top_kο
Number of documents to return
attribute content_field: str [Required]ο
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
attribute filters: Optional[Any] = Noneο
attribute index: Any = Noneο
attribute search_field: str [Required]ο
attribute search_type: langchain.retrievers.docarray.SearchType = SearchType.similarityο
attribute top_k: int = 1ο
async aget_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
get_relevant_documents(query)[source]ο
Get documents relevant for a query.
Parameters
query (str) β string to find relevant documents for
Returns
List of relevant documents
Return type
List[langchain.schema.Document]
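A sketch with the in-memory backend (the document schema and embedding size are illustrative; any of the five supported indices is wired the same way):
from docarray import BaseDoc
from docarray.index import InMemoryExactNNIndex
from docarray.typing import NdArray
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import DocArrayRetriever

class MyDoc(BaseDoc):
    title: str
    title_embedding: NdArray[1536]  # illustrative embedding size

index = InMemoryExactNNIndex[MyDoc]()
# ... index documents with precomputed title embeddings here ...
retriever = DocArrayRetriever(
    index=index,
    embeddings=OpenAIEmbeddings(),
    search_field="title_embedding",
    content_field="title",
)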
Document compressorsο
class langchain.retrievers.document_compressors.DocumentCompressorPipeline(*, transformers)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Document compressor that uses a pipeline of transformers.
Parameters
transformers (List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]]) β
Return type
None
attribute transformers: List[Union[langchain.schema.BaseDocumentTransformer, langchain.retrievers.document_compressors.base.BaseDocumentCompressor]] [Required]ο
List of document filters that are chained together and run in sequence.
async acompress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
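Example (a sketch of a three-stage pipeline that splits, deduplicates, and filters documents; the chunk size and similarity threshold are assumptions, and OPENAI_API_KEY is assumed to be set):
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import (
    DocumentCompressorPipeline,
    EmbeddingsFilter,
)
from langchain.schema import Document
from langchain.text_splitter import CharacterTextSplitter

embeddings = OpenAIEmbeddings()
pipeline = DocumentCompressorPipeline(
    transformers=[
        CharacterTextSplitter(chunk_size=300, chunk_overlap=0, separator=". "),
        EmbeddingsRedundantFilter(embeddings=embeddings),
        EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.76),
    ]
)
docs = [
    Document(page_content="The president spoke about the economy. He also thanked the staff."),
    Document(page_content="An unrelated note about gardening. Tomatoes need sun."),
]
compressed = pipeline.compress_documents(docs, "What did the president say about the economy?")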
class langchain.retrievers.document_compressors.EmbeddingsFilter(*, embeddings, similarity_fn=<function cosine_similarity>, k=20, similarity_threshold=None)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
similarity_fn (Callable) β
k (Optional[int]) β
similarity_threshold (Optional[float]) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
Embeddings to use for embedding document contents and queries.
attribute k: Optional[int] = 20ο
The number of relevant documents to return. Can be set to None, in which case
similarity_threshold must be specified. Defaults to 20.
attribute similarity_fn: Callable = <function cosine_similarity>ο
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
attribute similarity_threshold: Optional[float] = Noneο
Threshold for determining when two documents are similar enough
to be considered redundant. Defaults to None, must be specified if k is set
to None.
async acompress_documents(documents, query)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Filter documents based on similarity of their embeddings to the query.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
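Example (a minimal sketch; the 0.76 threshold is an assumption and OPENAI_API_KEY is assumed to be set):
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers.document_compressors import EmbeddingsFilter
from langchain.schema import Document

# keep only documents whose embedding similarity to the query exceeds the threshold
embeddings_filter = EmbeddingsFilter(
    embeddings=OpenAIEmbeddings(), similarity_threshold=0.76
)
docs = [Document(page_content="Paris is the capital of France.")]
compressed = embeddings_filter.compress_documents(docs, "What is the capital of France?")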
class langchain.retrievers.document_compressors.LLMChainExtractor(*, llm_chain, get_input=<function default_get_input>)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
get_input (Callable[[str, langchain.schema.Document], dict]) β
Return type
None
attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>ο
Callable for constructing the chain input from the query and a Document.
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
LLM wrapper to use for compressing documents.
async acompress_documents(documents, query)[source]ο
Compress page content of raw documents asynchronously.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Compress page content of raw documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
classmethod from_llm(llm, prompt=None, get_input=None, llm_chain_kwargs=None)[source]ο
Initialize from LLM.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.prompt.PromptTemplate]) β
get_input (Optional[Callable[[str, langchain.schema.Document], str]]) β
llm_chain_kwargs (Optional[dict]) β
Return type
langchain.retrievers.document_compressors.chain_extract.LLMChainExtractor
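Example (a sketch; the model and temperature are assumptions):
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document

compressor = LLMChainExtractor.from_llm(OpenAI(temperature=0))
docs = [Document(page_content="Paris is the capital of France. It rains often in Brittany.")]
# only the query-relevant spans of each page_content are kept
compressed = compressor.compress_documents(docs, "What is the capital of France?")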
class langchain.retrievers.document_compressors.LLMChainFilter(*, llm_chain, get_input=<function default_get_input>)[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Filter that drops documents that aren't relevant to the query.
Parameters
llm_chain (langchain.chains.llm.LLMChain) β
get_input (Callable[[str, langchain.schema.Document], dict]) β
Return type
None
attribute get_input: Callable[[str, langchain.schema.Document], dict] = <function default_get_input>ο
Callable for constructing the chain input from the query and a Document.
attribute llm_chain: langchain.chains.llm.LLMChain [Required]ο
LLM wrapper to use for filtering documents.
The chain prompt is expected to have a BooleanOutputParser.
async acompress_documents(documents, query)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Filter down documents based on their relevance to the query.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
classmethod from_llm(llm, prompt=None, **kwargs)[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
prompt (Optional[langchain.prompts.base.BasePromptTemplate]) β
kwargs (Any) β
Return type
langchain.retrievers.document_compressors.chain_filter.LLMChainFilter
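Example (a sketch; unlike LLMChainExtractor, whole documents are kept or dropped rather than rewritten):
from langchain.llms import OpenAI
from langchain.retrievers.document_compressors import LLMChainFilter
from langchain.schema import Document

doc_filter = LLMChainFilter.from_llm(OpenAI(temperature=0))
docs = [
    Document(page_content="Paris is the capital of France."),
    Document(page_content="Tomatoes need six hours of sun."),
]
filtered = doc_filter.compress_documents(docs, "What is the capital of France?")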
class langchain.retrievers.document_compressors.CohereRerank(*, client, top_n=3, model='rerank-english-v2.0')[source]ο
Bases: langchain.retrievers.document_compressors.base.BaseDocumentCompressor
Parameters
client (Client) β
top_n (int) β
model (str) β
Return type
None
attribute client: Client [Required]ο
attribute model: str = 'rerank-english-v2.0'ο
attribute top_n: int = 3ο
async acompress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
compress_documents(documents, query)[source]ο
Compress retrieved documents given the query context.
Parameters
documents (Sequence[langchain.schema.Document]) β
query (str) β
Return type
Sequence[langchain.schema.Document]
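Example (a sketch; it assumes the cohere package is installed and COHERE_API_KEY is set, in which case the constructor is expected to build the client from the environment):
from langchain.retrievers.document_compressors import CohereRerank
from langchain.schema import Document

reranker = CohereRerank(top_n=2)
docs = [
    Document(page_content="Paris is the capital of France."),
    Document(page_content="Berlin is the capital of Germany."),
    Document(page_content="Tomatoes need sun."),
]
reranked = reranker.compress_documents(docs, "capital of France")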
Example Selectorο
Logic for selecting examples to include in prompts.
class langchain.prompts.example_selector.LengthBasedExampleSelector(*, examples, example_prompt, get_text_length=<function _get_length_based>, max_length=2048, example_text_lengths=[])[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select examples based on length.
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
get_text_length (Callable[[str], int]) β
max_length (int) β
example_text_lengths (List[int]) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute get_text_length: Callable[[str], int] = <function _get_length_based>ο
Function to measure prompt length. Defaults to word count.
attribute max_length: int = 2048ο
Max length for the prompt, beyond which examples are cut.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Select which examples to use based on the input lengths.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
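Example (a short sketch mirroring typical usage; the max_length of 25 words is an assumption):
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector import LengthBasedExampleSelector

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "energetic", "output": "lethargic"},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
selector = LengthBasedExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    max_length=25,  # measured in words by the default length function
)
# a longer input leaves room for fewer examples
selected = selector.select_examples({"input": "big and huge and massive"})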
class langchain.prompts.example_selector.MaxMarginalRelevanceExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None, fetch_k=20)[source]ο
Bases: langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
ExampleSelector that selects examples based on Max Marginal Relevance.
This was shown to improve performance in this paper:
https://arxiv.org/pdf/2211.13892.pdf
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
fetch_k (int) β
Return type
None
attribute fetch_k: int = 20ο
Number of examples to fetch to rerank.
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, fetch_k=20, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
fetch_k (int) β
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
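Example (a sketch; it assumes faiss and an OpenAI key are available, and the antonym example dicts are illustrative):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import MaxMarginalRelevanceExampleSelector
from langchain.vectorstores import FAISS

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
    {"input": "sunny", "output": "gloomy"},
]
selector = MaxMarginalRelevanceExampleSelector.from_examples(
    examples,
    OpenAIEmbeddings(),
    FAISS,
    k=2,
    fetch_k=3,  # fetch 3 candidates, rerank down to 2 diverse ones
)
selected = selector.select_examples({"input": "worried"})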
class langchain.prompts.example_selector.NGramOverlapExampleSelector(*, examples, example_prompt, threshold=-1.0)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Select and order examples based on ngram overlap score (sentence_bleu score).
https://www.nltk.org/_modules/nltk/translate/bleu_score.html
https://aclanthology.org/P02-1040.pdf
Parameters
examples (List[dict]) β
example_prompt (langchain.prompts.prompt.PromptTemplate) β
threshold (float) β
Return type
None
attribute example_prompt: langchain.prompts.prompt.PromptTemplate [Required]ο
Prompt template used to format the examples.
attribute examples: List[dict] [Required]ο
A list of the examples that the prompt template expects.
attribute threshold: float = -1.0ο
Threshold at which algorithm stops. Set to -1.0 by default.
For negative threshold:
select_examples sorts examples by ngram_overlap_score, but excludes none.
For threshold greater than 1.0:
select_examples excludes all examples, and returns an empty list.
For threshold equal to 0.0:
select_examples sorts examples by ngram_overlap_score,
and excludes examples with no ngram overlap with input.
add_example(example)[source]ο
Add new example to list.
Parameters
example (Dict[str, str]) β
Return type
None
select_examples(input_variables)[source]ο
Return list of examples sorted by ngram_overlap_score with input.
Descending order.
Excludes any examples with ngram_overlap_score less than or equal to threshold.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
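Example (a sketch; the sentence_bleu scoring relies on nltk, which is assumed to be installed):
from langchain.prompts import PromptTemplate
from langchain.prompts.example_selector import NGramOverlapExampleSelector

examples = [
    {"input": "See Spot run.", "output": "Ver correr a Spot."},
    {"input": "My dog barks.", "output": "Mi perro ladra."},
]
example_prompt = PromptTemplate(
    input_variables=["input", "output"],
    template="Input: {input}\nOutput: {output}",
)
selector = NGramOverlapExampleSelector(
    examples=examples,
    example_prompt=example_prompt,
    threshold=-1.0,  # sort all examples by overlap, exclude none
)
selected = selector.select_examples({"sentence": "Spot can run fast."})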
class langchain.prompts.example_selector.SemanticSimilarityExampleSelector(*, vectorstore, k=4, example_keys=None, input_keys=None)[source]ο
Bases: langchain.prompts.example_selector.base.BaseExampleSelector, pydantic.main.BaseModel
Example selector that selects examples based on SemanticSimilarity.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
k (int) β
example_keys (Optional[List[str]]) β
input_keys (Optional[List[str]]) β
Return type
None
attribute example_keys: Optional[List[str]] = Noneο
Optional keys to filter examples to.
attribute input_keys: Optional[List[str]] = Noneο
Optional keys to filter input to. If provided, the search is based on
the input variables instead of all variables.
attribute k: int = 4ο
Number of examples to select.
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
VectorStore that contains information about examples.
add_example(example)[source]ο
Add new example to vectorstore.
Parameters
example (Dict[str, str]) β
Return type
str
classmethod from_examples(examples, embeddings, vectorstore_cls, k=4, input_keys=None, **vectorstore_cls_kwargs)[source]ο
Create k-shot example selector using example list and embeddings.
Reshuffles examples dynamically based on query similarity.
Parameters
examples (List[dict]) β List of examples to use in the prompt.
embeddings (langchain.embeddings.base.Embeddings) β An initialized embedding API interface, e.g. OpenAIEmbeddings().
vectorstore_cls (Type[langchain.vectorstores.base.VectorStore]) β A vector store DB interface class, e.g. FAISS.
k (int) β Number of examples to select
input_keys (Optional[List[str]]) β If provided, the search is based on the input variables
instead of all variables.
vectorstore_cls_kwargs (Any) β optional kwargs containing url for vector store
Returns
The ExampleSelector instantiated, backed by a vector store.
Return type
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector
select_examples(input_variables)[source]ο
Select which examples to use based on semantic similarity.
Parameters
input_variables (Dict[str, str]) β
Return type
List[dict]
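Example (a sketch; Chroma is one possible vector store backend, and OPENAI_API_KEY is assumed to be set):
from langchain.embeddings import OpenAIEmbeddings
from langchain.prompts.example_selector import SemanticSimilarityExampleSelector
from langchain.vectorstores import Chroma

examples = [
    {"input": "happy", "output": "sad"},
    {"input": "tall", "output": "short"},
]
selector = SemanticSimilarityExampleSelector.from_examples(
    examples, OpenAIEmbeddings(), Chroma, k=1
)
selected = selector.select_examples({"input": "worried"})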
Callbacksο
Callback handlers that allow listening to events in LangChain.
class langchain.callbacks.AimCallbackHandler(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True)[source]ο
Bases: langchain.callbacks.aim_callback.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Aim.
Parameters
repo (str, optional) β Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) β Sets Run's experiment property.
"default" if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) β Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) β Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
Return type
None
This handler utilizes the associated callback method, formats the input of
each callback function with metadata regarding the state of the LLM run,
and then logs the response to Aim.
setup(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(repo=None, experiment_name=None, system_tracking_interval=10, log_system_params=True, langchain_asset=None, reset=True, finish=False)[source]ο
Flush the tracker and reset the session.
Parameters
repo (str, optional) β Aim repository path or Repo object to which
Run object is bound. If skipped, default Repo is used.
experiment_name (str, optional) β Sets Run's experiment property.
"default" if not specified. Can be used later to query runs/sequences.
system_tracking_interval (int, optional) β Sets the tracking interval
in seconds for system usage metrics (CPU, Memory, etc.). Set to None
to disable system metrics tracking.
log_system_params (bool, optional) β Enable/Disable logging of system
params such as installed packages, git info, environment variables, etc.
langchain_asset (Any) β The langchain asset to save.
reset (bool) β Whether to reset the session.
finish (bool) β Whether to finish the run.
Returns β None
Return type
None
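Example (a sketch; it assumes the aim package is installed, and repo="." is a placeholder for a local Aim repository):
from langchain.callbacks import AimCallbackHandler
from langchain.llms import OpenAI

aim_callback = AimCallbackHandler(repo=".", experiment_name="scenario 1")
llm = OpenAI(temperature=0, callbacks=[aim_callback])
llm("Tell me a joke")
# close the run and log the LLM configuration as an asset
aim_callback.flush_tracker(langchain_asset=llm, finish=True)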
class langchain.callbacks.ArgillaCallbackHandler(dataset_name, workspace_name=None, api_url=None, api_key=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs into Argilla.
Parameters
dataset_name (str) β name of the FeedbackDataset in Argilla. Note that it must
exist in advance. If you need help on how to create a FeedbackDataset in
Argilla, please visit
https://docs.argilla.io/en/latest/guides/llms/practical_guides/use_argilla_callback_in_langchain.html.
workspace_name (Optional[str]) β name of the workspace in Argilla where the specified
FeedbackDataset lives in. Defaults to None, which means that the
default workspace will be used.
api_url (Optional[str]) β URL of the Argilla Server that we want to use, and where the
FeedbackDataset lives in. Defaults to None, which means that either
ARGILLA_API_URL environment variable or the default http://localhost:6900
will be used.
api_key (Optional[str]) β API Key to connect to the Argilla Server. Defaults to None, which
means that either ARGILLA_API_KEY environment variable or the default
argilla.apikey will be used.
Raises
ImportError β if the argilla package is not installed.
ConnectionError β if the connection to Argilla fails.
FileNotFoundError β if the FeedbackDataset retrieval from Argilla fails.
Return type
None
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.callbacks import ArgillaCallbackHandler
>>> argilla_callback = ArgillaCallbackHandler(
... dataset_name="my-dataset",
... workspace_name="my-workspace",
... api_url="http://localhost:6900",
... api_key="argilla.apikey",
... )
>>> llm = OpenAI(
... temperature=0,
... callbacks=[argilla_callback],
... verbose=True,
... openai_api_key="API_KEY_HERE",
... )
>>> llm.generate([
... "What is the best NLP-annotation tool out there? (no bias at all)",
... ])
"Argilla, no doubt about it."
on_llm_start(serialized, prompts, **kwargs)[source]ο
Save the prompts in memory when an LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing when a new token is generated.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Log records to Argilla when an LLM ends.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing when LLM outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
If the key input is in inputs, then save it in self.prompts using
either the parent_run_id or the run_id as the key. This is done so that
we don't log the same input prompt twice, once when the LLM starts and once
when the chain starts.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
If either the parent_run_id or the run_id is in self.prompts, then
log the outputs to Argilla, and pop the run from self.prompts. The behavior
differs if the output is a list or not.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing when LLM chain outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Do nothing when tool starts.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Do nothing when agent takes a specific action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Do nothing when tool ends.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Do nothing when tool outputs an error.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Do nothing
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Do nothing
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
class langchain.callbacks.ArizeCallbackHandler(model_id=None, model_version=None, SPACE_KEY=None, API_KEY=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Arize.
Parameters
model_id (Optional[str]) β
model_version (Optional[str]) β
SPACE_KEY (Optional[str]) β
API_KEY (Optional[str]) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Do nothing.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Do nothing.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Do nothing.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Do nothing.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
on_tool_end(output, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run on arbitrary text.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
class langchain.callbacks.AsyncIteratorCallbackHandler[source]ο
Bases: langchain.callbacks.base.AsyncCallbackHandler
Callback handler that returns an async iterator.
Return type
None
property always_verbose: boolο
queue: asyncio.queues.Queue[str]ο
done: asyncio.locks.Eventο
async on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
async on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
async on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
async on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
async aiter()[source]ο
Return type
AsyncIterator[str]
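Example (a sketch of how aiter() is typically consumed alongside a concurrent streaming generation; it assumes OPENAI_API_KEY is set):
import asyncio

from langchain.callbacks import AsyncIteratorCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

async def main() -> None:
    handler = AsyncIteratorCallbackHandler()
    llm = ChatOpenAI(streaming=True, callbacks=[handler])
    # start generation concurrently, then drain tokens as they arrive
    task = asyncio.create_task(
        llm.agenerate([[HumanMessage(content="Tell me a joke")]])
    )
    async for token in handler.aiter():
        print(token, end="", flush=True)
    await task

asyncio.run(main())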
class langchain.callbacks.ClearMLCallbackHandler(task_type='inference', project_name='langchain_callback_demo', tags=None, task_name=None, visualize=False, complexity_metrics=False, stream_logs=False)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to ClearML.
Parameters
job_type (str) β The type of clearml task such as "inference", "testing" or "qc"
project_name (str) β The clearml project name
tags (list) β Tags to add to the task
task_name (str) β Name of the clearml task
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics
stream_logs (bool) β Whether to stream callback actions to ClearML
task_type (Optional[str]) β
Return type
None
This handler utilizes the associated callback method, formats the input of
each callback function with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records
and action. It then logs the response to the ClearML console.
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
analyze_text(text)[source]ο
Analyze text using textstat and spacy.
Parameters
text (str) β The text to analyze.
Returns
A dictionary containing the complexity metrics.
Return type
(dict)
flush_tracker(name=None, langchain_asset=None, finish=False)[source]ο
Flush the tracker and set up the session.
Everything after this will be a new table.
Parameters
name (Optional[str]) β Name of the performed session so far, so that it is identifiable
langchain_asset (Any) β The langchain asset to save.
finish (bool) β Whether to finish the run.
Returns β None
Return type
None
class langchain.callbacks.CometCallbackHandler(task_type='inference', workspace=None, project_name=None, tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, stream_logs=True)[source]ο
Bases: langchain.callbacks.utils.BaseMetadataCallbackHandler, langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Comet.
Parameters
job_type (str) β The type of comet_ml task such as "inference",
"testing" or "qc"
project_name (str) β The comet_ml project name
tags (list) β Tags to add to the task
task_name (str) β Name of the comet_ml task
visualize (bool) β Whether to visualize the run.
complexity_metrics (bool) β Whether to log complexity metrics
stream_logs (bool) β Whether to stream callback actions to Comet
task_type (Optional[str]) β
workspace (Optional[str]) β
name (Optional[str]) β
visualizations (Optional[List[str]]) β
custom_metrics (Optional[Callable]) β
Return type
None
This handler utilizes the associated callback method, formats the input of
each callback function with metadata regarding the state of the LLM run,
and adds the response to the list of records for both the {method}_records
and action. It then logs the response to Comet.
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run when LLM generates a new token.
Parameters
token (str) β
kwargs (Any) β
Return type
None
on_llm_end(response, **kwargs)[source]ο
Run when LLM ends running.
Parameters
response (langchain.schema.LLMResult) β
kwargs (Any) β
Return type
None
on_llm_error(error, **kwargs)[source]ο
Run when LLM errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Run when chain starts running.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Run when chain ends running.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_error(error, **kwargs)[source]ο
Run when chain errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_tool_start(serialized, input_str, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
kwargs (Any) β
Return type
None
on_tool_end(output, **kwargs)[source]ο
Run when tool ends running.
Parameters
output (str) β
kwargs (Any) β
Return type
None
on_tool_error(error, **kwargs)[source]ο
Run when tool errors.
Parameters
error (Union[Exception, KeyboardInterrupt]) β
kwargs (Any) β
Return type
None
on_text(text, **kwargs)[source]ο
Run when agent is ending.
Parameters
text (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, **kwargs)[source]ο
Run when agent ends running.
Parameters
finish (langchain.schema.AgentFinish) β
kwargs (Any) β
Return type
None
on_agent_action(action, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
kwargs (Any) β
Return type
Any
flush_tracker(langchain_asset=None, task_type='inference', workspace=None, project_name='comet-langchain-demo', tags=None, name=None, visualizations=None, complexity_metrics=False, custom_metrics=None, finish=False, reset=False)[source]ο
Flush the tracker and set up the session.
Everything after this will be a new table.
Parameters
name (Optional[str]) β Name of the performed session so far, so that it is identifiable
langchain_asset (Any) β The langchain asset to save.
finish (bool) β Whether to finish the run.
Returns β None
task_type (Optional[str]) β
workspace (Optional[str]) β
project_name (Optional[str]) β
tags (Optional[Sequence]) β
visualizations (Optional[List[str]]) β
complexity_metrics (bool) β
custom_metrics (Optional[Callable]) β
reset (bool) β
Return type
None
class langchain.callbacks.FileCallbackHandler(filename, mode='a', color=None)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that writes to a file.
Parameters
filename (str) β
mode (str) β
color (Optional[str]) β
Return type
None
on_chain_start(serialized, inputs, **kwargs)[source]ο
Print out that we are entering a chain.
Parameters
serialized (Dict[str, Any]) β
inputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_chain_end(outputs, **kwargs)[source]ο
Print out that we finished a chain.
Parameters
outputs (Dict[str, Any]) β
kwargs (Any) β
Return type
None
on_agent_action(action, color=None, **kwargs)[source]ο
Run on agent action.
Parameters
action (langchain.schema.AgentAction) β
color (Optional[str]) β
kwargs (Any) β
Return type
Any
on_tool_end(output, color=None, observation_prefix=None, llm_prefix=None, **kwargs)[source]ο
If not the final action, print out observation.
Parameters
output (str) β
color (Optional[str]) β
observation_prefix (Optional[str]) β
llm_prefix (Optional[str]) β
kwargs (Any) β
Return type
None
on_text(text, color=None, end='', **kwargs)[source]ο
Run when agent ends.
Parameters
text (str) β
color (Optional[str]) β
end (str) β
kwargs (Any) β
Return type
None
on_agent_finish(finish, color=None, **kwargs)[source]ο
Run on agent end.
Parameters
finish (langchain.schema.AgentFinish) β
color (Optional[str]) β
kwargs (Any) β
Return type
None
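Example (a sketch; output.log is a placeholder path, and the file is opened in append mode by default):
from langchain.callbacks import FileCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

handler = FileCallbackHandler("output.log")
chain = LLMChain(
    llm=OpenAI(),
    prompt=PromptTemplate.from_template("1 + {number} = "),
    callbacks=[handler],
)
chain.run(number=2)  # chain start/end output is written to output.log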
class langchain.callbacks.FinalStreamingStdOutCallbackHandler(*, answer_prefix_tokens=None, strip_tokens=True, stream_prefix=False)[source]ο
Bases: langchain.callbacks.streaming_stdout.StreamingStdOutCallbackHandler
Callback handler for streaming in agents.
Only works with agents using LLMs that support streaming.
Only the final output of the agent will be streamed.
Parameters
answer_prefix_tokens (Optional[List[str]]) β
strip_tokens (bool) β
stream_prefix (bool) β
Return type
None
append_to_last_tokens(token)[source]ο
Parameters
token (str) β
Return type
None
check_if_answer_reached()[source]ο
Return type
bool
on_llm_start(serialized, prompts, **kwargs)[source]ο
Run when LLM starts running.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β
kwargs (Any) β
Return type
None
on_llm_new_token(token, **kwargs)[source]ο
Run on new LLM token. Only available when streaming is enabled.
Parameters
token (str) β
kwargs (Any) β
Return type
None
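Example (a sketch; the llm-math tool is an arbitrary choice to give the agent something to do):
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI

llm = OpenAI(
    streaming=True,
    callbacks=[FinalStreamingStdOutCallbackHandler()],
    temperature=0,
)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
# only tokens after the agent's final-answer prefix are streamed to stdout
agent.run("What is 2 raised to the 0.235 power?")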
class langchain.callbacks.HumanApprovalCallbackHandler(approve=<function _default_approve>, should_check=<function _default_true>)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback for manually validating values.
Parameters
approve (Callable[[Any], bool]) β
should_check (Callable[[Dict[str, Any]], bool]) β
raise_error: bool = Trueο
on_tool_start(serialized, input_str, *, run_id, parent_run_id=None, **kwargs)[source]ο
Run when tool starts running.
Parameters
serialized (Dict[str, Any]) β
input_str (str) β
run_id (uuid.UUID) β
parent_run_id (Optional[uuid.UUID]) β
kwargs (Any) β
Return type
Any
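Example (a sketch; the stdin prompt is a hypothetical approval gate, and the tool raises an error when approval is denied):
from langchain.callbacks import HumanApprovalCallbackHandler
from langchain.tools import ShellTool

def approve(input_str: str) -> bool:
    # hypothetical gate: ask on stdin before every tool invocation
    answer = input(f"Approve running the tool on {input_str!r}? (y/n) ")
    return answer.strip().lower() in ("y", "yes")

tool = ShellTool(callbacks=[HumanApprovalCallbackHandler(approve=approve)])
print(tool.run("echo Hello"))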
class langchain.callbacks.InfinoCallbackHandler(model_id=None, model_version=None, verbose=False)[source]ο
Bases: langchain.callbacks.base.BaseCallbackHandler
Callback Handler that logs to Infino.
Parameters
model_id (Optional[str]) β
model_version (Optional[str]) β
verbose (bool) β
Return type
None
on_llm_start(serialized, prompts, **kwargs)[source]ο
Log the prompts to Infino, and set start time and error flag.
Parameters
serialized (Dict[str, Any]) β
prompts (List[str]) β