property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.GooseAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-neo-20b', temperature=0.7, max_tokens=256, top_p=1, min_tokens=1, frequency_penalty=0, presence_penalty=0, n=1, model_kwargs=None, logit_bias=None, gooseai_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around GooseAI large language models.
To use, you should have the openai python package installed, and the
environment variable GOOSEAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import GooseAI
gooseai = GooseAI(model_name="gpt-neo-20b")
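A minimal usage sketch building on this example (assuming GOOSEAI_API_KEY is set in the environment; extra openai.create parameters that are not declared as fields can be supplied through model_kwargs):
from langchain.llms import GooseAI
gooseai = GooseAI(
    model_name="gpt-neo-20b",
    temperature=0.7,
    max_tokens=256,
    model_kwargs={"best_of": 1},  # illustrative extra create parameter
)
print(gooseai("Write a one-line poem about the sea."))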
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
min_tokens (int) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
model_kwargs (Dict[str, Any]) β
logit_bias (Optional[Dict[str, float]]) β
gooseai_api_key (Optional[str]) β
Return type
None
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute min_tokens: int = 1ο
The minimum number of tokens to generate in the completion.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-neo-20b'ο
Model name to use
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
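For example, a brief sketch of calling generate on the gooseai instance from the example above (the returned LLMResult groups generations per input prompt; attribute names taken from langchain.schema):
result = gooseai.generate(["Tell me a joke.", "Tell me a fact."])
first_completion = result.generations[0][0].text  # best completion for the first prompt
provider_info = result.llm_output  # provider-specific metadata, may be None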
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
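A saved configuration can later be restored; a hedged sketch, assuming the load_llm helper in langchain.llms.loading:
from langchain.llms.loading import load_llm
llm = load_llm("path/llm.yaml")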
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceEndpoint(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFaceHub Inference Endpoints.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation and text2text-generation for now.
Example
from langchain.llms import HuggingFaceEndpoint
endpoint_url = (
"https://abcdefghijklmnop.us-east-1.aws.endpoints.huggingface.cloud"
)
hf = HuggingFaceEndpoint(
endpoint_url=endpoint_url,
huggingfacehub_api_token="my-api-key"
)
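A hedged usage sketch extending the example above (the task value and any model_kwargs are assumptions that must match what the deployed endpoint supports):
hf = HuggingFaceEndpoint(
    endpoint_url=endpoint_url,
    huggingfacehub_api_token="my-api-key",
    task="text-generation",
    model_kwargs={"max_new_tokens": 64},  # assumed pass-through generation parameter
)
print(hf("Write a haiku about autumn."))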
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute endpoint_url: str = ''ο
Endpoint URL to use.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: Optional[str] = Noneο
Task to call the model with.
Should be a task that returns generated_text or summary_text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceHub(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, repo_id='gpt2', task=None, model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFaceHub models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Only supports text-generation, text2text-generation and summarization for now.
Example
from langchain.llms import HuggingFaceHub
hf = HuggingFaceHub(repo_id="gpt2", huggingfacehub_api_token="my-api-key")
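A sketch of passing generation parameters via model_kwargs (the repository id and parameter names are illustrative and depend on the chosen model and task):
hf = HuggingFaceHub(
    repo_id="google/flan-t5-base",
    task="text2text-generation",
    model_kwargs={"temperature": 0.5, "max_length": 64},  # assumed inference parameters
    huggingfacehub_api_token="my-api-key",
)
print(hf("Translate to German: How are you?"))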
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute repo_id: str = 'gpt2'ο
Model name to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute task: Optional[str] = Noneο
Task to call the model with.
Should be a task that returns generated_text or summary_text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFacePipeline(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline=None, model_id='gpt2', model_kwargs=None, pipeline_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HuggingFace Pipeline API.
To use, you should have the transformers python package installed.
Only supports text-generation, text2text-generation and summarization for now.
Example using from_model_id:
from langchain.llms import HuggingFacePipeline
hf = HuggingFacePipeline.from_model_id(
model_id="gpt2",
task="text-generation",
pipeline_kwargs={"max_new_tokens": 10},
)
Example passing pipeline in directly:
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
pipe = pipeline(
"text-generation", model=model, tokenizer=tokenizer, max_new_tokens=10
)
hf = HuggingFacePipeline(pipeline=pipe)
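Either way, the resulting wrapper is invoked like any other LLM (a minimal sketch continuing the example above):
print(hf("Once upon a time"))
result = hf.generate(["Once upon a time", "In a galaxy far, far away"])
print(result.generations[0][0].text)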
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline (Any) β
model_id (str) β
model_kwargs (Optional[dict]) β
pipeline_kwargs (Optional[dict]) β
Return type
None
attribute model_id: str = 'gpt2'ο
Model name to use.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments passed to the model.
attribute pipeline_kwargs: Optional[dict] = Noneο
Key word arguments passed to the pipeline.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
classmethod from_model_id(model_id, task, device=- 1, model_kwargs=None, pipeline_kwargs=None, **kwargs)[source]ο
Construct the pipeline object from model_id and task.
Parameters
model_id (str) β
task (str) β
device (int) β
model_kwargs (Optional[dict]) β
pipeline_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.llms.base.LLM
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HuggingFaceTextGenInference(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, max_new_tokens=512, top_k=None, top_p=0.95, typical_p=0.95, temperature=0.8, repetition_penalty=None, stop_sequences=None, seed=None, inference_server_url='', timeout=120, server_kwargs=None, stream=False, client=None, async_client=None)[source]ο
Bases: langchain.llms.base.LLM
HuggingFace text generation inference API.
This class is a wrapper around the HuggingFace text generation inference API.
It is used to generate text from a given prompt.
Attributes:
- max_new_tokens: The maximum number of tokens to generate.
- top_k: The number of top-k tokens to consider when generating text.
- top_p: The cumulative probability threshold for generating text.
- typical_p: The typical probability threshold for generating text.
- temperature: The temperature to use when generating text.
- repetition_penalty: The repetition penalty to use when generating text.
- stop_sequences: A list of stop sequences to use when generating text.
- seed: The seed to use when generating text.
- inference_server_url: The URL of the inference server to use.
- timeout: The timeout value in seconds to use while connecting to inference server.
- server_kwargs: The keyword arguments to pass to the inference server.
- client: The client object used to communicate with the inference server.
- async_client: The async client object used to communicate with the server.
Methods:
- _call: Generates text based on a given prompt and stop sequences.
- _acall: Async generates text based on a given prompt and stop sequences.
- _llm_type: Returns the type of LLM.
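Example (a construction sketch; the server URL is a placeholder for a running text-generation-inference deployment, and the sampling values are illustrative):
from langchain.llms import HuggingFaceTextGenInference
llm = HuggingFaceTextGenInference(
    inference_server_url="http://localhost:8010/",
    max_new_tokens=512,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=0.7,
    repetition_penalty=1.03,
)
print(llm("What is deep learning?"))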
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
max_new_tokens (int) β
top_k (Optional[int]) β
top_p (Optional[float]) β
typical_p (Optional[float]) β
temperature (float) β
repetition_penalty (Optional[float]) β
stop_sequences (List[str]) β
seed (Optional[int]) β
inference_server_url (str) β
timeout (int) β
server_kwargs (Dict[str, Any]) β
stream (bool) β
client (Any) β
async_client (Any) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.HumanInputLLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, input_func=None, prompt_func=None, separator='\n', input_kwargs={}, prompt_kwargs={})[source]ο
Bases: langchain.llms.base.LLM
An LLM wrapper that returns user input as the response.
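Example (a minimal sketch; prompt_func only displays the prompt, while the response is read interactively, assuming the default input_func reads from standard input):
from langchain.llms import HumanInputLLM
llm = HumanInputLLM(
    prompt_func=lambda prompt: print(f"\n===PROMPT===\n{prompt}\n===END==="),
)
answer = llm("What should the assistant reply?")  # blocks until a human types a response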
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
input_func (Callable) β
prompt_func (Callable[[str], None]) β
separator (str) β
input_kwargs (Mapping[str, Any]) β
prompt_kwargs (Mapping[str, Any]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path=βpath/llm.yamlβ)
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.LlamaCpp(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_path, lora_base=None, lora_path=None, n_ctx=512, n_parts=- 1, seed=- 1, f16_kv=True, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None, suffix=None, max_tokens=256, temperature=0.8, top_p=0.95, logprobs=None, echo=False, stop=[], repeat_penalty=1.1, top_k=40, last_n_tokens_size=64, use_mmap=True, streaming=True)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the llama.cpp model.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
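A fuller construction sketch using the fields documented below (all values are illustrative and the model path is a placeholder):
llm = LlamaCpp(
    model_path="/path/to/ggml-model.bin",
    n_ctx=2048,
    n_gpu_layers=32,
    n_batch=8,
    max_tokens=256,
    temperature=0.8,
    top_p=0.95,
    stop=["\n\n"],
    streaming=False,
)
print(llm("Q: Name the planets in the solar system. A:"))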
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_path (str) β
lora_base (Optional[str]) β
lora_path (Optional[str]) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
suffix (Optional[str]) β
max_tokens (Optional[int]) β
temperature (Optional[float]) β
top_p (Optional[float]) β
logprobs (Optional[int]) β
echo (Optional[bool]) β
stop (Optional[List[str]]) β
repeat_penalty (Optional[float]) β
top_k (Optional[int]) β
last_n_tokens_size (Optional[int]) β
use_mmap (Optional[bool]) β
streaming (bool) β
Return type
None
attribute echo: Optional[bool] = Falseο
Whether to echo the prompt.
attribute f16_kv: bool = Trueο
Use half-precision for key/value cache.
attribute last_n_tokens_size: Optional[int] = 64ο
The number of tokens to look back when applying the repeat_penalty.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute logprobs: Optional[int] = Noneο
The number of logprobs to return. If None, no logprobs are returned.
attribute lora_base: Optional[str] = Noneο
The path to the Llama LoRA base model.
attribute lora_path: Optional[str] = Noneο
The path to the Llama LoRA. If None, no LoRa is loaded.
attribute max_tokens: Optional[int] = 256ο
The maximum number of tokens to generate.
attribute model_path: str [Required]ο
The path to the Llama model file.
attribute n_batch: Optional[int] = 8ο
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
attribute n_ctx: int = 512ο
Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to use.
If None, the number of threads is automatically determined.
attribute repeat_penalty: Optional[float] = 1.1ο
The penalty to apply to repeated tokens.
attribute seed: int = -1ο
Seed. If -1, a random seed is used.
attribute stop: Optional[List[str]] = []ο
A list of strings to stop generation when encountered.
attribute streaming: bool = Trueο
Whether to stream the results, token by token.
attribute suffix: Optional[str] = Noneο
A suffix to append to the generated text. If None, no suffix is appended.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = 0.8ο
The temperature to use for sampling.
attribute top_k: Optional[int] = 40ο
The top-k value to use for sampling.
attribute top_p: Optional[float] = 0.95ο
The top-p value to use for sampling.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute use_mmap: Optional[bool] = Trueο
Whether to keep the model loaded in RAM
attribute verbose: bool [Optional]ο
Whether to print out response text.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)[source]ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
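To re-create a saved LLM later, langchain's loading helper can be used; a minimal sketch, assuming the YAML written by the call above:
.. code-block:: python
from langchain.llms.loading import load_llm
llm = load_llm("path/llm.yaml")  # rebuild the LLM from the saved YAML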
stream(prompt, stop=None, run_manager=None)[source]ο
Yields result objects as they are generated in real time.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
It also calls the callback manager's on_llm_new_token event with
similar parameters to the OpenAI LLM class method of the same name.
Args:
prompt: The prompt to pass into the model.
stop: Optional list of stop words to use when generating.
Returns:
A generator representing the stream of tokens being generated.
Yields:
A dictionary-like object containing a string token and metadata.
See llama-cpp-python docs and below for more.
Example:
from langchain.llms import LlamaCpp
llm = LlamaCpp(
    model_path="/path/to/local/model.bin",
    temperature=0.5
)
for chunk in llm.stream("Ask 'Hi, how are you?' like a pirate:'",
                        stop=["'", "\n"]):
    result = chunk["choices"][0]
    print(result["text"], end="", flush=True)
Parameters
prompt (str) β
stop (Optional[List[str]]) β
run_manager (Optional[langchain.callbacks.manager.CallbackManagerForLLMRun]) β
Return type
Generator[Dict, None, None]
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.TextGen(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, model_url, max_new_tokens=250, do_sample=True, temperature=1.3, top_p=0.1, typical_p=1, epsilon_cutoff=0, eta_cutoff=0, repetition_penalty=1.18, top_k=40, min_length=0, no_repeat_ngram_size=0, num_beams=1, penalty_alpha=0, length_penalty=1, early_stopping=False, seed=-1, add_bos_token=True, truncation_length=2048, ban_eos_token=False, skip_special_tokens=True, stopping_strings=[], streaming=False)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around the text-generation-webui model.
To use, you should have the text-generation-webui installed, a model loaded,
and --api added as a command-line option.
Suggested installation, use one-click installer for your OS:
https://github.com/oobabooga/text-generation-webui#one-click-installers
Parameters below are taken from the text-generation-webui API example:
https://github.com/oobabooga/text-generation-webui/blob/main/api-examples/api-example.py
Example
from langchain.llms import TextGen
llm = TextGen(model_url="http://localhost:8500")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
model_url (str) β
max_new_tokens (Optional[int]) β
do_sample (bool) β
temperature (Optional[float]) β
top_p (Optional[float]) β
typical_p (Optional[float]) β
epsilon_cutoff (Optional[float]) β
eta_cutoff (Optional[float]) β
repetition_penalty (Optional[float]) β
top_k (Optional[float]) β
min_length (Optional[int]) β
no_repeat_ngram_size (Optional[int]) β
num_beams (Optional[int]) β
penalty_alpha (Optional[float]) β
length_penalty (Optional[float]) β
early_stopping (bool) β
seed (int) β
add_bos_token (bool) β
truncation_length (Optional[int]) β
ban_eos_token (bool) β
skip_special_tokens (bool) β
stopping_strings (Optional[List[str]]) β
streaming (bool) β
Return type
None
attribute add_bos_token: bool = Trueο
Add the bos_token to the beginning of prompts.
Disabling this can make the replies more creative.
attribute ban_eos_token: bool = Falseο
Ban the eos_token. Forces the model to never end the generation prematurely.
attribute do_sample: bool = Trueο
Do sample
attribute early_stopping: bool = Falseο
Early stopping
attribute epsilon_cutoff: Optional[float] = 0ο
Epsilon cutoff
attribute eta_cutoff: Optional[float] = 0ο
ETA cutoff
attribute length_penalty: Optional[float] = 1ο
Length Penalty
attribute max_new_tokens: Optional[int] = 250ο
The maximum number of tokens to generate.
attribute min_length: Optional[int] = 0ο
Minimum generation length in tokens.
attribute model_url: str [Required]ο
The full URL to the textgen webui including http[s]://host:port
attribute no_repeat_ngram_size: Optional[int] = 0ο
If not set to 0, specifies the length of token sets that are completely blocked
from repeating at all. Higher values = blocks larger phrases,
lower values = blocks words or letters from repeating.
Only 0 or high values are a good idea in most cases.
attribute num_beams: Optional[int] = 1ο
Number of beams
attribute penalty_alpha: Optional[float] = 0ο
Penalty Alpha
attribute repetition_penalty: Optional[float] = 1.18ο
Exponential penalty factor for repeating prior tokens. 1 means no penalty,
higher value = less repetition, lower value = more repetition.
attribute seed: int = -1ο
Seed (-1 for random)
attribute skip_special_tokens: bool = Trueο
Skip special tokens. Some specific models need this unset.
attribute stopping_strings: Optional[List[str]] = []ο
A list of strings to stop generation when encountered.
attribute streaming: bool = Falseο
Whether to stream the results, token by token (currently unimplemented).
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: Optional[float] = 1.3ο
Primary factor to control randomness of outputs. 0 = deterministic
(only the most likely token is used). Higher value = more randomness.
attribute top_k: Optional[float] = 40ο
Similar to top_p, but select instead only the top_k most likely tokens.
Higher value = higher range of possible random results.
attribute top_p: Optional[float] = 0.1ο
If not set to 1, select tokens with probabilities adding up to less than this
number. Higher value = higher range of possible random results.
attribute truncation_length: Optional[int] = 2048ο
Truncate the prompt up to this length. The leftmost tokens are removed if
the prompt exceeds this length. Most models require this to be at most 2048.
attribute typical_p: Optional[float] = 1ο
If not set to 1, select only tokens that are at least this much more likely to
appear than random tokens, given the prior text.
attribute verbose: bool [Optional]ο
Whether to print out response text.
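For reference, a minimal usage sketch with the sampling attributes above (the URL is a placeholder; a text-generation-webui server must be running with --api enabled):
from langchain.llms import TextGen
llm = TextGen(
    model_url="http://localhost:5000",   # placeholder host:port
    temperature=0.7,
    top_p=0.9,
    max_new_tokens=200,
    stopping_strings=["###"],
)
print(llm("Write a haiku about the sea."))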
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.ManifestWrapper(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, llm_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around HazyResearch's Manifest library.
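Example (a minimal sketch; assumes the manifest-ml package is installed and an OpenAI-backed Manifest client can be configured):
from manifest import Manifest
from langchain.llms import ManifestWrapper
# Hypothetical client configuration; see the Manifest docs for the options your setup needs.
manifest = Manifest(client_name="openai")
llm = ManifestWrapper(client=manifest, llm_kwargs={"temperature": 0.0, "max_tokens": 256})
print(llm("Say hello."))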
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
llm_kwargs (Optional[Dict]) β
Return type
None
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.Modal(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='', model_kwargs=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Modal large language models.
To use, you should have the modal-client python package installed.
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Modal
modal = Modal(endpoint_url="")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
model_kwargs (Dict[str, Any]) β
Return type
None
attribute endpoint_url: str = ''ο
model endpoint to use
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not
explicitly specified.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
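For reference, a fuller sketch than the example above (the endpoint URL is a placeholder for a deployed Modal web endpoint; the exact request and response schema depends on that app):
from langchain.llms import Modal
llm = Modal(
    endpoint_url="https://example--my-llm.modal.run",  # placeholder URL
    model_kwargs={"temperature": 0.5, "max_length": 256},
)
print(llm("Tell me a joke."))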
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.MosaicML(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, endpoint_url='https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict', inject_instruction_format=False, model_kwargs=None, retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around MosaicML's LLM inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.llms import MosaicML
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict"
)
mosaic_llm = MosaicML(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
endpoint_url (str) β
inject_instruction_format (bool) β
model_kwargs (Optional[dict]) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/mpt-7b-instruct/v1/predict'ο
Endpoint URL to use.
attribute inject_instruction_format: bool = Falseο
Whether to inject the instruction format into the prompt.
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute retry_sleep: float = 1.0ο
How long to sleep if a rate limit is encountered.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
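For reference, a minimal usage sketch with the attributes above (assumes the MOSAICML_API_TOKEN environment variable is set):
from langchain.llms import MosaicML
llm = MosaicML(
    inject_instruction_format=True,        # wrap the prompt in the instruction format
    model_kwargs={"max_new_tokens": 128},  # forwarded to the hosted model
)
print(llm("Explain what an LLM is in one sentence."))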
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.NLPCloud(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='finetuned-gpt-neox-20b', temperature=0.7, min_length=1, max_length=256, length_no_input=True, remove_input=True, remove_end_sequence=True, bad_words=[], top_p=1, top_k=50, repetition_penalty=1.0, length_penalty=1.0, do_sample=True, num_beams=1, early_stopping=False, num_return_sequences=1, nlpcloud_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around NLPCloud large language models.
To use, you should have the nlpcloud python package installed, and the
environment variable NLPCLOUD_API_KEY set with your API key.
Example
from langchain.llms import NLPCloud
nlpcloud = NLPCloud(model="gpt-neox-20b")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
temperature (float) β
min_length (int) β
max_length (int) β
length_no_input (bool) β
remove_input (bool) β
remove_end_sequence (bool) β
bad_words (List[str]) β
top_p (int) β
top_k (int) β
repetition_penalty (float) β
length_penalty (float) β
do_sample (bool) β
num_beams (int) β
early_stopping (bool) β
num_return_sequences (int) β
nlpcloud_api_key (Optional[str]) β
Return type
None
attribute bad_words: List[str] = []ο
List of tokens not allowed to be generated.
attribute do_sample: bool = Trueο
Whether to use sampling (True) or greedy decoding.
attribute early_stopping: bool = Falseο
Whether to stop beam search at num_beams sentences.
attribute length_no_input: bool = Trueο
Whether min_length and max_length should include the length of the input.
attribute length_penalty: float = 1.0ο
Exponential penalty to the length.
attribute max_length: int = 256ο
The maximum number of tokens to generate in the completion.
attribute min_length: int = 1ο
The minimum number of tokens to generate in the completion.
attribute model_name: str = 'finetuned-gpt-neox-20b'ο
Model name to use.
attribute num_beams: int = 1ο
Number of beams for beam search.
attribute num_return_sequences: int = 1ο
How many completions to generate for each prompt.
attribute remove_end_sequence: bool = Trueο
Whether or not to remove the end sequence token.
attribute remove_input: bool = Trueο
Remove input text from API response
attribute repetition_penalty: float = 1.0ο
Penalizes repeated tokens. 1.0 means no penalty.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute top_k: int = 50ο
The number of highest probability tokens to keep for top-k filtering.
attribute top_p: int = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
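For reference, a minimal usage sketch with the generation attributes above (assumes the NLPCLOUD_API_KEY environment variable is set):
from langchain.llms import NLPCloud
llm = NLPCloud(
    model_name="finetuned-gpt-neox-20b",
    temperature=0.7,
    max_length=256,
    remove_input=True,
)
print(llm("Summarize: LangChain wraps many LLM providers behind one interface."))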
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο | https://api.python.langchain.com/en/latest/modules/llms.html |
45b46cec0462-196 | langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.OpenAI(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]ο
Bases: langchain.llms.openai.BaseOpenAI
Wrapper around OpenAI large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAI
openai = OpenAI(model_name="text-davinci-003")
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the "best".
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο
Adjust the probability of specific tokens being generated.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the model's maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
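For reference, a short sketch tying several of the attributes above together (assumes the OPENAI_API_KEY environment variable is set):
from langchain.llms import OpenAI
llm = OpenAI(model_name="text-davinci-003", n=2, best_of=2, batch_size=20)
# generate() accepts many prompts at once and returns a single LLMResult.
result = llm.generate(["Tell me a joke.", "Tell me a poem."])
print(len(result.generations))   # one list of generations per prompt
print(result.llm_output)         # provider metadata such as token usage
# Token counting uses tiktoken; tiktoken_model_name can redirect it if needed.
print(llm.get_num_tokens("How many tokens is this sentence?"))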
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = 'allow' was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο | https://api.python.langchain.com/en/latest/modules/llms.html |
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
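A small sketch of batching prompts through generate and inspecting the LLMResult, assuming OPENAI_API_KEY is set:
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)
result = llm.generate(["Say hello.", "Say goodbye."], stop=["\n\n"])
print(result.generations[0][0].text)  # first completion for the first prompt
print(result.llm_output)              # provider metadata such as token usage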
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_sub_prompts(params, prompts, stop=None)ο
Get the sub prompts for llm call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package. | https://api.python.langchain.com/en/latest/modules/llms.html |
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003") | https://api.python.langchain.com/en/latest/modules/llms.html |
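A rough sketch of combining modelname_to_contextsize and max_tokens_for_prompt to budget the completion length; the 256-token cap is an arbitrary choice:
from langchain.llms import OpenAI

prompt = "Tell me a joke."
context_size = OpenAI.modelname_to_contextsize("text-davinci-003")  # e.g. 4097

llm = OpenAI(model_name="text-davinci-003")
room_left = llm.max_tokens_for_prompt(prompt)  # context size minus prompt tokens
capped = OpenAI(model_name="text-davinci-003", max_tokens=min(256, room_left))
print(capped(prompt))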
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
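A sketch of calling the completion model through the message interface, assuming OPENAI_API_KEY is set; the reply comes back as a message object:
from langchain.llms import OpenAI
from langchain.schema import HumanMessage, SystemMessage

llm = OpenAI(temperature=0)
reply = llm.predict_messages(
    [
        SystemMessage(content="Answer in one short sentence."),
        HumanMessage(content="What does a tokenizer do?"),
    ],
    stop=["\n\n"],
)
print(reply.content)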
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
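A round-trip sketch that pairs save with the loader in langchain.llms.loading; the file path is illustrative:
from langchain.llms import OpenAI
from langchain.llms.loading import load_llm

llm = OpenAI(model_name="text-davinci-003", temperature=0)
llm.save("llm.yaml")             # a .json path also works
restored = load_llm("llm.yaml")  # rebuilds the LLM from the saved config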
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompts to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type | https://api.python.langchain.com/en/latest/modules/llms.html |
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.OpenAIChat(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model_name='gpt-3.5-turbo', model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_proxy=None, max_retries=6, prefix_messages=None, streaming=False, allowed_special={}, disallowed_special='all')[source]ο
Bases: langchain.llms.base.BaseLLM
Wrapper around OpenAI Chat large language models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key.
Any parameters that are valid to be passed to the openai.create call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import OpenAIChat
openaichat = OpenAIChat(model_name="gpt-3.5-turbo")
Parameters
cache (Optional[bool]) β
verbose (bool) β | https://api.python.langchain.com/en/latest/modules/llms.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model_name (str) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_proxy (Optional[str]) β
max_retries (int) β
prefix_messages (List) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
Return type
None
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'gpt-3.5-turbo'ο
Model name to use.
attribute prefix_messages: List [Optional]ο
Series of messages for Chat input.
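A minimal sketch of seeding the conversation with prefix_messages; the dict-based OpenAI chat format shown here is an assumption about how the prefix is forwarded to the chat endpoint:
from langchain.llms import OpenAIChat

llm = OpenAIChat(
    model_name="gpt-3.5-turbo",
    prefix_messages=[
        {"role": "system", "content": "You are a terse assistant."},  # assumed format
    ],
)
print(llm("What is a vector store?"))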
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο | https://api.python.langchain.com/en/latest/modules/llms.html |
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters | https://api.python.langchain.com/en/latest/modules/llms.html |
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β | https://api.python.langchain.com/en/latest/modules/llms.html |
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)[source]ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β | https://api.python.langchain.com/en/latest/modules/llms.html |
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object. | https://api.python.langchain.com/en/latest/modules/llms.html |
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
class langchain.llms.OpenLLM(model_name=None, *, model_id=None, server_url=None, server_type='http', embedded=True, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, llm_kwargs)[source]ο
Bases: langchain.llms.base.LLM
Wrapper for accessing OpenLLM, supporting both in-process model
instances and remote OpenLLM servers.
To use, you should have the openllm library installed:
pip install openllm
Learn more at: https://github.com/bentoml/openllm
Example running an LLM locally, managed by OpenLLM:
from langchain.llms import OpenLLM
llm = OpenLLM(
model_name='flan-t5',
model_id='google/flan-t5-large',
)
llm("What is the difference between a duck and a goose?")
For all supported models, you can run "openllm models".
If you have an OpenLLM server running, you can also use it remotely:
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://localhost:3000')
llm("What is the difference between a duck and a goose?")
Parameters
model_name (Optional[str]) β
model_id (Optional[str]) β
server_url (Optional[str]) β | https://api.python.langchain.com/en/latest/modules/llms.html |
server_type (Literal['grpc', 'http']) β
embedded (bool) β
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
llm_kwargs (Dict[str, Any]) β
Return type
None
attribute embedded: bool = Trueο
Initialize this LLM instance in the current process by default. Should
only be set to False when used in conjunction with a BentoML Service.
attribute llm_kwargs: Dict[str, Any] [Required]ο
Keyword arguments to be passed to openllm.LLM
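A sketch of passing generation parameters through to the model, assuming extra keyword arguments are collected into llm_kwargs as the constructor signature suggests; which keys are valid depends on the underlying openllm model:
from langchain.llms import OpenLLM

llm = OpenLLM(
    model_name="flan-t5",
    model_id="google/flan-t5-large",
    max_new_tokens=128,  # assumed to be accepted by this model's generation config
    temperature=0.7,
)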
attribute model_id: Optional[str] = Noneο
Model Id to use. If not provided, will use the default model for the model name.
See βopenllm modelsβ for all available model variants.
attribute model_name: Optional[str] = Noneο
Model name to use. See βopenllm modelsβ for all available models.
attribute server_type: ServerType = 'http'ο
Optional server type. Either βhttpβ or βgrpcβ.
attribute server_url: Optional[str] = Noneο
Optional server URL that currently runs a LLMServer with βopenllm startβ.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β | https://api.python.langchain.com/en/latest/modules/llms.html |
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο | https://api.python.langchain.com/en/latest/modules/llms.html |
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult | https://api.python.langchain.com/en/latest/modules/llms.html |
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int
get_token_ids(text)ο
Get the token IDs present in the text.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β | https://api.python.langchain.com/en/latest/modules/llms.html |
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο | https://api.python.langchain.com/en/latest/modules/llms.html |
Return whether or not the class is serializable.
property runner: openllm.LLMRunnerο
Get the underlying openllm.LLMRunner instance for integration with BentoML.
Example:
.. code-block:: python
llm = OpenLLM(
    model_name="flan-t5",
    model_id="google/flan-t5-large",
    embedded=False,
)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
svc = bentoml.Service("langchain-openllm", runners=[llm.runner])

@svc.api(input=Text(), output=Text())
def chat(input_text: str):
    return agent.run(input_text)
class langchain.llms.OpenLM(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, model='text-davinci-003', temperature=0.7, max_tokens=256, top_p=1, frequency_penalty=0, presence_penalty=0, n=1, best_of=1, model_kwargs=None, openai_api_key=None, openai_api_base=None, openai_organization=None, openai_proxy=None, batch_size=20, request_timeout=None, logit_bias=None, max_retries=6, streaming=False, allowed_special={}, disallowed_special='all', tiktoken_model_name=None)[source]ο
Bases: langchain.llms.openai.BaseOpenAI
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β | https://api.python.langchain.com/en/latest/modules/llms.html |
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
model (str) β
temperature (float) β
max_tokens (int) β
top_p (float) β
frequency_penalty (float) β
presence_penalty (float) β
n (int) β
best_of (int) β
model_kwargs (Dict[str, Any]) β
openai_api_key (Optional[str]) β
openai_api_base (Optional[str]) β
openai_organization (Optional[str]) β
openai_proxy (Optional[str]) β
batch_size (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
logit_bias (Optional[Dict[str, float]]) β
max_retries (int) β
streaming (bool) β
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
tiktoken_model_name (Optional[str]) β
Return type
None
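OpenLM exposes the same completion interface as BaseOpenAI above. A minimal sketch, assuming the openlm package is installed and the relevant provider API key (for example OPENAI_API_KEY) is set; the model identifier is the class default:
from langchain.llms import OpenLM

llm = OpenLM(model="text-davinci-003")  # routed through the openlm client
print(llm("Say hello in French."))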
attribute allowed_special: Union[Literal['all'], AbstractSet[str]] = {}ο
Set of special tokens that are allowed.
attribute batch_size: int = 20ο
Batch size to use when passing multiple documents to generate.
attribute best_of: int = 1ο
Generates best_of completions server-side and returns the βbestβ.
attribute disallowed_special: Union[Literal['all'], Collection[str]] = 'all'ο
Set of special tokens that are not allowed.
attribute frequency_penalty: float = 0ο
Penalizes repeated tokens according to frequency.
attribute logit_bias: Optional[Dict[str, float]] [Optional]ο | https://api.python.langchain.com/en/latest/modules/llms.html |
Adjust the probability of specific tokens being generated.
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute max_tokens: int = 256ο
The maximum number of tokens to generate in the completion.
-1 returns as many tokens as possible given the prompt and
the models maximal context size.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call not explicitly specified.
attribute model_name: str = 'text-davinci-003' (alias 'model')ο
Model name to use.
attribute n: int = 1ο
How many completions to generate for each prompt.
attribute presence_penalty: float = 0ο
Penalizes repeated tokens.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout for requests to OpenAI completion API. Default is 600 seconds.
attribute streaming: bool = Falseο
Whether to stream the results or not.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace.
attribute temperature: float = 0.7ο
What sampling temperature to use.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the model name. However, there are some cases
where you may want to use this LLM class with a model name not
supported by tiktoken. This can include when using Azure embeddings or | https://api.python.langchain.com/en/latest/modules/llms.html |
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
attribute top_p: float = 1ο
Total probability mass of tokens to consider at each step.
attribute verbose: bool [Optional]ο
Whether to print out response text.
__call__(prompt, stop=None, callbacks=None, **kwargs)ο
Check Cache and run the LLM on the given prompt and input.
Parameters
prompt (str) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
str
async agenerate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
async agenerate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type | https://api.python.langchain.com/en/latest/modules/llms.html |
langchain.schema.LLMResult
async apredict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
async apredict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
classmethod construct(_fields_set=None, **values)ο
Creates a new model setting __dict__ and __fields_set__ from trusted or pre-validated data.
Default values are respected, but no other validation is performed.
Behaves as if Config.extra = βallowβ was set since it adds all passed values
Parameters
_fields_set (Optional[SetStr]) β
values (Any) β
Return type
Model
copy(*, include=None, exclude=None, update=None, deep=False)ο
Duplicate a model, optionally choose which fields to include, exclude and change.
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to include in new model
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β fields to exclude from new model, as with values this takes precedence over include
update (Optional[DictStrAny]) β values to change/add in the new model. Note: the data is not validated before creating
the new model: you should trust this data
deep (bool) β set to True to make a deep copy of the model
self (Model) β
Returns
new model instance
Return type
Model | https://api.python.langchain.com/en/latest/modules/llms.html |
create_llm_result(choices, prompts, token_usage)ο
Create the LLMResult from the choices and prompts.
Parameters
choices (Any) β
prompts (List[str]) β
token_usage (Dict[str, int]) β
Return type
langchain.schema.LLMResult
dict(**kwargs)ο
Return a dictionary of the LLM.
Parameters
kwargs (Any) β
Return type
Dict
generate(prompts, stop=None, callbacks=None, *, tags=None, **kwargs)ο
Run the LLM on the given prompt and input.
Parameters
prompts (List[str]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
tags (Optional[List[str]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
generate_prompt(prompts, stop=None, callbacks=None, **kwargs)ο
Take in a list of prompt values and return an LLMResult.
Parameters
prompts (List[langchain.schema.PromptValue]) β
stop (Optional[List[str]]) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
kwargs (Any) β
Return type
langchain.schema.LLMResult
get_num_tokens(text)ο
Get the number of tokens present in the text.
Parameters
text (str) β
Return type
int
get_num_tokens_from_messages(messages)ο
Get the number of tokens in the message.
Parameters
messages (List[langchain.schema.BaseMessage]) β
Return type
int | https://api.python.langchain.com/en/latest/modules/llms.html |
get_sub_prompts(params, prompts, stop=None)ο
Get the sub prompts for llm call.
Parameters
params (Dict[str, Any]) β
prompts (List[str]) β
stop (Optional[List[str]]) β
Return type
List[List[str]]
get_token_ids(text)ο
Get the token IDs using the tiktoken package.
Parameters
text (str) β
Return type
List[int]
json(*, include=None, exclude=None, by_alias=False, skip_defaults=None, exclude_unset=False, exclude_defaults=False, exclude_none=False, encoder=None, models_as_dict=True, **dumps_kwargs)ο
Generate a JSON representation of the model, include and exclude arguments as per dict().
encoder is an optional function to supply as default to json.dumps(), other arguments as per json.dumps().
Parameters
include (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
exclude (Optional[Union[AbstractSetIntStr, MappingIntStrAny]]) β
by_alias (bool) β
skip_defaults (Optional[bool]) β
exclude_unset (bool) β
exclude_defaults (bool) β
exclude_none (bool) β
encoder (Optional[Callable[[Any], Any]]) β
models_as_dict (bool) β
dumps_kwargs (Any) β
Return type
unicode
max_tokens_for_prompt(prompt)ο
Calculate the maximum number of tokens possible to generate for a prompt.
Parameters
prompt (str) β The prompt to pass into the model.
Returns
The maximum number of tokens to generate for a prompt.
Return type
int
Example
max_tokens = openai.max_tokens_for_prompt("Tell me a joke.")
static modelname_to_contextsize(modelname)ο
Calculate the maximum number of tokens possible to generate for a model.
Parameters
modelname (str) β The modelname we want to know the context size for.
Returns
The maximum context size
Return type
int
Example
max_tokens = openai.modelname_to_contextsize("text-davinci-003")
predict(text, *, stop=None, **kwargs)ο
Predict text from text.
Parameters
text (str) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
str
predict_messages(messages, *, stop=None, **kwargs)ο
Predict message from messages.
Parameters
messages (List[langchain.schema.BaseMessage]) β
stop (Optional[Sequence[str]]) β
kwargs (Any) β
Return type
langchain.schema.BaseMessage
prep_streaming_params(stop=None)ο
Prepare the params for streaming.
Parameters
stop (Optional[List[str]]) β
Return type
Dict[str, Any]
save(file_path)ο
Save the LLM.
Parameters
file_path (Union[pathlib.Path, str]) β Path to file to save the LLM to.
Return type
None
Example:
.. code-block:: python
llm.save(file_path="path/llm.yaml")
stream(prompt, stop=None)ο
Call OpenAI with streaming flag and return the resulting generator.
BETA: this is a beta feature while we figure out the right abstraction.
Once that happens, this interface could change.
Parameters
prompt (str) β The prompts to pass into the model.
stop (Optional[List[str]]) β Optional list of stop words to use when generating.
Returns | https://api.python.langchain.com/en/latest/modules/llms.html |
A generator representing the stream of tokens from OpenAI.
Return type
Generator
Example
generator = openai.stream("Tell me a joke.")
for token in generator:
yield token
classmethod update_forward_refs(**localns)ο
Try to update ForwardRefs on fields based on this Model, globalns and localns.
Parameters
localns (Any) β
Return type
None
property lc_attributes: Dictο
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]ο
Return the namespace of the langchain object.
eg. [βlangchainβ, βllmsβ, βopenaiβ]
property lc_secrets: Dict[str, str]ο
Return a map of constructor argument names to secret ids.
eg. {βopenai_api_keyβ: βOPENAI_API_KEYβ}
property lc_serializable: boolο
Return whether or not the class is serializable.
property max_context_size: intο
Get max context size for this model.
class langchain.llms.Petals(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, client=None, tokenizer=None, model_name='bigscience/bloom-petals', temperature=0.7, max_new_tokens=256, top_p=0.9, top_k=None, do_sample=True, max_length=None, model_kwargs=None, huggingface_api_key=None)[source]ο
Bases: langchain.llms.base.LLM
Wrapper around Petals Bloom models.
To use, you should have the petals python package installed, and the
environment variable HUGGINGFACE_API_KEY set with your API key. | https://api.python.langchain.com/en/latest/modules/llms.html |
Any parameters that are valid to be passed to the call can be passed
in, even if not explicitly saved on this class.
Example
from langchain.llms import Petals
petals = Petals()
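A slightly fuller sketch, assuming the petals package is installed and a Hugging Face API key is available; the parameter values and prompt are illustrative:
import os

from langchain.llms import Petals

os.environ["HUGGINGFACE_API_KEY"] = "<your-huggingface-api-key>"  # placeholder
llm = Petals(model_name="bigscience/bloom-petals", max_new_tokens=64, temperature=0.7)
print(llm("Explain what a swarm of model servers is."))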
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
client (Any) β
tokenizer (Any) β
model_name (str) β
temperature (float) β
max_new_tokens (int) β
top_p (float) β
top_k (Optional[int]) β
do_sample (bool) β
max_length (Optional[int]) β
model_kwargs (Dict[str, Any]) β
huggingface_api_key (Optional[str]) β
Return type
None
attribute client: Any = Noneο
The client to use for the API calls.
attribute do_sample: bool = Trueο
Whether or not to use sampling; use greedy decoding otherwise.
attribute max_length: Optional[int] = Noneο
The maximum length of the sequence to be generated.
attribute max_new_tokens: int = 256ο
The maximum number of new tokens to generate in the completion.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Holds any model parameters valid for create call
not explicitly specified.
attribute model_name: str = 'bigscience/bloom-petals'ο
The model to use.
attribute tags: Optional[List[str]] = Noneο
Tags to add to the run trace. | https://api.python.langchain.com/en/latest/modules/llms.html |