index_name (str) –
content_key (str) –
metadata_key (str) –
kwargs (Any) –
Return type
langchain.vectorstores.tair.Tair
class langchain.vectorstores.Tigris(client, embeddings, index_name)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Parameters
client (TigrisClient) β
embeddings (Embeddings) β
index_name (str) β
property search_index: TigrisVectorStoreο
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids for documents.
Ids will be autogenerated if not provided.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search(query, k=4, filter=None, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
filter (Optional[TigrisFilter]) β
kwargs (Any) β
Return type
List[Document]
similarity_search_with_score(query, k=4, filter=None)[source]ο
Run similarity search with Tigris with distance.
Parameters
query (str) β Query text to search for.
k (int) β Number of results to return. Defaults to 4.
filter (Optional[TigrisFilter]) β Filter by metadata. Defaults to None.
Returns
List of documents most similar to the query text, with distance as a float.
Return type
List[Tuple[Document, float]]
classmethod from_texts(texts, embedding, metadatas=None, ids=None, client=None, index_name=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
client (Optional[TigrisClient]) β
index_name (Optional[str]) β
kwargs (Any) β
Return type
Tigris
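A minimal end-to-end sketch of the from_texts constructor above. The TigrisClient import path and its zero-argument construction are assumptions not confirmed by this page, and the index name is a hypothetical placeholder.

from tigrisdb import TigrisClient  # assumed import path for the Tigris client
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Tigris

client = TigrisClient()  # assumption: connection settings come from the environment
vectorstore = Tigris.from_texts(
    ["Tigris makes it easy to build AI apps", "LangChain wraps many vector stores"],
    OpenAIEmbeddings(),
    client=client,
    index_name="my-index",  # hypothetical index name
)
docs = vectorstore.similarity_search("What is Tigris?", k=2)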
class langchain.vectorstores.Typesense(typesense_client, embedding, *, typesense_collection_name=None, text_key='text')[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Typesense vector search.
To use, you should have the typesense python package installed.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
import typesense
node = {
"host": "localhost", # For Typesense Cloud use xxx.a1.typesense.net
"port": "8108", # For Typesense Cloud use 443
"protocol": "http" # For Typesense Cloud use https
}
typesense_client = typesense.Client(
{
"nodes": [node],
"api_key": "<API_KEY>",
"connection_timeout_seconds": 2
}
)
typesense_collection_name = "langchain-memory"
embedding = OpenAIEmbeddings()
vectorstore = Typesense(
typesense_client=typesense_client,
embedding=embedding,
typesense_collection_name=typesense_collection_name,
text_key="text",
)
Parameters
typesense_client (Client) β
embedding (Embeddings) β
typesense_collection_name (Optional[str]) β
text_key (str) β
add_texts(texts, metadatas=None, ids=None, **kwargs)[source]ο
Run more texts through the embedding and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
ids (Optional[List[str]]) β Optional list of ids to associate with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, k=10, filter='')[source]ο
Return typesense documents most similar to query, along with scores.
Parameters
query (str) β Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 10.
At least 10 results will be returned.
filter (Optional[str]) β typesense filter_by expression to filter documents on
Returns
List of Documents most similar to the query and score for each
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=10, filter='', **kwargs)[source]ο
Return typesense documents most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) – Number of Documents to return. Defaults to 10.
At least 10 results will be returned.
filter (Optional[str]) β typesense filter_by expression to filter documents on
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each
Return type
List[langchain.schema.Document]
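A short sketch of a filtered search, reusing the vectorstore built in the class example above. The field name in the filter is hypothetical; the filter string follows Typesense's filter_by grammar.

docs = vectorstore.similarity_search(
    "recent embeddings research",
    k=10,
    filter="category:=news",  # hypothetical field; Typesense filter_by syntax
)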
classmethod from_client_params(embedding, *, host='localhost', port='8108', protocol='http', typesense_api_key=None, connection_timeout_seconds=2, **kwargs)[source]ο
Initialize Typesense directly from client parameters.
Example
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Typesense
# Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
vectorstore = Typesense.from_client_params(
OpenAIEmbeddings(),
host="localhost",
port="8108",
protocol="http",
typesense_collection_name="langchain-memory",
)
Parameters
embedding (langchain.embeddings.base.Embeddings) β
host (str) β
port (Union[str, int]) β
protocol (str) β
typesense_api_key (Optional[str]) β
connection_timeout_seconds (int) β
kwargs (Any) β
Return type
langchain.vectorstores.typesense.Typesense
classmethod from_texts(texts, embedding, metadatas=None, ids=None, typesense_client=None, typesense_client_params=None, typesense_collection_name=None, text_key='text', **kwargs)[source]ο
Construct Typesense wrapper from raw text.
Parameters
texts (List[str]) β
embedding (Embeddings) β
metadatas (Optional[List[dict]]) β
ids (Optional[List[str]]) β
typesense_client (Optional[Client]) β
typesense_client_params (Optional[dict]) –
typesense_collection_name (Optional[str]) β
text_key (str) β
kwargs (Any) β
Return type
Typesense
class langchain.vectorstores.Vectara(vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Implementation of Vector Store using Vectara (https://vectara.com).
.. rubric:: Example
from langchain.vectorstores import Vectara
vectorstore = Vectara(
vectara_customer_id=vectara_customer_id,
vectara_corpus_id=vectara_corpus_id,
vectara_api_key=vectara_api_key
)
Parameters
vectara_customer_id (Optional[str]) β
vectara_corpus_id (Optional[str]) β
vectara_api_key (Optional[str]) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
similarity_search_with_score(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]ο
Return Vectara documents most similar to query, along with scores.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 5.
lambda_val (float) – lexical match parameter for hybrid search.
filter (Optional[str]) – Expression to filter on metadata. For example, a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview
for more details.
n_sentence_context (int) β number of sentences before/after the matching segment
to add
kwargs (Any) β
Returns
List of Documents most similar to the query and score for each.
Return type
List[Tuple[langchain.schema.Document, float]]
similarity_search(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source]ο
Return Vectara documents most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 5.
filter (Optional[str]) – Expression to filter on metadata. For example, a
filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see
https://docs.vectara.com/docs/search-apis/sql/filter-overview for more
details.
n_sentence_context (int) β number of sentences before/after the matching segment
to add
lambda_val (float) β
kwargs (Any) β
Returns
List of Documents most similar to the query
Return type
List[langchain.schema.Document]
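A usage sketch for the search methods above, reusing the vectorstore from the class example. The filter expression is the one quoted in the docstring; the query text is made up.

results = vectorstore.similarity_search_with_score(
    "What did reviewers think?",
    k=5,
    lambda_val=0.025,  # small lexical-match weight for hybrid search
    filter="doc.rating > 3.0 and part.lang = 'deu'",
    n_sentence_context=2,  # include two sentences before/after each match
)
for doc, score in results:
    print(score, doc.page_content[:80])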
classmethod from_texts(texts, embedding=None, metadatas=None, **kwargs)[source]ο
Construct Vectara wrapper from raw documents.
This is intended to be a quick way to get started.
.. rubric:: Example
from langchain import Vectara
vectara = Vectara.from_texts(
texts,
vectara_customer_id=customer_id,
vectara_corpus_id=corpus_id,
vectara_api_key=api_key,
)
Parameters
texts (List[str]) β
embedding (Optional[langchain.embeddings.base.Embeddings]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.vectara.Vectara
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.vectara.VectaraRetriever
class langchain.vectorstores.VectorStore[source]ο
Bases: abc.ABC
Interface for vector stores.
abstract add_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β Iterable of strings to add to the vectorstore.
metadatas (Optional[List[dict]]) β Optional list of metadatas associated with the texts.
kwargs (Any) β vectorstore specific parameters
Returns
List of ids from adding the texts into the vectorstore.
Return type
List[str]
delete(ids)[source]ο
Delete by vector ID.
Parameters
ids (List[str]) β List of ids to delete.
Returns
True if deletion is successful,
False otherwise, None if not implemented.
Return type
Optional[bool]
async aadd_texts(texts, metadatas=None, **kwargs)[source]ο
Run more texts through the embeddings and add to the vectorstore.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) –
kwargs (Any) β
Return type
List[str]
add_documents(documents, **kwargs)[source]ο
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[langchain.schema.Document]) – Documents to add to the vectorstore.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
async aadd_documents(documents, **kwargs)[source]ο
Run more documents through the embeddings and add to the vectorstore.
Parameters
documents (List[langchain.schema.Document]) – Documents to add to the vectorstore.
kwargs (Any) β
Returns
List of IDs of the added texts.
Return type
List[str]
search(query, search_type, **kwargs)[source]ο
Return docs most similar to query using specified search type.
Parameters
query (str) β
search_type (str) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
async asearch(query, search_type, **kwargs)[source]ο
Return docs most similar to query using specified search type.
Parameters
query (str) β
search_type (str) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
abstract similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs and relevance scores in the range [0, 1].
0 is dissimilar, 1 is most similar.
Parameters
query (str) β input text
k (int) β Number of Documents to return. Defaults to 4.
**kwargs β kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to
filter the resulting set of retrieved docs.
kwargs (Any) β
Returns
List of Tuples of (doc, similarity_score)
Return type
List[Tuple[langchain.schema.Document, float]]
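A sketch of the score_threshold kwarg described above; vectorstore stands for any concrete VectorStore implementation.

docs_and_scores = vectorstore.similarity_search_with_relevance_scores(
    "renewable energy policy",
    k=4,
    score_threshold=0.8,  # drop results scoring below 0.8 on the [0, 1] scale
)
for doc, score in docs_and_scores:
    print(f"{score:.2f}", doc.page_content[:60])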
async asimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source]ο
Return docs and relevance scores in the range [0, 1], most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
async asimilarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query vector.
Return type
List[langchain.schema.Document]
async asimilarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Return docs most similar to embedding vector.
Parameters
embedding (List[float]) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
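A sketch of the lambda_mult trade-off described above, on any concrete VectorStore:

# lambda_mult close to 1: minimum diversity, results stay close to the query
similar = vectorstore.max_marginal_relevance_search("jazz history", k=4, fetch_k=20, lambda_mult=0.9)

# lambda_mult close to 0: maximum diversity among the returned documents
diverse = vectorstore.max_marginal_relevance_search("jazz history", k=4, fetch_k=20, lambda_mult=0.1)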
async amax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Parameters
query (str) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
async amax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Parameters
embedding (List[float]) β
k (int) β
fetch_k (int) β
lambda_mult (float) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
classmethod from_documents(documents, embedding, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
async classmethod afrom_documents(documents, embedding, **kwargs)[source]ο
Return VectorStore initialized from documents and embeddings.
Parameters
documents (List[langchain.schema.Document]) β
embedding (langchain.embeddings.base.Embeddings) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
abstract classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
async classmethod afrom_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Return VectorStore initialized from texts and embeddings.
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.base.VST
as_retriever(**kwargs)[source]ο
Parameters
kwargs (Any) β
Return type
langchain.vectorstores.base.VectorStoreRetriever
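Because VectorStore is abstract, a concrete store only needs to implement add_texts, similarity_search, and from_texts; the base class supplies search, as_retriever, and the related wrappers. A toy in-memory sketch under those assumptions, not an official implementation:

import numpy as np
from typing import Any, Iterable, List, Optional
from langchain.embeddings.base import Embeddings
from langchain.schema import Document
from langchain.vectorstores.base import VectorStore

class InMemoryVectorStore(VectorStore):
    """Toy store keeping embeddings in a Python list."""

    def __init__(self, embedding: Embeddings):
        self._embedding = embedding
        self._vectors: List[List[float]] = []
        self._docs: List[Document] = []

    def add_texts(self, texts: Iterable[str], metadatas: Optional[List[dict]] = None, **kwargs: Any) -> List[str]:
        texts = list(texts)
        self._vectors.extend(self._embedding.embed_documents(texts))
        self._docs.extend(
            Document(page_content=t, metadata=(metadatas[i] if metadatas else {}))
            for i, t in enumerate(texts)
        )
        # Return positional ids for the newly added texts.
        return [str(i) for i in range(len(self._docs) - len(texts), len(self._docs))]

    def similarity_search(self, query: str, k: int = 4, **kwargs: Any) -> List[Document]:
        q = np.array(self._embedding.embed_query(query))
        # Cosine similarity between the query and every stored vector.
        sims = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in map(np.array, self._vectors)
        ]
        top = np.argsort(sims)[::-1][:k]  # indices of the k most similar vectors
        return [self._docs[i] for i in top]

    @classmethod
    def from_texts(cls, texts: List[str], embedding: Embeddings, metadatas: Optional[List[dict]] = None, **kwargs: Any) -> "InMemoryVectorStore":
        store = cls(embedding)
        store.add_texts(texts, metadatas)
        return store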
class langchain.vectorstores.Weaviate(client, index_name, text_key, embedding=None, attributes=None, relevance_score_fn=<function _default_score_normalizer>, by_text=True)[source]ο
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Weaviate vector database.
To use, you should have the weaviate-client python package installed.
Example
import os
import weaviate
from langchain.vectorstores import Weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
weaviate = Weaviate(client, index_name, text_key)
Parameters
client (Any) β
index_name (str) β
text_key (str) β
embedding (Optional[Embeddings]) β
attributes (Optional[List[str]]) β
relevance_score_fn (Optional[Callable[[float], float]]) β
by_text (bool) β
add_texts(texts, metadatas=None, **kwargs)[source]ο
Upload texts with metadata (properties) to Weaviate.
Parameters
texts (Iterable[str]) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
List[str]
similarity_search(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_text(query, k=4, **kwargs)[source]ο
Return docs most similar to query.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
kwargs (Any) β
Returns
List of Documents most similar to the query.
Return type
List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)[source]ο
Look up similar documents by embedding vector in Weaviate.
Parameters
embedding (List[float]) β
k (int) β
kwargs (Any) β
Return type
List[langchain.schema.Document]
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
query (str) β Text to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]ο
Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity
among selected documents.
Parameters
embedding (List[float]) β Embedding to look up documents similar to.
k (int) β Number of Documents to return. Defaults to 4.
fetch_k (int) β Number of Documents to fetch to pass to MMR algorithm.
lambda_mult (float) β Number between 0 and 1 that determines the degree
of diversity among the results with 0 corresponding
to maximum diversity and 1 to minimum diversity.
Defaults to 0.5.
kwargs (Any) β
Returns
List of Documents selected by maximal marginal relevance.
Return type
List[langchain.schema.Document]
similarity_search_with_score(query, k=4, **kwargs)[source]ο
Return a list of documents most similar to the query text, with the cosine
distance (a float) for each. A lower score represents greater similarity.
Parameters
query (str) β
k (int) β
kwargs (Any) β
Return type
List[Tuple[langchain.schema.Document, float]]
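A short sketch reading the scores, reusing the weaviate wrapper from the class example; the query is made up.

results = weaviate.similarity_search_with_score("how do I reset my password?", k=4)
for doc, distance in results:
    print(distance, doc.page_content[:60])  # lower distance means more similar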
classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source]ο
Construct Weaviate wrapper from raw documents.
This is a user-friendly interface that:
Embeds documents.
Creates a new index for the embeddings in the Weaviate instance.
Adds the documents to the newly created Weaviate index.
This is intended to be a quick way to get started.
Example
from langchain.vectorstores.weaviate import Weaviate
from langchain.embeddings import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
weaviate = Weaviate.from_texts(
texts,
embeddings,
weaviate_url="http://localhost:8080"
)
Parameters
texts (List[str]) β
embedding (langchain.embeddings.base.Embeddings) β
metadatas (Optional[List[dict]]) β
kwargs (Any) β
Return type
langchain.vectorstores.weaviate.Weaviate
delete(ids)[source]ο
Delete by vector IDs.
Parameters
ids (List[str]) β List of ids to delete.
Return type
None
Agent Toolkits
Agent toolkits.
langchain.agents.agent_toolkits.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a JSON agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
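A wiring sketch for create_json_agent. The JsonSpec keyword arguments (dict_, max_value_length) are assumptions about its constructor, and the file name is hypothetical.

import json

from langchain.agents.agent_toolkits import JsonToolkit, create_json_agent
from langchain.llms import OpenAI
from langchain.tools.json.tool import JsonSpec

with open("openapi.json") as f:  # hypothetical input file
    data = json.load(f)

spec = JsonSpec(dict_=data, max_value_length=4000)  # assumed constructor arguments
agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=JsonToolkit(spec=spec), verbose=True)
agent.run("What keys exist under the 'servers' entry?")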
langchain.agents.agent_toolkits.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (Optional[str]) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
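A wiring sketch for create_sql_agent using the SQLDatabaseToolkit documented below; the database URI is a hypothetical SQLite file.

from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///chinook.db")  # hypothetical database
llm = OpenAI(temperature=0)
agent = create_sql_agent(llm=llm, toolkit=SQLDatabaseToolkit(db=db, llm=llm), verbose=True)
agent.run("How many tracks are longer than five minutes?")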
langchain.agents.agent_toolkits.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]
Construct an OpenAPI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
return_intermediate_steps (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None, input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Power BI agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Power BI agent from a Chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both.
Parameters
llm (langchain.chat_models.base.BaseChatModel) β
toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) β
powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
output_parser (Optional[langchain.agents.agent.AgentOutputParser]) β
prefix (str) β
suffix (str) β
examples (Optional[str]) β
input_variables (Optional[List[str]]) β
memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) β
top_k (int) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_python_agent(llm, tool, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, verbose=False, prefix='You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs=None, **kwargs)[source]
Construct a Python agent from an LLM and tool.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
tool (langchain.tools.python.tool.PythonREPLTool) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
verbose (bool) β
prefix (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
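A minimal sketch for create_python_agent; the PythonREPLTool import path is the one listed in the parameters above.

from langchain.agents.agent_toolkits import create_python_agent
from langchain.llms import OpenAI
from langchain.tools.python.tool import PythonREPLTool

agent = create_python_agent(llm=OpenAI(temperature=0), tool=PythonREPLTool(), verbose=True)
agent.run("What is the 10th Fibonacci number?")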
langchain.agents.agent_toolkits.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
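A wiring sketch for create_vectorstore_agent using the VectorStoreInfo and VectorStoreToolkit classes documented further down this page; the name and description are hypothetical, and vectorstore is any previously built VectorStore.

from langchain.agents.agent_toolkits import (
    VectorStoreInfo,
    VectorStoreToolkit,
    create_vectorstore_agent,
)
from langchain.llms import OpenAI

info = VectorStoreInfo(
    vectorstore=vectorstore,  # any VectorStore built earlier
    name="company_handbook",  # hypothetical
    description="Policies and procedures from the employee handbook",
)
agent = create_vectorstore_agent(
    llm=OpenAI(temperature=0),
    toolkit=VectorStoreToolkit(vectorstore_info=info),
    verbose=True,
)
agent.run("What is the vacation policy?")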
class langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source]
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with a JSON spec.
Parameters
spec (langchain.tools.json.tool.JsonSpec) β
Return type
None
attribute spec: langchain.tools.json.tool.JsonSpec [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.SQLDatabaseToolkit(*, db, llm)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with SQL databases.
Parameters
db (langchain.sql_database.SQLDatabase) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute db: langchain.sql_database.SQLDatabase [Required]ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
property dialect: strο
Return string representation of dialect to use.
class langchain.agents.agent_toolkits.SparkSQLToolkit(*, db, llm)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with Spark SQL.
Parameters
db (langchain.utilities.spark_sql.SparkSQL) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute db: langchain.utilities.spark_sql.SparkSQL [Required]ο
attribute llm: langchain.base_language.BaseLanguageModel [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.NLAToolkit(*, nla_tools)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Natural Language API Toolkit Definition.
Parameters
nla_tools (Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool]) β
Return type
None
attribute nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required]ο
List of API Endpoint Tools.
classmethod from_llm_and_ai_plugin(llm, ai_plugin, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an AIPlugin.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
ai_plugin (langchain.tools.plugin.AIPlugin) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_ai_plugin_url(llm, ai_plugin_url, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an AI plugin URL.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
ai_plugin_url (str) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_spec(llm, spec, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit by creating tools for each operation.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
spec (langchain.utilities.openapi.OpenAPISpec) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
classmethod from_llm_and_url(llm, open_api_url, requests=None, verbose=False, **kwargs)[source]ο
Instantiate the toolkit from an OpenAPI spec URL.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
open_api_url (str) β
requests (Optional[langchain.requests.Requests]) β
verbose (bool) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit
get_tools()[source]ο
Get the tools for all the API operations.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.PowerBIToolkit(*, powerbi, llm, examples=None, max_iterations=5, callback_manager=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with PowerBI dataset.
Parameters
powerbi (langchain.utilities.powerbi.PowerBIDataset) β
llm (langchain.base_language.BaseLanguageModel) β
examples (Optional[str]) β
max_iterations (int) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
Return type
None
attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None
attribute examples: Optional[str] = None
attribute llm: langchain.base_language.BaseLanguageModel [Required]
attribute max_iterations: int = 5
attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required]
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.OpenAPIToolkit(*, json_agent, requests_wrapper)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with an OpenAPI API.
Parameters
json_agent (langchain.agents.agent.AgentExecutor) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
Return type
None
attribute json_agent: langchain.agents.agent.AgentExecutor [Required]ο
attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required]ο
classmethod from_llm(llm, json_spec, requests_wrapper, **kwargs)[source]ο
Create a JSON agent from the LLM, then initialize the toolkit.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
json_spec (langchain.tools.json.tool.JsonSpec) β
requests_wrapper (langchain.requests.TextRequestsWrapper) β
kwargs (Any) β
Return type
langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.VectorStoreToolkit(*, vectorstore_info, llm=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with a vector store.
Parameters
vectorstore_info (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Optional]ο
attribute vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore router agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
class langchain.agents.agent_toolkits.VectorStoreInfo(*, vectorstore, name, description)[source]ο
Bases: pydantic.main.BaseModel
Information about a vectorstore.
Parameters
vectorstore (langchain.vectorstores.base.VectorStore) β
name (str) β
description (str) β
Return type
None
attribute description: str [Required]ο
attribute name: str [Required]ο
attribute vectorstore: langchain.vectorstores.base.VectorStore [Required]ο
class langchain.agents.agent_toolkits.VectorStoreRouterToolkit(*, vectorstores, llm=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for routing between vector stores.
Parameters
vectorstores (List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo]) β
llm (langchain.base_language.BaseLanguageModel) β
Return type
None
attribute llm: langchain.base_language.BaseLanguageModel [Optional]ο
attribute vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source]ο
Construct a pandas agent from an LLM and dataframe.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
df (Any) β
agent_type (langchain.agents.agent_types.AgentType) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (Optional[str]) β
suffix (Optional[str]) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
include_df_in_prompt (Optional[bool]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
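A minimal sketch for create_pandas_dataframe_agent; the CSV file is hypothetical.

import pandas as pd

from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
from langchain.llms import OpenAI

df = pd.read_csv("titanic.csv")  # hypothetical dataset
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("What is the average age, grouped by passenger class?")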
langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source]ο
Construct a Spark agent from an LLM and dataframe.
Parameters
llm (langchain.llms.base.BaseLLM) β
df (Any) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
input_variables (Optional[List[str]]) β
verbose (bool) β
return_intermediate_steps (bool) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Spark SQL agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (str) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.agent_toolkits.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source]ο
Create a CSV agent by loading the file into a dataframe and using the pandas agent.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
path (Union[str, List[str]]) β
pandas_kwargs (Optional[dict]) β
kwargs (Any) β
Return type
langchain.agents.agent.AgentExecutor
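A minimal sketch for create_csv_agent; the file name and separator are hypothetical, and pandas_kwargs is forwarded to pandas.read_csv.

from langchain.agents.agent_toolkits import create_csv_agent
from langchain.llms import OpenAI

agent = create_csv_agent(
    OpenAI(temperature=0),
    "sales.csv",  # hypothetical file; a list of paths is also accepted
    pandas_kwargs={"sep": ";"},  # forwarded to pandas.read_csv
)
agent.run("Which region had the highest total sales?")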
class langchain.agents.agent_toolkits.ZapierToolkit(*, tools=[])[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Zapier Toolkit.
Parameters
tools (List[langchain.tools.base.BaseTool]) β
Return type
None
attribute tools: List[langchain.tools.base.BaseTool] = []ο
async classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper)[source]ο
Create a toolkit from a ZapierNLAWrapper.
Parameters
zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
Return type
langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit
classmethod from_zapier_nla_wrapper(zapier_nla_wrapper)[source]ο
Create a toolkit from a ZapierNLAWrapper.
Parameters
zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) β
Return type
langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.GmailToolkit(*, api_resource=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with Gmail.
Parameters
api_resource (Resource) β
Return type
None
attribute api_resource: Resource [Optional]ο
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
class langchain.agents.agent_toolkits.JiraToolkit(*, tools=[])[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Jira Toolkit.
Parameters
tools (List[langchain.tools.base.BaseTool]) β
Return type
None
attribute tools: List[langchain.tools.base.BaseTool] = []ο
classmethod from_jira_api_wrapper(jira_api_wrapper)[source]ο
Parameters
jira_api_wrapper (langchain.utilities.jira.JiraAPIWrapper) β
Return type
langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit
get_tools()[source]
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
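A minimal usage sketch (the JIRA_API_TOKEN, JIRA_USERNAME and JIRA_INSTANCE_URL environment variables are assumed to be set for the wrapper):
Example
from langchain.agents.agent_toolkits import JiraToolkit
from langchain.utilities.jira import JiraAPIWrapper
toolkit = JiraToolkit.from_jira_api_wrapper(JiraAPIWrapper())
tools = toolkit.get_tools()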
class langchain.agents.agent_toolkits.FileManagementToolkit(*, root_dir=None, selected_tools=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for interacting with local files.
Parameters
root_dir (Optional[str]) β
selected_tools (Optional[List[str]]) β
Return type
None
attribute root_dir: Optional[str] = Noneο
If specified, all file operations are made relative to root_dir.
attribute selected_tools: Optional[List[str]] = Noneο
If provided, only provide the selected tools. Defaults to all.
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
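A minimal usage sketch (the sandbox directory is illustrative, and the tool names are assumed to match the file tools shipped with your version):
Example
from langchain.agents.agent_toolkits import FileManagementToolkit
toolkit = FileManagementToolkit(
    root_dir="/tmp/sandbox",  # all file operations stay relative to this directory
    selected_tools=["read_file", "write_file", "list_directory"],
)
tools = toolkit.get_tools()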
class langchain.agents.agent_toolkits.PlayWrightBrowserToolkit(*, sync_browser=None, async_browser=None)[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for web browser tools.
Parameters
sync_browser (Optional['SyncBrowser']) β
async_browser (Optional['AsyncBrowser']) β
Return type
None
attribute async_browser: Optional['AsyncBrowser'] = Noneο
attribute sync_browser: Optional['SyncBrowser'] = Noneο
classmethod from_browser(sync_browser=None, async_browser=None)[source]ο
Instantiate the toolkit.
Parameters
sync_browser (Optional[SyncBrowser]) β
async_browser (Optional[AsyncBrowser]) β
Return type
PlayWrightBrowserToolkit
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
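A minimal usage sketch (assumes the playwright package and its browsers are installed, and that the create_async_playwright_browser helper exists in langchain.tools.playwright.utils in your version):
Example
from langchain.agents.agent_toolkits import PlayWrightBrowserToolkit
from langchain.tools.playwright.utils import create_async_playwright_browser
async_browser = create_async_playwright_browser()
toolkit = PlayWrightBrowserToolkit.from_browser(async_browser=async_browser)
tools = toolkit.get_tools()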
class langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]ο
Bases: langchain.agents.agent_toolkits.base.BaseToolkit
Toolkit for Azure Cognitive Services.
Return type
None
get_tools()[source]ο
Get the tools in the toolkit.
Return type
List[langchain.tools.base.BaseTool]
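A minimal usage sketch (the AZURE_COGS_KEY, AZURE_COGS_ENDPOINT and AZURE_COGS_REGION environment variables are assumed to hold your Azure Cognitive Services credentials):
Example
from langchain.agents.agent_toolkits import AzureCognitiveServicesToolkit
toolkit = AzureCognitiveServicesToolkit()
tools = toolkit.get_tools()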
Output Parsersο
class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source]ο
Bases: langchain.schema.BaseOutputParser[bool]
Parameters
true_val (str) β
false_val (str) β
Return type
None
attribute false_val: str = 'NO'ο
attribute true_val: str = 'YES'ο
parse(text)[source]ο
Parse the output of an LLM call to a boolean.
Parameters
text (str) β output of language model
Returns
boolean
Return type
bool
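A minimal usage sketch, using the default YES/NO markers:
Example
from langchain.output_parsers import BooleanOutputParser
parser = BooleanOutputParser()
parser.parse("YES")  # -> True
parser.parse("NO")   # -> False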
class langchain.output_parsers.CombiningOutputParser(*, parsers)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to combine multiple output parsers into one.
Parameters
parsers (List[langchain.schema.BaseOutputParser]) β
Return type
None
attribute parsers: List[langchain.schema.BaseOutputParser] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, Any]
class langchain.output_parsers.CommaSeparatedListOutputParser[source]ο
Bases: langchain.output_parsers.list.ListOutputParser
Parse out comma separated lists.
Return type
None
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
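A minimal usage sketch:
Example
from langchain.output_parsers import CommaSeparatedListOutputParser
parser = CommaSeparatedListOutputParser()
parser.parse("red, green, blue")  # -> ['red', 'green', 'blue']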
class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source]ο
Bases: langchain.schema.BaseOutputParser[datetime.datetime]
Parameters
format (str) β
Return type
None
attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ'ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
response (str) β output of language model
Returns
structured output
Return type
datetime.datetime
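A minimal usage sketch, using the default timestamp format:
Example
from langchain.output_parsers import DatetimeOutputParser
parser = DatetimeOutputParser()
parser.parse("2023-06-27T09:15:00.000000Z")  # -> datetime.datetime(2023, 6, 27, 9, 15)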
class langchain.output_parsers.EnumOutputParser(*, enum)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
enum (Type[enum.Enum]) β
Return type
None
attribute enum: Type[enum.Enum] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(response)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
response (str) β output of language model
Returns
structured output
Return type
Any
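A minimal usage sketch (the Color enum is illustrative):
Example
from enum import Enum
from langchain.output_parsers import EnumOutputParser
class Color(Enum):
    RED = "red"
    GREEN = "green"
parser = EnumOutputParser(enum=Color)
parser.parse("red")  # -> Color.RED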
class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
guard (Any) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
None
attribute api: Optional[Callable] = Noneο
attribute args: Any = Noneο
attribute guard: Any = Noneο
attribute kwargs: Any = Noneο
classmethod from_pydantic(output_class, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
output_class (Any) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_file (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source]ο
Parameters
rail_str (str) β
num_reasks (int) β
api (Optional[Callable]) β
args (Any) β
kwargs (Any) β
Return type
langchain.output_parsers.rail_parser.GuardrailsOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Dict
class langchain.output_parsers.ListOutputParser[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output of an LLM call to a list.
Return type
None
abstract parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
List[str]
class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]
Wraps a parser and tries to fix parsing errors.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.fix.T
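A minimal sketch of wrapping another parser so that malformed completions are re-sent to an LLM for repair (assumes an OpenAI API key; DatetimeOutputParser stands in for any base parser):
Example
from langchain.chat_models import ChatOpenAI
from langchain.output_parsers import DatetimeOutputParser, OutputFixingParser
fixing_parser = OutputFixingParser.from_llm(
    llm=ChatOpenAI(temperature=0),
    parser=DatetimeOutputParser(),
)
# a completion that fails the base parser is sent back to the LLM together
# with the format instructions and the parsing error, then parsed again
result = fixing_parser.parse("June 27th, 2023")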
class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T]
Parameters
pydantic_object (Type[langchain.output_parsers.pydantic.T]) β
Return type
None
attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required]ο
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.pydantic.T
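A minimal usage sketch (the Joke model is illustrative):
Example
from pydantic import BaseModel, Field
from langchain.output_parsers import PydanticOutputParser
class Joke(BaseModel):
    setup: str = Field(description="question that sets up the joke")
    punchline: str = Field(description="answer that resolves the joke")
parser = PydanticOutputParser(pydantic_object=Joke)
parser.get_format_instructions()  # JSON schema the LLM is asked to follow
joke = parser.parse('{"setup": "Why did the chicken cross the road?", "punchline": "To get to the other side."}')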
class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters
regex_pattern (str) β
output_key_to_format (Dict[str, str]) β
no_update_value (Optional[str]) β
Return type
None
attribute no_update_value: Optional[str] = Noneο
attribute output_key_to_format: Dict[str, str] [Required]ο
attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?"ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source]ο
Bases: langchain.schema.BaseOutputParser
Class to parse the output into a dictionary.
Parameters
regex (str) β
output_keys (List[str]) β
default_output_key (Optional[str]) β
Return type
None
attribute default_output_key: Optional[str] = Noneο
attribute output_keys: List[str] [Required]ο
attribute regex: str [Required]ο
parse(text)[source]ο
Parse the output of an LLM call.
Parameters
text (str) β
Return type
Dict[str, str]
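A minimal usage sketch (the pattern and output keys are illustrative):
Example
from langchain.output_parsers import RegexParser
parser = RegexParser(
    regex=r"Score: (\d+)\nReason: (.*)",
    output_keys=["score", "reason"],
)
parser.parse("Score: 8\nReason: concise and accurate")
# -> {'score': '8', 'reason': 'concise and accurate'}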
class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source]ο
Bases: pydantic.main.BaseModel
Parameters
name (str) β
description (str) β
type (str) β
Return type
None
attribute description: str [Required]ο
attribute name: str [Required]ο
attribute type: str = 'string'ο
class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt and the completion to another
LLM, and telling it the completion did not satisfy criteria in the prompt.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt_value (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
langchain.output_parsers.retry.T
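A minimal sketch of retrying with the original prompt (assumes an OpenAI API key; the prompt and completion are illustrative):
Example
from langchain.llms import OpenAI
from langchain.output_parsers import DatetimeOutputParser, RetryOutputParser
from langchain.prompts import PromptTemplate
parser = RetryOutputParser.from_llm(llm=OpenAI(temperature=0), parser=DatetimeOutputParser())
prompt = PromptTemplate.from_template("When did {event} happen?\n{format_instructions}")
prompt_value = prompt.format_prompt(
    event="the moon landing",
    format_instructions=parser.get_format_instructions(),
)
# the original prompt is passed along so the LLM can try the completion again
result = parser.parse_with_prompt("sometime in 1969", prompt_value)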
class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source]ο
Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]
Wraps a parser and tries to fix parsing errors.
Does this by passing the original prompt, the completion, AND the error
that was raised to another language model and telling it that the completion
did not work, and raised the given error. Differs from RetryOutputParser
in that this implementation provides the error that was raised back to the
LLM, which in theory should give it more information on how to fix it.
Parameters
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
retry_chain (langchain.chains.llm.LLMChain) β
Return type
None
attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required]ο
attribute retry_chain: langchain.chains.llm.LLMChain [Required]ο
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source]ο
Parameters
llm (langchain.base_language.BaseLanguageModel) β
parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) β
prompt (langchain.prompts.base.BasePromptTemplate) β
Return type
langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(completion)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
completion (str) β output of language model
Returns
structured output
Return type
langchain.output_parsers.retry.T
parse_with_prompt(completion, prompt_value)[source]ο
Optional method to parse the output of an LLM call with a prompt.
The prompt is largely provided in the event the OutputParser wants
to retry or fix the output in some way, and needs information from
the prompt to do so.
Parameters
completion (str) β output of language model
prompt_value (langchain.schema.PromptValue) β prompt value
Returns
structured output
Return type
langchain.output_parsers.retry.T
class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source]ο
Bases: langchain.schema.BaseOutputParser
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
None
attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required]ο
classmethod from_response_schemas(response_schemas)[source]ο
Parameters
response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) β
Return type
langchain.output_parsers.structured.StructuredOutputParser
get_format_instructions()[source]ο
Instructions on how the LLM output should be formatted.
Return type
str
parse(text)[source]ο
Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model)
and parses it into some structure.
Parameters
text (str) β output of language model
Returns
structured output
Return type
Any
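A minimal sketch of building a parser from response schemas (the schema names and parsed payload are illustrative):
Example
from langchain.output_parsers import ResponseSchema, StructuredOutputParser
schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used to produce the answer"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
parser.get_format_instructions()  # describes the expected markdown JSON block
parser.parse('```json\n{"answer": "Paris", "source": "atlas"}\n```')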
Embeddingsο
Wrappers around embedding modules.
class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around OpenAI embedding models.
To use, you should have the openai python package installed, and the
environment variable OPENAI_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import OpenAIEmbeddings
openai = OpenAIEmbeddings(openai_api_key="my-api-key")
In order to use the library with Microsoft Azure endpoints, you need to set
the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION.
The OPENAI_API_TYPE must be set to βazureβ and the others correspond to
the properties of your endpoint.
In addition, the deployment name must be passed as the model parameter.
Example
import os
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "https://<your-endpoint.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key"
os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview"
os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080" | https://api.python.langchain.com/en/stable/modules/embeddings.html |
4f5128845410-1 | from langchain.embeddings.openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings(
deployment="your-embeddings-deployment-name",
model="your-embeddings-model-name",
openai_api_base="https://your-endpoint.openai.azure.com/",
openai_api_type="azure",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
deployment (str) β
openai_api_version (Optional[str]) β
openai_api_base (Optional[str]) β
openai_api_type (Optional[str]) β
openai_proxy (Optional[str]) β
embedding_ctx_length (int) β
openai_api_key (Optional[str]) β
openai_organization (Optional[str]) β
allowed_special (Union[Literal['all'], typing.Set[str]]) β
disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) β
chunk_size (int) β
max_retries (int) β
request_timeout (Optional[Union[float, Tuple[float, float]]]) β
headers (Any) β
tiktoken_model_name (Optional[str]) β
Return type
None
attribute chunk_size: int = 1000ο
Maximum number of texts to embed in each batch
attribute max_retries: int = 6ο
Maximum number of retries to make when generating.
attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = Noneο
Timeout in seconds for the OpenAI API request.
attribute tiktoken_model_name: Optional[str] = Noneο
The model name to pass to tiktoken when using this class.
Tiktoken is used to count the number of tokens in documents to constrain
them to be under a certain limit. By default, when set to None, this will
be the same as the embedding model name. However, there are some cases
where you may want to use this Embedding class with a model name not
supported by tiktoken. This can include when using Azure embeddings or
when using one of the many model providers that expose an OpenAI-like
API but with different models. In those cases, in order to avoid erroring
when tiktoken is called, you can specify a model name to use here.
async aembed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAIβs embedding endpoint async for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
async aembed_query(text)[source]ο
Call out to OpenAIβs embedding endpoint async for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
embed_documents(texts, chunk_size=0)[source]ο
Call out to OpenAIβs embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (Optional[int]) β The chunk size of embeddings. If None, will use the chunk size
specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to OpenAIβs embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers python package installed.
Example
from langchain.embeddings import HuggingFaceEmbeddings
model_name = "sentence-transformers/all-mpnet-base-v2"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': False}
hf = HuggingFaceEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Cohere embedding models.
To use, you should have the cohere python package installed, and the
environment variable COHERE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import CohereEmbeddings
cohere = CohereEmbeddings(
model="embed-english-light-v2.0", cohere_api_key="my-api-key"
)
Parameters
client (Any) β
model (str) β
truncate (Optional[str]) β
cohere_api_key (Optional[str]) β
Return type
None
attribute model: str = 'embed-english-v2.0'ο
Model name to use.
attribute truncate: Optional[str] = Noneο
Truncate embeddings that are too long from start or end (βNONEβ|βSTARTβ|βENDβ)
embed_documents(texts)[source]ο
Call out to Cohereβs embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Cohereβs embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source]ο
Bases: langchain.embeddings.base.Embeddings
Wrapper around Elasticsearch embedding models.
This class provides an interface to generate embeddings using a model deployed
in an Elasticsearch cluster. It requires an Elasticsearch connection object
and the model_id of the model deployed in the cluster.
In Elasticsearch you need to have an embedding model loaded and deployed.
- https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html
- https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html
Parameters
client (MlClient) β
model_id (str) β
input_field (str) β
classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source]ο
Instantiate embeddings from Elasticsearch credentials.
Parameters
model_id (str) β The model_id of the model deployed in the Elasticsearch
cluster.
input_field (str) β The name of the key for the input text field in the
document. Defaults to βtext_fieldβ.
es_cloud_id (Optional[str]) β The Elasticsearch cloud ID to connect to.
es_user (Optional[str]) β Elasticsearch username.
es_password (Optional[str]) β Elasticsearch password.
Return type
langchain.embeddings.elasticsearch.ElasticsearchEmbeddings
Example
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Credentials can be passed in two ways. Either set the env vars
# ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically
# pulled in, or pass them in directly as kwargs.
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
input_field=input_field,
# es_cloud_id="foo",
# es_user="bar",
# es_password="baz",
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source]ο
Instantiate embeddings from an existing Elasticsearch connection.
This method provides a way to create an instance of the ElasticsearchEmbeddings
class using an existing Elasticsearch connection. The connection object is used
to create an MlClient, which is then used to initialize the
ElasticsearchEmbeddings instance.
Args:
model_id (str): The model_id of the model deployed in the Elasticsearch cluster.
es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object.
input_field (str, optional): The name of the key for the input text field in the document. Defaults to βtext_fieldβ.
Returns:
ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example
from elasticsearch import Elasticsearch
from langchain.embeddings import ElasticsearchEmbeddings
# Define the model ID and input field name (if different from default)
model_id = "your_model_id"
# Optional, only if different from 'text_field'
input_field = "your_input_field"
# Create Elasticsearch connection
es_connection = Elasticsearch(
hosts=["localhost:9200"], http_auth=("user", "password")
)
# Instantiate ElasticsearchEmbeddings using the existing connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
input_field=input_field,
)
documents = [
"This is an example document.",
"Another example document to generate embeddings for.",
]
embeddings.embed_documents(documents)
Parameters
model_id (str) β
es_connection (Elasticsearch) β
input_field (str) β
Return type
ElasticsearchEmbeddings
embed_documents(texts)[source]ο
Generate embeddings for a list of documents.
Parameters
texts (List[str]) β A list of document text strings to generate embeddings
for.
Returns
A list of embeddings, one for each document in the input list.
Return type
List[List[float]]
embed_query(text)[source]ο
Generate an embedding for a single query text.
Parameters
text (str) β The query text to generate an embedding for.
Returns
The embedding for the input query text.
Return type
List[float]
class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=-1, seed=-1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around llama.cpp embedding models.
To use, you should have the llama-cpp-python library installed, and provide the
path to the Llama model as a named parameter to the constructor.
Check out: https://github.com/abetlen/llama-cpp-python
Example
from langchain.embeddings import LlamaCppEmbeddings
llama = LlamaCppEmbeddings(model_path="/path/to/model.bin")
Parameters
client (Any) β
model_path (str) β
n_ctx (int) β
n_parts (int) β
seed (int) β
f16_kv (bool) β
logits_all (bool) β
vocab_only (bool) β
use_mlock (bool) β
n_threads (Optional[int]) β
n_batch (Optional[int]) β
n_gpu_layers (Optional[int]) β
Return type
None
attribute f16_kv: bool = Falseο
Use half-precision for key/value cache.
attribute logits_all: bool = Falseο
Return logits for all tokens, not just the last token.
attribute n_batch: Optional[int] = 8ο
Number of tokens to process in parallel.
Should be a number between 1 and n_ctx.
attribute n_ctx: int = 512ο
Token context window.
attribute n_gpu_layers: Optional[int] = Noneο
Number of layers to be loaded into gpu memory. Default None.
attribute n_parts: int = -1ο
Number of parts to split the model into.
If -1, the number of parts is automatically determined.
attribute n_threads: Optional[int] = Noneο
Number of threads to use. If None, the number
of threads is automatically determined.
attribute seed: int = -1ο
Seed. If -1, a random seed is used.
attribute use_mlock: bool = Falseο
Force system to keep model in RAM.
attribute vocab_only: bool = Falseο
Only load the vocabulary, no weights.
embed_documents(texts)[source]ο
Embed a list of documents using the Llama model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using the Llama model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around HuggingFaceHub embedding models.
To use, you should have the huggingface_hub python package installed, and the
environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import HuggingFaceHubEmbeddings
repo_id = "sentence-transformers/all-mpnet-base-v2"
hf = HuggingFaceHubEmbeddings(
repo_id=repo_id,
task="feature-extraction",
huggingfacehub_api_token="my-api-key",
)
Parameters
client (Any) β
repo_id (str) β
task (Optional[str]) β
model_kwargs (Optional[dict]) β
huggingfacehub_api_token (Optional[str]) β
Return type
None
attribute model_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model.
attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute task: Optional[str] = 'feature-extraction'ο
Task to call the model with.
embed_documents(texts)[source]ο
Call out to HuggingFaceHubβs embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to HuggingFaceHubβs embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around modelscope_hub embedding models.
To use, you should have the modelscope python package installed.
Example
from langchain.embeddings import ModelScopeEmbeddings
model_id = "damo/nlp_corom_sentence-embedding_english-base"
embed = ModelScopeEmbeddings(model_id=model_id)
Parameters
embed (Any) β
model_id (str) β
Return type
None
attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a modelscope embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a modelscope embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around tensorflow_hub embedding models.
To use, you should have the tensorflow_text python package installed.
Example
from langchain.embeddings import TensorflowHubEmbeddings
url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3"
tf = TensorflowHubEmbeddings(model_url=url)
Parameters
embed (Any) β
model_url (str) β
Return type
None
attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3'ο
Model name to use.
embed_documents(texts)[source]ο
Compute doc embeddings using a TensorflowHub embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a TensorflowHub embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around custom Sagemaker Inference Endpoints.
To use, you must supply the endpoint name from your deployed
Sagemaker model & the region where it is deployed.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Sagemaker endpoint.
See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html
Parameters
client (Any) β
endpoint_name (str) β
region_name (str) β
credentials_profile_name (Optional[str]) β
content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) β
model_kwargs (Optional[Dict]) β
endpoint_kwargs (Optional[Dict]) β
Return type
None
attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]ο
The content handler class that provides an input and
output transform functions to handle formats between LLM
and the endpoint.
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute endpoint_kwargs: Optional[Dict] = Noneο
Optional attributes passed to the invoke_endpoint
function. See the boto3 docs for more info:
https://boto3.amazonaws.com/v1/documentation/api/latest/index.html
attribute endpoint_name: str = ''ο
The name of the endpoint from the deployed Sagemaker model.
Must be unique within an AWS Region.
attribute model_kwargs: Optional[Dict] = Noneο
Key word arguments to pass to the model.
attribute region_name: str = ''ο
The AWS region where the Sagemaker model is deployed, e.g. us-west-2.
embed_documents(texts, chunk_size=64)[source]ο
Compute doc embeddings using a SageMaker Inference Endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β The chunk size defines how many input texts will
be grouped together in one request. If None, will use the
chunk size specified by the class.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a SageMaker inference endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
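A minimal sketch of wiring up a content handler (the endpoint name, region and JSON payload shape are illustrative and depend on the deployed model):
Example
import json
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
class ContentHandler(EmbeddingsContentHandler):
    content_type = "application/json"
    accepts = "application/json"
    def transform_input(self, inputs, model_kwargs):
        # serialize the batch of texts into the request body the model expects
        return json.dumps({"text_inputs": inputs, **model_kwargs}).encode("utf-8")
    def transform_output(self, output):
        # read the endpoint response and pull out the embedding vectors
        return json.loads(output.read().decode("utf-8"))["embedding"]
embeddings = SagemakerEndpointEmbeddings(
    endpoint_name="my-embeddings-endpoint",  # hypothetical endpoint name
    region_name="us-west-2",
    content_handler=ContentHandler(),
)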
class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around sentence_transformers embedding models.
To use, you should have the sentence_transformers
and InstructorEmbedding python packages installed.
Example
from langchain.embeddings import HuggingFaceInstructEmbeddings
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
hf = HuggingFaceInstructEmbeddings(
model_name=model_name,
model_kwargs=model_kwargs,
encode_kwargs=encode_kwargs
)
Parameters
client (Any) β
model_name (str) β
cache_folder (Optional[str]) β
model_kwargs (Dict[str, Any]) β
encode_kwargs (Dict[str, Any]) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute cache_folder: Optional[str] = Noneο
Path to store models.
Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable.
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute encode_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional]ο
Key word arguments to pass to the model.
attribute model_name: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MosaicMLβs embedding inference service.
To use, you should have the
environment variable MOSAICML_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
Example
from langchain.embeddings import MosaicMLInstructorEmbeddings
endpoint_url = (
"https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict"
)
mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url,
mosaicml_api_token="my-api-key"
)
Parameters
endpoint_url (str) β
embed_instruction (str) β
query_instruction (str) β
retry_sleep (float) β
mosaicml_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction used to embed documents.
attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict'ο
Endpoint URL to use.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction used to embed the query.
attribute retry_sleep: float = 1.0ο
How long to sleep if a rate limit is encountered.
embed_documents(texts)[source]ο
Embed documents using a MosaicML deployed instructor embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MosaicML deployed instructor embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source]ο
Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example using a model load function:
from langchain.embeddings import SelfHostedEmbeddings
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import runhouse as rh
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
def get_pipeline():
model_id = "facebook/bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
return pipeline("feature-extraction", model=model, tokenizer=tokenizer)
embeddings = SelfHostedEmbeddings(
model_load_fn=get_pipeline,
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Example passing in a pipeline path:
from langchain.embeddings import SelfHostedHFEmbeddings
import pickle
import runhouse as rh
from transformers import pipeline
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
pipeline = pipeline(model="bert-base-uncased", task="feature-extraction")
rh.blob(pickle.dumps(pipeline),
path="models/pipeline.pkl").save().to(gpu, path="models")
embeddings = SelfHostedHFEmbeddings.from_pipeline(
pipeline="models/pipeline.pkl",
hardware=gpu,
model_reqs=["./", "torch", "transformers"],
)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
Return type
None
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings on the remote hardware.
attribute inference_kwargs: Any = Noneο
Any kwargs to pass to the modelβs inference function.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace transformer model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace transformer model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source]ο
Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings
Runs sentence_transformers embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another cloud
like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceEmbeddings
import runhouse as rh
model_name = "sentence-transformers/all-mpnet-base-v2"
gpu = rh.cluster(name="rh-a10x", instance_type="A100:1")
hf = SelfHostedHuggingFaceEmbeddings(model_id=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
Return type
None
attribute hardware: Any = Noneο
Remote hardware to send the inference function to.
attribute inference_fn: Callable = <function _embed_documents>ο
Inference function to extract the embeddings.
attribute load_fn_kwargs: Optional[dict] = Noneο
Key word arguments to pass to the model load function.
attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2'ο
Model name to use.
attribute model_load_fn: Callable = <function load_embedding_model>ο
Function to load the model remotely on the server.
attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch']ο
Requirements to install on hardware to inference the model.
class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source]ο
Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings
Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure,
and Lambda, as well as servers specified
by IP address and SSH credentials (such as on-prem, or another
cloud like Paperspace, Coreweave, etc.).
To use, you should have the runhouse python package installed.
Example
from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings
import runhouse as rh
model_name = "hkunlp/instructor-large"
gpu = rh.cluster(name='rh-a10x', instance_type='A100:1')
hf = SelfHostedHuggingFaceInstructEmbeddings(
model_id=model_name, hardware=gpu)
Parameters
cache (Optional[bool]) β
verbose (bool) β
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
tags (Optional[List[str]]) β
pipeline_ref (Any) β
client (Any) β
inference_fn (Callable) β
hardware (Any) β
model_load_fn (Callable) β
load_fn_kwargs (Optional[dict]) β
model_reqs (List[str]) β
inference_kwargs (Any) β
model_id (str) β
embed_instruction (str) β
query_instruction (str) β
Return type
None
attribute embed_instruction: str = 'Represent the document for retrieval: 'ο
Instruction to use for embedding documents.
attribute model_id: str = 'hkunlp/instructor-large'ο
Model name to use.
attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch']ο
Requirements to install on hardware to inference the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: 'ο
Instruction to use for embedding query.
embed_documents(texts)[source]ο
Compute doc embeddings using a HuggingFace instruct model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a HuggingFace instruct model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.FakeEmbeddings(*, size)[source]ο
Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel
Parameters
size (int) β
Return type
None
embed_documents(texts)[source]ο
Embed search docs.
Parameters
texts (List[str]) β
Return type
List[List[float]]
embed_query(text)[source]ο
Embed query text.
Parameters
text (str) β
Return type
List[float]
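A minimal usage sketch, e.g. for testing a pipeline without a real model (the size is illustrative):
Example
from langchain.embeddings import FakeEmbeddings
embeddings = FakeEmbeddings(size=1536)  # dimensionality of the random vectors
vectors = embeddings.embed_documents(["hello", "world"])
query_vector = embeddings.embed_query("hello")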
class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper for Aleph Alphaβs Asymmetric Embeddings
AA provides you with an endpoint to embed a document and a query.
The models were optimized to make the embeddings of documents and
the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/
Example
from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding
embeddings = AlephAlphaAsymmetricSemanticEmbedding()
document = "This is a content of the document"
query = "What is the content of the document?"
doc_result = embeddings.embed_documents([document])
query_result = embeddings.embed_query(query)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
attribute aleph_alpha_api_key: Optional[str] = Noneο
API key for Aleph Alpha API.
attribute compress_to_size: Optional[int] = 128ο
Whether the returned embeddings should come back as the original 5120-dimensional
vector or be compressed to 128 dimensions.
attribute contextual_control_threshold: Optional[int] = Noneο
Attention control parameters only apply to those tokens that have
explicitly been set in the request.
attribute control_log_additive: Optional[bool] = Trueο
Apply controls on prompt items by adding the log(control_factor)
to attention scores.
attribute hosting: Optional[str] = 'https://api.aleph-alpha.com'ο
Optional parameter that specifies which datacenters may process the request.
attribute model: Optional[str] = 'luminous-base'ο
Model name to use.
attribute normalize: Optional[bool] = Trueο
Should returned embeddings be normalized
embed_documents(texts)[source]ο
Call out to Aleph Alphaβs asymmetric Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alphaβs asymmetric query embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source]ο
Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding
The symmetric version of Aleph Alphaβs semantic embeddings.
The main difference is that here, both the documents and
queries are embedded with a SemanticRepresentation.Symmetric
Example
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding
embeddings = AlephAlphaSymmetricSemanticEmbedding()
text = "This is a test text"
doc_result = embeddings.embed_documents([text])
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (Optional[str]) β
hosting (Optional[str]) β
normalize (Optional[bool]) β
compress_to_size (Optional[int]) β
contextual_control_threshold (Optional[int]) β
control_log_additive (Optional[bool]) β
aleph_alpha_api_key (Optional[str]) β
Return type
None
embed_documents(texts)[source]ο
Call out to Aleph Alpha's Document endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to Aleph Alpha's symmetric, query embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
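Since embed_documents and embed_query both return plain Python lists of floats, downstream similarity scoring needs no extra machinery. A minimal sketch (assuming a valid Aleph Alpha API key in the environment; the texts and the cosine helper are illustrative, not part of this API):
import numpy as np
from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding

embeddings = AlephAlphaSymmetricSemanticEmbedding()
doc_vectors = embeddings.embed_documents(["Rome is in Italy", "Berlin is in Germany"])
query_vector = embeddings.embed_query("Where is Rome?")

# Cosine similarity between the query vector and each document vector.
def cosine(a, b):
    a, b = np.asarray(a), np.asarray(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(query_vector, v) for v in doc_vectors]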
langchain.embeddings.SentenceTransformerEmbeddingsο
alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings
class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around MiniMax's embedding inference service.
To use, you should have the environment variables MINIMAX_GROUP_ID and
MINIMAX_API_KEY set with your credentials, or pass them as named parameters to
the constructor.
Example
from langchain.embeddings import MiniMaxEmbeddings
embeddings = MiniMaxEmbeddings()
query_text = "This is a test query."
query_result = embeddings.embed_query(query_text)
document_text = "This is a test document."
document_result = embeddings.embed_documents([document_text])
Parameters
endpoint_url (str) β
model (str) β
embed_type_db (str) β
embed_type_query (str) β
minimax_group_id (Optional[str]) β
minimax_api_key (Optional[str]) β
Return type
None
attribute embed_type_db: str = 'db'ο
For embed_documents
attribute embed_type_query: str = 'query'ο
For embed_query
attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'ο
Endpoint URL to use.
attribute minimax_api_key: Optional[str] = Noneο
API Key for MiniMax API.
attribute minimax_group_id: Optional[str] = Noneο
Group ID for MiniMax API.
attribute model: str = 'embo-01'ο
Embeddings model name to use.
embed_documents(texts)[source]ο
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a MiniMax embedding endpoint.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
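Credentials can also be passed explicitly instead of via the environment; a minimal sketch (the id and key values are placeholders):
from langchain.embeddings import MiniMaxEmbeddings

embeddings = MiniMaxEmbeddings(
    minimax_group_id="your-group-id",  # placeholder
    minimax_api_key="your-api-key",    # placeholder
)
query_result = embeddings.embed_query("This is a test query.")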
class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to
automatically load credentials:
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass
the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to
access the Bedrock service.
Parameters
client (Any) β
region_name (Optional[str]) β
credentials_profile_name (Optional[str]) β
model_id (str) β
model_kwargs (Optional[Dict]) β
Return type
None
attribute credentials_profile_name: Optional[str] = Noneο
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which
has either access keys or role information specified.
If not specified, the default credential profile or, if on an EC2 instance,
credentials from IMDS will be used.
See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'ο
Id of the model to call, e.g., amazon.titan-e1t-medium; this is
equivalent to the modelId property in the list-foundation-models API.
attribute model_kwargs: Optional[Dict] = Noneο
Keyword arguments to pass to the model.
attribute region_name: Optional[str] = Noneο
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable
or the region specified in ~/.aws/config if it is not provided here.
embed_documents(texts, chunk_size=1)[source]ο
Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) β The list of texts to embed.
chunk_size (int) β Bedrock currently only allows single string
inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Compute query embeddings using a Bedrock model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
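The class docstring above has no usage example; a minimal sketch follows (the profile name, region, and model id are illustrative placeholders):
from langchain.embeddings import BedrockEmbeddings

embeddings = BedrockEmbeddings(
    credentials_profile_name="bedrock-admin",  # hypothetical profile in ~/.aws/credentials
    region_name="us-west-2",
    model_id="amazon.titan-e1t-medium",
)
doc_vectors = embeddings.embed_documents(["This is a test document."])
query_vector = embeddings.embed_query("test query")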
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra's embedding inference service.
To use, you should have the
environment variable DEEPINFRA_API_TOKEN set with your API token, or pass
it as a named parameter to the constructor.
There are multiple embeddings models available,
see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
model_id="sentence-transformers/clip-ViT-B-32",
deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
[
"Alpha is the first letter of Greek alphabet",
"Beta is the second letter of Greek alphabet",
]
)
r2 = deepinfra_emb.embed_query(
"What is the second letter of Greek alphabet"
)
Parameters
model_id (str) β
normalize (bool) β
embed_instruction (str) β
query_instruction (str) β
model_kwargs (Optional[dict]) β
deepinfra_api_token (Optional[str]) β
Return type
None
attribute embed_instruction: str = 'passage: 'ο
Instruction used to embed documents.
attribute model_id: str = 'sentence-transformers/clip-ViT-B-32'ο
Embeddings model to use.
attribute model_kwargs: Optional[dict] = Noneο
Other model keyword arguments.
attribute normalize: bool = Falseο
Whether to normalize the computed embeddings.
attribute query_instruction: str = 'query: 'ο
Instruction used to embed the query.
embed_documents(texts)[source]ο
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Embed a query using a Deep Infra deployed embedding model.
Parameters
text (str) β The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the
environment variable DASHSCOPE_API_KEY set with your API key or pass it
as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
from langchain.embeddings.dashscope import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(
model="text-embedding-v1",
)
text = "This is a test query."
query_result = embeddings.embed_query(text)
Parameters
client (Any) β
model (str) β
dashscope_api_key (Optional[str]) β
max_retries (int) β
Return type
None
attribute dashscope_api_key: Optional[str] = Noneο
API key for the DashScope API.
attribute max_retries: int = 5ο
Maximum number of retries to make when generating.
embed_documents(texts)[source]ο
Call out to DashScope's embedding endpoint for embedding search docs.
Parameters
texts (List[str]) β The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Call out to DashScope's embedding endpoint for embedding query text.
Parameters
text (str) β The text to embed.
Returns
Embedding for the text.
Return type
List[float]
class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source]ο
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around embaas's embedding service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()
# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
model=emb_model,
instruction=emb_inst
)
Parameters
model (str) β
instruction (Optional[str]) β
api_url (str) β
embaas_api_key (Optional[str]) β
Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'ο
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = Noneο
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'ο
The model used for embeddings.
embed_documents(texts)[source]ο
Get embeddings for a list of texts.
Parameters
texts (List[str]) β The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]ο
Get embeddings for a single text.
Parameters
text (str) β The text to get embeddings for.
Returns
Embeddings for the text.
Return type
List[float]
Utilitiesο
General utilities.
class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed,
and the environment variable APIFY_API_TOKEN set with your API key, or pass
apify_api_token as a named parameter to the constructor.
Parameters
apify_client (Any) β
apify_client_async (Any) β
Return type
None
attribute apify_client: Any = Noneο
attribute apify_client_async: Any = Noneο
async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to
an instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
async acall_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) β The ID or name of the task on the Apify platform.
task_input (Dict) β The input object of the task that you're trying to run.
Overrides the task's saved input.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run's default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) β The ID or name of the Actor on the Apify platform.
run_input (Dict) β The input object of the Actor that you're trying to run.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run's default dataset.
Return type
ApifyDatasetLoader
call_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]ο
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) β The ID or name of the task on the Apify platform.
task_input (Dict) β The input object of the task that you're trying to run.
Overrides the task's saved input.
dataset_mapping_function (Callable) β A function that takes a single
dictionary (an Apify dataset item) and converts it to an
instance of the Document class.
build (str, optional) β Optionally specifies the actor build to run.
It can be either a build tag or build number.
memory_mbytes (int, optional) β Optional memory limit for the run,
in megabytes.
timeout_secs (int, optional) β Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run's default dataset.
Return type
ApifyDatasetLoader
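A minimal sketch of call_actor; the actor id, run_input fields, and dataset item keys follow the pattern used in the Apify integration guides and should be treated as illustrative:
from langchain.schema import Document
from langchain.utilities import ApifyWrapper

apify = ApifyWrapper()  # reads APIFY_API_TOKEN from the environment

loader = apify.call_actor(
    actor_id="apify/website-content-crawler",  # illustrative Actor id
    run_input={"startUrls": [{"url": "https://docs.apify.com"}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text") or "",
        metadata={"source": item.get("url")},
    ),
)
docs = loader.load()  # fetches the records from the run's default dataset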
class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the arXiv API.
To use, you should have the arxiv python package installed.
https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and
fetch document summaries. By default, it will return the document summaries
of the top-k results.
It limits the Document content by doc_content_chars_max.
Set doc_content_chars_max=None if you don't want to limit the content size.
Parameters
top_k_results (int) β number of top-scored documents returned by the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) β the maximum length of the query passed to the arxiv tool.
load_max_docs (int) β a limit to the number of loaded documents
load_all_available_meta (bool) β
if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result),
if False: the metadata gets only the most informative fields.
arxiv_search (Any) β
arxiv_exceptions (Any) β
doc_content_chars_max (Optional[int]) β
Return type
None
attribute arxiv_exceptions: Any = Noneο
attribute doc_content_chars_max: Optional[int] = 4000ο
attribute load_all_available_meta: bool = Falseο
attribute load_max_docs: int = 100ο
attribute top_k_results: int = 3ο
load(query)[source]ο
Run Arxiv search and get the article texts plus the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Parameters
query (str) β
Return type
List[langchain.schema.Document]
run(query)[source]ο
Run Arxiv search and get the article meta information.
See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
Parameters
query (str) β
Return type
str
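A minimal sketch of run and load (the query text is illustrative):
from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=2000)
summary = arxiv.run("attention is all you need")  # article meta information as a string
docs = arxiv.load("attention is all you need")    # article texts as a list of Documents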
class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]ο
Bases: object
Executes bash commands and returns the output.
Parameters
strip_newlines (bool) β
return_err_output (bool) β
persistent (bool) β
run(commands)[source]ο
Run commands and return final output.
Parameters
commands (Union[str, List[str]]) β
Return type
str
process_output(output, command)[source]ο
Parameters
output (str) β
command (str) β
Return type
str
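A minimal sketch (the commands are illustrative):
from langchain.utilities import BashProcess

bash = BashProcess(strip_newlines=True)
output = bash.run(["echo 'hello'", "pwd"])  # run accepts a single string or a list of commands
print(output)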
class langchain.utilities.BibtexparserWrapper[source]ο
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed.
https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from
a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]ο
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) β
load_extra (bool) β
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]ο
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) β
Return type
List[Dict[str, Any]]
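A minimal sketch, assuming a bibtex file exists at the given (hypothetical) path:
from langchain.utilities import BibtexparserWrapper

wrapper = BibtexparserWrapper()
entries = wrapper.load_bibtex_entries("references.bib")  # hypothetical path
if entries:
    metadata = wrapper.get_metadata(entries[0], load_extra=False)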
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow instructions at:
https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) β
bing_search_url (str) β
k (int) β
Return type
None
attribute bing_search_url: str [Required]ο
attribute bing_subscription_key: str [Required]ο
attribute k: int = 10ο
results(query, num_results)[source]ο
Run query through BingSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]ο
Run query through BingSearch and parse result.
Parameters
query (str) β
Return type
str
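A minimal sketch; the subscription key is a placeholder, and the search URL shown is the commonly used v7 endpoint (verify it against your Azure resource):
from langchain.utilities import BingSearchAPIWrapper

search = BingSearchAPIWrapper(
    bing_subscription_key="<SUBSCRIPTION_KEY>",  # placeholder
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",  # verify for your resource
)
text = search.run("langchain")             # concatenated snippets as one string
metadata = search.results("langchain", 5)  # list of {snippet, title, link} dicts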
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]ο
Bases: pydantic.main.BaseModel
Parameters
api_key (str) β
search_kwargs (dict) β
Return type
None
attribute api_key: str [Required]ο
attribute search_kwargs: dict [Optional]ο
run(query)[source]ο
Parameters
query (str) β
Return type
str
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup.
Parameters
k (int) β
region (Optional[str]) β
safesearch (str) β
time (Optional[str]) β
max_results (int) β
Return type
None
attribute k: int = 10ο
attribute max_results: int = 5ο
attribute region: Optional[str] = 'wt-wt'ο
attribute safesearch: str = 'moderate'ο
attribute time: Optional[str] = 'y'ο
get_snippets(query)[source]ο
Run query through DuckDuckGo and return concatenated results.
Parameters
query (str) β
Return type
List[str]
results(query, num_results)[source]ο
Run query through DuckDuckGo and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]ο
Parameters
query (str) β
Return type
str
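A minimal sketch (no API key is required; the query is illustrative):
from langchain.utilities import DuckDuckGoSearchAPIWrapper

search = DuckDuckGoSearchAPIWrapper(region="us-en", time="w", max_results=3)
snippets = search.get_snippets("langchain")  # list of result snippets
metadata = search.results("langchain", 3)    # list of {snippet, title, link} dicts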
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform,
and the environment variable GPLACES_API_KEY
set with your API key, or pass gplaces_api_key
as a named parameter to the constructor.
By default, this will return all the results for the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) β
google_map_client (Any) β
top_k_results (Optional[int]) β
Return type
None
attribute gplaces_api_key: Optional[str] = Noneο
attribute top_k_results: Optional[int] = Noneο
fetch_place_details(place_id)[source]ο
Parameters
place_id (str) β
Return type
Optional[str]
format_place_details(place_details)[source]ο
Parameters
place_details (Dict[str, Any]) β
Return type
Optional[str]
run(query)[source]ο
Run Places search and get the top k places that match the query.
Parameters
query (str) β
Return type
str
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]ο
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Instructions adapted from https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
TODO: DOCS for using it
1. Install google-api-python-client
- If you don't already have a Google account, sign up.
- If you have never created a Google APIs Console project,
read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client
The current version of the library is 2.70.0 at the time of writing.
2. To create an API key:
- Navigate to the APIs & Services → Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY
3. Set up a Custom Search Engine so you can search the entire web
- Create a custom search engine at this link.
- In Sites to search, add any valid URL (e.g. www.stackoverflow.com).
- That's all you have to fill in; the rest doesn't matter.
In the left-side menu, click Edit search engine → {your search engine name}
→ Setup. Set Search the entire web to ON. Remove the URL you added from
the list of Sites to search.
- Under Search engine ID you'll find the search-engine-ID.
4. Enable the Custom Search API
- Navigate to the APIs & Services → Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
Parameters
search_engine (Any) β
google_api_key (Optional[str]) β
google_cse_id (Optional[str]) β
k (int) β
siterestrict (bool) β
Return type
None
attribute google_api_key: Optional[str] = Noneο
attribute google_cse_id: Optional[str] = Noneο
attribute k: int = 10ο
attribute siterestrict: bool = Falseο
results(query, num_results)[source]ο
Run query through GoogleSearch and return metadata.
Parameters
query (str) β The query to search for.
num_results (int) β The number of results to return.
Returns
A list of dictionaries with the following keys:
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
run(query)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
Return type
str
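Once the API key and search engine ID from the steps above are in hand, usage is straightforward; a minimal sketch (the key values are placeholders):
from langchain.utilities import GoogleSearchAPIWrapper

search = GoogleSearchAPIWrapper(
    google_api_key="<API_KEY>",          # placeholder
    google_cse_id="<SEARCH_ENGINE_ID>",  # placeholder
    k=5,
)
text = search.run("what is the tallest mountain on earth")
metadata = search.results("what is the tallest mountain on earth", 5)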
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]ο
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY
set with your API key, or pass serper_api_key as a named parameter
to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) β
gl (str) β
hl (str) β
type (Literal['news', 'search', 'places', 'images']) β
tbs (Optional[str]) β
serper_api_key (Optional[str]) β
aiosession (Optional[aiohttp.client.ClientSession]) β
result_key_for_type (dict) β
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = Noneο
attribute gl: str = 'us'ο
attribute hl: str = 'en'ο
attribute k: int = 10ο
attribute serper_api_key: Optional[str] = Noneο
attribute tbs: Optional[str] = Noneο
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'ο
async aresults(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
async arun(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result async.
Parameters
query (str) β
kwargs (Any) β
Return type
str
results(query, **kwargs)[source]ο
Run query through GoogleSearch.
Parameters
query (str) β
kwargs (Any) β
Return type
Dict
run(query, **kwargs)[source]ο
Run query through GoogleSearch and parse result.
Parameters
query (str) β
kwargs (Any) β
Return type
str
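Both sync and async entry points are exposed; a minimal async sketch (the API key is a placeholder, and the 'news' result shape is assumed from result_key_for_type above):
import asyncio
from langchain.utilities import GoogleSerperAPIWrapper

serper = GoogleSerperAPIWrapper(serper_api_key="<API_KEY>", type="news", k=5)

async def main() -> None:
    text = await serper.arun("latest AI research")     # parsed result as a string
    raw = await serper.aresults("latest AI research")  # raw response dict
    print(text)
    print(raw.get("news", [])[:1])

asyncio.run(main())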
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]ο
Bases: pydantic.main.BaseModel
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) β
graphql_endpoint (str) β
gql_client (Any) β
gql_function (Callable[[str], Any]) β
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = Noneο
attribute graphql_endpoint: str [Required]ο
run(query)[source]ο
Run a GraphQL query and get the results.
Parameters
query (str) β
Return type
str
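A minimal sketch, assuming the wrapper can build its gql client from graphql_endpoint alone (the endpoint and query are illustrative):
from langchain.utilities import GraphQLAPIWrapper

graphql = GraphQLAPIWrapper(
    graphql_endpoint="https://swapi-graphql.netlify.app/.netlify/functions/index",  # example public endpoint
)
result = graphql.run("{ allFilms { films { title } } }")
print(result)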