index_name (str) – content_key (str) – metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair class langchain.vectorstores.Tigris(client, embeddings, index_name)[source] Bases: langchain.vectorstores.base.VectorStore Parameters client (TigrisClient) – embeddings (Embeddings) – index_name (str) – property search_index: TigrisVectorStore add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids for documents. Ids will be autogenerated if not provided. kwargs (Any) – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – filter (Optional[TigrisFilter]) – kwargs (Any) – Return type List[Document] similarity_search_with_score(query, k=4, filter=None)[source] Run similarity search with Tigris with distance. Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[TigrisFilter]) – Filter by metadata. Defaults to None. Returns
List of documents most similar to the query text with distance in float. Return type List[Tuple[Document, float]] classmethod from_texts(texts, embedding, metadatas=None, ids=None, client=None, index_name=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – client (Optional[TigrisClient]) – index_name (Optional[str]) – kwargs (Any) – Return type Tigris class langchain.vectorstores.Typesense(typesense_client, embedding, *, typesense_collection_name=None, text_key='text')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Typesense vector search. To use, you should have the typesense python package installed. Example

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Typesense
    import typesense

    node = {
        "host": "localhost",  # For Typesense Cloud use xxx.a1.typesense.net
        "port": "8108",       # For Typesense Cloud use 443
        "protocol": "http",   # For Typesense Cloud use https
    }
    typesense_client = typesense.Client(
        {
            "nodes": [node],
            "api_key": "<API_KEY>",
            "connection_timeout_seconds": 2,
        }
    )
    typesense_collection_name = "langchain-memory"
    embedding = OpenAIEmbeddings()
    vectorstore = Typesense(
        typesense_client=typesense_client,
        embedding=embedding,
        typesense_collection_name=typesense_collection_name,
        text_key="text",
    )

Parameters typesense_client (Client) – embedding (Embeddings) – typesense_collection_name (Optional[str]) – text_key (str) – add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embedding and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=10, filter='')[source] Return typesense documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 10. A minimum of 10 results will be returned. filter (Optional[str]) – typesense filter_by expression to filter documents on Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=10, filter='', **kwargs)[source] Return typesense documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 10. A minimum of 10 results will be returned.
filter (Optional[str]) – typesense filter_by expression to filter documents on kwargs (Any) – Returns List of Documents most similar to the query Return type List[langchain.schema.Document] classmethod from_client_params(embedding, *, host='localhost', port='8108', protocol='http', typesense_api_key=None, connection_timeout_seconds=2, **kwargs)[source] Initialize Typesense directly from client parameters. Example

    from langchain.embeddings.openai import OpenAIEmbeddings
    from langchain.vectorstores import Typesense

    # Pass in typesense_api_key as kwarg or set env var "TYPESENSE_API_KEY".
    vectorstore = Typesense.from_client_params(
        OpenAIEmbeddings(),
        host="localhost",
        port="8108",
        protocol="http",
        typesense_collection_name="langchain-memory",
    )

Parameters embedding (langchain.embeddings.base.Embeddings) – host (str) – port (Union[str, int]) – protocol (str) – typesense_api_key (Optional[str]) – connection_timeout_seconds (int) – kwargs (Any) – Return type langchain.vectorstores.typesense.Typesense classmethod from_texts(texts, embedding, metadatas=None, ids=None, typesense_client=None, typesense_client_params=None, typesense_collection_name=None, text_key='text', **kwargs)[source] Construct Typesense wrapper from raw text. Parameters texts (List[str]) – embedding (Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – typesense_client (Optional[Client]) – typesense_client_params (Optional[dict]) –
typesense_collection_name (Optional[str]) – text_key (str) – kwargs (Any) – Return type Typesense class langchain.vectorstores.Vectara(vectara_customer_id=None, vectara_corpus_id=None, vectara_api_key=None)[source] Bases: langchain.vectorstores.base.VectorStore Implementation of Vector Store using Vectara (https://vectara.com). Example

    from langchain.vectorstores import Vectara

    vectorstore = Vectara(
        vectara_customer_id=vectara_customer_id,
        vectara_corpus_id=vectara_corpus_id,
        vectara_api_key=vectara_api_key
    )

Parameters vectara_customer_id (Optional[str]) – vectara_corpus_id (Optional[str]) – vectara_api_key (Optional[str]) – add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source] Return Vectara documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 5. lambda_val (float) – lexical match parameter for hybrid search.
filter (Optional[str]) – Dictionary of argument(s) to filter on metadata. For example, a filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details. n_sentence_context (int) – number of sentences before/after the matching segment to add kwargs (Any) – Returns List of Documents most similar to the query and score for each. Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=5, lambda_val=0.025, filter=None, n_sentence_context=0, **kwargs)[source] Return Vectara documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 5. filter (Optional[str]) – Dictionary of argument(s) to filter on metadata. For example, a filter can be "doc.rating > 3.0 and part.lang = 'deu'"; see https://docs.vectara.com/docs/search-apis/sql/filter-overview for more details. n_sentence_context (int) – number of sentences before/after the matching segment to add lambda_val (float) – kwargs (Any) – Returns List of Documents most similar to the query Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding=None, metadatas=None, **kwargs)[source] Construct Vectara wrapper from raw documents. This is intended to be a quick way to get started. Example

    from langchain import Vectara
    vectara = Vectara.from_texts(
        texts,
        vectara_customer_id=customer_id,
        vectara_corpus_id=corpus_id,
        vectara_api_key=api_key,
    )

Parameters texts (List[str]) – embedding (Optional[langchain.embeddings.base.Embeddings]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.vectara.Vectara as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.vectara.VectaraRetriever class langchain.vectorstores.VectorStore[source] Bases: abc.ABC Interface for vector stores. abstract add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters Returns List of ids from adding the texts into the vectorstore. Return type List[str] delete(ids)[source] Delete by vector ID. Parameters ids (List[str]) – List of ids to delete. Returns True if deletion is successful, False otherwise, None if not implemented. Return type Optional[bool] async aadd_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) –
kwargs (Any) – Return type List[str] add_documents(documents, **kwargs)[source] Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[langchain.schema.Document]) – Documents to add to the vectorstore. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] async aadd_documents(documents, **kwargs)[source] Run more documents through the embeddings and add to the vectorstore. Parameters documents (List[langchain.schema.Document]) – Documents to add to the vectorstore. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] search(query, search_type, **kwargs)[source] Return docs most similar to query using specified search type. Parameters query (str) – search_type (str) – kwargs (Any) – Return type List[langchain.schema.Document] async asearch(query, search_type, **kwargs)[source] Return docs most similar to query using specified search type. Parameters query (str) – search_type (str) – kwargs (Any) – Return type List[langchain.schema.Document] abstract similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document]
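To make the abstract interface concrete, here is a minimal sketch of add_documents and similarity_search; the `vectorstore` variable is a placeholder for any already-constructed VectorStore implementation (Tigris, Typesense, Weaviate, and so on), not part of the API.

    from langchain.schema import Document

    # `vectorstore` is assumed to be any concrete VectorStore instance.
    docs = [
        Document(page_content="LangChain wraps many vector databases.", metadata={"topic": "stores"}),
        Document(page_content="MMR balances relevance and diversity.", metadata={"topic": "search"}),
    ]
    ids = vectorstore.add_documents(docs)  # returns the ids of the added texts
    results = vectorstore.similarity_search("What is MMR?", k=2)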
similarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query (str) – input text k (int) – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include: score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs kwargs (Any) – Returns List of Tuples of (doc, similarity_score) Return type List[Tuple[langchain.schema.Document, float]] async asimilarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] async asimilarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document]
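A short sketch of similarity_search_with_relevance_scores using the score_threshold kwarg documented above; whether score_threshold is honored depends on the concrete store, so treat this as illustrative.

    # Relevance scores are normalized to [0, 1]; 1 is most similar.
    scored = vectorstore.similarity_search_with_relevance_scores(
        "how does MMR work?", k=4, score_threshold=0.8
    )
    for doc, score in scored:
        print(f"{score:.2f} {doc.page_content}")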
async asimilarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] async amax_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Parameters query (str) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source]
Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] async amax_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Parameters embedding (List[float]) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] classmethod from_documents(documents, embedding, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.vectorstores.base.VST async classmethod afrom_documents(documents, embedding, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type
langchain.vectorstores.base.VST abstract classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.base.VST async classmethod afrom_texts(texts, embedding, metadatas=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.base.VST as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.base.VectorStoreRetriever class langchain.vectorstores.Weaviate(client, index_name, text_key, embedding=None, attributes=None, relevance_score_fn=<function _default_score_normalizer>, by_text=True)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Weaviate vector database. To use, you should have the weaviate-client python package installed. Example

    import os

    import weaviate
    from langchain.vectorstores import Weaviate

    client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
    weaviate = Weaviate(client, index_name, text_key)

Parameters client (Any) – index_name (str) – text_key (str) – embedding (Optional[Embeddings]) – attributes (Optional[List[str]]) –
relevance_score_fn (Optional[Callable[[float], float]]) – by_text (bool) – add_texts(texts, metadatas=None, **kwargs)[source] Upload texts with metadata (properties) to Weaviate. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_text(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Look up similar documents by embedding vector in Weaviate. Parameters embedding (List[float]) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance.
Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, **kwargs)[source] Return list of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Parameters
query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Construct Weaviate wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in the Weaviate instance. Adds the documents to the newly created Weaviate index. This is intended to be a quick way to get started. Example

    from langchain.vectorstores.weaviate import Weaviate
    from langchain.embeddings import OpenAIEmbeddings

    embeddings = OpenAIEmbeddings()
    weaviate = Weaviate.from_texts(
        texts,
        embeddings,
        weaviate_url="http://localhost:8080"
    )

Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.weaviate.Weaviate delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None
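Putting the Weaviate wrapper together, a hedged end-to-end sketch of from_texts, an MMR query, and delete; it assumes a Weaviate instance at http://localhost:8080 and an OPENAI_API_KEY in the environment.

    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import Weaviate

    db = Weaviate.from_texts(
        ["doc one", "doc two", "doc three"],
        OpenAIEmbeddings(),
        weaviate_url="http://localhost:8080",  # assumed local instance
    )
    # lambda_mult near 1 favors relevance, near 0 favors diversity.
    docs = db.max_marginal_relevance_search("doc", k=2, fetch_k=10, lambda_mult=0.5)
    ids = db.add_texts(["a fourth doc"])  # add_texts returns the new ids
    db.delete(ids)                        # remove them again by id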
Agent Toolkits
langchain.agents.agent_toolkits.create_json_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with JSON.\nYour goal is to return a final answer by interacting with the JSON.\nYou have access to the following tools which help you learn more about the JSON you are interacting with.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nDo not make up any information that is not contained in the JSON.\nYour input to the tools should be in the form of `data["key"][0]` where `data` is the JSON blob you are interacting with, and the syntax used is Python. \nYou should only use keys that you know for a fact exist. You must validate that a key exists by seeing it previously when calling `json_spec_list_keys`. \nIf you have not seen a key in one of those responses, you cannot use it.\nYou should only add one key at a time to the path. You cannot add multiple keys at once.\nIf you encounter a "KeyError", go back to the previous key, look at the available keys, and try again.\n\nIf the question does not seem to be related to the JSON, just return "I don\'t know" as the answer.\nAlways begin your interaction with the `json_spec_list_keys` tool with input "data" to see what keys exist in the JSON.\n\nNote that sometimes the value at a given path is large. In this case, you will get an error "Value is a large dictionary, should explore its keys directly".\nIn this case, you should ALWAYS follow up by using the `json_spec_list_keys` tool to see what keys exist at that path.\nDo not simply refer the user to the JSON or a section of the JSON, as this is not a valid answer. Keep digging until you find the
answer and explicitly return it.\n', suffix='Begin!"\n\nQuestion: {input}\nThought: I should look at the keys that exist in data to see what I have access to\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a json agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.json.toolkit.JsonToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
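A minimal sketch of wiring create_json_agent to a JsonToolkit; the sample data, the OpenAI model choice, and the max_value_length value are assumptions, not part of the API above.

    from langchain.agents.agent_toolkits import JsonToolkit, create_json_agent
    from langchain.llms import OpenAI
    from langchain.tools.json.tool import JsonSpec

    data = {"name": "langchain", "dependencies": {"pydantic": ">=1"}}  # any parsed JSON blob
    toolkit = JsonToolkit(spec=JsonSpec(dict_=data, max_value_length=4000))
    agent = create_json_agent(llm=OpenAI(temperature=0), toolkit=toolkit, verbose=True)
    agent.run("What dependencies are declared?")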
langchain.agents.agent_toolkits.create_sql_agent(llm, toolkit, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix='You are an agent designed to interact with a SQL database.\nGiven an input question, create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix=None, format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False,
agent_executor_kwargs=None, **kwargs)[source]
Construct a SQL agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (Optional[str]) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
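A hedged sketch of create_sql_agent against a local SQLite file; the database path and the question are placeholders.

    from langchain.agents.agent_toolkits import SQLDatabaseToolkit, create_sql_agent
    from langchain.llms import OpenAI
    from langchain.sql_database import SQLDatabase

    llm = OpenAI(temperature=0)
    db = SQLDatabase.from_uri("sqlite:///example.db")  # placeholder database
    agent = create_sql_agent(llm=llm, toolkit=SQLDatabaseToolkit(db=db, llm=llm), top_k=10)
    agent.run("How many rows does the users table contain?")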
langchain.agents.agent_toolkits.create_openapi_agent(llm, toolkit, callback_manager=None, prefix="You are an agent designed to answer questions by making web requests to an API given the openapi spec.\n\nIf the question does not seem related to the API, return I don't know. Do not make up an answer.\nOnly use information provided by the tools to construct your response.\n\nFirst, find the base URL needed to make the request.\n\nSecond, find the relevant paths needed to answer the question. Take note that, sometimes, you might need to make more than one request to more than one path to answer the question.\n\nThird, find the required parameters needed to make the request. For GET requests, these are usually URL parameters and for POST requests, these are request body parameters.\n\nFourth, make the requests needed to answer the question. Ensure that you are sending the correct parameters to the request by checking which parameters are required. For parameters with a fixed set of values, please use the spec to look at which values are allowed.\n\nUse the exact parameter names as listed in the spec, do not make up any names or abbreviate the names of parameters.\nIf you get a not found error, ensure that you are using a path that actually exists in the spec.\n", suffix='Begin!\n\nQuestion: {input}\nThought: I should explore the spec to find the base url for the API.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final
answer\nFinal Answer: the final answer to the original input question', input_variables=None, max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, return_intermediate_steps=False, agent_executor_kwargs=None, **kwargs)[source]
Construct an OpenAPI agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – return_intermediate_steps (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
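One way to assemble the pieces for create_openapi_agent, sketched under the assumption that a spec file named openapi.yaml sits on disk; OpenAPIToolkit.from_llm and TextRequestsWrapper are documented further down in this reference.

    import yaml

    from langchain.agents.agent_toolkits import OpenAPIToolkit, create_openapi_agent
    from langchain.llms import OpenAI
    from langchain.requests import TextRequestsWrapper
    from langchain.tools.json.tool import JsonSpec

    with open("openapi.yaml") as f:  # assumed spec file
        raw_spec = yaml.safe_load(f)

    llm = OpenAI(temperature=0)
    toolkit = OpenAPIToolkit.from_llm(llm, JsonSpec(dict_=raw_spec), TextRequestsWrapper())
    agent = create_openapi_agent(llm=llm, toolkit=toolkit)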
langchain.agents.agent_toolkits.create_pbi_agent(llm, toolkit, powerbi=None, callback_manager=None, prefix='You are an agent designed to help users interact with a PowerBI Dataset.\n\nAgent has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I can first ask which tables I have, then how each table is defined and then ask the query tool the question I need, and finally create a nice sentence that answers the question.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', examples=None,
input_variables=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a PowerBI agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
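A sketch of create_pbi_agent; constructing the PowerBIToolkit itself requires a PowerBIDataset and Azure credentials, which are elided here, so `toolkit` is a placeholder for a toolkit configured as described under PowerBIToolkit below.

    from langchain.agents.agent_toolkits import create_pbi_agent
    from langchain.llms import OpenAI

    # `toolkit` is assumed to be a PowerBIToolkit wired to your dataset.
    agent = create_pbi_agent(llm=OpenAI(temperature=0), toolkit=toolkit, top_k=10)
    agent.run("How many rows does the sales table have?")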
langchain.agents.agent_toolkits.create_pbi_chat_agent(llm, toolkit, powerbi=None, callback_manager=None, output_parser=None, prefix='Assistant is a large language model built to help users interact with a PowerBI Dataset.\n\nAssistant has access to a tool that can write a query based on the question and then run those against PowerBI, Microsofts business intelligence tool. The questions from the users should be interpreted as related to the dataset that is available and not general questions about the world. If the question does not seem related to the dataset, just return "This does not appear to be part of this dataset." as the answer.\n\nGiven an input question, ask to run the questions against the dataset, then look at the results and return the answer, the answer should be a complete sentence that answers the question, if multiple rows are asked find a way to write that in a easily readable format for a human, also make sure to represent numbers in readable ways, like 1M instead of 1000000. Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\n', suffix="TOOLS\n------\nAssistant can ask the user to use tools to look up information that may be helpful in answering the users original question. The tools the human can use are:\n\n{{tools}}\n\n{format_instructions}\n\nUSER'S INPUT\n--------------------\nHere is the user's input (remember to respond with a markdown code snippet of a json blob with a single action, and NOTHING else):\n\n{{{{input}}}}\n", examples=None, input_variables=None, memory=None, top_k=10, verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a PowerBI agent from a chat LLM and tools.
If you supply only a toolkit and no powerbi dataset, the same LLM is used for both. Parameters llm (langchain.chat_models.base.BaseChatModel) – toolkit (Optional[langchain.agents.agent_toolkits.powerbi.toolkit.PowerBIToolkit]) – powerbi (Optional[langchain.utilities.powerbi.PowerBIDataset]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – output_parser (Optional[langchain.agents.agent.AgentOutputParser]) – prefix (str) – suffix (str) – examples (Optional[str]) – input_variables (Optional[List[str]]) – memory (Optional[langchain.memory.chat_memory.BaseChatMemory]) – top_k (int) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_python_agent(llm, tool, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, verbose=False, prefix='You are an agent designed to write and execute python code to answer questions.\nYou have access to a python REPL, which you can use to execute python code.\nIf you get an error, debug your code and try again.\nOnly use the output of your code to answer the question. \nYou might know the answer without running any code, but you should still run the code to get the answer.\nIf it does not seem like you can write code to answer the question, just return "I don\'t know" as the answer.\n', agent_executor_kwargs=None, **kwargs)[source]
Construct a Python agent from an LLM and tool. Parameters llm (langchain.base_language.BaseLanguageModel) – tool (langchain.tools.python.tool.PythonREPLTool) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – verbose (bool) – prefix (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
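For create_python_agent above, a minimal sketch using the PythonREPLTool named in its signature; the model choice and question are assumptions.

    from langchain.agents.agent_toolkits import create_python_agent
    from langchain.llms import OpenAI
    from langchain.tools.python.tool import PythonREPLTool

    agent = create_python_agent(llm=OpenAI(temperature=0), tool=PythonREPLTool(), verbose=True)
    agent.run("What is the 10th Fibonacci number?")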
class langchain.agents.agent_toolkits.JsonToolkit(*, spec)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with a JSON spec. Parameters spec (langchain.tools.json.tool.JsonSpec) – Return type None attribute spec: langchain.tools.json.tool.JsonSpec [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.SQLDatabaseToolkit(*, db, llm)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with SQL databases. Parameters db (langchain.sql_database.SQLDatabase) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute db: langchain.sql_database.SQLDatabase [Required] attribute llm: langchain.base_language.BaseLanguageModel [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] property dialect: str Return string representation of dialect to use. class langchain.agents.agent_toolkits.SparkSQLToolkit(*, db, llm)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with Spark SQL. Parameters db (langchain.utilities.spark_sql.SparkSQL) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute db: langchain.utilities.spark_sql.SparkSQL [Required] attribute llm: langchain.base_language.BaseLanguageModel [Required] get_tools()[source] Get the tools in the toolkit. Return type
List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.NLAToolkit(*, nla_tools)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Natural Language API Toolkit Definition. Parameters nla_tools (Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool]) – Return type None attribute nla_tools: Sequence[langchain.agents.agent_toolkits.nla.tool.NLATool] [Required] List of API Endpoint Tools. classmethod from_llm_and_ai_plugin(llm, ai_plugin, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an AI plugin. Parameters llm (langchain.base_language.BaseLanguageModel) – ai_plugin (langchain.tools.plugin.AIPlugin) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_ai_plugin_url(llm, ai_plugin_url, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an AI plugin URL. Parameters llm (langchain.base_language.BaseLanguageModel) – ai_plugin_url (str) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_spec(llm, spec, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit by creating tools for each operation. Parameters
llm (langchain.base_language.BaseLanguageModel) – spec (langchain.utilities.openapi.OpenAPISpec) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit classmethod from_llm_and_url(llm, open_api_url, requests=None, verbose=False, **kwargs)[source] Instantiate the toolkit from an OpenAPI Spec URL Parameters llm (langchain.base_language.BaseLanguageModel) – open_api_url (str) – requests (Optional[langchain.requests.Requests]) – verbose (bool) – kwargs (Any) – Return type langchain.agents.agent_toolkits.nla.toolkit.NLAToolkit get_tools()[source] Get the tools for all the API operations. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.PowerBIToolkit(*, powerbi, llm, examples=None, max_iterations=5, callback_manager=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with PowerBI dataset. Parameters powerbi (langchain.utilities.powerbi.PowerBIDataset) – llm (langchain.base_language.BaseLanguageModel) – examples (Optional[str]) – max_iterations (int) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – Return type None attribute callback_manager: Optional[langchain.callbacks.base.BaseCallbackManager] = None attribute examples: Optional[str] = None
attribute llm: langchain.base_language.BaseLanguageModel [Required] attribute max_iterations: int = 5 attribute powerbi: langchain.utilities.powerbi.PowerBIDataset [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.OpenAPIToolkit(*, json_agent, requests_wrapper)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with an OpenAPI API. Parameters json_agent (langchain.agents.agent.AgentExecutor) – requests_wrapper (langchain.requests.TextRequestsWrapper) – Return type None attribute json_agent: langchain.agents.agent.AgentExecutor [Required] attribute requests_wrapper: langchain.requests.TextRequestsWrapper [Required] classmethod from_llm(llm, json_spec, requests_wrapper, **kwargs)[source] Create json agent from llm, then initialize. Parameters llm (langchain.base_language.BaseLanguageModel) – json_spec (langchain.tools.json.tool.JsonSpec) – requests_wrapper (langchain.requests.TextRequestsWrapper) – kwargs (Any) – Return type langchain.agents.agent_toolkits.openapi.toolkit.OpenAPIToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.VectorStoreToolkit(*, vectorstore_info, llm=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with a vector store. Parameters
vectorstore_info (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Optional] attribute vectorstore_info: langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] langchain.agents.agent_toolkits.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore router agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor class langchain.agents.agent_toolkits.VectorStoreInfo(*, vectorstore, name, description)[source] Bases: pydantic.main.BaseModel Information about a vectorstore. Parameters
vectorstore (langchain.vectorstores.base.VectorStore) – name (str) – description (str) – Return type None attribute description: str [Required] attribute name: str [Required] attribute vectorstore: langchain.vectorstores.base.VectorStore [Required] class langchain.agents.agent_toolkits.VectorStoreRouterToolkit(*, vectorstores, llm=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for routing between vector stores. Parameters vectorstores (List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo]) – llm (langchain.base_language.BaseLanguageModel) – Return type None attribute llm: langchain.base_language.BaseLanguageModel [Optional] attribute vectorstores: List[langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreInfo] [Required] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] langchain.agents.agent_toolkits.create_pandas_dataframe_agent(llm, df, agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION, callback_manager=None, prefix=None, suffix=None, input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, include_df_in_prompt=True, **kwargs)[source] Construct a pandas agent from an LLM and dataframe. Parameters llm (langchain.base_language.BaseLanguageModel) – df (Any) – agent_type (langchain.agents.agent_types.AgentType) –
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (Optional[str]) – suffix (Optional[str]) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – include_df_in_prompt (Optional[bool]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_spark_dataframe_agent(llm, df, callback_manager=None, prefix='\nYou are working with a spark dataframe in Python. The name of the dataframe is `df`.\nYou should use the tools below to answer the question posed of you:', suffix='\nThis is the result of `print(df.first())`:\n{df}\n\nBegin!\nQuestion: {input}\n{agent_scratchpad}', input_variables=None, verbose=False, return_intermediate_steps=False, max_iterations=15, max_execution_time=None, early_stopping_method='force', agent_executor_kwargs=None, **kwargs)[source] Construct a spark agent from an LLM and dataframe. Parameters llm (langchain.llms.base.BaseLLM) – df (Any) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – input_variables (Optional[List[str]]) – verbose (bool) – return_intermediate_steps (bool) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) –
early_stopping_method (str) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor
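A hedged sketch of create_pandas_dataframe_agent (create_csv_agent, further below, is a thin wrapper that first loads the file into a dataframe); the CSV path is a placeholder.

    import pandas as pd

    from langchain.agents.agent_toolkits import create_pandas_dataframe_agent
    from langchain.llms import OpenAI

    df = pd.read_csv("titanic.csv")  # placeholder file
    agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
    agent.run("How many rows are there?")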
langchain.agents.agent_toolkits.create_spark_sql_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to interact with Spark SQL.\nGiven an input question, create a syntactically correct Spark SQL query to run, then look at the results of the query and return the answer.\nUnless the user specifies a specific number of examples they wish to obtain, always limit your query to at most {top_k} results.\nYou can order the results by a relevant column to return the most interesting examples in the database.\nNever query for all the columns from a specific table, only ask for the relevant columns given the question.\nYou have access to tools for interacting with the database.\nOnly use the below tools. Only use the information returned by the below tools to construct your final answer.\nYou MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.\n\nDO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.\n\nIf the question does not seem related to the database, just return "I don\'t know" as the answer.\n', suffix='Begin!\n\nQuestion: {input}\nThought: I should look at the tables in the database to see what I can query.\n{agent_scratchpad}', format_instructions='Use the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question', input_variables=None, top_k=10,
max_iterations=15, max_execution_time=None, early_stopping_method='force', verbose=False, agent_executor_kwargs=None, **kwargs)[source]
Construct a Spark SQL agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.spark_sql.toolkit.SparkSQLToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (str) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.agent_toolkits.create_csv_agent(llm, path, pandas_kwargs=None, **kwargs)[source] Create a CSV agent by loading the file into a dataframe and using the pandas agent. Parameters llm (langchain.base_language.BaseLanguageModel) – path (Union[str, List[str]]) – pandas_kwargs (Optional[dict]) – kwargs (Any) – Return type langchain.agents.agent.AgentExecutor class langchain.agents.agent_toolkits.ZapierToolkit(*, tools=[])[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Zapier Toolkit. Parameters tools (List[langchain.tools.base.BaseTool]) – Return type None attribute tools: List[langchain.tools.base.BaseTool] = [] async classmethod async_from_zapier_nla_wrapper(zapier_nla_wrapper)[source] Create a toolkit from a ZapierNLAWrapper. Parameters
zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – Return type langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit classmethod from_zapier_nla_wrapper(zapier_nla_wrapper)[source] Create a toolkit from a ZapierNLAWrapper. Parameters zapier_nla_wrapper (langchain.utilities.zapier.ZapierNLAWrapper) – Return type langchain.agents.agent_toolkits.zapier.toolkit.ZapierToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.GmailToolkit(*, api_resource=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with Gmail. Parameters api_resource (Resource) – Return type None attribute api_resource: Resource [Optional] get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.JiraToolkit(*, tools=[])[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Jira Toolkit. Parameters tools (List[langchain.tools.base.BaseTool]) – Return type None attribute tools: List[langchain.tools.base.BaseTool] = [] classmethod from_jira_api_wrapper(jira_api_wrapper)[source] Parameters jira_api_wrapper (langchain.utilities.jira.JiraAPIWrapper) – Return type langchain.agents.agent_toolkits.jira.toolkit.JiraToolkit get_tools()[source]
Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.FileManagementToolkit(*, root_dir=None, selected_tools=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for interacting with local files. Parameters root_dir (Optional[str]) – selected_tools (Optional[List[str]]) – Return type None attribute root_dir: Optional[str] = None If specified, all file operations are made relative to root_dir. attribute selected_tools: Optional[List[str]] = None If provided, only provide the selected tools. Defaults to all. get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.PlayWrightBrowserToolkit(*, sync_browser=None, async_browser=None)[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for web browser tools. Parameters sync_browser (Optional['SyncBrowser']) – async_browser (Optional['AsyncBrowser']) – Return type None attribute async_browser: Optional['AsyncBrowser'] = None attribute sync_browser: Optional['SyncBrowser'] = None classmethod from_browser(sync_browser=None, async_browser=None)[source] Instantiate the toolkit. Parameters sync_browser (Optional[SyncBrowser]) – async_browser (Optional[AsyncBrowser]) – Return type PlayWrightBrowserToolkit get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool] class langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source]
class langchain.agents.agent_toolkits.AzureCognitiveServicesToolkit[source] Bases: langchain.agents.agent_toolkits.base.BaseToolkit Toolkit for Azure Cognitive Services. Return type None get_tools()[source] Get the tools in the toolkit. Return type List[langchain.tools.base.BaseTool]
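These toolkits share one pattern: instantiate the toolkit, call get_tools(), and pass the tools to an agent. A minimal sketch using the FileManagementToolkit documented above, assuming an OpenAI API key is configured; the tool names given to selected_tools are assumptions of this sketch:
from langchain.agents import initialize_agent, AgentType
from langchain.agents.agent_toolkits import FileManagementToolkit
from langchain.llms import OpenAI

# restrict file operations to one directory and two tools
toolkit = FileManagementToolkit(
    root_dir="/tmp/workdir",
    selected_tools=["read_file", "write_file"],
)
agent = initialize_agent(
    tools=toolkit.get_tools(),
    llm=OpenAI(temperature=0),
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("Write 'hello' to notes.txt, then read the file back.")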
Output Parsers class langchain.output_parsers.BooleanOutputParser(*, true_val='YES', false_val='NO')[source] Bases: langchain.schema.BaseOutputParser[bool] Parameters true_val (str) – false_val (str) – Return type None attribute false_val: str = 'NO' attribute true_val: str = 'YES' parse(text)[source] Parse the output of an LLM call to a boolean. Parameters text (str) – output of language model Returns boolean Return type bool class langchain.output_parsers.CombiningOutputParser(*, parsers)[source] Bases: langchain.schema.BaseOutputParser Class to combine multiple output parsers into one. Parameters parsers (List[langchain.schema.BaseOutputParser]) – Return type None attribute parsers: List[langchain.schema.BaseOutputParser] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, Any] class langchain.output_parsers.CommaSeparatedListOutputParser[source] Bases: langchain.output_parsers.list.ListOutputParser Parse out comma separated lists. Return type None get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str]
Parameters text (str) – Return type List[str] class langchain.output_parsers.DatetimeOutputParser(*, format='%Y-%m-%dT%H:%M:%S.%fZ')[source] Bases: langchain.schema.BaseOutputParser[datetime.datetime] Parameters format (str) – Return type None attribute format: str = '%Y-%m-%dT%H:%M:%S.%fZ' get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters response (str) – output of language model Returns structured output Return type datetime.datetime class langchain.output_parsers.EnumOutputParser(*, enum)[source] Bases: langchain.schema.BaseOutputParser Parameters enum (Type[enum.Enum]) – Return type None attribute enum: Type[enum.Enum] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(response)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters response (str) – output of language model Returns structured output Return type Any class langchain.output_parsers.GuardrailsOutputParser(*, guard=None, api=None, args=None, kwargs=None)[source]
Bases: langchain.schema.BaseOutputParser Parameters guard (Any) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type None attribute api: Optional[Callable] = None attribute args: Any = None attribute guard: Any = None attribute kwargs: Any = None classmethod from_pydantic(output_class, num_reasks=1, api=None, *args, **kwargs)[source] Parameters output_class (Any) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser classmethod from_rail(rail_file, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_file (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser classmethod from_rail_string(rail_str, num_reasks=1, api=None, *args, **kwargs)[source] Parameters rail_str (str) – num_reasks (int) – api (Optional[Callable]) – args (Any) – kwargs (Any) – Return type langchain.output_parsers.rail_parser.GuardrailsOutputParser get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type Dict class langchain.output_parsers.ListOutputParser[source] Bases: langchain.schema.BaseOutputParser Class to parse the output of an LLM call to a list. Return type None abstract parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type List[str] class langchain.output_parsers.OutputFixingParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] Wraps a parser and tries to fix parsing errors. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T] [Required]
attribute retry_chain: langchain.chains.llm.LLMChain [Required] classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'instructions'], output_parser=None, partial_variables={}, template='Instructions:\n--------------\n{instructions}\n--------------\nCompletion:\n--------------\n{completion}\n--------------\n\nAbove, the Completion did not satisfy the constraints given in the Instructions.\nError:\n--------------\n{error}\n--------------\n\nPlease try again. Please only respond with an answer that satisfies the constraints laid out in the Instructions:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.fix.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.fix.OutputFixingParser[langchain.output_parsers.fix.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters completion (str) – output of language model Returns structured output Return type langchain.output_parsers.fix.T class langchain.output_parsers.PydanticOutputParser(*, pydantic_object)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.pydantic.T] Parameters pydantic_object (Type[langchain.output_parsers.pydantic.T]) –
Return type None attribute pydantic_object: Type[langchain.output_parsers.pydantic.T] [Required] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type langchain.output_parsers.pydantic.T class langchain.output_parsers.RegexDictParser(*, regex_pattern="{}:\\s?([^.'\\n']*)\\.?", output_key_to_format, no_update_value=None)[source] Bases: langchain.schema.BaseOutputParser Class to parse the output into a dictionary. Parameters regex_pattern (str) – output_key_to_format (Dict[str, str]) – no_update_value (Optional[str]) – Return type None attribute no_update_value: Optional[str] = None attribute output_key_to_format: Dict[str, str] [Required] attribute regex_pattern: str = "{}:\\s?([^.'\\n']*)\\.?" parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, str] class langchain.output_parsers.RegexParser(*, regex, output_keys, default_output_key=None)[source] Bases: langchain.schema.BaseOutputParser Class to parse the output into a dictionary. Parameters regex (str) – output_keys (List[str]) –
default_output_key (Optional[str]) – Return type None attribute default_output_key: Optional[str] = None attribute output_keys: List[str] [Required] attribute regex: str [Required] parse(text)[source] Parse the output of an LLM call. Parameters text (str) – Return type Dict[str, str] class langchain.output_parsers.ResponseSchema(*, name, description, type='string')[source] Bases: pydantic.main.BaseModel Parameters name (str) – description (str) – type (str) – Return type None attribute description: str [Required] attribute name: str [Required] attribute type: str = 'string' class langchain.output_parsers.RetryOutputParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt and the completion to another LLM, and telling it the completion did not satisfy criteria in the prompt. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required]
classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nPlease try again:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.retry.RetryOutputParser[langchain.output_parsers.retry.T] get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters completion (str) – output of language model Returns structured output Return type langchain.output_parsers.retry.T parse_with_prompt(completion, prompt_value)[source] Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion (str) – output of language model prompt_value (langchain.schema.PromptValue) – prompt value
Returns structured output Return type langchain.output_parsers.retry.T class langchain.output_parsers.RetryWithErrorOutputParser(*, parser, retry_chain)[source] Bases: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] Wraps a parser and tries to fix parsing errors. Does this by passing the original prompt, the completion, AND the error that was raised to another language model and telling it that the completion did not work, and raised the given error. Differs from RetryOutputParser in that this implementation provides the error that was raised back to the LLM, which in theory should give it more information on how to fix it. Parameters parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – retry_chain (langchain.chains.llm.LLMChain) – Return type None attribute parser: langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T] [Required] attribute retry_chain: langchain.chains.llm.LLMChain [Required] classmethod from_llm(llm, parser, prompt=PromptTemplate(input_variables=['completion', 'error', 'prompt'], output_parser=None, partial_variables={}, template='Prompt:\n{prompt}\nCompletion:\n{completion}\n\nAbove, the Completion did not satisfy the constraints given in the Prompt.\nDetails: {error}\nPlease try again:', template_format='f-string', validate_template=True))[source] Parameters llm (langchain.base_language.BaseLanguageModel) – parser (langchain.schema.BaseOutputParser[langchain.output_parsers.retry.T]) – prompt (langchain.prompts.base.BasePromptTemplate) – Return type langchain.output_parsers.retry.RetryWithErrorOutputParser[langchain.output_parsers.retry.T]
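A minimal sketch of building this parser with from_llm; OpenAI stands in for any BaseLanguageModel, DatetimeOutputParser (documented above) is the wrapped parser, and bad_output/prompt_value are placeholders:
from langchain.llms import OpenAI
from langchain.output_parsers import DatetimeOutputParser, RetryWithErrorOutputParser

retry_parser = RetryWithErrorOutputParser.from_llm(
    llm=OpenAI(temperature=0),
    parser=DatetimeOutputParser(),
)
# parse_with_prompt (documented below) re-asks the LLM with the
# original prompt, the failed completion, and the raised error:
# fixed = retry_parser.parse_with_prompt(bad_output, prompt_value)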
get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(completion)[source] Parse the output of an LLM call. A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters completion (str) – output of language model Returns structured output Return type langchain.output_parsers.retry.T parse_with_prompt(completion, prompt_value)[source] Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion (str) – output of language model prompt_value (langchain.schema.PromptValue) – prompt value Returns structured output Return type langchain.output_parsers.retry.T class langchain.output_parsers.StructuredOutputParser(*, response_schemas)[source] Bases: langchain.schema.BaseOutputParser Parameters response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) – Return type None attribute response_schemas: List[langchain.output_parsers.structured.ResponseSchema] [Required] classmethod from_response_schemas(response_schemas)[source] Parameters response_schemas (List[langchain.output_parsers.structured.ResponseSchema]) – Return type langchain.output_parsers.structured.StructuredOutputParser get_format_instructions()[source] Instructions on how the LLM output should be formatted. Return type str parse(text)[source] Parse the output of an LLM call.
A method which takes in a string (assumed output of a language model) and parses it into some structure. Parameters text (str) – output of language model Returns structured output Return type Any
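As a usage sketch for this section, StructuredOutputParser combines the documented pieces; the schema names and the llm_output placeholder are illustrative:
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

schemas = [
    ResponseSchema(name="answer", description="answer to the user's question"),
    ResponseSchema(name="source", description="source used for the answer"),
]
parser = StructuredOutputParser.from_response_schemas(schemas)
# embed these instructions in the prompt so the LLM emits parseable output
format_instructions = parser.get_format_instructions()
# then turn the raw completion back into a dict:
# result = parser.parse(llm_output)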
Embeddings Wrappers around embedding modules. class langchain.embeddings.OpenAIEmbeddings(*, client=None, model='text-embedding-ada-002', deployment='text-embedding-ada-002', openai_api_version=None, openai_api_base=None, openai_api_type=None, openai_proxy=None, embedding_ctx_length=8191, openai_api_key=None, openai_organization=None, allowed_special={}, disallowed_special='all', chunk_size=1000, max_retries=6, request_timeout=None, headers=None, tiktoken_model_name=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around OpenAI embedding models. To use, you should have the openai python package installed, and the environment variable OPENAI_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import OpenAIEmbeddings openai = OpenAIEmbeddings(openai_api_key="my-api-key") In order to use the library with Microsoft Azure endpoints, you need to set the OPENAI_API_TYPE, OPENAI_API_BASE, OPENAI_API_KEY and OPENAI_API_VERSION. The OPENAI_API_TYPE must be set to 'azure' and the others correspond to the properties of your endpoint. In addition, the deployment name must be passed as the model parameter. Example import os os.environ["OPENAI_API_TYPE"] = "azure" os.environ["OPENAI_API_BASE"] = "https://your-endpoint.openai.azure.com/" os.environ["OPENAI_API_KEY"] = "your AzureOpenAI key" os.environ["OPENAI_API_VERSION"] = "2023-03-15-preview" os.environ["OPENAI_PROXY"] = "http://your-corporate-proxy:8080"
from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings( deployment="your-embeddings-deployment-name", model="your-embeddings-model-name", openai_api_base="https://your-endpoint.openai.azure.com/", openai_api_type="azure", ) text = "This is a test query." query_result = embeddings.embed_query(text) Parameters client (Any) – model (str) – deployment (str) – openai_api_version (Optional[str]) – openai_api_base (Optional[str]) – openai_api_type (Optional[str]) – openai_proxy (Optional[str]) – embedding_ctx_length (int) – openai_api_key (Optional[str]) – openai_organization (Optional[str]) – allowed_special (Union[Literal['all'], typing.Set[str]]) – disallowed_special (Union[Literal['all'], typing.Set[str], typing.Sequence[str]]) – chunk_size (int) – max_retries (int) – request_timeout (Optional[Union[float, Tuple[float, float]]]) – headers (Any) – tiktoken_model_name (Optional[str]) – Return type None attribute chunk_size: int = 1000 Maximum number of texts to embed in each batch. attribute max_retries: int = 6 Maximum number of retries to make when generating. attribute request_timeout: Optional[Union[float, Tuple[float, float]]] = None Timeout in seconds for the OpenAI request. attribute tiktoken_model_name: Optional[str] = None
The model name to pass to tiktoken when using this class. Tiktoken is used to count the number of tokens in documents to constrain them to be under a certain limit. By default, when set to None, this will be the same as the embedding model name. However, there are some cases where you may want to use this Embedding class with a model name not supported by tiktoken. This can include when using Azure embeddings or when using one of the many model providers that expose an OpenAI-like API but with different models. In those cases, in order to avoid erroring when tiktoken is called, you can specify a model name to use here. async aembed_documents(texts, chunk_size=0)[source] Call out to OpenAI's embedding endpoint async for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] async aembed_query(text)[source] Call out to OpenAI's embedding endpoint async for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] embed_documents(texts, chunk_size=0)[source] Call out to OpenAI's embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size (Optional[int]) – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source]
Call out to OpenAI's embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] class langchain.embeddings.HuggingFaceEmbeddings(*, client=None, model_name='sentence-transformers/all-mpnet-base-v2', cache_folder=None, model_kwargs=None, encode_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers python package installed. Example from langchain.embeddings import HuggingFaceEmbeddings model_name = "sentence-transformers/all-mpnet-base-v2" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': False} hf = HuggingFaceEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Parameters client (Any) – model_name (str) – cache_folder (Optional[str]) – model_kwargs (Dict[str, Any]) – encode_kwargs (Dict[str, Any]) – Return type None attribute cache_folder: Optional[str] = None Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable. attribute encode_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass when calling the encode method of the model. attribute model_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass to the model. attribute model_name: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use.
embed_documents(texts)[source] Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.CohereEmbeddings(*, client=None, model='embed-english-v2.0', truncate=None, cohere_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around Cohere embedding models. To use, you should have the cohere python package installed, and the environment variable COHERE_API_KEY set with your API key or pass it as a named parameter to the constructor. Example from langchain.embeddings import CohereEmbeddings cohere = CohereEmbeddings( model="embed-english-light-v2.0", cohere_api_key="my-api-key" ) Parameters client (Any) – model (str) – truncate (Optional[str]) – cohere_api_key (Optional[str]) – Return type None attribute model: str = 'embed-english-v2.0' Model name to use. attribute truncate: Optional[str] = None Truncate embeddings that are too long from start or end ("NONE"|"START"|"END") embed_documents(texts)[source] Call out to Cohere's embedding endpoint.
Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Cohere's embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.ElasticsearchEmbeddings(client, model_id, *, input_field='text_field')[source] Bases: langchain.embeddings.base.Embeddings Wrapper around Elasticsearch embedding models. This class provides an interface to generate embeddings using a model deployed in an Elasticsearch cluster. It requires an Elasticsearch connection object and the model_id of the model deployed in the cluster. In Elasticsearch you need to have an embedding model loaded and deployed. - https://www.elastic.co/guide/en/elasticsearch/reference/current/infer-trained-model.html - https://www.elastic.co/guide/en/machine-learning/current/ml-nlp-deploy-models.html Parameters client (MlClient) – model_id (str) – input_field (str) – classmethod from_credentials(model_id, *, es_cloud_id=None, es_user=None, es_password=None, input_field='text_field')[source] Instantiate embeddings from Elasticsearch credentials. Parameters model_id (str) – The model_id of the model deployed in the Elasticsearch cluster. input_field (str) – The name of the key for the input text field in the document. Defaults to 'text_field'. es_cloud_id (Optional[str]) – (str, optional): The Elasticsearch cloud ID to connect to. es_user (Optional[str]) – (str, optional): Elasticsearch username.
es_password (Optional[str]) – (str, optional): Elasticsearch password. Return type langchain.embeddings.elasticsearch.ElasticsearchEmbeddings Example from langchain.embeddings import ElasticsearchEmbeddings # Define the model ID and input field name (if different from default) model_id = "your_model_id" # Optional, only if different from 'text_field' input_field = "your_input_field" # Credentials can be passed in two ways. Either set the env vars # ES_CLOUD_ID, ES_USER, ES_PASSWORD and they will be automatically # pulled in, or pass them in directly as kwargs. embeddings = ElasticsearchEmbeddings.from_credentials( model_id, input_field=input_field, # es_cloud_id="foo", # es_user="bar", # es_password="baz", ) documents = [ "This is an example document.", "Another example document to generate embeddings for.", ] embeddings.embed_documents(documents) classmethod from_es_connection(model_id, es_connection, input_field='text_field')[source] Instantiate embeddings from an existing Elasticsearch connection. This method provides a way to create an instance of the ElasticsearchEmbeddings class using an existing Elasticsearch connection. The connection object is used to create an MlClient, which is then used to initialize the ElasticsearchEmbeddings instance. Args: model_id (str): The model_id of the model deployed in the Elasticsearch cluster. es_connection (elasticsearch.Elasticsearch): An existing Elasticsearch connection object. input_field (str, optional): The name of the key for the input text field in the document. Defaults to 'text_field'. Returns: ElasticsearchEmbeddings: An instance of the ElasticsearchEmbeddings class.
Example from elasticsearch import Elasticsearch from langchain.embeddings import ElasticsearchEmbeddings # Define the model ID and input field name (if different from default) model_id = "your_model_id" # Optional, only if different from 'text_field' input_field = "your_input_field" # Create Elasticsearch connection es_connection = Elasticsearch( hosts=["localhost:9200"], http_auth=("user", "password") ) # Instantiate ElasticsearchEmbeddings using the existing connection embeddings = ElasticsearchEmbeddings.from_es_connection( model_id, es_connection, input_field=input_field, ) documents = [ "This is an example document.", "Another example document to generate embeddings for.", ] embeddings.embed_documents(documents) Parameters model_id (str) – es_connection (Elasticsearch) – input_field (str) – Return type ElasticsearchEmbeddings embed_documents(texts)[source] Generate embeddings for a list of documents. Parameters texts (List[str]) – A list of document text strings to generate embeddings for. Returns A list of embeddings, one for each document in the input list. Return type List[List[float]] embed_query(text)[source] Generate an embedding for a single query text. Parameters text (str) – The query text to generate an embedding for. Returns The embedding for the input query text. Return type List[float] class langchain.embeddings.LlamaCppEmbeddings(*, client=None, model_path, n_ctx=512, n_parts=-1, seed=-1, f16_kv=False, logits_all=False, vocab_only=False, use_mlock=False, n_threads=None, n_batch=8, n_gpu_layers=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around llama.cpp embedding models. To use, you should have the llama-cpp-python library installed, and provide the path to the Llama model as a named parameter to the constructor. Check out: https://github.com/abetlen/llama-cpp-python Example from langchain.embeddings import LlamaCppEmbeddings llama = LlamaCppEmbeddings(model_path="/path/to/model.bin") Parameters client (Any) – model_path (str) – n_ctx (int) – n_parts (int) – seed (int) – f16_kv (bool) – logits_all (bool) – vocab_only (bool) – use_mlock (bool) – n_threads (Optional[int]) – n_batch (Optional[int]) – n_gpu_layers (Optional[int]) – Return type None attribute f16_kv: bool = False Use half-precision for key/value cache. attribute logits_all: bool = False Return logits for all tokens, not just the last token. attribute n_batch: Optional[int] = 8 Number of tokens to process in parallel. Should be a number between 1 and n_ctx. attribute n_ctx: int = 512 Token context window. attribute n_gpu_layers: Optional[int] = None Number of layers to be loaded into gpu memory. Default None. attribute n_parts: int = -1 Number of parts to split the model into. If -1, the number of parts is automatically determined. attribute n_threads: Optional[int] = None
Number of threads to use. If None, the number of threads is automatically determined. attribute seed: int = -1 Seed. If -1, a random seed is used. attribute use_mlock: bool = False Force system to keep model in RAM. attribute vocab_only: bool = False Only load the vocabulary, no weights. embed_documents(texts)[source] Embed a list of documents using the Llama model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using the Llama model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.HuggingFaceHubEmbeddings(*, client=None, repo_id='sentence-transformers/all-mpnet-base-v2', task='feature-extraction', model_kwargs=None, huggingfacehub_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around HuggingFaceHub embedding models. To use, you should have the huggingface_hub python package installed, and the environment variable HUGGINGFACEHUB_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import HuggingFaceHubEmbeddings repo_id = "sentence-transformers/all-mpnet-base-v2" hf = HuggingFaceHubEmbeddings( repo_id=repo_id, task="feature-extraction", huggingfacehub_api_token="my-api-key", ) Parameters client (Any) –
repo_id (str) – task (Optional[str]) – model_kwargs (Optional[dict]) – huggingfacehub_api_token (Optional[str]) – Return type None attribute model_kwargs: Optional[dict] = None Keyword arguments to pass to the model. attribute repo_id: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. attribute task: Optional[str] = 'feature-extraction' Task to call the model with. embed_documents(texts)[source] Call out to HuggingFaceHub's embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to HuggingFaceHub's embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.ModelScopeEmbeddings(*, embed=None, model_id='damo/nlp_corom_sentence-embedding_english-base')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around modelscope_hub embedding models. To use, you should have the modelscope python package installed. Example from langchain.embeddings import ModelScopeEmbeddings model_id = "damo/nlp_corom_sentence-embedding_english-base" embed = ModelScopeEmbeddings(model_id=model_id) Parameters
embed (Any) – model_id (str) – Return type None attribute model_id: str = 'damo/nlp_corom_sentence-embedding_english-base' Model name to use. embed_documents(texts)[source] Compute doc embeddings using a modelscope embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a modelscope embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.TensorflowHubEmbeddings(*, embed=None, model_url='https://tfhub.dev/google/universal-sentence-encoder-multilingual/3')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around tensorflow_hub embedding models. To use, you should have the tensorflow_text python package installed. Example from langchain.embeddings import TensorflowHubEmbeddings url = "https://tfhub.dev/google/universal-sentence-encoder-multilingual/3" tf = TensorflowHubEmbeddings(model_url=url) Parameters embed (Any) – model_url (str) – Return type None attribute model_url: str = 'https://tfhub.dev/google/universal-sentence-encoder-multilingual/3' Model name to use. embed_documents(texts)[source] Compute doc embeddings using a TensorflowHub embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]]
embed_query(text)[source] Compute query embeddings using a TensorflowHub embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SagemakerEndpointEmbeddings(*, client=None, endpoint_name='', region_name='', credentials_profile_name=None, content_handler, model_kwargs=None, endpoint_kwargs=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around custom Sagemaker Inference Endpoints. To use, you must supply the endpoint name from your deployed Sagemaker model & the region where it is deployed. To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used. Make sure the credentials / roles used have the required policies to access the Sagemaker endpoint. See: https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies.html Parameters client (Any) – endpoint_name (str) – region_name (str) – credentials_profile_name (Optional[str]) – content_handler (langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler) – model_kwargs (Optional[Dict]) – endpoint_kwargs (Optional[Dict]) – Return type None attribute content_handler: langchain.embeddings.sagemaker_endpoint.EmbeddingsContentHandler [Required]
The content handler class that provides input and output transform functions to handle formats between the LLM and the endpoint. attribute credentials_profile_name: Optional[str] = None The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html attribute endpoint_kwargs: Optional[Dict] = None Optional attributes passed to the invoke_endpoint function. See the boto3 docs for more info: https://boto3.amazonaws.com/v1/documentation/api/latest/index.html attribute endpoint_name: str = '' The name of the endpoint from the deployed Sagemaker model. Must be unique within an AWS Region. attribute model_kwargs: Optional[Dict] = None Keyword arguments to pass to the model. attribute region_name: str = '' The AWS region where the Sagemaker model is deployed, e.g., us-west-2. embed_documents(texts, chunk_size=64)[source] Compute doc embeddings using a SageMaker Inference Endpoint. Parameters texts (List[str]) – The list of texts to embed. chunk_size (int) – The chunk size defines how many input texts will be grouped together as a request. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a SageMaker inference endpoint.
Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.HuggingFaceInstructEmbeddings(*, client=None, model_name='hkunlp/instructor-large', cache_folder=None, model_kwargs=None, encode_kwargs=None, embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around sentence_transformers embedding models. To use, you should have the sentence_transformers and InstructorEmbedding python packages installed. Example from langchain.embeddings import HuggingFaceInstructEmbeddings model_name = "hkunlp/instructor-large" model_kwargs = {'device': 'cpu'} encode_kwargs = {'normalize_embeddings': True} hf = HuggingFaceInstructEmbeddings( model_name=model_name, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs ) Parameters client (Any) – model_name (str) – cache_folder (Optional[str]) – model_kwargs (Dict[str, Any]) – encode_kwargs (Dict[str, Any]) – embed_instruction (str) – query_instruction (str) – Return type None attribute cache_folder: Optional[str] = None Path to store models. Can be also set by SENTENCE_TRANSFORMERS_HOME environment variable. attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute encode_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass when calling the encode method of the model.
attribute model_kwargs: Dict[str, Any] [Optional] Keyword arguments to pass to the model. attribute model_name: str = 'hkunlp/instructor-large' Model name to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding query. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.MosaicMLInstructorEmbeddings(*, endpoint_url='https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ', retry_sleep=1.0, mosaicml_api_token=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MosaicML's embedding inference service. To use, you should have the environment variable MOSAICML_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import MosaicMLInstructorEmbeddings endpoint_url = ( "https://models.hosted-on.mosaicml.hosting/instructor-large/v1/predict" ) mosaic_llm = MosaicMLInstructorEmbeddings(
endpoint_url=endpoint_url, mosaicml_api_token="my-api-key" ) Parameters endpoint_url (str) – embed_instruction (str) – query_instruction (str) – retry_sleep (float) – mosaicml_api_token (Optional[str]) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction used to embed documents. attribute endpoint_url: str = 'https://models.hosted-on.mosaicml.hosting/instructor-xl/v1/predict' Endpoint URL to use. attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction used to embed the query. attribute retry_sleep: float = 1.0 How long to try sleeping for if a rate limit is encountered embed_documents(texts)[source] Embed documents using a MosaicML deployed instructor embedding model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Embed a query using a MosaicML deployed instructor embedding model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SelfHostedEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn, load_fn_kwargs=None, model_reqs=['./', 'torch'], inference_kwargs=None)[source] Bases: langchain.llms.self_hosted.SelfHostedPipeline, langchain.embeddings.base.Embeddings
Runs custom embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example using a model load function: from langchain.embeddings import SelfHostedEmbeddings from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline import runhouse as rh gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") def get_pipeline(): model_id = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained(model_id) return pipeline("feature-extraction", model=model, tokenizer=tokenizer) embeddings = SelfHostedEmbeddings( model_load_fn=get_pipeline, hardware=gpu, model_reqs=["./", "torch", "transformers"], ) Example passing in a pipeline path: from langchain.embeddings import SelfHostedHFEmbeddings import runhouse as rh import pickle from transformers import pipeline gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") pipeline = pipeline(model="bert-base-uncased", task="feature-extraction") rh.blob(pickle.dumps(pipeline), path="models/pipeline.pkl").save().to(gpu, path="models") embeddings = SelfHostedHFEmbeddings.from_pipeline( pipeline="models/pipeline.pkl", hardware=gpu, model_reqs=["./", "torch", "transformers"],
) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – Return type None attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings on the remote hardware. attribute inference_kwargs: Any = None Any kwargs to pass to the model's inference function. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace transformer model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace transformer model. Parameters text (str) – The text to embed.
Returns Embeddings for the text. Return type List[float] class langchain.embeddings.SelfHostedHuggingFaceEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'sentence_transformers', 'torch'], inference_kwargs=None, model_id='sentence-transformers/all-mpnet-base-v2')[source] Bases: langchain.embeddings.self_hosted.SelfHostedEmbeddings Runs sentence_transformers embedding models on self-hosted remote hardware. Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceEmbeddings import runhouse as rh model_name = "sentence-transformers/all-mpnet-base-v2" gpu = rh.cluster(name="rh-a10x", instance_type="A100:1") hf = SelfHostedHuggingFaceEmbeddings(model_name=model_name, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) –
model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – Return type None attribute hardware: Any = None Remote hardware to send the inference function to. attribute inference_fn: Callable = <function _embed_documents> Inference function to extract the embeddings. attribute load_fn_kwargs: Optional[dict] = None Keyword arguments to pass to the model load function. attribute model_id: str = 'sentence-transformers/all-mpnet-base-v2' Model name to use. attribute model_load_fn: Callable = <function load_embedding_model> Function to load the model remotely on the server. attribute model_reqs: List[str] = ['./', 'sentence_transformers', 'torch'] Requirements to install on hardware to inference the model. class langchain.embeddings.SelfHostedHuggingFaceInstructEmbeddings(*, cache=None, verbose=None, callbacks=None, callback_manager=None, tags=None, pipeline_ref=None, client=None, inference_fn=<function _embed_documents>, hardware=None, model_load_fn=<function load_embedding_model>, load_fn_kwargs=None, model_reqs=['./', 'InstructorEmbedding', 'torch'], inference_kwargs=None, model_id='hkunlp/instructor-large', embed_instruction='Represent the document for retrieval: ', query_instruction='Represent the question for retrieving supporting documents: ')[source] Bases: langchain.embeddings.self_hosted_hugging_face.SelfHostedHuggingFaceEmbeddings Runs InstructorEmbedding embedding models on self-hosted remote hardware.
Supported hardware includes auto-launched instances on AWS, GCP, Azure, and Lambda, as well as servers specified by IP address and SSH credentials (such as on-prem, or another cloud like Paperspace, Coreweave, etc.). To use, you should have the runhouse python package installed. Example from langchain.embeddings import SelfHostedHuggingFaceInstructEmbeddings import runhouse as rh model_name = "hkunlp/instructor-large" gpu = rh.cluster(name='rh-a10x', instance_type='A100:1') hf = SelfHostedHuggingFaceInstructEmbeddings( model_name=model_name, hardware=gpu) Parameters cache (Optional[bool]) – verbose (bool) – callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – tags (Optional[List[str]]) – pipeline_ref (Any) – client (Any) – inference_fn (Callable) – hardware (Any) – model_load_fn (Callable) – load_fn_kwargs (Optional[dict]) – model_reqs (List[str]) – inference_kwargs (Any) – model_id (str) – embed_instruction (str) – query_instruction (str) – Return type None attribute embed_instruction: str = 'Represent the document for retrieval: ' Instruction to use for embedding documents. attribute model_id: str = 'hkunlp/instructor-large' Model name to use. attribute model_reqs: List[str] = ['./', 'InstructorEmbedding', 'torch'] Requirements to install on hardware to inference the model.
attribute query_instruction: str = 'Represent the question for retrieving supporting documents: ' Instruction to use for embedding query. embed_documents(texts)[source] Compute doc embeddings using a HuggingFace instruct model. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Compute query embeddings using a HuggingFace instruct model. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.FakeEmbeddings(*, size)[source] Bases: langchain.embeddings.base.Embeddings, pydantic.main.BaseModel Parameters size (int) – Return type None embed_documents(texts)[source] Embed search docs. Parameters texts (List[str]) – Return type List[List[float]] embed_query(text)[source] Embed query text. Parameters text (str) – Return type List[float] class langchain.embeddings.AlephAlphaAsymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper for Aleph Alpha's Asymmetric Embeddings. AA provides you with an endpoint to embed a document and a query. The models were optimized to make the embeddings of documents and the query for a document as similar as possible.
To learn more, check out: https://docs.aleph-alpha.com/docs/tasks/semantic_embed/ Example from langchain.embeddings import AlephAlphaAsymmetricSemanticEmbedding embeddings = AlephAlphaAsymmetricSemanticEmbedding() document = "This is the content of the document" query = "What is the content of the document?" doc_result = embeddings.embed_documents([document]) query_result = embeddings.embed_query(query) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) – aleph_alpha_api_key (Optional[str]) – Return type None attribute aleph_alpha_api_key: Optional[str] = None API key for Aleph Alpha API. attribute compress_to_size: Optional[int] = 128 Should the returned embeddings come back as an original 5120-dim vector, or should it be compressed to 128-dim. attribute contextual_control_threshold: Optional[int] = None Attention control parameters only apply to those tokens that have explicitly been set in the request. attribute control_log_additive: Optional[bool] = True Apply controls on prompt items by adding the log(control_factor) to attention scores. attribute hosting: Optional[str] = 'https://api.aleph-alpha.com' Optional parameter that specifies which datacenters may process the request. attribute model: Optional[str] = 'luminous-base' Model name to use.
attribute normalize: Optional[bool] = True Should returned embeddings be normalized. embed_documents(texts)[source] Call out to Aleph Alpha's asymmetric Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's asymmetric query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] class langchain.embeddings.AlephAlphaSymmetricSemanticEmbedding(*, client=None, model='luminous-base', hosting='https://api.aleph-alpha.com', normalize=True, compress_to_size=128, contextual_control_threshold=None, control_log_additive=True, aleph_alpha_api_key=None)[source] Bases: langchain.embeddings.aleph_alpha.AlephAlphaAsymmetricSemanticEmbedding The symmetric version of Aleph Alpha's semantic embeddings. The main difference is that here, both the documents and queries are embedded with a SemanticRepresentation.Symmetric Example from langchain.embeddings import AlephAlphaSymmetricSemanticEmbedding embeddings = AlephAlphaSymmetricSemanticEmbedding() text = "This is a test text" doc_result = embeddings.embed_documents([text]) query_result = embeddings.embed_query(text) Parameters client (Any) – model (Optional[str]) – hosting (Optional[str]) – normalize (Optional[bool]) – compress_to_size (Optional[int]) – contextual_control_threshold (Optional[int]) – control_log_additive (Optional[bool]) –
aleph_alpha_api_key (Optional[str]) – Return type None embed_documents(texts)[source] Call out to Aleph Alpha's Document endpoint. Parameters texts (List[str]) – The list of texts to embed. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to Aleph Alpha's symmetric query embedding endpoint. Parameters text (str) – The text to embed. Returns Embeddings for the text. Return type List[float] langchain.embeddings.SentenceTransformerEmbeddings alias of langchain.embeddings.huggingface.HuggingFaceEmbeddings class langchain.embeddings.MiniMaxEmbeddings(*, endpoint_url='https://api.minimax.chat/v1/embeddings', model='embo-01', embed_type_db='db', embed_type_query='query', minimax_group_id=None, minimax_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around MiniMax's embedding inference service. To use, you should have the environment variables MINIMAX_GROUP_ID and MINIMAX_API_KEY set with your API token, or pass it as a named parameter to the constructor. Example from langchain.embeddings import MiniMaxEmbeddings embeddings = MiniMaxEmbeddings() query_text = "This is a test query." query_result = embeddings.embed_query(query_text) document_text = "This is a test document." document_result = embeddings.embed_documents([document_text]) Parameters endpoint_url (str) – model (str) – embed_type_db (str) –
minimax_api_key (Optional[str]) –
Return type
None
attribute embed_type_db: str = 'db'
For embed_documents
attribute embed_type_query: str = 'query'
For embed_query
attribute endpoint_url: str = 'https://api.minimax.chat/v1/embeddings'
Endpoint URL to use.
attribute minimax_api_key: Optional[str] = None
API Key for MiniMax API.
attribute minimax_group_id: Optional[str] = None
Group ID for MiniMax API.
attribute model: str = 'embo-01'
Embeddings model name to use.
embed_documents(texts)[source]
Embed documents using a MiniMax embedding endpoint.
Parameters
texts (List[str]) – The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]
Embed a query using a MiniMax embedding endpoint.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.BedrockEmbeddings(*, client=None, region_name=None, credentials_profile_name=None, model_id='amazon.titan-e1t-medium', model_kwargs=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Embeddings provider to invoke Bedrock embedding models.
To authenticate, the AWS client uses the following methods to automatically load credentials: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
If a specific credential profile should be used, you must pass the name of the profile from the ~/.aws/credentials file that is to be used.
Make sure the credentials / roles used have the required policies to access the Bedrock service.
Parameters
client (Any) –
region_name (Optional[str]) –
credentials_profile_name (Optional[str]) –
model_id (str) –
model_kwargs (Optional[Dict]) –
Return type
None
attribute credentials_profile_name: Optional[str] = None
The name of the profile in the ~/.aws/credentials or ~/.aws/config files, which has either access keys or role information specified. If not specified, the default credential profile or, if on an EC2 instance, credentials from IMDS will be used. See: https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html
attribute model_id: str = 'amazon.titan-e1t-medium'
ID of the model to call, e.g., amazon.titan-e1t-medium; this is equivalent to the modelId property in the list-foundation-models API.
attribute model_kwargs: Optional[Dict] = None
Keyword arguments to pass to the model.
attribute region_name: Optional[str] = None
The AWS region, e.g., us-west-2. Falls back to the AWS_DEFAULT_REGION environment variable or the region specified in ~/.aws/config if not provided here.
embed_documents(texts, chunk_size=1)[source]
Compute doc embeddings using a Bedrock model.
Parameters
texts (List[str]) – The list of texts to embed.
chunk_size (int) – Bedrock currently only allows single string inputs, so chunk size is always 1. This input is here
only for compatibility with the embeddings interface.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]
Compute query embeddings using a Bedrock model.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DeepInfraEmbeddings(*, model_id='sentence-transformers/clip-ViT-B-32', normalize=False, embed_instruction='passage: ', query_instruction='query: ', model_kwargs=None, deepinfra_api_token=None)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around Deep Infra’s embedding inference service.
To use, you should have the environment variable DEEPINFRA_API_TOKEN set with your API token, or pass it as a named parameter to the constructor. There are multiple embeddings models available, see https://deepinfra.com/models?type=embeddings.
Example
from langchain.embeddings import DeepInfraEmbeddings
deepinfra_emb = DeepInfraEmbeddings(
    model_id="sentence-transformers/clip-ViT-B-32",
    deepinfra_api_token="my-api-key"
)
r1 = deepinfra_emb.embed_documents(
    [
        "Alpha is the first letter of Greek alphabet",
        "Beta is the second letter of Greek alphabet",
    ]
)
r2 = deepinfra_emb.embed_query(
    "What is the second letter of Greek alphabet"
)
Parameters
model_id (str) –
normalize (bool) –
embed_instruction (str) –
query_instruction (str) –
model_kwargs (Optional[dict]) –
deepinfra_api_token (Optional[str]) –
Return type
None
attribute embed_instruction: str = 'passage: '
Instruction used to embed documents.
attribute model_id: str = 'sentence-transformers/clip-ViT-B-32'
Embeddings model to use.
attribute model_kwargs: Optional[dict] = None
Other model keyword arguments.
attribute normalize: bool = False
Whether to normalize the computed embeddings.
attribute query_instruction: str = 'query: '
Instruction used to embed the query.
embed_documents(texts)[source]
Embed documents using a Deep Infra deployed embedding model.
Parameters
texts (List[str]) – The list of texts to embed.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]
Embed a query using a Deep Infra deployed embedding model.
Parameters
text (str) – The text to embed.
Returns
Embeddings for the text.
Return type
List[float]
class langchain.embeddings.DashScopeEmbeddings(*, client=None, model='text-embedding-v1', dashscope_api_key=None, max_retries=5)[source]
Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings
Wrapper around DashScope embedding models.
To use, you should have the dashscope python package installed, and the environment variable DASHSCOPE_API_KEY set with your API key or pass it as a named parameter to the constructor.
Example
from langchain.embeddings import DashScopeEmbeddings
embeddings = DashScopeEmbeddings(dashscope_api_key="my-api-key")
Example
import os
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY"
os.environ["DASHSCOPE_API_KEY"] = "your DashScope API KEY" from langchain.embeddings.dashscope import DashScopeEmbeddings embeddings = DashScopeEmbeddings( model="text-embedding-v1", ) text = "This is a test query." query_result = embeddings.embed_query(text) Parameters client (Any) – model (str) – dashscope_api_key (Optional[str]) – max_retries (int) – Return type None attribute dashscope_api_key: Optional[str] = None Maximum number of retries to make when generating. embed_documents(texts)[source] Call out to DashScope’s embedding endpoint for embedding search docs. Parameters texts (List[str]) – The list of texts to embed. chunk_size – The chunk size of embeddings. If None, will use the chunk size specified by the class. Returns List of embeddings, one for each text. Return type List[List[float]] embed_query(text)[source] Call out to DashScope’s embedding endpoint for embedding query text. Parameters text (str) – The text to embed. Returns Embedding for the text. Return type List[float] class langchain.embeddings.EmbaasEmbeddings(*, model='e5-large-v2', instruction=None, api_url='https://api.embaas.io/v1/embeddings/', embaas_api_key=None)[source] Bases: pydantic.main.BaseModel, langchain.embeddings.base.Embeddings Wrapper around embaas’s embedding service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Initialise with default model and instruction
it as a named parameter to the constructor.
Example
# Initialise with default model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb = EmbaasEmbeddings()

# Initialise with custom model and instruction
from langchain.embeddings import EmbaasEmbeddings
emb_model = "instructor-large"
emb_inst = "Represent the Wikipedia document for retrieval"
emb = EmbaasEmbeddings(
    model=emb_model,
    instruction=emb_inst
)
Parameters
model (str) –
instruction (Optional[str]) –
api_url (str) –
embaas_api_key (Optional[str]) –
Return type
None
attribute api_url: str = 'https://api.embaas.io/v1/embeddings/'
The URL for the embaas embeddings API.
attribute instruction: Optional[str] = None
Instruction used for domain-specific embeddings.
attribute model: str = 'e5-large-v2'
The model used for embeddings.
embed_documents(texts)[source]
Get embeddings for a list of texts.
Parameters
texts (List[str]) – The list of texts to get embeddings for.
Returns
List of embeddings, one for each text.
Return type
List[List[float]]
embed_query(text)[source]
Get embeddings for a single text.
Parameters
text (str) – The text to get embeddings for.
Returns
List of embeddings.
Return type
List[float]
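All of the embedding wrappers above implement the same two-method Embeddings interface: embed_documents for a list of texts and embed_query for a single query. This makes them interchangeable in downstream vectorstores. The following is a minimal sketch using EmbaasEmbeddings, assuming the EMBAAS_API_KEY environment variable is set; any of the classes above could be substituted.
Example
from langchain.embeddings import EmbaasEmbeddings

# Assumes EMBAAS_API_KEY is set in the environment.
emb = EmbaasEmbeddings()  # defaults to model="e5-large-v2"

# embed_documents: List[str] -> List[List[float]]
doc_vectors = emb.embed_documents(["First document.", "Second document."])

# embed_query: str -> List[float]
query_vector = emb.embed_query("Which document mentions 'first'?")

print(len(doc_vectors))   # 2: one embedding per document
print(len(query_vector))  # dimensionality of a single embedding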
Utilities
General utilities.
class langchain.utilities.ApifyWrapper(*, apify_client=None, apify_client_async=None)[source]
Bases: pydantic.main.BaseModel
Wrapper around Apify.
To use, you should have the apify-client python package installed, and the environment variable APIFY_API_TOKEN set with your API key, or pass apify_api_token as a named parameter to the constructor.
Parameters
apify_client (Any) –
apify_client_async (Any) –
Return type
None
attribute apify_client: Any = None
attribute apify_client_async: Any = None
async acall_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) – The ID or name of the Actor on the Apify platform.
run_input (Dict) – The input object of the Actor that you’re trying to run.
dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run’s default dataset.
Return type
ApifyDatasetLoader
async acall_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) – The ID or name of the task on the Apify platform.
task_input (Dict) – The input object of the task that you’re trying to run. Overrides the task’s saved input.
dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run’s default dataset.
Return type
ApifyDatasetLoader
call_actor(actor_id, run_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
Run an Actor on the Apify platform and wait for results to be ready.
Parameters
actor_id (str) – The ID or name of the Actor on the Apify platform.
run_input (Dict) – The input object of the Actor that you’re trying to run.
dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the Actor run’s default dataset.
Return type
ApifyDatasetLoader
call_actor_task(task_id, task_input, dataset_mapping_function, *, build=None, memory_mbytes=None, timeout_secs=None)[source]
Run a saved Actor task on Apify and wait for results to be ready.
Parameters
task_id (str) – The ID or name of the task on the Apify platform.
task_input (Dict) – The input object of the task that you’re trying to run. Overrides the task’s saved input.
dataset_mapping_function (Callable) – A function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class.
build (str, optional) – Optionally specifies the actor build to run. It can be either a build tag or build number.
memory_mbytes (int, optional) – Optional memory limit for the run, in megabytes.
timeout_secs (int, optional) – Optional timeout for the run, in seconds.
Returns
A loader that will fetch the records from the task run’s default dataset.
Return type
ApifyDatasetLoader
class langchain.utilities.ArxivAPIWrapper(*, arxiv_search=None, arxiv_exceptions=None, top_k_results=3, load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000, ARXIV_MAX_QUERY_LENGTH=300)[source]
Bases: pydantic.main.BaseModel
Wrapper around ArxivAPI.
To use, you should have the arxiv python package installed. https://lukasschwab.me/arxiv.py/index.html
This wrapper will use the Arxiv API to conduct searches and fetch document summaries. By default, it will return the document summaries of the top-k results. It limits the Document content by doc_content_chars_max. Set doc_content_chars_max=None if you don’t want to limit the content size.
Parameters
top_k_results (int) – number of top-scored documents used for the arxiv tool
ARXIV_MAX_QUERY_LENGTH (int) – the cut limit on the query used for the arxiv tool.
load_max_docs (int) – a limit to the number of loaded documents
load_all_available_meta (bool) – if True: the metadata of the loaded Documents gets all available meta info (see https://lukasschwab.me/arxiv.py/index.html#Result), if False: the metadata gets only the most informative fields.
arxiv_search (Any) –
arxiv_exceptions (Any) –
doc_content_chars_max (Optional[int]) –
Return type
None
attribute arxiv_exceptions: Any = None
attribute doc_content_chars_max: Optional[int] = 4000
attribute load_all_available_meta: bool = False
attribute load_max_docs: int = 100
attribute top_k_results: int = 3
load(query)[source]
Run Arxiv search and get the article texts plus the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search
Returns: a list of documents with the document.page_content in text format
Parameters
query (str) –
Return type
List[langchain.schema.Document]
run(query)[source]
Run Arxiv search and get the article meta information. See https://lukasschwab.me/arxiv.py/index.html#Search
See https://lukasschwab.me/arxiv.py/index.html#Result
It uses only the most informative fields of article meta information.
Parameters
query (str) –
Return type
str
class langchain.utilities.BashProcess(strip_newlines=False, return_err_output=False, persistent=False)[source]
Bases: object
Executes bash commands and returns the output.
Parameters
strip_newlines (bool) –
return_err_output (bool) –
persistent (bool) –
run(commands)[source]
Run commands and return final output.
Parameters
commands (Union[str, List[str]]) –
Return type
str
process_output(output, command)[source]
Parameters
output (str) –
command (str) –
Return type
str
class langchain.utilities.BibtexparserWrapper[source]
Bases: pydantic.main.BaseModel
Wrapper around bibtexparser.
To use, you should have the bibtexparser python package installed. https://bibtexparser.readthedocs.io/en/master/
This wrapper will use bibtexparser to load a collection of references from a bibtex file and fetch document summaries.
Return type
None
get_metadata(entry, load_extra=False)[source]
Get metadata for the given entry.
Parameters
entry (Mapping[str, Any]) –
load_extra (bool) –
Return type
Dict[str, Any]
load_bibtex_entries(path)[source]
Load bibtex entries from the bibtex file at the given path.
Parameters
path (str) –
Return type
List[Dict[str, Any]]
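To make the run/load split of ArxivAPIWrapper above concrete, here is a minimal sketch. It assumes the arxiv python package is installed; the query string is illustrative.
Example
from langchain.utilities import ArxivAPIWrapper

arxiv = ArxivAPIWrapper(top_k_results=2, doc_content_chars_max=1000)

# run() returns the meta information of the top-k results as one string
summary = arxiv.run("attention is all you need")
print(summary)

# load() returns the full article texts as Document objects
docs = arxiv.load("attention is all you need")
print(len(docs), docs[0].metadata)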
class langchain.utilities.BingSearchAPIWrapper(*, bing_subscription_key, bing_search_url, k=10)[source]
Bases: pydantic.main.BaseModel
Wrapper for Bing Search API.
In order to set this up, follow instructions at: https://levelup.gitconnected.com/api-tutorial-how-to-use-bing-web-search-api-in-python-4165d5592a7e
Parameters
bing_subscription_key (str) –
bing_search_url (str) –
k (int) –
Return type
None
attribute bing_search_url: str [Required]
attribute bing_subscription_key: str [Required]
attribute k: int = 10
results(query, num_results)[source]
Run query through BingSearch and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]
Run query through BingSearch and parse result.
Parameters
query (str) –
Return type
str
class langchain.utilities.BraveSearchWrapper(*, api_key, search_kwargs=None)[source]
Bases: pydantic.main.BaseModel
Parameters
api_key (str) –
search_kwargs (dict) –
Return type
None
attribute api_key: str [Required]
attribute search_kwargs: dict [Optional]
run(query)[source]
Parameters
query (str) –
Return type
str
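A short sketch of BingSearchAPIWrapper above, showing the difference between run (one parsed string) and results (structured metadata). The subscription key is a placeholder, and the endpoint shown is the usual Bing Web Search v7 URL; verify both against your Azure resource.
Example
from langchain.utilities import BingSearchAPIWrapper

search = BingSearchAPIWrapper(
    bing_subscription_key="<YOUR_SUBSCRIPTION_KEY>",  # placeholder
    bing_search_url="https://api.bing.microsoft.com/v7.0/search",
)

# run() concatenates the result snippets into one string
print(search.run("langchain"))

# results() returns dictionaries with snippet, title, and link keys
for item in search.results("langchain", num_results=3):
    print(item["title"], "->", item["link"])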
class langchain.utilities.DuckDuckGoSearchAPIWrapper(*, k=10, region='wt-wt', safesearch='moderate', time='y', max_results=5)[source]
Bases: pydantic.main.BaseModel
Wrapper for DuckDuckGo Search API.
Free and does not require any setup.
Parameters
k (int) –
region (Optional[str]) –
safesearch (str) –
time (Optional[str]) –
max_results (int) –
Return type
None
attribute k: int = 10
attribute max_results: int = 5
attribute region: Optional[str] = 'wt-wt'
attribute safesearch: str = 'moderate'
attribute time: Optional[str] = 'y'
get_snippets(query)[source]
Run query through DuckDuckGo and return concatenated results.
Parameters
query (str) –
Return type
List[str]
results(query, num_results)[source]
Run query through DuckDuckGo and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]
Parameters
query (str) –
Return type
str
class langchain.utilities.GooglePlacesAPIWrapper(*, gplaces_api_key=None, google_map_client=None, top_k_results=None)[source]
Bases: pydantic.main.BaseModel
Wrapper around Google Places API.
To use, you should have the googlemaps python package installed, an API key for the Google Maps platform, and the environment variable GPLACES_API_KEY set with your API key, or pass gplaces_api_key as a named parameter to the constructor.
By default, this will return all the results for the input query. You can use the top_k_results argument to limit the number of results.
Example
from langchain import GooglePlacesAPIWrapper
gplaceapi = GooglePlacesAPIWrapper()
Parameters
gplaces_api_key (Optional[str]) –
google_map_client (Any) –
top_k_results (Optional[int]) –
Return type
None
attribute gplaces_api_key: Optional[str] = None
attribute top_k_results: Optional[int] = None
fetch_place_details(place_id)[source]
Parameters
place_id (str) –
Return type
Optional[str]
format_place_details(place_details)[source]
Parameters
place_details (Dict[str, Any]) –
Return type
Optional[str]
run(query)[source]
Run Places search and get the k places that exist and match the query.
Parameters
query (str) –
Return type
str
class langchain.utilities.GoogleSearchAPIWrapper(*, search_engine=None, google_api_key=None, google_cse_id=None, k=10, siterestrict=False)[source]
Bases: pydantic.main.BaseModel
Wrapper for Google Search API.
Adapted from: https://stackoverflow.com/questions/37083058/programmatically-searching-google-in-python-using-custom-search
Setup instructions:
1. Install google-api-python-client
- If you don’t already have a Google account, sign up.
- If you have never created a Google APIs Console project, read the Managing Projects page and create a project in the Google API Console.
- Install the library using pip install google-api-python-client. The current version of the library is 2.70.0 at this time.
2. Create an API key:
- Navigate to the APIs & Services→Credentials panel in Cloud Console.
- Select Create credentials, then select API key from the drop-down menu.
- The API key created dialog box displays your newly created key.
- You now have an API_KEY.
3. Set up a Custom Search Engine so you can search the entire web:
- Create a custom search engine in this link.
- In Sites to search, add any valid URL (i.e. www.stackoverflow.com).
- That’s all you have to fill up; the rest doesn’t matter.
- In the left-side menu, click Edit search engine → {your search engine name} → Setup. Set Search the entire web to ON. Remove the URL you added from the list of Sites to search.
- Under Search engine ID you’ll find the search-engine-ID.
4. Enable the Custom Search API:
- Navigate to the APIs & Services→Dashboard panel in Cloud Console.
- Click Enable APIs and Services.
- Search for Custom Search API and click on it.
- Click Enable.
- URL for it: https://console.cloud.google.com/apis/library/customsearch.googleapis.com
A usage sketch putting these steps together follows the parameter list below.
Parameters
search_engine (Any) –
google_api_key (Optional[str]) –
google_cse_id (Optional[str]) –
k (int) –
siterestrict (bool) –
Return type
None
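The sketch below wires the API key from step 2 and the engine ID from step 3 into the wrapper. The environment variable names GOOGLE_API_KEY and GOOGLE_CSE_ID are the ones the wrapper is expected to read when the arguments are omitted; the query is illustrative.
Example
import os
from langchain.utilities import GoogleSearchAPIWrapper

# Values come from the environment here; the wrapper can also
# pick them up itself from GOOGLE_API_KEY / GOOGLE_CSE_ID.
search = GoogleSearchAPIWrapper(
    google_api_key=os.environ["GOOGLE_API_KEY"],
    google_cse_id=os.environ["GOOGLE_CSE_ID"],
    k=5,
)

# run() parses the top results into a single string
print(search.run("langchain custom search"))

# results() returns dictionaries with snippet, title, and link keys
for item in search.results("langchain custom search", num_results=3):
    print(item["title"], "->", item["link"])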
attribute google_api_key: Optional[str] = None
attribute google_cse_id: Optional[str] = None
attribute k: int = 10
attribute siterestrict: bool = False
results(query, num_results)[source]
Run query through GoogleSearch and return metadata.
Parameters
query (str) – The query to search for.
num_results (int) – The number of results to return.
Returns
snippet - The description of the result.
title - The title of the result.
link - The link to the result.
Return type
A list of dictionaries with the following keys
run(query)[source]
Run query through GoogleSearch and parse result.
Parameters
query (str) –
Return type
str
class langchain.utilities.GoogleSerperAPIWrapper(*, k=10, gl='us', hl='en', type='search', tbs=None, serper_api_key=None, aiosession=None, result_key_for_type={'images': 'images', 'news': 'news', 'places': 'places', 'search': 'organic'})[source]
Bases: pydantic.main.BaseModel
Wrapper around the Serper.dev Google Search API.
You can create a free API key at https://serper.dev.
To use, you should have the environment variable SERPER_API_KEY set with your API key, or pass serper_api_key as a named parameter to the constructor.
Example
from langchain import GoogleSerperAPIWrapper
google_serper = GoogleSerperAPIWrapper()
Parameters
k (int) –
gl (str) –
hl (str) –
type (Literal['news', 'search', 'places', 'images']) –
tbs (Optional[str]) –
serper_api_key (Optional[str]) –
aiosession (Optional[aiohttp.client.ClientSession]) –
result_key_for_type (dict) –
Return type
None
attribute aiosession: Optional[aiohttp.client.ClientSession] = None
attribute gl: str = 'us'
attribute hl: str = 'en'
attribute k: int = 10
attribute serper_api_key: Optional[str] = None
attribute tbs: Optional[str] = None
attribute type: Literal['news', 'search', 'places', 'images'] = 'search'
async aresults(query, **kwargs)[source]
Run query through GoogleSearch.
Parameters
query (str) –
kwargs (Any) –
Return type
Dict
async arun(query, **kwargs)[source]
Run query through GoogleSearch and parse result async.
Parameters
query (str) –
kwargs (Any) –
Return type
str
results(query, **kwargs)[source]
Run query through GoogleSearch.
Parameters
query (str) –
kwargs (Any) –
Return type
Dict
run(query, **kwargs)[source]
Run query through GoogleSearch and parse result.
Parameters
query (str) –
kwargs (Any) –
Return type
str
class langchain.utilities.GraphQLAPIWrapper(*, custom_headers=None, graphql_endpoint, gql_client=None, gql_function)[source]
Bases: pydantic.main.BaseModel
Wrapper around GraphQL API.
To use, you should have the gql python package installed.
This wrapper will use the GraphQL API to conduct queries.
Parameters
custom_headers (Optional[Dict[str, str]]) –
graphql_endpoint (str) –
gql_client (Any) –
gql_function (Callable[[str], Any]) –
Return type
None
attribute custom_headers: Optional[Dict[str, str]] = None
attribute graphql_endpoint: str [Required]
run(query)[source]
Run a GraphQL query and get the results.
Parameters
query (str) –
Return type
str
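A minimal sketch of GraphQLAPIWrapper above. It assumes the gql python package is installed and that the client and query function are built internally from the endpoint; the endpoint (a public demo GraphQL API) and the query fields are illustrative.
Example
from langchain.utilities import GraphQLAPIWrapper

# Endpoint and query are illustrative; substitute your own GraphQL API.
wrapper = GraphQLAPIWrapper(
    graphql_endpoint="https://countries.trevorblades.com/",
)

result = wrapper.run("""
    query {
      countries {
        code
        name
      }
    }
""")
print(result)  # run() returns the query result as a string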