Returns: List of IDs of the added texts.
Return type: List[str]

similarity_search_with_score_id_by_vector(embedding, k=4)
Return docs most similar to an embedding vector. Filter queries (on metadata) are not supported in combination with vector search.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
Returns: List of (Document, score, id) tuples most similar to the query vector.
Return type: List[Tuple[langchain.schema.Document, float, str]]

similarity_search_with_score_id(query, k=4, **kwargs)
Parameters:
  query (str) –
  k (int) –
  kwargs (Any) –
Return type: List[Tuple[langchain.schema.Document, float, str]]

similarity_search_with_score_by_vector(embedding, k=4)
Return docs most similar to an embedding vector. Filter queries (on metadata) are not supported in combination with vector search.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
Returns: List of (Document, score) tuples most similar to the query vector.
Return type: List[Tuple[langchain.schema.Document, float]]

similarity_search(query, k=4, **kwargs)
Return docs most similar to the query.
Parameters:
  query (str) –
  k (int) –
  kwargs (Any) –
Return type: List[langchain.schema.Document]
similarity_search_by_vector(embedding, k=4, **kwargs)
Return docs most similar to an embedding vector.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  kwargs (Any) –
Returns: List of Documents most similar to the query vector.
Return type: List[langchain.schema.Document]

similarity_search_with_score(query, k=4, **kwargs)
Parameters:
  query (str) –
  k (int) –
  kwargs (Any) –
Return type: List[Tuple[langchain.schema.Document, float]]

max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return.
  fetch_k (int) – Number of Documents to fetch and pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity.
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]

max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return.
  fetch_k (int) – Number of Documents to fetch and pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Optional.
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]

classmethod from_texts(texts, embedding, metadatas=None, **kwargs)
Create a Cassandra vectorstore from raw texts. Specifying text IDs is not supported. Returns a Cassandra vectorstore.
Parameters:
  texts (List[str]) –
  embedding (langchain.embeddings.base.Embeddings) –
  metadatas (Optional[List[dict]]) –
  kwargs (Any) –
Return type: langchain.vectorstores.cassandra.CVST

classmethod from_documents(documents, embedding, **kwargs)
Create a Cassandra vectorstore from a list of documents. Specifying text IDs is not supported. Returns a Cassandra vectorstore.
Parameters:
  documents (List[langchain.schema.Document]) –
  embedding (langchain.embeddings.base.Embeddings) –
  kwargs (Any) –
Return type: langchain.vectorstores.cassandra.CVST
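Example (a hedged, minimal sketch that is not from the original reference: it assumes a reachable Cassandra cluster via the cassandra-driver package, an existing keyspace named "demo_keyspace", and that session, keyspace, and table_name are forwarded through **kwargs to the constructor):

from cassandra.cluster import Cluster
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Cassandra

# Open a plain cassandra-driver session; host and keyspace are placeholders.
session = Cluster(["127.0.0.1"]).connect()
vectorstore = Cassandra.from_texts(
    texts=["LangChain supports many vector stores."],
    embedding=OpenAIEmbeddings(),
    session=session,
    keyspace="demo_keyspace",
    table_name="demo_table",
)

# Plain similarity search, then the (Document, score, id) variant from above.
docs = vectorstore.similarity_search("Which vector stores are supported?", k=2)
scored = vectorstore.similarity_search_with_score_id(
    "Which vector stores are supported?", k=2
)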
class langchain.vectorstores.Chroma(collection_name='langchain', embedding_function=None, persist_directory=None, client_settings=None, collection_metadata=None, client=None)
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the ChromaDB embeddings platform. To use, you should have the chromadb python package installed.
Example:

from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = Chroma("langchain_store", embeddings)

Parameters:
  collection_name (str) –
  embedding_function (Optional[Embeddings]) –
  persist_directory (Optional[str]) –
  client_settings (Optional[chromadb.config.Settings]) –
  collection_metadata (Optional[Dict]) –
  client (Optional[chromadb.Client]) –
Return type: None

add_texts(texts, metadatas=None, ids=None, **kwargs)
Run more texts through the embeddings and add them to the vectorstore.
Parameters:
  texts (Iterable[str]) – Texts to add to the vectorstore.
  metadatas (Optional[List[dict]]) – Optional list of metadatas.
  ids (Optional[List[str]]) – Optional list of IDs.
  kwargs (Any) –
Returns: List of IDs of the added texts.
Return type: List[str]

similarity_search(query, k=4, filter=None, **kwargs)
Run a similarity search with Chroma.
Parameters:
  query (str) – Query text to search for.
  k (int) – Number of results to return. Defaults to 4.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  kwargs (Any) –
Returns: List of documents most similar to the query text.
Return type: List[Document]
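Example (a hedged sketch of the metadata filter, not from the original reference: it assumes documents were added with a "source" key in their metadata, matched by simple equality):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

vectorstore = Chroma("langchain_store", OpenAIEmbeddings())
vectorstore.add_texts(
    texts=["alpha notes", "beta notes"],
    metadatas=[{"source": "a.txt"}, {"source": "b.txt"}],
)
# Only documents whose metadata matches the filter are considered.
docs = vectorstore.similarity_search("notes", k=1, filter={"source": "a.txt"})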
similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)
Return docs most similar to an embedding vector.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  kwargs (Any) –
Returns: List of Documents most similar to the query vector.
Return type: List[langchain.schema.Document]

similarity_search_with_score(query, k=4, filter=None, **kwargs)
Run a similarity search with Chroma and return distances.
Parameters:
  query (str) – Query text to search for.
  k (int) – Number of results to return. Defaults to 4.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  kwargs (Any) –
Returns: List of documents most similar to the query text, with a cosine distance (float) for each. A lower score represents more similarity.
Return type: List[Tuple[Document, float]]

max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch and pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]

max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch and pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]

delete_collection()
Delete the collection.
Return type: None
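Example (a short sketch, not from the original reference: fetch_k candidates are retrieved first, then k of them are chosen to balance relevance against diversity via lambda_mult):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

vectorstore = Chroma("langchain_store", OpenAIEmbeddings())
# lambda_mult near 0 favors diversity; near 1 favors pure relevance.
docs = vectorstore.max_marginal_relevance_search(
    "notes", k=4, fetch_k=20, lambda_mult=0.25
)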
get(ids=None, where=None, limit=None, offset=None, where_document=None, include=None)
Gets the collection.
Parameters:
  ids (Optional[OneOrMany[ID]]) – The ids of the embeddings to get. Optional.
  where (Optional[Where]) – A Where-type dict used to filter results, e.g. {"color": "red", "price": 4.20}. Optional.
  limit (Optional[int]) – The number of documents to return. Optional.
  offset (Optional[int]) – The offset to start returning results from. Useful for paging results with limit. Optional.
  where_document (Optional[WhereDocument]) – A WhereDocument-type dict used to filter by document contents, e.g. {"$contains": {"text": "hello"}}. Optional.
  include (Optional[List[str]]) – A list of what to include in the results. Can contain "embeddings", "metadatas", "documents". Ids are always included. Defaults to ["metadatas", "documents"]. Optional.
Return type: Dict[str, Any]

persist()
Persist the collection. This can be used to explicitly persist the data to disk. It will also be called automatically when the object is destroyed.
Return type: None

update_document(document_id, document)
Update a document in the collection.
Parameters:
  document_id (str) – ID of the document to update.
  document (Document) – Document to update.
Return type: None

classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)
Create a Chroma vectorstore from raw documents. If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral and in-memory.
Parameters:
  texts (List[str]) – List of texts to add to the collection.
  collection_name (str) – Name of the collection to create.
  persist_directory (Optional[str]) – Directory in which to persist the collection.
  embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
  metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
  ids (Optional[List[str]]) – List of document IDs. Defaults to None.
  client_settings (Optional[chromadb.config.Settings]) – Chroma client settings.
  client (Optional[chromadb.Client]) –
  kwargs (Any) –
Returns: Chroma vectorstore.
Return type: Chroma

classmethod from_documents(documents, embedding=None, ids=None, collection_name='langchain', persist_directory=None, client_settings=None, client=None, **kwargs)
Create a Chroma vectorstore from a list of documents. If a persist_directory is specified, the collection will be persisted there. Otherwise, the data will be ephemeral and in-memory.
Parameters:
  documents (List[Document]) – List of documents to add to the vectorstore.
  embedding (Optional[Embeddings]) – Embedding function. Defaults to None.
  ids (Optional[List[str]]) – List of document IDs. Defaults to None.
  collection_name (str) – Name of the collection to create.
  persist_directory (Optional[str]) – Directory in which to persist the collection.
  client_settings (Optional[chromadb.config.Settings]) – Chroma client settings.
  client (Optional[chromadb.Client]) –
  kwargs (Any) –
Returns: Chroma vectorstore.
Return type: Chroma

delete(ids)
Delete by vector IDs.
Parameters:
  ids (List[str]) – List of ids to delete.
Return type: None
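Example (a hedged persistence round-trip, not from the original reference: it assumes write access to ./chroma_db; with persist_directory set, the collection survives process restarts and can be reloaded by constructing Chroma with the same directory and collection name):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma

db = Chroma.from_texts(
    texts=["first document", "second document"],
    embedding=OpenAIEmbeddings(),
    collection_name="langchain",
    persist_directory="./chroma_db",
)
db.persist()  # explicitly flush to disk (also happens when the object is destroyed)

# Reload later from the same directory.
reloaded = Chroma(
    collection_name="langchain",
    embedding_function=OpenAIEmbeddings(),
    persist_directory="./chroma_db",
)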
class langchain.vectorstores.Clickhouse(embedding, config=None, **kwargs)
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the ClickHouse vector database. You need the clickhouse-connect python package and a valid account to connect to ClickHouse. ClickHouse can not only search with simple vector indexes; it also supports complex queries with multiple conditions, constraints, and even sub-queries. For more information, visit the ClickHouse official site (https://clickhouse.com/clickhouse).
Parameters:
  embedding (Embeddings) –
  config (Optional[ClickhouseSettings]) –
  kwargs (Any) –
Return type: None

escape_str(value)
Parameters:
  value (str) –
Return type: str

add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)
Insert more texts through the embeddings and add them to the VectorStore.
Parameters:
  texts (Iterable[str]) – Iterable of strings to add to the VectorStore.
  ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts.
  batch_size (int) – Batch size of insertion.
  metadatas (Optional[List[dict]]) – Optional column data to be inserted.
  kwargs (Any) –
Returns: List of ids from adding the texts into the VectorStore.
Return type: List[str]

classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)
Create a ClickHouse wrapper from existing texts.
Parameters:
  texts (Iterable[str]) – List or tuple of strings to be added.
  embedding (Embeddings) – Function used to extract text embeddings.
  config (Optional[ClickhouseSettings]) – ClickHouse configuration.
  text_ids (Optional[Iterable]) – IDs for the texts. Defaults to None.
  batch_size (int) – Batch size when transmitting data to ClickHouse. Defaults to 32.
  metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for the texts. Defaults to None.
  kwargs (Any) – Other keyword arguments are passed on to clickhouse-connect (https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api).
Returns: ClickHouse index.
Return type: langchain.vectorstores.clickhouse.Clickhouse

similarity_search(query, k=4, where_str=None, **kwargs)
Perform a similarity search with ClickHouse.
Parameters:
  query (str) – Query string.
  k (int) – Top-K neighbors to retrieve. Defaults to 4.
  where_str (Optional[str]) – WHERE condition string. Defaults to None. NOTE: never let end users fill this in, and always be aware of SQL injection. When dealing with metadata, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for the metadata column is metadata.
  kwargs (Any) –
Returns: List of Documents.
Return type: List[Document]

similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)
Perform a similarity search with ClickHouse by vector.
Parameters:
  embedding (List[float]) – Embedding vector to look up documents similar to.
  k (int) – Top-K neighbors to retrieve. Defaults to 4.
  where_str (Optional[str]) – WHERE condition string. Defaults to None. NOTE: never let end users fill this in, and always be aware of SQL injection. When dealing with metadata, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for the metadata column is metadata.
  kwargs (Any) –
Returns: List of (Document, similarity) pairs.
Return type: List[Document]

similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)
Perform a similarity search with ClickHouse and return relevance scores.
Parameters:
  query (str) – Query string.
  k (int) – Top-K neighbors to retrieve. Defaults to 4.
  where_str (Optional[str]) – WHERE condition string. Defaults to None. The same SQL-injection caveat applies: never let end users fill this in, and address metadata as {self.metadata_column}.attribute.
  kwargs (Any) –
Returns: List of documents.
Return type: List[Document]

drop()
Helper function: drop the data.
Return type: None

property metadata_column: str
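Example (a hedged sketch, not from the original reference: it assumes a clickhouse-connect server reachable with the settings below, and that a metadata attribute on the JSON metadata column can be addressed as metadata.attribute in where_str, per the note above; where_str is raw SQL, so it must never be built from end-user input):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Clickhouse, ClickhouseSettings

settings = ClickhouseSettings(table="langchain_demo")
docsearch = Clickhouse.from_texts(
    texts=["red apples", "green pears"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"color": "red"}, {"color": "green"}],
    config=settings,
)
# Constrain the vector search with a metadata condition.
docs = docsearch.similarity_search(
    "fruit", k=4, where_str=f"{docsearch.metadata_column}.color = 'red'"
)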
pydantic settings langchain.vectorstores.ClickhouseSettings
Bases: pydantic.env_settings.BaseSettings
ClickHouse client configuration.
Attributes:
  clickhouse_host (str) – URL to connect to the ClickHouse backend. Defaults to 'localhost'.
  clickhouse_port (int) – URL port to connect with HTTP. Defaults to 8123.
  username (str) – Username to log in. Defaults to None.
  password (str) – Password to log in. Defaults to None.
  index_type (str) – Index type string.
  index_param (list) – Index build parameters.
  index_query_params (dict) – Index query parameters.
  database (str) – Database name in which to find the table. Defaults to 'default'.
  table (str) – Table name to operate on. Defaults to 'langchain'.
  metric (str) – Metric used to compute distance; supported values are 'angular', 'euclidean', 'manhattan', 'hamming', and 'dot'. Defaults to 'angular'. See https://github.com/spotify/annoy/blob/main/src/annoymodule.cc#L149-L169.
  column_map (Dict) – Column-type map projecting column names onto LangChain semantics. Must have the keys text, id, and vector, and must be the same size as the number of columns. For example: {'id': 'text_id', 'uuid': 'global_unique_id', 'embedding': 'text_embedding', 'document': 'text_plain', 'metadata': 'metadata_dictionary_in_json'}. Defaults to the identity map.
Config:
  env_file: str = .env
  env_file_encoding: str = utf-8
  env_prefix: str = clickhouse_
Fields:
  column_map (Dict[str, str]), database (str), host (str), index_param (Optional[Union[List, Dict]]), index_query_params (Dict[str, str]), index_type (str), metric (str), password (Optional[str]), port (int), table (str), username (Optional[str])
attribute column_map: Dict[str, str] = {'document': 'document', 'embedding': 'embedding', 'id': 'id', 'metadata': 'metadata', 'uuid': 'uuid'}
attribute database: str = 'default'
attribute host: str = 'localhost'
attribute index_param: Optional[Union[List, Dict]] = ["'L2Distance'", 100]
attribute index_query_params: Dict[str, str] = {}
attribute index_type: str = 'annoy'
attribute metric: str = 'angular'
attribute password: Optional[str] = None
attribute port: int = 8123
attribute table: str = 'langchain'
attribute username: Optional[str] = None
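Example (a hedged configuration sketch, not from the original reference: every field can be set in the constructor or via an environment variable carrying the clickhouse_ prefix, assuming pydantic's default case-insensitive environment lookup):

import os
from langchain.vectorstores import ClickhouseSettings

os.environ["CLICKHOUSE_HOST"] = "clickhouse.internal"  # placeholder hostname
settings = ClickhouseSettings(port=8123, table="my_vectors", metric="angular")
print(settings.host)  # "clickhouse.internal", picked up from the environment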
class langchain.vectorstores.DeepLake(dataset_path='./deeplake/', token=None, embedding_function=None, read_only=False, ingestion_batch_size=1000, num_workers=0, verbose=True, exec_option='python', **kwargs)
Bases: langchain.vectorstores.base.VectorStore
Wrapper around Deep Lake, a data lake for deep learning applications. This integration uses Deep Lake's similarity search and filtering for fast prototyping, and now supports the Tensor Query Language (TQL) for production use cases over billions of rows.
Why Deep Lake?
- It stores not only the embeddings but also the original data, with version control.
- It is serverless: it doesn't require another service and can be used with major cloud providers (S3, GCS, etc.).
- It is more than a multi-modal vector store: you can use the dataset to fine-tune your own LLM models.
To use, you should have the deeplake python package installed.
Example:

from langchain.vectorstores import DeepLake
from langchain.embeddings.openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
vectorstore = DeepLake("langchain_store", embeddings.embed_query)

Parameters:
  dataset_path (str) –
  token (Optional[str]) –
  embedding_function (Optional[Embeddings]) –
  read_only (bool) –
  ingestion_batch_size (int) –
  num_workers (int) –
  verbose (bool) –
  exec_option (str) –
  kwargs (Any) –
Return type: None

add_texts(texts, metadatas=None, ids=None, **kwargs)
Run more texts through the embeddings and add them to the vectorstore.
Examples:

>>> ids = deeplake_vectorstore.add_texts(
...     texts=<list_of_texts>,
...     metadatas=<list_of_metadata_jsons>,
...     ids=<list_of_ids>,
... )

Parameters:
  texts (Iterable[str]) – Texts to add to the vectorstore.
  metadatas (Optional[List[dict]]) – Optional list of metadatas.
  ids (Optional[List[str]]) – Optional list of IDs.
  **kwargs – Other optional keyword arguments.
Returns: List of IDs of the added texts.
Return type: List[str]

similarity_search(query, k=4, **kwargs)
Return docs most similar to the query.
Examples:

>>> # Search using an embedding
>>> data = vector_store.similarity_search(
...     query=<your_query>,
...     k=<num_items>,
...     exec_option=<preferred_exec_option>,
... )
>>> # Run a TQL search:
>>> data = vector_store.tql_search(
...     tql_query="SELECT * WHERE id == <id>",
...     exec_option="compute_engine",
... )
Parameters:
  query (str) – Text to look up similar documents for.
  k (int) – Number of Documents to return. Defaults to 4.
  **kwargs – Additional keyword arguments include:
    embedding (Callable): Embedding function to use. Defaults to None.
    distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity, 'cos' for cosine, 'dot' for dot product. Defaults to 'L2'.
    filter (Union[Dict, Callable], optional): Additional filter applied before the embedding search. Defaults to None.
      - Dict: key-value search on tensors of htype json (a sample must satisfy all key-value filters): Dict = {"tensor_1": {"key": value}, "tensor_2": {"key": value}}
      - Function: any function compatible with deeplake.filter.
    exec_option (str): Supports three ways of searching: 'python', 'compute_engine', or 'tensor_db'. Defaults to 'python'.
      - 'python': pure-python implementation for the client. WARNING: not recommended for big datasets.
      - 'compute_engine': C++ implementation of the Compute Engine for the client. Not for in-memory or local datasets.
      - 'tensor_db': Managed Tensor Database for storage and query. Only for data in the Deep Lake Managed Database. Use runtime = {"db_engine": True} during dataset creation.
Returns: List of Documents most similar to the query vector.
Return type: List[Document]
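Example (a hedged sketch of these kwargs, not from the original reference: it assumes a local dataset at ./deeplake/ and texts added with a "topic" key on the json metadata tensor):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake

vector_store = DeepLake(dataset_path="./deeplake/", embedding_function=OpenAIEmbeddings())
vector_store.add_texts(
    texts=["deep lake stores embeddings"],
    metadatas=[{"topic": "storage"}],
)
docs = vector_store.similarity_search(
    "what stores embeddings?",
    k=4,
    distance_metric="cos",                      # cosine instead of the default L2
    filter={"metadata": {"topic": "storage"}},  # key-value filter on the json tensor
    exec_option="python",                       # client-side search
)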
similarity_search_by_vector(embedding, k=4, **kwargs)
Return docs most similar to an embedding vector.
Examples:

>>> # Search using an embedding
>>> data = vector_store.similarity_search_by_vector(
...     embedding=<your_embedding>,
...     k=<num_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )

Parameters:
  embedding (Union[List[float], np.ndarray]) – Embedding used to find similar docs.
  k (int) – Number of Documents to return. Defaults to 4.
  **kwargs – Additional keyword arguments, including:
    filter (Union[Dict, Callable], optional): Additional filter applied before the embedding search. Defaults to None.
      - Dict: key-value search on tensors of htype json; True if all key-value filters are satisfied: Dict = {"tensor_name_1": {"key": value}, "tensor_name_2": {"key": value}}
      - Function: any function compatible with deeplake.filter.
    exec_option (str): Options for search execution are "python", "compute_engine", or "tensor_db". Defaults to "python".
      - "python": pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
      - "compute_engine": performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
      - "tensor_db": performant, fully-hosted Managed Tensor Database, responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
    distance_metric (str): 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
Returns: List of Documents most similar to the query vector.
Return type: List[Document]

similarity_search_with_score(query, k=4, **kwargs)
Run a similarity search with Deep Lake and return distances.
Examples:

>>> data = vector_store.similarity_search_with_score(
...     query=<your_query>,
...     embedding=<your_embedding_function>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )

Parameters:
  query (str) – Query text to search for.
  k (int) – Number of results to return. Defaults to 4.
  **kwargs – Additional keyword arguments. Some of them are:
    distance_metric: 'L2' for Euclidean, 'L1' for Nuclear, 'max' for L-infinity distance, 'cos' for cosine similarity, 'dot' for dot product. Defaults to 'L2'.
    filter (Optional[Dict[str, str]]): Filter by metadata. Defaults to None.
    embedding_function (Callable): Embedding function to use. Defaults to None.
    exec_option (str): DeepLakeVectorStore supports three ways of searching: "python", "compute_engine", or "tensor_db". Defaults to "python".
      - "python": pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
      - "compute_engine": performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
      - "tensor_db": performant, fully-hosted Managed Tensor Database, responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
Returns: List of documents most similar to the query text, with a distance (float) for each.
Return type: List[Tuple[Document, float]]

max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected docs.
Examples:

>>> data = vector_store.max_marginal_relevance_search_by_vector(
...     embedding=<your_embedding>,
...     fetch_k=<elements_to_fetch_before_mmr_search>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )

Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch for the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 determining the degree of diversity. 0 corresponds to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  exec_option (str) – DeepLakeVectorStore supports three ways of searching: "python", "compute_engine", or "tensor_db". Defaults to "python".
    - "python": pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
    - "compute_engine": performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
    - "tensor_db": performant, fully-hosted Managed Tensor Database, responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
  **kwargs – Additional keyword arguments.
Returns: List[Document] – a list of documents.
Return type: List[langchain.schema.Document]

max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, exec_option=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Examples:

>>> # Search using an embedding
>>> data = vector_store.max_marginal_relevance_search(
...     query=<query_to_search>,
...     embedding_function=<embedding_function_for_query>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )

Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents for the MMR algorithm.
  lambda_mult (float) – Value between 0 and 1. 0 corresponds to maximum diversity and 1 to minimum. Defaults to 0.5.
  exec_option (str) – Supports three ways of searching:
    - "python": pure-python implementation running on the client. Can be used for data stored anywhere. WARNING: using this option with big datasets is discouraged due to potential memory issues.
    - "compute_engine": performant C++ implementation of the Deep Lake Compute Engine. Runs on the client and can be used for any data stored in or connected to Deep Lake. It cannot be used with in-memory or local datasets.
    - "tensor_db": performant, fully-hosted Managed Tensor Database, responsible for storage and query execution. Only available for data stored in the Deep Lake Managed Database. To store datasets in this database, specify runtime = {"db_engine": True} during dataset creation.
  **kwargs – Additional keyword arguments.
Returns: List of Documents selected by maximal marginal relevance.
Raises: ValueError – when MMR search is on but no embedding function is specified.
Return type: List[langchain.schema.Document]

classmethod from_texts(texts, embedding=None, metadatas=None, ids=None, dataset_path='./deeplake/', **kwargs)
Create a Deep Lake dataset from raw documents. If a dataset_path is specified, the dataset will be persisted in that location; otherwise it defaults to ./deeplake/.
Examples:

>>> # Create a vector store from texts
>>> vector_store = DeepLake.from_texts(
...     texts=<the_texts_that_you_want_to_embed>,
...     embedding_function=<embedding_function_for_query>,
...     k=<number_of_items_to_return>,
...     exec_option=<preferred_exec_option>,
... )
Parameters:
  dataset_path (str) – The full path to the dataset. Can be:
    - A Deep Lake cloud path of the form hub://username/dataset_name. To write to Deep Lake cloud datasets, ensure that you are logged in to Deep Lake (use 'activeloop login' from the command line).
    - An AWS S3 path of the form s3://bucketname/path/to/dataset. Credentials are required in the environment.
    - A Google Cloud Storage path of the form gcs://bucketname/path/to/dataset. Credentials are required in the environment.
    - A local file system path of the form ./path/to/dataset, ~/path/to/dataset, or path/to/dataset.
    - An in-memory path of the form mem://path/to/dataset, which doesn't save the dataset but keeps it in memory instead. Should be used only for testing, as it does not persist.
  texts (List[str]) – List of texts to add.
  embedding (Optional[Embeddings]) – Embedding function. Defaults to None. Note: in other places this is called embedding_function.
  metadatas (Optional[List[dict]]) – List of metadatas. Defaults to None.
  ids (Optional[List[str]]) – List of document IDs. Defaults to None.
  **kwargs – Additional keyword arguments.
Returns: Deep Lake dataset.
Return type: DeepLake
Raises: ValueError – if 'embedding' is provided in kwargs. This is deprecated; please use embedding_function instead.
delete(ids=None, filter=None, delete_all=None)
Delete the entities in the dataset.
Parameters:
  ids (Optional[List[str]]) – The document_ids to delete. Defaults to None.
  filter (Optional[Dict[str, str]]) – The filter to delete by. Defaults to None.
  delete_all (Optional[bool]) – Whether to drop the dataset. Defaults to None.
Returns: Whether the delete operation was successful.
Return type: bool

classmethod force_delete_by_path(path)
Force-delete a dataset by path.
Parameters:
  path (str) – Path of the dataset to delete.
Raises: ValueError – if deeplake is not installed.
Return type: None

delete_dataset()
Delete the collection.
Return type: None

class langchain.vectorstores.DocArrayHnswSearch(doc_index, embedding)
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around HnswLib storage. To use it, you should have the docarray package with version >=0.32.0 installed. You can install it with pip install "langchain[docarray]".
Parameters:
  doc_index (BaseDocIndex) –
  embedding (langchain.embeddings.base.Embeddings) –

classmethod from_params(embedding, work_dir, n_dim, dist_metric='cosine', max_elements=1024, index=True, ef_construction=200, ef=10, M=16, allow_replace_deleted=True, num_threads=1, **kwargs)
Initialize a DocArrayHnswSearch store.
Parameters:
  embedding (Embeddings) – Embedding function.
  work_dir (str) – Path to the location where all the data will be stored.
  n_dim (int) – Dimension of an embedding.
  dist_metric (str) – Distance metric for DocArrayHnswSearch; can be one of "cosine", "ip", or "l2". Defaults to "cosine".
  max_elements (int) – Maximum number of vectors that can be stored. Defaults to 1024.
  index (bool) – Whether an index should be built for this field. Defaults to True.
  ef_construction (int) – Defines a construction-time/accuracy trade-off. Defaults to 200.
  ef (int) – Parameter controlling the query-time/accuracy trade-off. Defaults to 10.
  M (int) – Parameter that defines the maximum number of outgoing connections in the graph. Defaults to 16.
  allow_replace_deleted (bool) – Enables replacing deleted elements with newly added ones. Defaults to True.
  num_threads (int) – Sets the number of CPU threads to use. Defaults to 1.
  **kwargs – Other keyword arguments to be passed to the get_doc_cls method.
Return type: langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch

classmethod from_texts(texts, embedding, metadatas=None, work_dir=None, n_dim=None, **kwargs)
Create a DocArrayHnswSearch store and insert data.
Parameters:
  texts (List[str]) – Text data.
  embedding (Embeddings) – Embedding function.
  metadatas (Optional[List[dict]]) – Metadata for each text, if it exists. Defaults to None.
  work_dir (str) – Path to the location where all the data will be stored.
  n_dim (int) – Dimension of an embedding.
  **kwargs – Other keyword arguments to be passed to the __init__ method.
Returns: DocArrayHnswSearch vector store.
Return type: langchain.vectorstores.docarray.hnsw.DocArrayHnswSearch
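Example (a hedged sketch, not from the original reference: it assumes pip install "langchain[docarray]", a writable ./hnswlib_store/ directory, and that n_dim matches the embedding size, e.g. 1536 for OpenAI's text-embedding-ada-002):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayHnswSearch

db = DocArrayHnswSearch.from_params(
    embedding=OpenAIEmbeddings(),
    work_dir="./hnswlib_store/",
    n_dim=1536,
    dist_metric="cosine",
)
db.add_texts(["hello hnsw"])
docs = db.similarity_search("hello", k=1)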
class langchain.vectorstores.DocArrayInMemorySearch(doc_index, embedding)
Bases: langchain.vectorstores.docarray.base.DocArrayIndex
Wrapper around in-memory storage for exact search. To use it, you should have the docarray package with version >=0.32.0 installed. You can install it with pip install "langchain[docarray]".
Parameters:
  doc_index (BaseDocIndex) –
  embedding (langchain.embeddings.base.Embeddings) –

classmethod from_params(embedding, metric='cosine_sim', **kwargs)
Initialize a DocArrayInMemorySearch store.
Parameters:
  embedding (Embeddings) – Embedding function.
  metric (str) – Metric for exact nearest-neighbor search. Can be one of "cosine_sim", "euclidean_dist", or "sqeuclidean_dist". Defaults to "cosine_sim".
  **kwargs – Other keyword arguments to be passed to the get_doc_cls method.
Return type: langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch

classmethod from_texts(texts, embedding, metadatas=None, **kwargs)
Create a DocArrayInMemorySearch store and insert data.
Parameters:
  texts (List[str]) – Text data.
  embedding (Embeddings) – Embedding function.
  metadatas (Optional[List[Dict[Any, Any]]]) – Metadata for each text, if it exists. Defaults to None.
  metric (str) – Metric for exact nearest-neighbor search. Can be one of "cosine_sim", "euclidean_dist", or "sqeuclidean_dist". Defaults to "cosine_sim".
  kwargs (Any) –
Returns: DocArrayInMemorySearch vector store.
Return type: langchain.vectorstores.docarray.in_memory.DocArrayInMemorySearch
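Example (a hedged in-memory sketch, not from the original reference: exact search with nothing persisted, which suits tests and small corpora):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DocArrayInMemorySearch

db = DocArrayInMemorySearch.from_texts(
    texts=["cats purr", "dogs bark"],
    embedding=OpenAIEmbeddings(),
    metadatas=[{"animal": "cat"}, {"animal": "dog"}],
    metric="cosine_sim",
)
docs = db.similarity_search("which animal purrs?", k=1)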
class langchain.vectorstores.ElasticVectorSearch(elasticsearch_url, index_name, embedding, *, ssl_verify=None)
Bases: langchain.vectorstores.base.VectorStore, abc.ABC
Wrapper around Elasticsearch as a vector database. To connect to an Elasticsearch instance that does not require login credentials, pass the Elasticsearch URL and index name along with the embedding object to the constructor.
Example:

from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings

embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
    elasticsearch_url="http://localhost:9200",
    index_name="test_index",
    embedding=embedding
)

To connect to an Elasticsearch instance that requires login credentials, including Elastic Cloud, use the Elasticsearch URL format https://username:password@es_host:9243. For example, to connect to Elastic Cloud, create the Elasticsearch URL with the required authentication details and pass it to the ElasticVectorSearch constructor as the named parameter elasticsearch_url. You can obtain your Elastic Cloud URL and login credentials by logging in to the Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and navigating to the "Deployments" page.
To obtain your Elastic Cloud password for the default "elastic" user:
1. Log in to the Elastic Cloud console at https://cloud.elastic.co
2. Go to "Security" > "Users"
3. Locate the "elastic" user and click "Edit"
4. Click "Reset password"
5. Follow the prompts to reset the password
The format for Elastic Cloud URLs is https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example:

from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings

embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
    elasticsearch_url=elasticsearch_url,
    index_name="test_index",
    embedding=embedding
)

Parameters:
  elasticsearch_url (str) – The URL for the Elasticsearch instance.
  index_name (str) – The name of the Elasticsearch index for the embeddings.
  embedding (Embeddings) – An object that provides the ability to embed text. It should be an instance of a class that subclasses the Embeddings abstract base class, such as OpenAIEmbeddings().
  ssl_verify (Optional[Dict[str, Any]]) –
Raises: ValueError – If the elasticsearch python package is not installed.

add_texts(texts, metadatas=None, refresh_indices=True, ids=None, **kwargs)
Run more texts through the embeddings and add them to the vectorstore.
Parameters:
  texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
  metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
  refresh_indices (bool) – Whether to refresh the Elasticsearch indices.
  ids (Optional[List[str]]) –
  kwargs (Any) –
Returns: List of ids from adding the texts into the vectorstore.
Return type: List[str]
similarity_search(query, k=4, filter=None, **kwargs)
Return docs most similar to the query.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[dict]) –
  kwargs (Any) –
Returns: List of Documents most similar to the query.
Return type: List[langchain.schema.Document]

similarity_search_with_score(query, k=4, filter=None, **kwargs)
Return docs most similar to the query.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[dict]) –
  kwargs (Any) –
Returns: List of Documents most similar to the query.
Return type: List[Tuple[langchain.schema.Document, float]]

classmethod from_texts(texts, embedding, metadatas=None, elasticsearch_url=None, index_name=None, refresh_indices=True, **kwargs)
Construct an ElasticVectorSearch wrapper from raw documents.
This is a user-friendly interface that:
1. Embeds documents.
2. Creates a new index for the embeddings in the Elasticsearch instance.
3. Adds the documents to the newly created Elasticsearch index.
This is intended to be a quick way to get started.
Example:

from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch.from_texts(
    texts,
    embeddings,
    elasticsearch_url="http://localhost:9200"
)

Parameters:
  texts (List[str]) –
  embedding (langchain.embeddings.base.Embeddings) –
  metadatas (Optional[List[dict]]) –
  elasticsearch_url (Optional[str]) –
  index_name (Optional[str]) –
  refresh_indices (bool) –
  kwargs (Any) –
Return type: langchain.vectorstores.elastic_vector_search.ElasticVectorSearch

create_index(client, index_name, mapping)
Parameters:
  client (Any) –
  index_name (str) –
  mapping (Dict) –
Return type: None

client_search(client, index_name, script_query, size)
Parameters:
  client (Any) –
  index_name (str) –
  script_query (Dict) –
  size (int) –
Return type: Any

delete(ids)
Delete by vector IDs.
Parameters:
  ids (List[str]) – List of ids to delete.
Return type: None

class langchain.vectorstores.FAISS(embedding_function, index, docstore, index_to_docstore_id, relevance_score_fn=<function _default_relevance_score_fn>, normalize_L2=False)
Bases: langchain.vectorstores.base.VectorStore
Wrapper around the FAISS vector database. To use, you should have the faiss python package installed.
Example:

from langchain import FAISS

faiss = FAISS(embedding_function, index, docstore, index_to_docstore_id)

Parameters:
  embedding_function (Callable) –
  index (Any) –
  docstore (Docstore) –
  index_to_docstore_id (Dict[int, str]) –
  relevance_score_fn (Optional[Callable[[float], float]]) –
  normalize_L2 (bool) –
add_texts(texts, metadatas=None, ids=None, **kwargs)
Run more texts through the embeddings and add them to the vectorstore.
Parameters:
  texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
  metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
  ids (Optional[List[str]]) – Optional list of unique IDs.
  kwargs (Any) –
Returns: List of ids from adding the texts into the vectorstore.
Return type: List[str]

add_embeddings(text_embeddings, metadatas=None, ids=None, **kwargs)
Run more texts through the embeddings and add them to the vectorstore.
Parameters:
  text_embeddings (Iterable[Tuple[str, List[float]]]) – Iterable pairs of string and embedding to add to the vectorstore.
  metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts.
  ids (Optional[List[str]]) – Optional list of unique IDs.
  kwargs (Any) –
Returns: List of ids from adding the texts into the vectorstore.
Return type: List[str]
similarity_search_with_score_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)
Return docs most similar to the query vector.
Parameters:
  embedding (List[float]) – Embedding vector to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[Dict[str, Any]]) – Filter by metadata. Defaults to None.
  fetch_k (int) – Number of Documents to fetch before filtering. Defaults to 20.
  **kwargs – kwargs to be passed to the similarity search. Can include:
    score_threshold: Optional, a floating-point value between 0 and 1 used to filter the resulting set of retrieved docs.
Returns: List of documents most similar to the query text, with an L2 distance (float) for each. A lower score represents more similarity.
Return type: List[Tuple[langchain.schema.Document, float]]

similarity_search_with_score(query, k=4, filter=None, fetch_k=20, **kwargs)
Return docs most similar to the query.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  fetch_k (int) – Number of Documents to fetch before filtering. Defaults to 20.
  kwargs (Any) –
Returns: List of documents most similar to the query text, with an L2 distance (float) for each. A lower score represents more similarity.
Return type: List[Tuple[langchain.schema.Document, float]]

similarity_search_by_vector(embedding, k=4, filter=None, fetch_k=20, **kwargs)
Return docs most similar to an embedding vector.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None.
  fetch_k (int) – Number of Documents to fetch before filtering. Defaults to 20.
  kwargs (Any) –
Returns: List of Documents most similar to the embedding.
Return type: List[langchain.schema.Document]
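Example (a hedged end-to-end sketch, not from the original reference: the returned scores are L2 distances, so lower means more similar):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

faiss_store = FAISS.from_texts(
    ["FAISS is a similarity-search library", "bananas are yellow"],
    OpenAIEmbeddings(),
)
results = faiss_store.similarity_search_with_score("what is FAISS?", k=2)
for doc, score in results:  # lower L2 distance = more similar
    print(round(score, 3), doc.page_content)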
similarity_search(query, k=4, filter=None, fetch_k=20, **kwargs)
Return docs most similar to the query.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  filter (Optional[Dict[str, Any]]) – Filter by metadata. Defaults to None.
  fetch_k (int) – Number of Documents to fetch before filtering. Defaults to 20.
  kwargs (Any) –
Returns: List of Documents most similar to the query.
Return type: List[langchain.schema.Document]

max_marginal_relevance_search_with_score_by_vector(embedding, *, k=4, fetch_k=20, lambda_mult=0.5, filter=None)
Return docs and their similarity scores, selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch before filtering to pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  filter (Optional[Dict[str, Any]]) –
Returns: List of Documents selected by maximal marginal relevance, with a score for each.
Return type: List[Tuple[langchain.schema.Document, float]]

max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  embedding (List[float]) – Embedding to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch before filtering to pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  filter (Optional[Dict[str, Any]]) –
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]

max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, **kwargs)
Return docs selected using maximal marginal relevance, which optimizes for similarity to the query AND diversity among the selected documents.
Parameters:
  query (str) – Text to look up documents similar to.
  k (int) – Number of Documents to return. Defaults to 4.
  fetch_k (int) – Number of Documents to fetch before filtering (if needed) to pass to the MMR algorithm.
  lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results, with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5.
  filter (Optional[Dict[str, Any]]) –
  kwargs (Any) –
Returns: List of Documents selected by maximal marginal relevance.
Return type: List[langchain.schema.Document]
merge_from(target)
Merge another FAISS object into the current one, adding the target FAISS index to the current index.
Parameters:
  target (langchain.vectorstores.faiss.FAISS) – FAISS object you wish to merge into the current one.
Returns: None.
Return type: None
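Example (a hedged sketch of merging, not from the original reference: both stores must use the same embedding dimensionality, and the target's vectors and docstore entries are appended to the current index):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

embeddings = OpenAIEmbeddings()
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.merge_from(db2)                   # db1 now contains both documents
print(len(db1.index_to_docstore_id))  # 2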
classmethod from_texts(texts, embedding, metadatas=None, ids=None, **kwargs)
Construct a FAISS wrapper from raw documents. This is a user-friendly interface that:
1. Embeds documents.
2. Creates an in-memory docstore.
3. Initializes the FAISS database.
This is intended to be a quick way to get started.
Example:

from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
faiss = FAISS.from_texts(texts, embeddings)

Parameters:
  texts (List[str]) –
  embedding (langchain.embeddings.base.Embeddings) –
  metadatas (Optional[List[dict]]) –
  ids (Optional[List[str]]) –
  kwargs (Any) –
Return type: langchain.vectorstores.faiss.FAISS

classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ids=None, **kwargs)
Construct a FAISS wrapper from raw documents and pre-computed embeddings. This is a user-friendly interface that:
1. Embeds documents.
2. Creates an in-memory docstore.
3. Initializes the FAISS database.
This is intended to be a quick way to get started.
Example:

from langchain import FAISS
from langchain.embeddings import OpenAIEmbeddings

embeddings = OpenAIEmbeddings()
text_embeddings = embeddings.embed_documents(texts)
text_embedding_pairs = list(zip(texts, text_embeddings))
faiss = FAISS.from_embeddings(text_embedding_pairs, embeddings)

Parameters:
  text_embeddings (List[Tuple[str, List[float]]]) –
  embedding (langchain.embeddings.base.Embeddings) –
  metadatas (Optional[List[dict]]) –
  ids (Optional[List[str]]) –
  kwargs (Any) –
Return type: langchain.vectorstores.faiss.FAISS

save_local(folder_path, index_name='index')
Save the FAISS index, docstore, and index_to_docstore_id to disk.
Parameters:
  folder_path (str) – Folder path to save the index, docstore, and index_to_docstore_id to.
  index_name (str) – For saving with a specific index file name.
Return type: None

classmethod load_local(folder_path, embeddings, index_name='index')
Load the FAISS index, docstore, and index_to_docstore_id from disk.
Parameters:
  folder_path (str) – Folder path to load the index, docstore, and index_to_docstore_id from.
  embeddings (langchain.embeddings.base.Embeddings) – Embeddings to use when generating queries.
  index_name (str) – For loading with a specific index file name.
Return type: langchain.vectorstores.faiss.FAISS
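Example (a hedged round-trip sketch, not from the original reference: save_local writes the index and docstore files under folder_path, and load_local needs an equivalent embeddings object to embed future queries):

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS

db = FAISS.from_texts(["persist me"], OpenAIEmbeddings())
db.save_local("faiss_index")

restored = FAISS.load_local("faiss_index", OpenAIEmbeddings())
docs = restored.similarity_search("persist me", k=1)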
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-59
- NOTE: The table will be created when initializing the store (if not exists) So, make sure the user has the right permissions to create tables. pre_delete_table if True, will delete the table if it exists.(default: False) - Useful for testing. Parameters connection_string (str) – embedding_function (Embeddings) – ndims (int) – table_name (str) – pre_delete_table (bool) – logger (Optional[logging.Logger]) – Return type None create_vector_extension()[source] Return type None create_table()[source] Return type None add_embeddings(texts, embeddings, metadatas, ids, **kwargs)[source] Add embeddings to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. embeddings (List[List[float]]) – List of list of embedding vectors. metadatas (List[dict]) – List of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (List[str]) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Run similarity search with Hologres with distance.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-60
Run similarity search with Hologres with distance. Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, filter=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] similarity_search_with_score(query, k=4, filter=None)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. Defaults to None. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search_with_score_by_vector(embedding, k=4, filter=None)[source] Parameters embedding (List[float]) – k (int) – filter (Optional[dict]) – Return type List[Tuple[langchain.schema.Document, float]]
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-61
Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Postgres connection string is required “Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ndims (int) – table_name (str) – ids (Optional[List[str]]) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod from_embeddings(text_embeddings, embedding, metadatas=None, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_table=False, **kwargs)[source] Construct Hologres wrapper from raw documents and pre- generated embeddings. Return VectorStore initialized from documents and embeddings. Postgres connection string is required “Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Example from langchain import Hologres from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() text_embeddings = embeddings.embed_documents(texts) text_embedding_pairs = list(zip(texts, text_embeddings)) faiss = Hologres.from_embeddings(text_embedding_pairs, embeddings) Parameters text_embeddings (List[Tuple[str, List[float]]]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ndims (int) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-62
metadatas (Optional[List[dict]]) – ndims (int) – table_name (str) – ids (Optional[List[str]]) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod from_existing_index(embedding, ndims=1536, table_name='langchain_pg_embedding', pre_delete_table=False, **kwargs)[source] Get intsance of an existing Hologres store.This method will return the instance of the store without inserting any new embeddings Parameters embedding (langchain.embeddings.base.Embeddings) – ndims (int) – table_name (str) – pre_delete_table (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod get_connection_string(kwargs)[source] Parameters kwargs (Dict[str, Any]) – Return type str classmethod from_documents(documents, embedding, ndims=1536, table_name='langchain_pg_embedding', ids=None, pre_delete_collection=False, **kwargs)[source] Return VectorStore initialized from documents and embeddings. Postgres connection string is required “Either pass it as a parameter or set the HOLOGRES_CONNECTION_STRING environment variable. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – ndims (int) – table_name (str) – ids (Optional[List[str]]) – pre_delete_collection (bool) – kwargs (Any) – Return type langchain.vectorstores.hologres.Hologres classmethod connection_string_from_db_params(host, port, database, user, password)[source]
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-63
Return connection string from database parameters. Parameters host (str) – port (int) – database (str) – user (str) – password (str) – Return type str class langchain.vectorstores.LanceDB(connection, embedding, vector_key='vector', id_key='id', text_key='text')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around LanceDB vector database. To use, you should have lancedb python package installed. Example db = lancedb.connect('./lancedb') table = db.open_table('my_table') vectorstore = LanceDB(table, embedding_function) vectorstore.add_texts(['text1', 'text2']) result = vectorstore.similarity_search('text1') Parameters connection (Any) – embedding (Embeddings) – vector_key (Optional[str]) – id_key (Optional[str]) – text_key (Optional[str]) – add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Turn texts into embedding and add it to the database Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. kwargs (Any) – Returns List of ids of the added texts. Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return documents most similar to the query Parameters query (str) – String to query the vectorstore with. k (int) – Number of documents to return. kwargs (Any) – Returns
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-64
kwargs (Any) – Returns List of documents most similar to the query. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, connection=None, vector_key='vector', id_key='id', text_key='text', **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – connection (Any) – vector_key (Optional[str]) – id_key (Optional[str]) – text_key (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.lancedb.LanceDB class langchain.vectorstores.MatchingEngine(project_id, index, endpoint, embedding, gcs_client, gcs_bucket_name, credentials=None)[source] Bases: langchain.vectorstores.base.VectorStore Vertex Matching Engine implementation of the vector store. While the embeddings are stored in the Matching Engine, the embedded documents will be stored in GCS. An existing Index and corresponding Endpoint are preconditions for using this module. See usage in docs/modules/indexes/vectorstores/examples/matchingengine.ipynb Note that this implementation is mostly meant for reading if you are planning to do a real time implementation. While reading is a real time operation, updating the index takes close to one hour. Parameters project_id (str) – index (MatchingEngineIndex) – endpoint (MatchingEngineIndexEndpoint) – embedding (Embeddings) – gcs_client (storage.Client) – gcs_bucket_name (str) – credentials (Optional[Credentials]) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-65
gcs_bucket_name (str) – credentials (Optional[Credentials]) – add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters. Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – The string that will be used to search for similar documents. k (int) – The amount of neighbors that will be retrieved. kwargs (Any) – Returns A list of k matching documents. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, **kwargs)[source] Use from components instead. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type langchain.vectorstores.matching_engine.MatchingEngine classmethod from_components(project_id, region, gcs_bucket_name, index_id, endpoint_id, credentials_path=None, embedding=None)[source] Takes the object creation out of the constructor. Parameters project_id (str) – The GCP project id. region (str) – The default location making the API calls. It must have regional. (the same location as the GCS bucket and must be) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-66
regional. (the same location as the GCS bucket and must be) – gcs_bucket_name (str) – The location where the vectors will be stored in created. (order for the index to be) – index_id (str) – The id of the created index. endpoint_id (str) – The id of the created endpoint. credentials_path (Optional[str]) – (Optional) The path of the Google credentials on system. (the local file) – embedding (Optional[langchain.embeddings.base.Embeddings]) – The Embeddings that will be used for texts. (embedding the) – Returns A configured MatchingEngine with the texts added to the index. Return type langchain.vectorstores.matching_engine.MatchingEngine class langchain.vectorstores.Milvus(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around the Milvus vector database. Parameters embedding_function (Embeddings) – collection_name (str) – connection_args (Optional[dict[str, Any]]) – consistency_level (str) – index_params (Optional[dict]) – search_params (Optional[dict]) – drop_old (Optional[bool]) – add_texts(texts, metadatas=None, timeout=None, batch_size=1000, **kwargs)[source] Insert text data into Milvus. Inserting data when the collection has not be made yet will result in creating a new Collection. The data of the first entity decides the schema of the new collection, the dim is extracted from the first embedding and the columns are decided by the first metadata dict.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-67
embedding and the columns are decided by the first metadata dict. Metada keys will need to be present for all inserted values. At the moment there is no None equivalent in Milvus. Parameters texts (Iterable[str]) – The texts to embed, it is assumed that they all fit in memory. metadatas (Optional[List[dict]]) – Metadata dicts attached to each of the texts. Defaults to None. timeout (Optional[int]) – Timeout for each batch insert. Defaults to None. batch_size (int, optional) – Batch size to use for insertion. Defaults to 1000. kwargs (Any) – Raises MilvusException – Failure to add texts Returns The resulting keys for each inserted element. Return type List[str] similarity_search(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a similarity search against the query string. Parameters query (str) – The text to search. k (int, optional) – How many results to return. Defaults to 4. param (dict, optional) – The search params for the index type. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] similarity_search_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a similarity search against the query string. Parameters embedding (List[float]) – The embedding vector to search. k (int, optional) – How many results to return. Defaults to 4.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-68
k (int, optional) – How many results to return. Defaults to 4. param (dict, optional) – The search params for the index type. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] similarity_search_with_score(query, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search on a query string and return results with score. For more information about the search parameters, take a look at the pymilvus documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md Parameters query (str) – The text being searched. k (int, optional) – The amount of results ot return. Defaults to 4. param (dict) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Return type List[float], List[Tuple[Document, any, any]] similarity_search_with_score_by_vector(embedding, k=4, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search on a query string and return results with score. For more information about the search parameters, take a look at the pymilvus documentation found here:
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-69
documentation found here: https://milvus.io/api-reference/pymilvus/v2.2.6/Collection/search().md Parameters embedding (List[float]) – The embedding vector being searched. k (int, optional) – The amount of results ot return. Defaults to 4. param (dict) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Result doc and score. Return type List[Tuple[Document, float]] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search and return results that are reordered by MMR. Parameters query (str) – The text being searched. k (int, optional) – How many results to give. Defaults to 4. fetch_k (int, optional) – Total results to select k from. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5 param (dict, optional) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document]
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-70
Returns Document results for search. Return type List[Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, param=None, expr=None, timeout=None, **kwargs)[source] Perform a search and return results that are reordered by MMR. Parameters embedding (str) – The embedding vector being searched. k (int, optional) – How many results to give. Defaults to 4. fetch_k (int, optional) – Total results to select k from. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5 param (dict, optional) – The search params for the specified index. Defaults to None. expr (str, optional) – Filtering expression. Defaults to None. timeout (int, optional) – How long to wait before timeout error. Defaults to None. kwargs (Any) – Collection.search() keyword arguments. Returns Document results for search. Return type List[Document] classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={'host': 'localhost', 'password': '', 'port': '19530', 'secure': False, 'user': ''}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source] Create a Milvus collection, indexes it with HNSW, and insert data. Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[dict]]) – Metadata for each text if it exists.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-71
metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None. collection_name (str, optional) – Collection name to use. Defaults to “LangChainCollection”. connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION. consistency_level (str, optional) – Which consistency level to use. Defaults to “Session”. index_params (Optional[dict], optional) – Which index_params to use. Defaults to None. search_params (Optional[dict], optional) – Which search params to use. Defaults to None. drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False. kwargs (Any) – Returns Milvus Vector Store Return type Milvus class langchain.vectorstores.Zilliz(embedding_function, collection_name='LangChainCollection', connection_args=None, consistency_level='Session', index_params=None, search_params=None, drop_old=False)[source] Bases: langchain.vectorstores.milvus.Milvus Parameters embedding_function (Embeddings) – collection_name (str) – connection_args (Optional[dict[str, Any]]) – consistency_level (str) – index_params (Optional[dict]) – search_params (Optional[dict]) – drop_old (Optional[bool]) – classmethod from_texts(texts, embedding, metadatas=None, collection_name='LangChainCollection', connection_args={}, consistency_level='Session', index_params=None, search_params=None, drop_old=False, **kwargs)[source] Create a Zilliz collection, indexes it with HNSW, and insert data. Parameters texts (List[str]) – Text data.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-72
Parameters texts (List[str]) – Text data. embedding (Embeddings) – Embedding function. metadatas (Optional[List[dict]]) – Metadata for each text if it exists. Defaults to None. collection_name (str, optional) – Collection name to use. Defaults to “LangChainCollection”. connection_args (dict[str, Any], optional) – Connection args to use. Defaults to DEFAULT_MILVUS_CONNECTION. consistency_level (str, optional) – Which consistency level to use. Defaults to “Session”. index_params (Optional[dict], optional) – Which index_params to use. Defaults to None. search_params (Optional[dict], optional) – Which search params to use. Defaults to None. drop_old (Optional[bool], optional) – Whether to drop the collection with that name if it exists. Defaults to False. kwargs (Any) – Returns Zilliz Vector Store Return type Zilliz class langchain.vectorstores.SingleStoreDB(embedding, *, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore This class serves as a Pythonic interface to the SingleStore DB database. The prerequisite for using this class is the installation of the singlestoredb Python package. The SingleStoreDB vectorstore can be created by providing an embedding function and the relevant parameters for the database connection, connection pool, and optionally, the names of the table and the fields to use. Parameters embedding (Embeddings) – distance_strategy (DistanceStrategy) – table_name (str) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-73
distance_strategy (DistanceStrategy) – table_name (str) – content_field (str) – metadata_field (str) – vector_field (str) – pool_size (int) – max_overflow (int) – timeout (float) – kwargs (Any) – vector_field Pass the rest of the kwargs to the connection. connection_kwargs Add program name and version to connection attributes. add_texts(texts, metadatas=None, embeddings=None, **kwargs)[source] Add more texts to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. Defaults to None. embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None. kwargs (Any) – Returns empty list Return type List[str] similarity_search(query, k=4, filter=None, **kwargs)[source] Returns the most similar indexed documents to the query text. Uses cosine similarity. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. filter (dict) – A dictionary of metadata fields and values to filter by. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] Examples similarity_search_with_score(query, k=4, filter=None)[source] Return docs most similar to query. Uses cosine similarity. Parameters query (str) – Text to look up documents similar to.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-74
Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – A dictionary of metadata fields and values to filter by. Defaults to None. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts(texts, embedding, metadatas=None, distance_strategy=DistanceStrategy.DOT_PRODUCT, table_name='embeddings', content_field='content', metadata_field='metadata', vector_field='vector', pool_size=5, max_overflow=10, timeout=30, **kwargs)[source] Create a SingleStoreDB vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new table for the embeddings in SingleStoreDB. Adds the documents to the newly created table. This is intended to be a quick way to get started. .. rubric:: Example Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – distance_strategy (langchain.vectorstores.singlestoredb.DistanceStrategy) – table_name (str) – content_field (str) – metadata_field (str) – vector_field (str) – pool_size (int) – max_overflow (int) – timeout (float) – kwargs (Any) – Return type langchain.vectorstores.singlestoredb.SingleStoreDB as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.singlestoredb.SingleStoreDBRetriever
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-75
Return type langchain.vectorstores.singlestoredb.SingleStoreDBRetriever class langchain.vectorstores.Clarifai(user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Clarifai AI platform’s vector store. To use, you should have the clarifai python package installed. Example from langchain.vectorstores import Clarifai from langchain.embeddings.openai import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Clarifai("langchain_store", embeddings.embed_query) Parameters user_id (Optional[str]) – app_id (Optional[str]) – pat (Optional[str]) – number_of_docs (Optional[int]) – api_base (Optional[str]) – Return type None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Add texts to the Clarifai vectorstore. This will push the text to a Clarifai application. Application use base workflow that create and store embedding for each text. Make sure you are using a base workflow that is compatible with text (such as Language Understanding). Parameters texts (Iterable[str]) – Texts to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. ids (Optional[List[str]], optional) – Optional list of IDs. kwargs (Any) – Returns List of IDs of the added texts. Return type List[str] similarity_search_with_score(query, k=4, filter=None, namespace=None, **kwargs)[source] Run similarity search with score using Clarifai. Parameters query (str) – Query text to search for.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-76
Parameters query (str) – Query text to search for. k (int) – Number of results to return. Defaults to 4. filter (Optional[Dict[str, str]]) – Filter by metadata. None. (Defaults to) – namespace (Optional[str]) – kwargs (Any) – Returns List of documents most simmilar to the query text. Return type List[Document] similarity_search(query, k=4, **kwargs)[source] Run similarity search using Clarifai. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query and score for each Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding=None, metadatas=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source] Create a Clarifai vectorstore from a list of texts. Parameters user_id (str) – User ID. app_id (str) – App ID. texts (List[str]) – List of texts to add. pat (Optional[str]) – Personal access token. Defaults to None. number_of_docs (Optional[int]) – Number of documents to return None. (Defaults to) – api_base (Optional[str]) – API base. Defaults to None. metadatas (Optional[List[dict]]) – Optional list of metadatas. None. – embedding (Optional[langchain.embeddings.base.Embeddings]) – kwargs (Any) – Returns Clarifai vectorstore. Return type Clarifai
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-77
Returns Clarifai vectorstore. Return type Clarifai classmethod from_documents(documents, embedding=None, user_id=None, app_id=None, pat=None, number_of_docs=None, api_base=None, **kwargs)[source] Create a Clarifai vectorstore from a list of documents. Parameters user_id (str) – User ID. app_id (str) – App ID. documents (List[Document]) – List of documents to add. pat (Optional[str]) – Personal access token. Defaults to None. number_of_docs (Optional[int]) – Number of documents to return None. (during vector search. Defaults to) – api_base (Optional[str]) – API base. Defaults to None. embedding (Optional[langchain.embeddings.base.Embeddings]) – kwargs (Any) – Returns Clarifai vectorstore. Return type Clarifai class langchain.vectorstores.OpenSearchVectorSearch(opensearch_url, index_name, embedding_function, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around OpenSearch as a vector database. Example from langchain import OpenSearchVectorSearch opensearch_vector_search = OpenSearchVectorSearch( "http://localhost:9200", "embeddings", embedding_function ) Parameters opensearch_url (str) – index_name (str) – embedding_function (Embeddings) – kwargs (Any) – add_texts(texts, metadatas=None, ids=None, bulk_size=500, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-78
Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. bulk_size (int) – Bulk API request count; Default: 500 kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] Optional Args:vector_field: Document field embeddings are stored in. Defaults to “vector_field”. text_field: Document field the text of the document is stored in. Defaults to “text”. similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. By default, supports Approximate Search. Also supports Script Scoring and Painless Scripting. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[langchain.schema.Document] Optional Args:vector_field: Document field embeddings are stored in. Defaults to “vector_field”. text_field: Document field the text of the document is stored in. Defaults to “text”. metadata_field: Document field that metadata is stored in. Defaults to “metadata”. Can be set to a special value “*” to include the entire document. Optional Args for Approximate Search:search_type: “approximate_search”; default: “approximate_search” boolean_filter: A Boolean filter consists of a Boolean query that contains a k-NN query and a filter. subquery_clause: Query clause on the knn vector field; default: “must”
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-79
subquery_clause: Query clause on the knn vector field; default: “must” lucene_filter: the Lucene algorithm decides whether to perform an exact k-NN search with pre-filtering or an approximate search with modified post-filtering. Optional Args for Script Scoring Search:search_type: “script_scoring”; default: “approximate_search” space_type: “l2”, “l1”, “linf”, “cosinesimil”, “innerproduct”, “hammingbit”; default: “l2” pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {“match_all”: {}} Optional Args for Painless Scripting Search:search_type: “painless_scripting”; default: “approximate_search” space_type: “l2Squared”, “l1Norm”, “cosineSimilarity”; default: “l2Squared” pre_filter: script_score query to pre-filter documents before identifying nearest neighbors; default: {“match_all”: {}} similarity_search_with_score(query, k=4, **kwargs)[source] Return docs and it’s scores most similar to query. By default, supports Approximate Search. Also supports Script Scoring and Painless Scripting. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents along with its scores most similar to the query. Return type List[Tuple[langchain.schema.Document, float]] Optional Args:same as similarity_search max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-80
Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type list[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, bulk_size=500, **kwargs)[source] Construct OpenSearchVectorSearch wrapper from raw documents. Example from langchain import OpenSearchVectorSearch from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() opensearch_vector_search = OpenSearchVectorSearch.from_texts( texts, embeddings, opensearch_url="http://localhost:9200" ) OpenSearch by default supports Approximate Search powered by nmslib, faiss and lucene engines recommended for large datasets. Also supports brute force search through Script Scoring and Painless Scripting. Optional Args:vector_field: Document field embeddings are stored in. Defaults to “vector_field”. text_field: Document field the text of the document is stored in. Defaults to “text”. Optional Keyword Args for Approximate Search:engine: “nmslib”, “faiss”, “lucene”; default: “nmslib”
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-81
space_type: “l2”, “l1”, “cosinesimil”, “linf”, “innerproduct”; default: “l2” ef_search: Size of the dynamic list used during k-NN searches. Higher values lead to more accurate but slower searches; default: 512 ef_construction: Size of the dynamic list used during k-NN graph creation. Higher values lead to more accurate graph but slower indexing speed; default: 512 m: Number of bidirectional links created for each new element. Large impact on memory consumption. Between 2 and 100; default: 16 Keyword Args for Script Scoring or Painless Scripting:is_appx_search: False Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – bulk_size (int) – kwargs (Any) – Return type langchain.vectorstores.opensearch_vector_search.OpenSearchVectorSearch class langchain.vectorstores.MongoDBAtlasVectorSearch(collection, embedding, *, index_name='default', text_key='text', embedding_key='embedding')[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around MongoDB Atlas Vector Search. To use, you should have both: - the pymongo python package installed - a connection string associated with a MongoDB Atlas Cluster having deployed an Atlas Search index Example from langchain.vectorstores import MongoDBAtlasVectorSearch from langchain.embeddings.openai import OpenAIEmbeddings from pymongo import MongoClient mongo_client = MongoClient("<YOUR-CONNECTION-STRING>") collection = mongo_client["<db_name>"]["<collection_name>"] embeddings = OpenAIEmbeddings() vectorstore = MongoDBAtlasVectorSearch(collection, embeddings) Parameters collection (Collection[MongoDBDocumentType]) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-82
Parameters collection (Collection[MongoDBDocumentType]) – embedding (Embeddings) – index_name (str) – text_key (str) – embedding_key (str) – classmethod from_connection_string(connection_string, namespace, embedding, **kwargs)[source] Parameters connection_string (str) – namespace (str) – embedding (langchain.embeddings.base.Embeddings) – kwargs (Any) – Return type langchain.vectorstores.mongodb_atlas.MongoDBAtlasVectorSearch add_texts(texts, metadatas=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[Dict[str, Any]]]) – Optional list of metadatas associated with the texts. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List similarity_search_with_score(query, *, k=4, pre_filter=None, post_filter_pipeline=None)[source] Return MongoDB documents most similar to query, along with scores. Use the knnBeta Operator available in MongoDB Atlas Search This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes. For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta Parameters query (str) – Text to look up documents similar to. k (int) – Optional Number of Documents to return. Defaults to 4. pre_filter (Optional[dict]) – Optional Dictionary of argument(s) to prefilter on document fields.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-83
fields. post_filter_pipeline (Optional[List[Dict]]) – Optional Pipeline of MongoDB aggregation stages following the knnBeta search. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, pre_filter=None, post_filter_pipeline=None, **kwargs)[source] Return MongoDB documents most similar to query. Use the knnBeta Operator available in MongoDB Atlas Search This feature is in early access and available only for evaluation purposes, to validate functionality, and to gather feedback from a small closed group of early access users. It is not recommended for production deployments as we may introduce breaking changes. For more: https://www.mongodb.com/docs/atlas/atlas-search/knn-beta Parameters query (str) – Text to look up documents similar to. k (int) – Optional Number of Documents to return. Defaults to 4. pre_filter (Optional[dict]) – Optional Dictionary of argument(s) to prefilter on document fields. post_filter_pipeline (Optional[List[Dict]]) – Optional Pipeline of MongoDB aggregation stages following the knnBeta search. kwargs (Any) – Returns List of Documents most similar to the query and score for each Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, collection=None, **kwargs)[source] Construct MongoDBAtlasVectorSearch wrapper from raw documents. This is a user-friendly interface that: Embeds documents. Adds the documents to a provided MongoDB Atlas Vector Search index(Lucene) This is intended to be a quick way to get started. Example Parameters texts (List[str]) – embedding (Embeddings) – metadatas (Optional[List[dict]]) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-84
embedding (Embeddings) – metadatas (Optional[List[dict]]) – collection (Optional[Collection[MongoDBDocumentType]]) – kwargs (Any) – Return type MongoDBAtlasVectorSearch class langchain.vectorstores.MyScale(embedding, config=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around MyScale vector database You need a clickhouse-connect python package, and a valid account to connect to MyScale. MyScale can not only search with simple vector indexes, it also supports complex query with multiple conditions, constraints and even sub-queries. For more information, please visit[myscale official site](https://docs.myscale.com/en/overview/) Parameters embedding (Embeddings) – config (Optional[MyScaleSettings]) – kwargs (Any) – Return type None escape_str(value)[source] Parameters value (str) – Return type str add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Batch size of insertion metadata – Optional column data to be inserted metadatas (Optional[List[dict]]) – kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source] Create Myscale wrapper with existing texts
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-85
Create Myscale wrapper with existing texts Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (MyScaleSettings, Optional) – Myscale configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batchsize when transmitting data to MyScale. Defaults to 32. metadata (List[dict], optional) – metadata to texts. Defaults to None. into (Other keyword arguments will pass) – [clickhouse-connect](https://clickhouse.com/docs/en/integrations/python#clickhouse-connect-driver-api) embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[Dict[Any, Any]]]) – kwargs (Any) – Returns MyScale Index Return type langchain.vectorstores.myscale.MyScale similarity_search(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with MyScale Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-user to fill this and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document] similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source] Perform a similarity search with MyScale by vectors Parameters query (str) – query string
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-86
Perform a similarity search with MyScale by vectors Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-user to fill this and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. embedding (List[float]) – kwargs (Any) – Returns List of (Document, similarity) Return type List[Document] similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with MyScale Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let end-user to fill this and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Document] drop()[source] Helper function: Drop data Return type None property metadata_column: str pydantic settings langchain.vectorstores.MyScaleSettings[source] Bases: pydantic.env_settings.BaseSettings MyScale Client Configuration Attribute:
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-87
Bases: pydantic.env_settings.BaseSettings MyScale Client Configuration Attribute: myscale_host (str)An URL to connect to MyScale backend.Defaults to ‘localhost’. myscale_port (int) : URL port to connect with HTTP. Defaults to 8443. username (str) : Username to login. Defaults to None. password (str) : Password to login. Defaults to None. index_type (str): index type string. index_param (dict): index build parameter. database (str) : Database name to find the table. Defaults to ‘default’. table (str) : Table name to operate on. Defaults to ‘vector_table’. metric (str)Metric to compute distance,supported are (‘l2’, ‘cosine’, ‘ip’). Defaults to ‘cosine’. column_map (Dict)Column type map to project column name onto langchainsemantics. Must have keys: text, id, vector, must be same size to number of columns. For example: .. code-block:: python {‘id’: ‘text_id’, ‘vector’: ‘text_embedding’, ‘text’: ‘text_plain’, ‘metadata’: ‘metadata_dictionary_in_json’, } Defaults to identity map. Show JSON schema{ "title": "MyScaleSettings",
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-88
Show JSON schema{ "title": "MyScaleSettings", "description": "MyScale Client Configuration\n\nAttribute:\n myscale_host (str) : An URL to connect to MyScale backend.\n Defaults to 'localhost'.\n myscale_port (int) : URL port to connect with HTTP. Defaults to 8443.\n username (str) : Username to login. Defaults to None.\n password (str) : Password to login. Defaults to None.\n index_type (str): index type string.\n index_param (dict): index build parameter.\n database (str) : Database name to find the table. Defaults to 'default'.\n table (str) : Table name to operate on.\n Defaults to 'vector_table'.\n metric (str) : Metric to compute distance,\n supported are ('l2', 'cosine', 'ip'). Defaults to 'cosine'.\n column_map (Dict) : Column type map to project column name onto langchain\n semantics. Must have keys: `text`, `id`, `vector`,\n must be same size to number of columns. For example:\n .. code-block:: python\n\n {\n 'id': 'text_id',\n 'vector': 'text_embedding',\n 'text': 'text_plain',\n 'metadata': 'metadata_dictionary_in_json',\n }\n\n Defaults to identity map.", "type": "object", "properties": { "host": { "title": "Host", "default": "localhost", "env_names": "{'myscale_host'}", "type": "string" }, "port": { "title": "Port",
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-89
}, "port": { "title": "Port", "default": 8443, "env_names": "{'myscale_port'}", "type": "integer" }, "username": { "title": "Username", "env_names": "{'myscale_username'}", "type": "string" }, "password": { "title": "Password", "env_names": "{'myscale_password'}", "type": "string" }, "index_type": { "title": "Index Type", "default": "IVFFLAT", "env_names": "{'myscale_index_type'}", "type": "string" }, "index_param": { "title": "Index Param", "env_names": "{'myscale_index_param'}", "type": "object", "additionalProperties": { "type": "string" } }, "column_map": { "title": "Column Map", "default": { "id": "id", "text": "text", "vector": "vector", "metadata": "metadata" }, "env_names": "{'myscale_column_map'}", "type": "object", "additionalProperties": { "type": "string" } }, "database": { "title": "Database", "default": "default", "env_names": "{'myscale_database'}", "type": "string" }, "table": { "title": "Table",
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-90
}, "table": { "title": "Table", "default": "langchain", "env_names": "{'myscale_table'}", "type": "string" }, "metric": { "title": "Metric", "default": "cosine", "env_names": "{'myscale_metric'}", "type": "string" } }, "additionalProperties": false } Config env_file: str = .env env_file_encoding: str = utf-8 env_prefix: str = myscale_ Fields column_map (Dict[str, str]) database (str) host (str) index_param (Optional[Dict[str, str]]) index_type (str) metric (str) password (Optional[str]) port (int) table (str) username (Optional[str]) attribute column_map: Dict[str, str] = {'id': 'id', 'metadata': 'metadata', 'text': 'text', 'vector': 'vector'} attribute database: str = 'default' attribute host: str = 'localhost' attribute index_param: Optional[Dict[str, str]] = None attribute index_type: str = 'IVFFLAT' attribute metric: str = 'cosine' attribute password: Optional[str] = None attribute port: int = 8443 attribute table: str = 'langchain' attribute username: Optional[str] = None class langchain.vectorstores.Pinecone(index, embedding_function, text_key, namespace=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Pinecone vector database.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-91
Bases: langchain.vectorstores.base.VectorStore Wrapper around Pinecone vector database. To use, you should have the pinecone-client python package installed. Example from langchain.vectorstores import Pinecone from langchain.embeddings.openai import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console pinecone.init(api_key="***", environment="...") index = pinecone.Index("langchain-demo") embeddings = OpenAIEmbeddings() vectorstore = Pinecone(index, embeddings.embed_query, "text") Parameters index (Any) – embedding_function (Callable) – text_key (str) – namespace (Optional[str]) – add_texts(texts, metadatas=None, ids=None, namespace=None, batch_size=32, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. namespace (Optional[str]) – Optional pinecone namespace to add the texts to. batch_size (int) – kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, k=4, filter=None, namespace=None)[source] Return pinecone documents most similar to query, along with scores. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4.
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-92
k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – Dictionary of argument(s) to filter on metadata namespace (Optional[str]) – Namespace to search in. Default will search in ‘’ namespace. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, filter=None, namespace=None, **kwargs)[source] Return pinecone documents most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[dict]) – Dictionary of argument(s) to filter on metadata namespace (Optional[str]) – Namespace to search in. Default will search in ‘’ namespace. kwargs (Any) – Returns List of Documents most similar to the query and score for each Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[dict]) –
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-93
Defaults to 0.5. filter (Optional[dict]) – namespace (Optional[str]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, filter=None, namespace=None, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. filter (Optional[dict]) – namespace (Optional[str]) – kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, ids=None, batch_size=32, text_key='text', index_name=None, namespace=None, **kwargs)[source] Construct Pinecone wrapper from raw documents. This is a user friendly interface that: Embeds documents. Adds the documents to a provided Pinecone index This is intended to be a quick way to get started. Example from langchain import Pinecone from langchain.embeddings import OpenAIEmbeddings import pinecone # The environment should be the one specified next to the API key # in your Pinecone console
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-94
# in your Pinecone console pinecone.init(api_key="***", environment="...") embeddings = OpenAIEmbeddings() pinecone = Pinecone.from_texts( texts, embeddings, index_name="langchain-demo" ) Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – batch_size (int) – text_key (str) – index_name (Optional[str]) – namespace (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.pinecone.Pinecone classmethod from_existing_index(index_name, embedding, text_key='text', namespace=None)[source] Load pinecone vectorstore from index name. Parameters index_name (str) – embedding (langchain.embeddings.base.Embeddings) – text_key (str) – namespace (Optional[str]) – Return type langchain.vectorstores.pinecone.Pinecone delete(ids, namespace=None)[source] Delete by vector IDs. :param ids: List of ids to delete. Parameters ids (List[str]) – namespace (Optional[str]) – Return type None class langchain.vectorstores.Qdrant(client, collection_name, embeddings=None, content_payload_key='page_content', metadata_payload_key='metadata', embedding_function=None)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Qdrant vector database. To use you should have the qdrant-client package installed. Example from qdrant_client import QdrantClient from langchain import Qdrant client = QdrantClient() collection_name = "MyCollection"
https://api.python.langchain.com/en/stable/modules/vectorstores.html
2b28b54605b7-95
client = QdrantClient() collection_name = "MyCollection" qdrant = Qdrant(client, collection_name, embedding_function) Parameters client (Any) – collection_name (str) – embeddings (Optional[Embeddings]) – content_payload_key (str) – metadata_payload_key (str) – embedding_function (Optional[Callable]) – CONTENT_KEY = 'page_content' METADATA_KEY = 'metadata' add_texts(texts, metadatas=None, ids=None, batch_size=64, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[Sequence[str]]) – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. batch_size (int) – How many vectors upload per-request. Default: 64 kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results.
Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[Document] similarity_search_with_score(query, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas. kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Tuple[Document, float]] similarity_search_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding vector to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned.
Score of the returned result might be higher or smaller than the threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas. kwargs (Any) – Returns List of Documents most similar to the query. Return type List[Document] similarity_search_with_score_by_vector(embedding, k=4, filter=None, search_params=None, offset=0, score_threshold=None, consistency=None, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding vector to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. filter (Optional[MetadataFilter]) – Filter by metadata. Defaults to None. search_params (Optional[common_types.SearchParams]) – Additional search params offset (int) – Offset of the first result to return. May be used to paginate results. Note: large offset values may cause performance issues. score_threshold (Optional[float]) – Define a minimal score threshold for the result. If defined, less similar results will not be returned. Score of the returned result might be higher or smaller than the
threshold depending on the Distance function used. E.g. for cosine similarity only higher scores will be returned. consistency (Optional[common_types.ReadConsistency]) – Read consistency of the search. Defines how many replicas should be queried before returning the result. Values: int - number of replicas to query, values should be present in all queried replicas; 'majority' - query all replicas, but return values present in the majority of replicas; 'quorum' - query the majority of replicas, return values present in all of them; 'all' - query all replicas, and return values present in all replicas. kwargs (Any) – Returns List of documents most similar to the query text and cosine distance in float for each. Lower score represents more similarity. Return type List[Tuple[Document, float]] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. Defaults to 20. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document]
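As a quick illustration of these search methods, the sketch below queries an existing collection with a score threshold and then with MMR. The collection is assumed to have been populated already; the query strings and threshold values are placeholders.

```python
from qdrant_client import QdrantClient
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Qdrant

# Assumes a Qdrant instance on localhost with a populated "MyCollection".
client = QdrantClient()
qdrant = Qdrant(client, "MyCollection", embeddings=OpenAIEmbeddings())

# Plain similarity search, dropping weak matches via the score threshold.
docs = qdrant.similarity_search("What is a vectorstore?", k=4, score_threshold=0.8)

# The same query, trading some similarity for diversity with MMR.
diverse_docs = qdrant.max_marginal_relevance_search(
    "What is a vectorstore?", k=4, fetch_k=20, lambda_mult=0.5
)
```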
classmethod from_texts(texts, embedding, metadatas=None, ids=None, location=None, url=None, port=6333, grpc_port=6334, prefer_grpc=False, https=None, api_key=None, prefix=None, timeout=None, host=None, path=None, collection_name=None, distance_func='Cosine', content_payload_key='page_content', metadata_payload_key='metadata', batch_size=64, shard_number=None, replication_factor=None, write_consistency_factor=None, on_disk_payload=None, hnsw_config=None, optimizers_config=None, wal_config=None, quantization_config=None, init_from=None, **kwargs)[source] Construct Qdrant wrapper from a list of texts. Parameters texts (List[str]) – A list of texts to be indexed in Qdrant. embedding (Embeddings) – A subclass of Embeddings, responsible for text vectorization. metadatas (Optional[List[dict]]) – An optional list of metadata. If provided it has to be of the same length as the list of texts. ids (Optional[Sequence[str]]) – Optional list of ids to associate with the texts. Ids have to be uuid-like strings. location (Optional[str]) – If :memory: - use in-memory Qdrant instance. If str - use it as a url parameter. If None - fall back to relying on host and port parameters. url (Optional[str]) – either host or str of "Optional[scheme], host, Optional[port], Optional[prefix]". Default: None port (Optional[int]) – Port of the REST API interface. Default: 6333 grpc_port (int) – Port of the gRPC interface. Default: 6334 prefer_grpc (bool) – If true - use gRPC interface whenever possible in custom methods. Default: False
https (Optional[bool]) – If true - use HTTPS(SSL) protocol. Default: None api_key (Optional[str]) – API key for authentication in Qdrant Cloud. Default: None prefix (Optional[str]) – If not None - add prefix to the REST URL path. Example: service/v1 will result in http://localhost:6333/service/v1/{qdrant-endpoint} for REST API. Default: None timeout (Optional[float]) – Timeout for REST and gRPC API requests. Default: 5.0 seconds for REST and unlimited for gRPC host (Optional[str]) – Host name of Qdrant service. If url and host are None, set to 'localhost'. Default: None path (Optional[str]) – Path in which the vectors will be stored while using local mode. Default: None collection_name (Optional[str]) – Name of the Qdrant collection to be used. If not provided, a random name will be generated. Default: None distance_func (str) – Distance function. One of: "Cosine" / "Euclid" / "Dot". Default: "Cosine" content_payload_key (str) – A payload key used to store the content of the document. Default: "page_content" metadata_payload_key (str) – A payload key used to store the metadata of the document. Default: "metadata" batch_size (int) – How many vectors to upload per request. Default: 64 shard_number (Optional[int]) – Number of shards in collection. Default is 1, minimum is 1. replication_factor (Optional[int]) – Replication factor for collection. Default is 1, minimum is 1. Defines how many copies of each shard will be created. Has effect only in distributed mode.
write_consistency_factor (Optional[int]) – Write consistency factor for collection. Default is 1, minimum is 1. Defines how many replicas should apply the operation for us to consider it successful. Increasing this number will make the collection more resilient to inconsistencies, but will also make it fail if not enough replicas are available. Does not have any performance impact. Has effect only in distributed mode. on_disk_payload (Optional[bool]) – If true - the point's payload will not be stored in memory. It will be read from the disk every time it is requested. This setting saves RAM by (slightly) increasing the response time. Note: those payload values that are involved in filtering and are indexed remain in RAM. hnsw_config (Optional[common_types.HnswConfigDiff]) – Params for HNSW index optimizers_config (Optional[common_types.OptimizersConfigDiff]) – Params for optimizer wal_config (Optional[common_types.WalConfigDiff]) – Params for Write-Ahead-Log quantization_config (Optional[common_types.QuantizationConfig]) – Params for quantization, if None - quantization will be disabled init_from (Optional[common_types.InitFrom]) – Use data stored in another collection to initialize this collection **kwargs – Additional arguments passed directly into REST client initialization kwargs (Any) – Return type Qdrant This is a user-friendly interface that: 1. Creates embeddings, one for each text 2. Initializes the Qdrant database as an in-memory docstore by default (and overridable to a remote docstore) 3. Adds the text embeddings to the Qdrant database This is intended to be a quick way to get started. Example
from langchain import Qdrant from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() qdrant = Qdrant.from_texts(texts, embeddings, "localhost") class langchain.vectorstores.Redis(redis_url, index_name, embedding_function, content_key='content', metadata_key='metadata', vector_key='content_vector', relevance_score_fn=<function _default_relevance_score>, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Redis vector database. To use, you should have the redis python package installed. Example from langchain.vectorstores import Redis from langchain.embeddings import OpenAIEmbeddings embeddings = OpenAIEmbeddings() vectorstore = Redis( redis_url="redis://username:password@localhost:6379", index_name="my-index", embedding_function=embeddings.embed_query, ) Parameters redis_url (str) – index_name (str) – embedding_function (Callable) – content_key (str) – metadata_key (str) – vector_key (str) – relevance_score_fn (Optional[Callable[[float], float]]) – kwargs (Any) – add_texts(texts, metadatas=None, embeddings=None, batch_size=1000, **kwargs)[source] Add more texts to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings/text to add to the vectorstore. metadatas (Optional[List[dict]], optional) – Optional list of metadatas. Defaults to None. embeddings (Optional[List[List[float]]], optional) – Optional pre-generated embeddings. Defaults to None.
keys (List[str]) or ids (List[str]) – Identifiers of entries. Defaults to None. batch_size (int, optional) – Batch size to use for writes. Defaults to 1000. kwargs (Any) – Returns List of ids added to the vectorstore Return type List[str] similarity_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document] similarity_search_limit_score(query, k=4, score_threshold=0.2, **kwargs)[source] Returns the most similar indexed documents to the query text within the score_threshold range. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. score_threshold (float) – The minimum matching score required for a document to be considered a match. Defaults to 0.2. Because the similarity calculation algorithm is based on cosine similarity, the smaller the angle, the higher the similarity. kwargs (Any) – Returns A list of documents that are most similar to the query text, including the match score for each document. Return type List[Document]
Note If there are no documents that satisfy the score_threshold value, an empty list is returned. similarity_search_with_score(query, k=4)[source] Return docs most similar to query. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. Returns List of Documents most similar to the query and score for each Return type List[Tuple[langchain.schema.Document, float]] classmethod from_texts_return_keys(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', distance_metric='COSINE', **kwargs)[source] Create a Redis vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in Redis. Adds the documents to the newly created Redis index. Returns the keys of the newly created documents. This is intended to be a quick way to get started. Example Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (Optional[str]) – content_key (str) – metadata_key (str) – vector_key (str) – distance_metric (Literal['COSINE', 'IP', 'L2']) – kwargs (Any) – Return type Tuple[langchain.vectorstores.redis.Redis, List[str]]
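A minimal sketch of building a Redis index and filtering results by score; the connection URL, texts, and threshold are placeholders.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Redis

embeddings = OpenAIEmbeddings()

# Build the index and keep the generated document keys.
vectorstore, keys = Redis.from_texts_return_keys(
    texts=["foo", "bar", "baz"],
    embedding=embeddings,
    redis_url="redis://localhost:6379",  # assumed local Redis with RediSearch
    index_name="my-index",
)

# Only return documents whose similarity clears the threshold.
docs = vectorstore.similarity_search_limit_score("foo", k=4, score_threshold=0.2)
```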
classmethod from_texts(texts, embedding, metadatas=None, index_name=None, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source] Create a Redis vectorstore from raw documents. This is a user-friendly interface that: Embeds documents. Creates a new index for the embeddings in Redis. Adds the documents to the newly created Redis index. This is intended to be a quick way to get started. Example Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (Optional[str]) – content_key (str) – metadata_key (str) – vector_key (str) – kwargs (Any) – Return type langchain.vectorstores.redis.Redis static delete(ids, **kwargs)[source] Delete a Redis entry. Parameters ids (List[str]) – List of ids (keys) to delete. kwargs (Any) – Returns Whether or not the deletions were successful. Return type bool static drop_index(index_name, delete_documents, **kwargs)[source] Drop a Redis search index. Parameters index_name (str) – Name of the index to drop. delete_documents (bool) – Whether to drop the associated documents. kwargs (Any) – Returns Whether or not the drop was successful. Return type bool classmethod from_existing_index(embedding, index_name, content_key='content', metadata_key='metadata', vector_key='content_vector', **kwargs)[source] Connect to an existing Redis index. Parameters
embedding (langchain.embeddings.base.Embeddings) – index_name (str) – content_key (str) – metadata_key (str) – vector_key (str) – kwargs (Any) – Return type langchain.vectorstores.redis.Redis as_retriever(**kwargs)[source] Parameters kwargs (Any) – Return type langchain.vectorstores.redis.RedisVectorStoreRetriever class langchain.vectorstores.Rockset(client, embeddings, collection_name, text_key, embedding_key)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Rockset vector database. To use, you should have the rockset python package installed. Note that to use this, the collection being used must already exist in your Rockset instance. You must also ensure you use a Rockset ingest transformation to apply VECTOR_ENFORCE on the column being used to store embedding_key in the collection. See: https://rockset.com/blog/introducing-vector-search-on-rockset/ for more details. Everything below assumes the commons Rockset workspace. TODO: Add support for workspace args. Example from langchain.vectorstores import Rockset from langchain.embeddings.openai import OpenAIEmbeddings import rockset # Make sure you use the right host (region) for your Rockset instance # and your API key has both read and write access to your collection. rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***") collection_name = "langchain_demo" embeddings = OpenAIEmbeddings() vectorstore = Rockset(rs, collection_name, embeddings, "description", "description_embedding")
collection_name (str) – text_key (str) – embedding_key (str) – add_texts(texts, metadatas=None, ids=None, batch_size=32, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. ids (Optional[List[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Send documents in batches to Rockset. kwargs (Any) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, client=None, collection_name='', text_key='', embedding_key='', ids=None, batch_size=32, **kwargs)[source] Create Rockset wrapper with existing texts. This is intended as a quicker way to get started. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – client (Any) – collection_name (str) – text_key (str) – embedding_key (str) – ids (Optional[List[str]]) – batch_size (int) – kwargs (Any) – Return type langchain.vectorstores.rocksetdb.Rockset
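As a sketch, constructing the store via from_texts might look like the following; the region, API key, and field names are placeholders, and the collection must already exist with VECTOR_ENFORCE applied to the embedding column.

```python
import rockset
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Rockset

# Hypothetical region and API key; adjust to your Rockset deployment.
rs = rockset.RocksetClient(host=rockset.Regions.use1a1, api_key="***")
embeddings = OpenAIEmbeddings()

vectorstore = Rockset.from_texts(
    texts=["foo", "bar"],
    embedding=embeddings,
    client=rs,
    collection_name="langchain_demo",
    text_key="description",
    embedding_key="description_embedding",
)
```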
class DistanceFunction(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source] Bases: enum.Enum COSINE_SIM = 'COSINE_SIM' EUCLIDEAN_DIST = 'EUCLIDEAN_DIST' DOT_PRODUCT = 'DOT_PRODUCT' order_by()[source] Return type str similarity_search_with_relevance_scores(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Perform a similarity search with Rockset Parameters query (str) – Text to look up documents similar to. distance_func (DistanceFunction) – how to compute distance between two vectors in Rockset. k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – Metadata filters supplied as a SQL where condition string. Defaults to None. e.g. "price<=70.0 AND brand='Nintendo'" NOTE – Please do not let the end user fill this in, and always be aware of SQL injection. kwargs (Any) – Returns List of documents with their relevance score Return type List[Tuple[Document, float]] similarity_search(query, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Same as similarity_search_with_relevance_scores but doesn't return the scores. Parameters query (str) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Document]
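For illustration, a filtered search over the store built above might look like this; the filter string is a hard-coded example and should never be assembled from raw user input.

```python
from langchain.vectorstores import Rockset

# `vectorstore` is assumed to be the Rockset store constructed earlier.
docs_and_scores = vectorstore.similarity_search_with_relevance_scores(
    "hand-held game console",
    k=4,
    distance_func=Rockset.DistanceFunction.COSINE_SIM,
    where_str="price<=70.0 AND brand='Nintendo'",  # hard-coded, not user input
)
```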
similarity_search_by_vector(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Accepts a query_embedding (vector), and returns documents with similar embeddings. Parameters embedding (List[float]) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Document] similarity_search_by_vector_with_relevance_scores(embedding, k=4, distance_func=DistanceFunction.COSINE_SIM, where_str=None, **kwargs)[source] Accepts a query_embedding (vector), and returns documents with similar embeddings along with their relevance scores. Parameters embedding (List[float]) – k (int) – distance_func (DistanceFunction) – where_str (Optional[str]) – kwargs (Any) – Return type List[Tuple[Document, float]] delete_texts(ids)[source] Delete a list of docs from the Rockset collection Parameters ids (List[str]) – Return type None class langchain.vectorstores.SKLearnVectorStore(embedding, *, persist_path=None, serializer='json', metric='cosine', **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore A simple in-memory vector store based on the scikit-learn NearestNeighbors implementation. Parameters embedding (langchain.embeddings.base.Embeddings) – persist_path (Optional[str]) – serializer (Literal['json', 'bson', 'parquet']) – metric (str) – kwargs (Any) – Return type None persist()[source] Return type
None add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] similarity_search_with_score(query, *, k=4, **kwargs)[source] Parameters query (str) – k (int) – kwargs (Any) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param embedding: Embedding to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity.
Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. Parameters embedding (List[float]) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. :param query: Text to look up documents similar to. :param k: Number of Documents to return. Defaults to 4. :param fetch_k: Number of Documents to fetch to pass to MMR algorithm. :param lambda_mult: Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. Returns List of Documents selected by maximal marginal relevance. Parameters query (str) – k (int) – fetch_k (int) – lambda_mult (float) – kwargs (Any) – Return type List[langchain.schema.Document] classmethod from_texts(texts, embedding, metadatas=None, ids=None, persist_path=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – ids (Optional[List[str]]) – persist_path (Optional[str]) – kwargs (Any) – Return type langchain.vectorstores.sklearn.SKLearnVectorStore
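A small sketch of end-to-end use, assuming an OpenAI embedding model; the persist path is a placeholder and can be omitted for a purely in-memory store.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SKLearnVectorStore

embeddings = OpenAIEmbeddings()

# Build the store; persist_path is optional and illustrative.
vectorstore = SKLearnVectorStore.from_texts(
    texts=["foo", "bar", "baz"],
    embedding=embeddings,
    persist_path="/tmp/sklearn_vectorstore.json",
)

docs = vectorstore.similarity_search("foo", k=2)
vectorstore.persist()  # write the store to persist_path as JSON
```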
class langchain.vectorstores.StarRocks(embedding, config=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around StarRocks vector database. You need the pymysql python package and a valid account to connect to StarRocks. Right now StarRocks has only implemented the cosine_similarity function to compute the distance between two vectors, and there is no vector index yet, so we have to iterate over all vectors and compute the spatial distance. For more information, please visit [StarRocks official site](https://www.starrocks.io/) and [StarRocks github](https://github.com/StarRocks/starrocks). Parameters embedding (Embeddings) – config (Optional[StarRocksSettings]) – kwargs (Any) – Return type None escape_str(value)[source] Parameters value (str) – Return type str add_texts(texts, metadatas=None, batch_size=32, ids=None, **kwargs)[source] Insert more texts through the embeddings and add to the VectorStore. Parameters texts (Iterable[str]) – Iterable of strings to add to the VectorStore. ids (Optional[Iterable[str]]) – Optional list of ids to associate with the texts. batch_size (int) – Batch size of insertion metadata – Optional column data to be inserted metadatas (Optional[List[dict]]) – kwargs (Any) – Returns List of ids from adding the texts into the VectorStore. Return type List[str]
classmethod from_texts(texts, embedding, metadatas=None, config=None, text_ids=None, batch_size=32, **kwargs)[source] Create StarRocks wrapper with existing texts. Parameters embedding_function (Embeddings) – Function to extract text embedding texts (Iterable[str]) – List or tuple of strings to be added config (StarRocksSettings, Optional) – StarRocks configuration text_ids (Optional[Iterable], optional) – IDs for the texts. Defaults to None. batch_size (int, optional) – Batch size when transmitting data to StarRocks. Defaults to 32. metadata (List[dict], optional) – metadata for the texts. Defaults to None. embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[Dict[Any, Any]]]) – kwargs (Any) – Returns StarRocks Index Return type langchain.vectorstores.starrocks.StarRocks similarity_search(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let the end user fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document]
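Putting it together, a construction-and-query sketch might look like the following. The StarRocksSettings fields shown (host, port, username, password, database, table) are assumed connection details for a local deployment; adjust them to your cluster.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import StarRocks
from langchain.vectorstores.starrocks import StarRocksSettings

# Assumed connection settings for a local StarRocks instance.
settings = StarRocksSettings(
    host="localhost",
    port=9030,
    username="root",
    password="",
    database="langchain",
    table="langchain_demo",
)

embeddings = OpenAIEmbeddings()
vectorstore = StarRocks.from_texts(
    texts=["foo", "bar"], embedding=embeddings, config=settings
)

docs = vectorstore.similarity_search("foo", k=4)
```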
similarity_search_by_vector(embedding, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks by vector. Parameters embedding (List[float]) – query embedding k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let the end user fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of Documents Return type List[Document] similarity_search_with_relevance_scores(query, k=4, where_str=None, **kwargs)[source] Perform a similarity search with StarRocks Parameters query (str) – query string k (int, optional) – Top K neighbors to retrieve. Defaults to 4. where_str (Optional[str], optional) – where condition string. Defaults to None. NOTE – Please do not let the end user fill this in, and always be aware of SQL injection. When dealing with metadatas, remember to use {self.metadata_column}.attribute instead of attribute alone. The default name for it is metadata. kwargs (Any) – Returns List of documents Return type List[Document] drop()[source] Helper function: Drop data Return type None property metadata_column: str class langchain.vectorstores.SupabaseVectorStore(client, embedding, table_name, query_name=None)[source] Bases: langchain.vectorstores.base.VectorStore
VectorStore for a Supabase postgres database. Assumes you have the pgvector extension installed and a match_documents (or similar) function. For more details: https://js.langchain.com/docs/modules/indexes/vector_stores/integrations/supabase You can implement your own match_documents function in order to limit the search space to a subset of documents based on your own authorization or business logic. Note that the Supabase Python client does not yet support async operations. If you'd like to use max_marginal_relevance_search, please review the instructions below on modifying the match_documents function to return matched embeddings. Parameters client (supabase.client.Client) – embedding (Embeddings) – table_name (str) – query_name (Union[str, None]) – Return type None table_name: str query_name: str add_texts(texts, metadatas=None, ids=None, **kwargs)[source] Run more texts through the embeddings and add to the vectorstore. Parameters texts (Iterable[str]) – Iterable of strings to add to the vectorstore. metadatas (Optional[List[dict[Any, Any]]]) – Optional list of metadatas associated with the texts. kwargs (Any) – vectorstore specific parameters ids (Optional[List[str]]) – Returns List of ids from adding the texts into the vectorstore. Return type List[str] classmethod from_texts(texts, embedding, metadatas=None, client=None, table_name='documents', query_name='match_documents', ids=None, **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (Embeddings) –
metadatas (Optional[List[dict]]) – client (Optional[supabase.client.Client]) – table_name (Optional[str]) – query_name (Union[str, None]) – ids (Optional[List[str]]) – kwargs (Any) – Return type SupabaseVectorStore add_vectors(vectors, documents, ids)[source] Parameters vectors (List[List[float]]) – documents (List[langchain.schema.Document]) – ids (List[str]) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Return docs most similar to query. Parameters query (str) – k (int) – kwargs (Any) – Return type List[langchain.schema.Document] similarity_search_by_vector(embedding, k=4, **kwargs)[source] Return docs most similar to embedding vector. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. kwargs (Any) – Returns List of Documents most similar to the query vector. Return type List[langchain.schema.Document] similarity_search_with_relevance_scores(query, k=4, **kwargs)[source] Return docs and relevance scores in the range [0, 1]. 0 is dissimilar, 1 is most similar. Parameters query (str) – input text k (int) – Number of Documents to return. Defaults to 4. **kwargs – kwargs to be passed to similarity search. Should include:
score_threshold: Optional, a floating point value between 0 and 1 to filter the resulting set of retrieved docs kwargs (Any) – Returns List of Tuples of (doc, similarity_score) Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector_with_relevance_scores(query, k)[source] Parameters query (List[float]) – k (int) – Return type List[Tuple[langchain.schema.Document, float]] similarity_search_by_vector_returning_embeddings(query, k)[source] Parameters query (List[float]) – k (int) – Return type List[Tuple[langchain.schema.Document, float, numpy.ndarray[numpy.float32, Any]]] max_marginal_relevance_search_by_vector(embedding, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters embedding (List[float]) – Embedding to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document]
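A short sketch of wiring the store to a Supabase project and filtering by relevance score; the project URL, key, and table/function names are placeholders.

```python
from supabase.client import create_client
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore

# Hypothetical project URL and service key.
supabase_client = create_client("https://<project>.supabase.co", "<service-key>")
embeddings = OpenAIEmbeddings()

vectorstore = SupabaseVectorStore(
    client=supabase_client,
    embedding=embeddings,
    table_name="documents",
    query_name="match_documents",
)

# Keep only results whose normalized relevance score clears 0.8.
docs_and_scores = vectorstore.similarity_search_with_relevance_scores(
    "query text", k=4, score_threshold=0.8
)
```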
max_marginal_relevance_search(query, k=4, fetch_k=20, lambda_mult=0.5, **kwargs)[source] Return docs selected using the maximal marginal relevance. Maximal marginal relevance optimizes for similarity to query AND diversity among selected documents. Parameters query (str) – Text to look up documents similar to. k (int) – Number of Documents to return. Defaults to 4. fetch_k (int) – Number of Documents to fetch to pass to MMR algorithm. lambda_mult (float) – Number between 0 and 1 that determines the degree of diversity among the results with 0 corresponding to maximum diversity and 1 to minimum diversity. Defaults to 0.5. kwargs (Any) – Returns List of Documents selected by maximal marginal relevance. Return type List[langchain.schema.Document] max_marginal_relevance_search requires that query_name returns matched embeddings alongside the match documents. The following function demonstrates how to do this:
```sql
CREATE FUNCTION match_documents_embeddings(query_embedding vector(1536),
                                           match_count int)
RETURNS TABLE(
    id bigint,
    content text,
    metadata jsonb,
    embedding vector(1536),
    similarity float)
LANGUAGE plpgsql
AS $$
#variable_conflict use_column
BEGIN
    RETURN QUERY
    SELECT
        id,
        content,
        metadata,
        embedding,
        1 - (docstore.embedding <=> query_embedding) AS similarity
    FROM
        docstore
    ORDER BY
        docstore.embedding <=> query_embedding
    LIMIT match_count;
END;
$$;
```
delete(ids)[source] Delete by vector IDs. Parameters ids (List[str]) – List of ids to delete. Return type None
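With that function in place, the retriever can be pointed at it via query_name; this sketch reuses the client and embeddings from the previous example.

```python
# Reuses `supabase_client` and `embeddings` from the sketch above, and
# assumes the match_documents_embeddings function has been created.
vectorstore = SupabaseVectorStore(
    client=supabase_client,
    embedding=embeddings,
    table_name="documents",
    query_name="match_documents_embeddings",
)

docs = vectorstore.max_marginal_relevance_search("query text", k=4, fetch_k=20)
```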
class langchain.vectorstores.Tair(embedding_function, url, index_name, content_key='content', metadata_key='metadata', search_params=None, **kwargs)[source] Bases: langchain.vectorstores.base.VectorStore Wrapper around Tair Vector store. Parameters embedding_function (Embeddings) – url (str) – index_name (str) – content_key (str) – metadata_key (str) – search_params (Optional[dict]) – kwargs (Any) – create_index_if_not_exist(dim, distance_type, index_type, data_type, **kwargs)[source] Parameters dim (int) – distance_type (str) – index_type (str) – data_type (str) – kwargs (Any) – Return type bool add_texts(texts, metadatas=None, **kwargs)[source] Add texts data to an existing index. Parameters texts (Iterable[str]) – metadatas (Optional[List[dict]]) – kwargs (Any) – Return type List[str] similarity_search(query, k=4, **kwargs)[source] Returns the most similar indexed documents to the query text. Parameters query (str) – The query text for which to find similar documents. k (int) – The number of documents to return. Default is 4. kwargs (Any) – Returns A list of documents that are most similar to the query text. Return type List[Document]
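As a sketch, the quickest way in might be the from_texts classmethod documented just below; the tair_url keyword and connection string are assumed values for a local instance with the vector module enabled.

```python
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Tair

embeddings = OpenAIEmbeddings()

# tair_url is an assumed connection string; adjust to your deployment.
vectorstore = Tair.from_texts(
    texts=["foo", "bar"],
    embedding=embeddings,
    tair_url="redis://user:password@localhost:6379",
    index_name="langchain",
)

docs = vectorstore.similarity_search("foo", k=4)
```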
classmethod from_texts(texts, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Return VectorStore initialized from texts and embeddings. Parameters texts (List[str]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (str) – content_key (str) – metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair classmethod from_documents(documents, embedding, metadatas=None, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Return VectorStore initialized from documents and embeddings. Parameters documents (List[langchain.schema.Document]) – embedding (langchain.embeddings.base.Embeddings) – metadatas (Optional[List[dict]]) – index_name (str) – content_key (str) – metadata_key (str) – kwargs (Any) – Return type langchain.vectorstores.tair.Tair static drop_index(index_name='langchain', **kwargs)[source] Drop an existing index. Parameters index_name (str) – Name of the index to drop. kwargs (Any) – Returns True if the index is dropped successfully. Return type bool classmethod from_existing_index(embedding, index_name='langchain', content_key='content', metadata_key='metadata', **kwargs)[source] Connect to an existing Tair index. Parameters embedding (langchain.embeddings.base.Embeddings) – index_name (str) – content_key (str) –