**Answer**: Heat-bath random walks with Markov base (HB-MB) is a class of stochastic processes that have been studied in the field of statistical mechanics and condensed matter physics. In these processes, a particle moves in a lattice by making a transition to a neighboring site, which is chosen according to a probability distribution that depends on the energy of the particle and the energy of its surroundings.
The HB-MB process was introduced by Bortz, Kalos, and Lebowitz in 1975 as a way to simulate the dynamics of interacting particles in a lattice at thermal equilibrium. The method has been used to study a variety of physical phenomena, including phase transitions, critical behavior, and transport properties.
References:
Bortz, A. B., Kalos, M. H., & Lebowitz, J. L. (1975). A new algorithm for Monte Carlo simulation of Ising spin systems. Journal of Computational Physics, 17(1), 10-18.
Binder, K., & Heermann, D. W. (2010). Monte Carlo simulation in statistical physics: an introduction. Springer Science & Business Media.
Databerry#
The Databerry platform brings data from anywhere (Datasources: Text, PDF, Word, PowerPoint, Excel, Notion, Airtable, Google Sheets, etc.) into Datastores (containers of multiple Datasources).
Your Datastores can then be connected to ChatGPT via Plugins, or to any other Large Language Model (LLM) via the Databerry API.
This notebook shows how to use Databerry’s retriever.
First, you will need to sign up for Databerry, create a datastore, add some data, and get your datastore API endpoint URL. You will also need the API key.
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import DataberryRetriever
retriever = DataberryRetriever(
datastore_url="https://clg1xg2h80000l708dymr0fxc.databerry.ai/query",
# api_key="DATABERRY_API_KEY", # optional if datastore is public
# top_k=10 # optional
)
retriever.get_relevant_documents("What is Daftpage?")
[Document(page_content='✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramGetting StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!DaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord', metadata={'source': 'https:/daftpage.com/help/getting-started', 'score': 0.8697265}),
Document(page_content="✨ Made with DaftpageOpen main menuPricingTemplatesLoginSearchHelpGetting StartedFeaturesAffiliate ProgramHelp CenterWelcome to Daftpage’s help center—the one-stop shop for learning everything about building websites with Daftpage.Daftpage is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.86570895}),
Document(page_content=" is the simplest way to create websites for all purposes in seconds. Without knowing how to code, and for free!Get StartedDaftpage is a new type of website builder that works like a doc.It makes website building easy, fun and offers tons of powerful features for free. Just type / in your page to get started!Start here✨ Create your first site🧱 Add blocks🚀 PublishGuides🔖 Add a custom domainFeatures🔥 Drops🎨 Drawings👻 Ghost mode💀 Skeleton modeCant find the answer you're looking for?mail us at support@daftpage.comJoin the awesome Daftpage community on: 👾 DiscordDaftpageCopyright © 2022 Daftpage, Inc.All rights reserved.ProductPricingTemplatesHelp & SupportHelp CenterGetting startedBlogCompanyAboutRoadmapTwitterAffiliate Program👾 Discord", metadata={'source': 'https:/daftpage.com/help', 'score': 0.8645384})]
Self-querying with Weaviate#
Creating a Weaviate vectorstore#
First we’ll want to create a Weaviate VectorStore and seed it with some data. We’ve created a small demo set of documents that contain summaries of movies.
NOTE: The self-query retriever requires you to have lark installed (pip install lark). We also need the weaviate-client package.
#!pip install lark weaviate-client
from langchain.schema import Document
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Weaviate
import os
embeddings = OpenAIEmbeddings()
docs = [
Document(page_content="A bunch of scientists bring back dinosaurs and mayhem breaks loose", metadata={"year": 1993, "rating": 7.7, "genre": "science fiction"}),
Document(page_content="Leo DiCaprio gets lost in a dream within a dream within a dream within a ...", metadata={"year": 2010, "director": "Christopher Nolan", "rating": 8.2}),
Document(page_content="A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea", metadata={"year": 2006, "director": "Satoshi Kon", "rating": 8.6}),
Document(page_content="A bunch of normal-sized women are supremely wholesome and some men pine after them", metadata={"year": 2019, "director": "Greta Gerwig", "rating": 8.3}),
Document(page_content="Toys come alive and have a blast doing so", metadata={"year": 1995, "genre": "animated"}),
Document(page_content="Three men walk into the Zone, three men walk out of the Zone", metadata={"year": 1979, "rating": 9.9, "director": "Andrei Tarkovsky", "genre": "science fiction", "rating": 9.9})
]
vectorstore = Weaviate.from_documents(
docs, embeddings, weaviate_url="http://127.0.0.1:8080"
)
Creating our self-querying retriever#
Now we can instantiate our retriever. To do this we’ll need to provide some information upfront about the metadata fields that our documents support and a short description of the document contents.
from langchain.llms import OpenAI
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo
metadata_field_info=[
AttributeInfo(
name="genre",
description="The genre of the movie",
type="string or list[string]",
),
AttributeInfo(
name="year",
description="The year the movie was released",
type="integer",
),
AttributeInfo(
name="director",
description="The name of the movie director",
type="string",
),
AttributeInfo(
name="rating",
description="A 1-10 rating for the movie",
type="float"
),
]
document_content_description = "Brief summary of a movie"
llm = OpenAI(temperature=0)
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_field_info, verbose=True)
Testing it out#
And now we can try actually using our retriever!
# This example only specifies a relevant query
retriever.get_relevant_documents("What are some movies about dinosaurs")
query='dinosaur' filter=None limit=None
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995}),
Document(page_content='Three men walk into the Zone, three men walk out of the Zone', metadata={'genre': 'science fiction', 'rating': 9.9, 'year': 1979}),
Document(page_content='A psychologist / detective gets lost in a series of dreams within dreams within dreams and Inception reused the idea', metadata={'genre': None, 'rating': 8.6, 'year': 2006})]
# This example specifies a query and a filter
retriever.get_relevant_documents("Has Greta Gerwig directed any movies about women")
query='women' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='director', value='Greta Gerwig') limit=None
[Document(page_content='A bunch of normal-sized women are supremely wholesome and some men pine after them', metadata={'genre': None, 'rating': 8.3, 'year': 2019})]
Filter k#
We can also use the self-query retriever to specify k, the number of documents to fetch.
We can do this by passing enable_limit=True to the constructor.
retriever = SelfQueryRetriever.from_llm(
llm,
vectorstore,
document_content_description,
metadata_field_info,
enable_limit=True,
verbose=True
)
# This example only specifies a relevant query
retriever.get_relevant_documents("what are two movies about dinosaurs")
query='dinosaur' filter=None limit=2
[Document(page_content='A bunch of scientists bring back dinosaurs and mayhem breaks loose', metadata={'genre': 'science fiction', 'rating': 7.7, 'year': 1993}),
Document(page_content='Toys come alive and have a blast doing so', metadata={'genre': 'animated', 'rating': None, 'year': 1995})]
Wikipedia#
Wikipedia is a multilingual free online encyclopedia written and maintained by a community of volunteers, known as Wikipedians, through open collaboration and using a wiki-based editing system called MediaWiki. Wikipedia is the largest and most-read reference work in history.
This notebook shows how to retrieve wiki pages from wikipedia.org into the Document format that is used downstream.
Installation#
First, you need to install the wikipedia Python package.
#!pip install wikipedia
WikipediaRetriever has these arguments:
optional lang: default="en". Use it to search in a specific language section of Wikipedia.
optional load_max_docs: default=100. Use it to limit the number of downloaded documents. It takes time to download all 100 documents, so use a small number for experiments. There is currently a hard limit of 300.
optional load_all_available_meta: default=False. By default, only the most important fields are downloaded: Published (the date the document was published or last updated), title, and Summary. If True, the other fields are also downloaded.
get_relevant_documents() has one argument, query: free text used to find documents in Wikipedia (see the example below).
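For example, a retriever restricted to a few German-language pages with all available metadata could be constructed as follows (a minimal sketch; the query string is just an illustration):
from langchain.retrievers import WikipediaRetriever
retriever = WikipediaRetriever(lang="de", load_max_docs=5, load_all_available_meta=True)
docs = retriever.get_relevant_documents(query="Alan Turing")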
Examples#
Running retriever#
from langchain.retrievers import WikipediaRetriever
retriever = WikipediaRetriever()
docs = retriever.get_relevant_documents(query='HUNTER X HUNTER')
docs[0].metadata # meta-information of the Document
{'title': 'Hunter × Hunter',
'summary': 'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The story focuses on a young boy named Gon Freecss who discovers that his father, who left him at a young age, is actually a world-renowned Hunter, a licensed professional who specializes in fantastical pursuits such as locating rare or unidentified animal species, treasure hunting, surveying unexplored enclaves, or hunting down lawless individuals. Gon departs on a journey to become a Hunter and eventually find his father. Along the way, Gon meets various other Hunters and encounters the paranormal.\nHunter × Hunter was adapted into a 62-episode anime television series produced by Nippon Animation and directed by Kazuhiro Furuhashi, which ran on Fuji Television from October 1999 to March 2001. Three separate original video animations (OVAs) totaling 30 episodes were subsequently produced by Nippon Animation and released in Japan from 2002 to 2004. A second anime television series by Madhouse aired on Nippon Television from October 2011 to September 2014, totaling 148 episodes, with two animated theatrical films released in 2013. There are also numerous audio albums, video games, musicals, and other media based on Hunter × Hunter.\nThe manga has been translated into English and released in North America by Viz Media since April 2005. Both television series have been also licensed by Viz Media, with the first series having aired on the Funimation Channel in 2009 and the second series broadcast
on Adult Swim\'s Toonami programming block from April 2016 to June 2019.\nHunter × Hunter has been a huge critical and financial success and has become one of the best-selling manga series of all time, having over 84 million copies in circulation by July 2022.\n\n'}
docs[0].page_content[:400] # the content of the Document
'Hunter × Hunter (stylized as HUNTER×HUNTER and pronounced "hunter hunter") is a Japanese manga series written and illustrated by Yoshihiro Togashi. It has been serialized in Shueisha\'s shōnen manga magazine Weekly Shōnen Jump since March 1998, although the manga has frequently gone on extended hiatuses since 2006. Its chapters have been collected in 37 tankōbon volumes as of November 2022. The sto'
Question Answering on facts#
# get a token: https://platform.openai.com/account/api-keys
from getpass import getpass
OPENAI_API_KEY = getpass()
········
import os
os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
model = ChatOpenAI(model_name='gpt-3.5-turbo') # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)
questions = [
"What is Apify?",
"When the Monument to the Martyrs of the 1830 Revolution was created?",
"What is the Abhayagiri Vihāra?",
# "How big is Wikipédia en français?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result['answer']))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
-> **Question**: What is Apify?
**Answer**: Apify is a platform that allows you to easily automate web scraping, data extraction and web automation. It provides a cloud-based infrastructure for running web crawlers and other automation tasks, as well as a web-based tool for building and managing your crawlers. Additionally, Apify offers a marketplace for buying and selling pre-built crawlers and related services.
-> **Question**: When the Monument to the Martyrs of the 1830 Revolution was created?
**Answer**: Apify is a web scraping and automation platform that enables you to extract data from websites, turn unstructured data into structured data, and automate repetitive tasks. It provides a user-friendly interface for creating web scraping scripts without any coding knowledge. Apify can be used for various web scraping tasks such as data extraction, web monitoring, content aggregation, and much more. Additionally, it offers various features such as proxy support, scheduling, and integration with other tools to make web scraping and automation tasks easier and more efficient.
-> **Question**: What is the Abhayagiri Vihāra?
**Answer**: Abhayagiri Vihāra was a major monastery site of Theravada Buddhism that was located in Anuradhapura, Sri Lanka. It was founded in the 2nd century BCE and is considered to be one of the most important monastic complexes in Sri Lanka.
LOTR (Merger Retriever)#
Lord of the Retrievers, also known as MergerRetriever, takes a list of retrievers as input and merges the results of their get_relevant_documents() methods into a single list. The merged results will be a list of documents that are relevant to the query and that have been ranked by the different retrievers.
The MergerRetriever class can be used to improve the accuracy of document retrieval in a number of ways. First, it can combine the results of multiple retrievers, which can help to reduce the risk of bias in the results. Second, it can rank the results of the different retrievers, which can help to ensure that the most relevant documents are returned first.
import os
import chromadb
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.vectorstores import Chroma
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers import ContextualCompressionRetriever
# Get 3 different embeddings.
all_mini = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
multi_qa_mini = HuggingFaceEmbeddings(model_name="multi-qa-MiniLM-L6-dot-v1")
filter_embeddings = OpenAIEmbeddings()
ABS_PATH = os.path.dirname(os.path.abspath(__file__))
DB_DIR = os.path.join(ABS_PATH, "db")
# Instantiate 2 different Chroma indexes, each one with a different embedding.
client_settings = chromadb.config.Settings(
chroma_db_impl="duckdb+parquet",
persist_directory=DB_DIR,
anonymized_telemetry=False,
)
db_all = Chroma(
collection_name="project_store_all",
persist_directory=DB_DIR,
client_settings=client_settings,
embedding_function=all_mini,
)
db_multi_qa = Chroma(
collection_name="project_store_multi",
persist_directory=DB_DIR,
client_settings=client_settings,
embedding_function=multi_qa_mini,
)
# Define 2 different retrievers with 2 different embeddings and different search types.
retriever_all = db_all.as_retriever(
search_type="similarity", search_kwargs={"k": 5, "include_metadata": True}
)
retriever_multi_qa = db_multi_qa.as_retriever(
search_type="mmr", search_kwargs={"k": 5, "include_metadata": True}
)
# The Lord of the Retrievers will hold the output of both retrievers and can be used as any other
# retriever on different types of chains.
lotr = MergerRetriever(retrievers=[retriever_all, retriever_multi_qa])
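The merged retriever can then be queried like any other retriever; a minimal sketch with a hypothetical query:
merged_docs = lotr.get_relevant_documents("What do the project documents say about embeddings?")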
Remove redundant results from the merged retrievers.#
# We can remove redundant results from both retrievers using yet another embedding.
# Using multiple embeddings in different steps could help reduce biases.
filter = EmbeddingsRedundantFilter(embeddings=filter_embeddings)
pipeline = DocumentCompressorPipeline(transformers=[filter])
compression_retriever = ContextualCompressionRetriever(
base_compressor=pipeline, base_retriever=lotr
)
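The compression retriever is queried the same way; a minimal sketch with a hypothetical query:
unique_docs = compression_retriever.get_relevant_documents("What do the project documents say about embeddings?")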
Metal#
Metal is a managed service for ML Embeddings.
This notebook shows how to use Metal’s retriever.
First, you will need to sign up for Metal and get an API key. You can do so here.
# !pip install metal_sdk
from metal_sdk.metal import Metal
API_KEY = ""
CLIENT_ID = ""
INDEX_ID = ""
metal = Metal(API_KEY, CLIENT_ID, INDEX_ID);
Ingest Documents#
You only need to do this if you haven’t already set up an index
metal.index( {"text": "foo1"})
metal.index( {"text": "foo"})
{'data': {'id': '642739aa7559b026b4430e42',
'text': 'foo',
'createdAt': '2023-03-31T19:51:06.748Z'}}
Query#
Now that our index is set up, we can set up a retriever and start querying it.
from langchain.retrievers import MetalRetriever
retriever = MetalRetriever(metal, params={"limit": 2})
retriever.get_relevant_documents("foo1")
[Document(page_content='foo1', metadata={'dist': '1.19209289551e-07', 'id': '642739a17559b026b4430e40', 'createdAt': '2023-03-31T19:50:57.853Z'}),
Document(page_content='foo1', metadata={'dist': '4.05311584473e-06', 'id': '642738f67559b026b4430e3c', 'createdAt': '2023-03-31T19:48:06.769Z'})]
Weaviate Hybrid Search#
Weaviate is an open source vector database.
Hybrid search is a technique that combines multiple search algorithms to improve the accuracy and relevance of search results. It combines the best features of keyword-based search algorithms with vector search techniques.
Hybrid search in Weaviate uses sparse and dense vectors to represent the meaning and context of search queries and documents.
This notebook shows how to use Weaviate hybrid search as a LangChain retriever.
Set up the retriever:
#!pip install weaviate-client
import weaviate
import os
WEAVIATE_URL = os.getenv("WEAVIATE_URL")
client = weaviate.Client(
url=WEAVIATE_URL,
auth_client_secret=weaviate.AuthApiKey(api_key=os.getenv("WEAVIATE_API_KEY")),
additional_headers={
"X-Openai-Api-Key": os.getenv("OPENAI_API_KEY"),
},
)
# client.schema.delete_all()
from langchain.retrievers.weaviate_hybrid_search import WeaviateHybridSearchRetriever
from langchain.schema import Document
retriever = WeaviateHybridSearchRetriever(
client, index_name="LangChain", text_key="text"
)
Add some data:
docs = [
Document(
metadata={
"title": "Embracing The Future: AI Unveiled",
"author": "Dr. Rebecca Simmons",
},
page_content="A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.",
),
Document(
metadata={
"title": "Symbiosis: Harmonizing Humans and AI",
"author": "Prof. Jonathan K. Sterling",
},
page_content="Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.",
),
Document(
metadata={"title": "AI: The Ethical Quandary", "author": "Dr. Rebecca Simmons"},
page_content="In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.",
),
Document(
metadata={
"title": "Conscious Constructs: The Search for AI Sentience",
"author": "Dr. Samuel Cortez",
},
page_content="Dr. Cortez takes readers on a journey exploring the controversial topic of AI consciousness. The book provides compelling arguments for and against the possibility of true AI sentience.",
),
Document(
metadata={
"title": "Invisible Routines: Hidden AI in Everyday Life",
"author": "Prof. Jonathan K. Sterling",
},
page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.",
),
]
retriever.add_documents(docs)
['eda16d7d-437d-4613-84ae-c2e38705ec7a',
'04b501bf-192b-4e72-be77-2fbbe7e67ebf',
'18a1acdb-23b7-4482-ab04-a6c2ed51de77',
'88e82cc3-c020-4b5a-b3c6-ca7cf3fc6a04',
'f6abd9d5-32ed-46c4-bd08-f8d0f7c9fc95']
Do a hybrid search:
retriever.get_relevant_documents("the ethical implications of AI")
[Document(page_content='In her second book, Dr. Simmons delves deeper into the ethical considerations surrounding AI development and deployment. It is an eye-opening examination of the dilemmas faced by developers, policymakers, and society at large.', metadata={}),
Document(page_content='A comprehensive analysis of the evolution of artificial intelligence, from its inception to its future prospects. Dr. Simmons covers ethical considerations, potentials, and threats posed by AI.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={}),
Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={})]
Do a hybrid search with where filter:
retriever.get_relevant_documents(
"AI integration in society",
where_filter={
"path": ["author"],
"operator": "Equal",
"valueString": "Prof. Jonathan K. Sterling",
},
)
[Document(page_content='Prof. Sterling explores the potential for harmonious coexistence between humans and artificial intelligence. The book discusses how AI can be integrated into society in a beneficial and non-disruptive manner.', metadata={}),
Document(page_content="In his follow-up to 'Symbiosis', Prof. Sterling takes a look at the subtle, unnoticed presence and influence of AI in our everyday lives. It reveals how AI has become woven into our routines, often without our explicit realization.", metadata={})]
Getting Started#
This notebook showcases basic functionality related to VectorStores. A key part of working with vectorstores is creating the vectors to put in them, which are usually created via embeddings. Therefore, it is recommended that you familiarize yourself with the embedding notebook before diving into this.
This covers generic high level functionality related to all vector stores.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Add texts#
You can easily add text to a vectorstore with the add_texts method. It will return a list of document IDs (in case you need to use them downstream).
docsearch.add_texts(["Ankush went to Princeton"])
['a05e3d0c-ab40-11ed-a853-e65801318981']
query = "Where did Ankush go to college?"
docs = docsearch.similarity_search(query)
docs[0]
Document(page_content='Ankush went to Princeton', lookup_str='', metadata={}, lookup_index=0)
From Documents#
We can also initialize a vectorstore from documents directly. This is useful when we use the method on the text splitter to get documents directly (handy when the original documents have associated metadata).
documents = text_splitter.create_documents([state_of_the_union], metadatas=[{"source": "State of the Union"}])
docsearch = Chroma.from_documents(documents, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Azure Cognitive Search#
Install Azure Cognitive Search SDK#
!pip install --index-url=https://pkgs.dev.azure.com/azure-sdk/public/_packaging/azure-sdk-for-python/pypi/simple/ azure-search-documents==11.4.0a20230509004
!pip install azure-identity
Import required libraries#
import os, json
import openai
from dotenv import load_dotenv
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.schema import BaseRetriever
from langchain.vectorstores.azuresearch import AzureSearch
Configure OpenAI settings#
Configure the OpenAI settings to use Azure OpenAI or OpenAI
# Load environment variables from a .env file using load_dotenv():
load_dotenv()
openai.api_type = "azure"
openai.api_base = "YOUR_OPENAI_ENDPOINT"
openai.api_version = "2023-05-15"
openai.api_key = "YOUR_OPENAI_API_KEY"
model: str = "text-embedding-ada-002"
Configure vector store settings#
Set up the vector store settings (replace the placeholders with your Azure Cognitive Search endpoint and admin key):
vector_store_address: str = 'YOUR_AZURE_SEARCH_ENDPOINT'
vector_store_password: str = 'YOUR_AZURE_SEARCH_ADMIN_KEY'
index_name: str = "langchain-vector-demo"
Create embeddings and vector store instances#
Create instances of the OpenAIEmbeddings and AzureSearch classes:
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(model=model, chunk_size=1)
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/azuresearch.html
|
ab74b8054487-1
|
vector_store: AzureSearch = AzureSearch(azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query)
Insert text and embeddings into vector store#
Load a document, split it into chunks, and add the texts and embeddings to the vector store:
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader('../../../state_of_the_union.txt', encoding='utf-8')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
Perform a vector similarity search#
Execute a pure vector similarity search using the similarity_search() method:
# Perform a similarity search
docs = vector_store.similarity_search(query="What did the president say about Ketanji Brown Jackson", k=3, search_type='similarity')
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Perform a Hybrid Search#
Execute a hybrid search using the similarity_search() method; hybrid search is the default search type for this vector store:
# Perform a hybrid search
docs = vector_store.similarity_search(query="What did the president say about Ketanji Brown Jackson", k=3)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
AnalyticDB#
AnalyticDB for PostgreSQL is a massively parallel processing (MPP) data warehousing service that is designed to analyze large volumes of data online.
AnalyticDB for PostgreSQL is developed based on the open source Greenplum Database project and is enhanced with in-depth extensions by Alibaba Cloud. AnalyticDB for PostgreSQL is compatible with the ANSI SQL 2003 syntax and the PostgreSQL and Oracle database ecosystems. AnalyticDB for PostgreSQL also supports row store and column store. AnalyticDB for PostgreSQL processes petabytes of data offline at a high performance level and supports highly concurrent online queries.
This notebook shows how to use functionality related to the AnalyticDB vector database.
To run, you should have an AnalyticDB instance up and running:
Use the AnalyticDB Cloud Vector Database, which can be deployed quickly.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import AnalyticDB
Split the documents and get embeddings by calling the OpenAI API.
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Connect to AnalyticDB by setting the related environment variables.
export PG_HOST={your_analyticdb_hostname}
export PG_PORT={your_analyticdb_port} # Optional, default is 5432
export PG_DATABASE={your_database} # Optional, default is postgres
export PG_USER={database_username}
export PG_PASSWORD={database_password}
Then store your embeddings and documents in AnalyticDB:
import os
connection_string = AnalyticDB.connection_string_from_db_params(
driver=os.environ.get("PG_DRIVER", "psycopg2cffi"),
host=os.environ.get("PG_HOST", "localhost"),
port=int(os.environ.get("PG_PORT", "5432")),
database=os.environ.get("PG_DATABASE", "postgres"),
user=os.environ.get("PG_USER", "postgres"),
password=os.environ.get("PG_PASSWORD", "postgres"),
)
vector_db = AnalyticDB.from_documents(
docs,
embeddings,
connection_string=connection_string,
)
Query and retrieve data
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
FAISS#
Facebook AI Similarity Search (Faiss) is a library for efficient similarity search and clustering of dense vectors. It contains algorithms that search in sets of vectors of any size, up to ones that possibly do not fit in RAM. It also contains supporting code for evaluation and parameter tuning.
Faiss documentation.
This notebook shows how to use functionality related to the FAISS vector database.
#!pip install faiss
# OR
!pip install faiss-cpu
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
# Uncomment the following line if you need to initialize FAISS with no AVX2 optimization
# os.environ['FAISS_NO_AVX2'] = '1'
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity Search with score#
There are some FAISS-specific methods. One of them is similarity_search_with_score, which allows you to return not only the documents but also the distance score of the query to them. The returned distance score is L2 distance; therefore, a lower score is better.
docs_and_scores = db.similarity_search_with_score(query)
docs_and_scores[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}),
0.36913747)
It is also possible to do a search for documents similar to a given embedding vector using similarity_search_by_vector which accepts an embedding vector as a parameter instead of a string.
embedding_vector = embeddings.embed_query(query)
docs_and_scores = db.similarity_search_by_vector(embedding_vector)
Saving and loading#
You can also save and load a FAISS index. This is useful so you don't have to recreate it every time you use it.
db.save_local("faiss_index")
new_db = FAISS.load_local("faiss_index", embeddings)
docs = new_db.similarity_search(query)
docs[0]
Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'})
Merging#
You can also merge two FAISS vectorstores
db1 = FAISS.from_texts(["foo"], embeddings)
db2 = FAISS.from_texts(["bar"], embeddings)
db1.docstore._dict
{'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={})}
db2.docstore._dict
{'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}
db1.merge_from(db2)
db1.docstore._dict
{'068c473b-d420-487a-806b-fb0ccea7f711': Document(page_content='foo', metadata={}),
'807e0c63-13f6-4070-9774-5c6f0fbb9866': Document(page_content='bar', metadata={})}
Similarity Search with filtering#
The FAISS vectorstore can also support filtering. Since FAISS does not natively support filtering, we have to do it manually: we first fetch more results than k and then filter them. You can filter the documents based on metadata. You can also set the fetch_k parameter when calling any search method to set how many documents you want to fetch before filtering. Here is a small example:
from langchain.schema import Document
list_of_documents = [
Document(page_content="foo", metadata=dict(page=1)),
Document(page_content="bar", metadata=dict(page=1)),
Document(page_content="foo", metadata=dict(page=2)),
Document(page_content="barbar", metadata=dict(page=2)),
Document(page_content="foo", metadata=dict(page=3)),
Document(page_content="bar burr", metadata=dict(page=3)),
Document(page_content="foo", metadata=dict(page=4)),
Document(page_content="bar bruh", metadata=dict(page=4))
]
db = FAISS.from_documents(list_of_documents, embeddings)
results_with_scores = db.similarity_search_with_score("foo")
for doc, score in results_with_scores:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 2}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 3}, Score: 5.159960813797904e-15
Content: foo, Metadata: {'page': 4}, Score: 5.159960813797904e-15
Now we make the same query call but we filter for only page = 1
results_with_scores = db.similarity_search_with_score("foo", filter=dict(page=1))
for doc, score in results_with_scores:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}, Score: {score}")
Content: foo, Metadata: {'page': 1}, Score: 5.159960813797904e-15
Content: bar, Metadata: {'page': 1}, Score: 0.3131446838378906
The same thing can be done with max_marginal_relevance_search as well.
results = db.max_marginal_relevance_search("foo", filter=dict(page=1))
for doc in results:
print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")
Content: foo, Metadata: {'page': 1}
Content: bar, Metadata: {'page': 1}
Here is an example of how to set the fetch_k parameter when calling similarity_search. Usually you want fetch_k >> k, because fetch_k is the number of documents that will be fetched before filtering. If you set fetch_k to a low number, you might not get enough documents to filter from.
results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
for doc in results:
    print(f"Content: {doc.page_content}, Metadata: {doc.metadata}")
Content: foo, Metadata: {'page': 1}
Typesense#
Typesense is an open source, in-memory search engine that you can either self-host or run on Typesense Cloud.
Typesense focuses on performance by storing the entire index in RAM (with a backup on disk) and also focuses on providing an out-of-the-box developer experience by simplifying available options and setting good defaults.
It also lets you combine attribute-based filtering together with vector queries, to fetch the most relevant documents.
This notebook shows you how to use Typesense as your VectorStore.
Let’s first install our dependencies:
!pip install typesense openapi-schema-pydantic openai tiktoken
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Typesense
from langchain.document_loaders import TextLoader
Let’s import our test dataset:
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Typesense.from_documents(docs,
embeddings,
typesense_client_params={
'host': 'localhost', # Use xxx.a1.typesense.net for Typesense Cloud
'port': '8108', # Use 443 for Typesense Cloud
'protocol': 'http', # Use https for Typesense Cloud
'typesense_api_key': 'xyz',
'typesense_collection_name': 'lang-chain'
})
Similarity Search#
query = "What did the president say about Ketanji Brown Jackson"
found_docs = docsearch.similarity_search(query)
print(found_docs[0].page_content)
Typesense as a Retriever#
Typesense, like all the other vector stores, can be used as a LangChain Retriever, using cosine similarity.
retriever = docsearch.as_retriever()
retriever
query = "What did the president say about Ketanji Brown Jackson"
retriever.get_relevant_documents(query)[0]
Milvus#
Milvus is a database that stores, indexes, and manages massive embedding vectors generated by deep neural networks and other machine learning (ML) models.
This notebook shows how to use functionality related to the Milvus vector database.
To run, you should have a Milvus instance up and running.
!pip install pymilvus
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Milvus
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_db = Milvus.from_documents(
docs,
embeddings,
connection_args={"host": "127.0.0.1", "port": "19530"},
)
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_db.similarity_search(query)
docs[0].page_content
'Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.'
previous
MatchingEngine
next
<no title>
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/milvus.html
|
53ba2ce0b803-0
|
.ipynb
.pdf
Tigris
Contents
Initialize Tigris vector store
Similarity Search
Similarity Search with score (vector distance)
Tigris#
Tigris is an open source Serverless NoSQL Database and Search Platform designed to simplify building high-performance vector search applications.
Tigris eliminates the infrastructure complexity of managing, operating, and synchronizing multiple tools, allowing you to focus on building great applications instead.
This notebook guides you through using Tigris as your VectorStore.
Prerequisites
An OpenAI account. You can sign up for an account here.
Sign up for a free Tigris account. Once you have signed up, create a new project called vectordemo. Next, make a note of the Uri for the region in which you created your project, the clientId, and the clientSecret. You can get all this information from the Application Keys section of the project.
Let’s first install our dependencies:
!pip install tigrisdb openapi-schema-pydantic openai tiktoken
We will load the OpenAI api key and Tigris credentials in our environment
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
os.environ['TIGRIS_PROJECT'] = getpass.getpass('Tigris Project Name:')
os.environ['TIGRIS_CLIENT_ID'] = getpass.getpass('Tigris Client Id:')
os.environ['TIGRIS_CLIENT_SECRET'] = getpass.getpass('Tigris Client Secret:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tigris
from langchain.document_loaders import TextLoader
Initialize Tigris vector store#
Let’s import our test dataset:
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vector_store = Tigris.from_documents(docs, embeddings, index_name="my_embeddings")
Similarity Search#
query = "What did the president say about Ketanji Brown Jackson"
found_docs = vector_store.similarity_search(query)
print(found_docs)
Similarity Search with score (vector distance)#
query = "What did the president say about Ketanji Brown Jackson"
result = vector_store.similarity_search_with_score(query)
for (doc, score) in result:
print(f"document={doc}, score={score}")
previous
Tair
next
Typesense
Contents
Initialize Tigris vector store
Similarity Search
Similarity Search with score (vector distance)
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/tigris.html
|
839bea08bb79-0
|
.ipynb
.pdf
Deep Lake
Contents
Retrieval Question/Answering
Attribute based filtering in metadata
Choosing distance function
Maximal Marginal relevance
Delete dataset
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory
Creating dataset on AWS S3
Deep Lake API
Transfer local dataset to cloud
Deep Lake#
Deep Lake is a multi-modal vector store that holds embeddings and their metadata, including text, JSON, images, audio, video, and more. It saves the data locally, in your cloud, or on Activeloop storage, and performs hybrid search over embeddings and their attributes.
This notebook showcases basic functionality related to Deep Lake. While Deep Lake can store embeddings, it is capable of storing any type of data: it is a fully fledged serverless data lake with version control, a query engine, and a streaming dataloader for deep learning frameworks.
For more information, please see the Deep Lake documentation or api reference
!pip install openai deeplake tiktoken
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DeepLake
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
embeddings = OpenAIEmbeddings()
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
Create a dataset locally at ./deeplake/, then run similarity search. The Deep Lake + LangChain integration uses Deep Lake datasets under the hood, so dataset and vector store are used interchangeably. To create a dataset in your own cloud, or in Deep Lake storage, adjust the path accordingly.
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings)
db.add_documents(docs)
# or shorter
# db = DeepLake.from_documents(docs, dataset_path="./my_deeplake/", embedding=embeddings, overwrite=True)
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
/home/leo/.local/lib/python3.10/site-packages/deeplake/util/check_latest_version.py:32: UserWarning: A newer version of deeplake (3.3.2) is available. It's recommended that you update to the latest version using `pip install -U deeplake`.
warnings.warn(
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████████████████████████████████| 1/1 [00:07<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (42, 1536) float32 None
ids text (42, 1) str None
metadata json (42, 1) str None
text text (42, 1) str None
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Later, you can reload the dataset without recomputing embeddings
db = DeepLake(dataset_path="./my_deeplake/", embedding_function=embeddings, read_only=True)
docs = db.similarity_search(query)
./my_deeplake/ loaded successfully.
Deep Lake Dataset in ./my_deeplake/ already exists, loading from the storage
Dataset(path='./my_deeplake/', read_only=True, tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (42, 1536) float32 None
ids text (42, 1) str None
metadata json (42, 1) str None
text text (42, 1) str None
Deep Lake, for now, supports a single writer and multiple readers. Setting read_only=True helps to avoid acquiring the writer lock.
Retrieval Question/Answering#
from langchain.chains import RetrievalQA
from langchain.llms import OpenAIChat
qa = RetrievalQA.from_chain_type(llm=OpenAIChat(model='gpt-3.5-turbo'), chain_type='stuff', retriever=db.as_retriever())
/home/leo/.local/lib/python3.10/site-packages/langchain/llms/openai.py:624: UserWarning: You are trying to use a chat model. This way of initializing it is no longer supported. Instead, please use: `from langchain.chat_models import ChatOpenAI`
warnings.warn(
query = 'What did the president say about Ketanji Brown Jackson'
qa.run(query)
'The president nominated Ketanji Brown Jackson to serve on the United States Supreme Court. He described her as a former top litigator in private practice, a former federal public defender, a consensus builder, and from a family of public school educators and police officers. He also mentioned that she has received broad support from various groups since being nominated.'
Attribute based filtering in metadata#
import random
for d in docs:
d.metadata['year'] = random.randint(2012, 2014)
db = DeepLake.from_documents(docs, embeddings, dataset_path="./my_deeplake/", overwrite=True)
./my_deeplake/ loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:04<00:00
Dataset(path='./my_deeplake/', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
db.similarity_search('What did the president say about Ketanji Brown Jackson', filter={'year': 2013})
100%|██████████| 4/4 [00:00<00:00, 1080.24it/s]
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
Choosing distance function#
Distance functions: L2 for Euclidean, L1 for Nuclear, max for L-infinity distance, cos for cosine similarity, and dot for dot product.
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='cos')
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012})]
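For instance, the same query can be ranked by Euclidean distance instead; a small sketch mirroring the call above:
# Hypothetical variation of the query above, using L2 (Euclidean) distance instead of cosine.
db.similarity_search('What did the president say about Ketanji Brown Jackson?', distance_metric='L2', k=2)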
Maximal Marginal relevance#
Using maximal marginal relevance
db.max_marginal_relevance_search('What did the president say about Ketanji Brown Jackson?')
[Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013}),
Document(page_content='Tonight, I’m announcing a crackdown on these companies overcharging American businesses and consumers. \n\nAnd as Wall Street firms take over more nursing homes, quality in those homes has gone down and costs have gone up. \n\nThat ends on my watch. \n\nMedicare is going to set higher standards for nursing homes and make sure your loved ones get the care they deserve and expect. \n\nWe’ll also cut costs and keep the economy going strong by giving workers a fair shot, provide more training and apprenticeships, hire them based on their skills not degrees. \n\nLet’s pass the Paycheck Fairness Act and paid leave. \n\nRaise the minimum wage to $15 an hour and extend the Child Tax Credit, so no one has to raise a family in poverty. \n\nLet’s increase Pell Grants and increase our historic support of HBCUs, and invest in what Jill—our First Lady who teaches full-time—calls America’s best-kept secret: community colleges.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2012}),
Document(page_content='And for our LGBTQ+ Americans, let’s finally get the bipartisan Equality Act to my desk. The onslaught of state laws targeting transgender Americans and their families is wrong. \n\nAs I said last year, especially to our younger transgender Americans, I will always have your back as your President, so you can be yourself and reach your God-given potential. \n\nWhile it often appears that we never agree, that isn’t true. I signed 80 bipartisan bills into law last year. From preventing government shutdowns to protecting Asian-Americans from still-too-common hate crimes to reforming military justice. \n\nAnd soon, we’ll strengthen the Violence Against Women Act that I first wrote three decades ago. It is important for us to show the nation that we can come together and do big things. \n\nSo tonight I’m offering a Unity Agenda for the Nation. Four big things we can do together. \n\nFirst, beat the opioid epidemic.', metadata={'source': '../../../state_of_the_union.txt', 'year': 2013})]
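MMR can also be used when the store is wrapped as a retriever; a sketch, assuming your LangChain version supports search_type="mmr":
# Retriever that re-ranks results with maximal marginal relevance.
mmr_retriever = db.as_retriever(search_type="mmr", search_kwargs={"k": 4})
mmr_retriever.get_relevant_documents('What did the president say about Ketanji Brown Jackson?')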
Delete dataset#
db.delete_dataset()
If the delete fails, you can also force delete:
DeepLake.force_delete_by_path("./my_deeplake")
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory#
By default, Deep Lake datasets are stored locally. If you want to store them in memory, in the Deep Lake Managed DB, or in any object storage, you can provide the corresponding path to the dataset. You can retrieve your user token from app.activeloop.ai.
os.environ['ACTIVELOOP_TOKEN'] = getpass.getpass('Activeloop Token:')
# Embed and store the texts
username = "<username>" # your username on app.activeloop.ai
dataset_path = f"hub://{username}/langchain_test" # could be also ./local/path (much faster locally), s3://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings, overwrite=True)
db.add_documents(docs)
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test
hub://davitbun/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:14<00:00
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
['d6d6ccb4-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb5-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb6-e187-11ed-b66d-41c5f7b85421',
'd6d6ccb7-e187-11ed-b66d-41c5f7b85421']
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Creating dataset on AWS S3#
dataset_path = f"s3://BUCKET/langchain_test" # could be also ./local/path (much faster locally), hub://bucket/path/to/dataset, gcs://path/to/dataset, etc.
embedding = OpenAIEmbeddings()
db = DeepLake.from_documents(docs, dataset_path=dataset_path, embedding=embeddings, overwrite=True, creds = {
'aws_access_key_id': os.environ['AWS_ACCESS_KEY_ID'],
'aws_secret_access_key': os.environ['AWS_SECRET_ACCESS_KEY'],
'aws_session_token': os.environ['AWS_SESSION_TOKEN'], # Optional
})
s3://hub-2.0-datasets-n/langchain_test loaded successfully.
Evaluating ingest: 100%|██████████| 1/1 [00:10<00:00
Dataset(path='s3://hub-2.0-datasets-n/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Deep Lake API#
You can access the underlying Deep Lake dataset at db.ds.
# get structure of the dataset
db.ds.summary()
Dataset(path='hub://davitbun/langchain_test', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
# get embeddings numpy array
embeds = db.ds.embedding.numpy()
Transfer local dataset to cloud#
Copy an already created dataset to the cloud. You can also transfer from the cloud to a local path.
import deeplake
username = "davitbun" # your username on app.activeloop.ai
source = f"hub://{username}/langchain_test" # could be local, s3, gcs, etc.
destination = f"hub://{username}/langchain_test_copy" # could be local, s3, gcs, etc.
deeplake.deepcopy(src=source, dest=destination, overwrite=True)
Copying dataset: 100%|██████████| 56/56 [00:38<00:00
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
Your Deep Lake dataset has been successfully created!
The dataset is private so make sure you are logged in!
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
db = DeepLake(dataset_path=destination, embedding_function=embeddings)
db.add_documents(docs)
This dataset can be visualized in Jupyter Notebook by ds.visualize() or at https://app.activeloop.ai/davitbun/langchain_test_copy
hub://davitbun/langchain_test_copy loaded successfully.
Deep Lake Dataset in hub://davitbun/langchain_test_copy already exists, loading from the storage
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (4, 1536) float32 None
ids text (4, 1) str None
metadata json (4, 1) str None
text text (4, 1) str None
Evaluating ingest: 100%|██████████| 1/1 [00:31<00:00
Dataset(path='hub://davitbun/langchain_test_copy', tensors=['embedding', 'ids', 'metadata', 'text'])
tensor htype shape dtype compression
------- ------- ------- ------- -------
embedding generic (8, 1536) float32 None
ids text (8, 1) str None
metadata json (8, 1) str None
text text (8, 1) str None
['ad42f3fe-e188-11ed-b66d-41c5f7b85421',
'ad42f3ff-e188-11ed-b66d-41c5f7b85421',
'ad42f400-e188-11ed-b66d-41c5f7b85421',
'ad42f401-e188-11ed-b66d-41c5f7b85421']
previous
ClickHouse Vector Search
next
DocArrayHnswSearch
Contents
Retrieval Question/Answering
Attribute based filtering in metadata
Choosing distance function
Maximal Marginal relevance
Delete dataset
Deep Lake datasets on cloud (Activeloop, AWS, GCS, etc.) or in memory
Creating dataset on AWS S3
Deep Lake API
Transfer local dataset to cloud
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/deeplake.html
|
c601f9564972-0
|
.ipynb
.pdf
ElasticSearch
Contents
ElasticSearch
ElasticVectorSearch class
Installation
Example
ElasticKnnSearch Class
Test adding vectors
Test knn search using query vector builder
Test knn search using pre generated vector
Test source option
Test fields option
Test with es client connection rather than cloud_id
ElasticSearch#
Elasticsearch is a distributed, RESTful search and analytics engine. It provides a distributed, multitenant-capable full-text search engine with an HTTP web interface and schema-free JSON documents.
This notebook shows how to use functionality related to the Elasticsearch database.
ElasticVectorSearch class#
Installation#
Check out Elasticsearch installation instructions.
To connect to an Elasticsearch instance that does not require
login credentials, pass the Elasticsearch URL and index name along with the
embedding object to the constructor.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url="http://localhost:9200",
index_name="test_index",
embedding=embedding
)
To connect to an Elasticsearch instance that requires login credentials,
including Elastic Cloud, use the Elasticsearch URL format
https://username:password@es_host:9243. For example, to connect to Elastic
Cloud, create the Elasticsearch URL with the required authentication details and
pass it to the ElasticVectorSearch constructor as the named parameter
elasticsearch_url.
You can obtain your Elastic Cloud URL and login credentials by logging in to the
Elastic Cloud console at https://cloud.elastic.co, selecting your deployment, and
navigating to the “Deployments” page.
To obtain your Elastic Cloud password for the default “elastic” user:
Log in to the Elastic Cloud console at https://cloud.elastic.co
Go to “Security” > “Users”
Locate the “elastic” user and click “Edit”
Click “Reset password”
Follow the prompts to reset the password
Format for Elastic Cloud URLs is
https://username:password@cluster_id.region_id.gcp.cloud.es.io:9243.
Example:
from langchain import ElasticVectorSearch
from langchain.embeddings import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
elastic_host = "cluster_id.region_id.gcp.cloud.es.io"
elasticsearch_url = f"https://username:password@{elastic_host}:9243"
elastic_vector_search = ElasticVectorSearch(
elasticsearch_url=elasticsearch_url,
index_name="test_index",
embedding=embedding
)
!pip install elasticsearch
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
Example#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch
from langchain.document_loaders import TextLoader
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = ElasticVectorSearch.from_documents(docs, embeddings, elasticsearch_url="http://localhost:9200")
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
ElasticKnnSearch Class#
The ElasticKnnSearch class lets you store vectors and documents in Elasticsearch for use with approximate kNN search.
!pip install langchain elasticsearch
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.embeddings import ElasticsearchEmbeddings
import elasticsearch
# Initialize ElasticsearchEmbeddings
model_id = "<model_id_from_es>"
dims = dim_count
es_cloud_id = "ESS_CLOUD_ID"
es_user = "es_user"
es_password = "es_pass"
test_index = "<index_name>"
#input_field = "your_input_field" # if different from 'text_field'
# Generate embedding object
embeddings = ElasticsearchEmbeddings.from_credentials(
model_id,
#input_field=input_field,
es_cloud_id=es_cloud_id,
es_user=es_user,
es_password=es_password,
)
# Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
es_cloud_id=es_cloud_id,
es_user=es_user,
es_password=es_password,
index_name= test_index,
embedding= embeddings
)
Test adding vectors#
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is great for data analysis."]
knn_search.from_texts(new_texts, dims=dims)
Test knn search using query vector builder#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)
print(f"kNN search results for query '{query}': {knn_result}")
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2)
print(f"Hybrid search results for query '{query}': {hybrid_result}")
print(f"The 'text' field value from the top hit is: '{hybrid_result['hits']['hits'][0]['_source']['text']}'")
Test knn search using pre generated vector#
# Generate embedding for tests
query_text = 'Hello'
query_embedding = embeddings.embed_query(query_text)
print(f"Length of embedding: {len(query_embedding)}\nFirst two items in embedding: {query_embedding[:2]}")
# Test knn Search
knn_result = knn_search.knn_search(query_vector = query_embedding, k=2)
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
# Test hybrid search - Requires both query_text and query_vector
knn_result = knn_search.knn_hybrid_search(query_vector = query_embedding, query=query_text, k=2)
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
Test source option#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, source=False)
assert not '_source' in knn_result['hits']['hits'][0].keys()
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, source=False)
assert not '_source' in hybrid_result['hits']['hits'][0].keys()
Test fields option#
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2, fields=['text'])
assert 'text' in knn_result['hits']['hits'][0]['fields'].keys()
# Test `hybrid_search` method
query = "Hello"
hybrid_result = knn_search.knn_hybrid_search(query = query, model_id= model_id, k=2, fields=['text'])
assert 'text' in hybrid_result['hits']['hits'][0]['fields'].keys()
Test with es client connection rather than cloud_id#
# Create Elasticsearch connection
from elasticsearch import Elasticsearch
es_connection = Elasticsearch(
hosts=['https://es_cluster_url:port'],
basic_auth=('user', 'password')
)
# Instantiate ElasticsearchEmbeddings using es_connection
embeddings = ElasticsearchEmbeddings.from_es_connection(
model_id,
es_connection,
)
# Initialize ElasticKnnSearch
knn_search = ElasticKnnSearch(
es_connection = es_connection,
index_name= test_index,
embedding= embeddings
)
# Test `knn_search` method with model_id and query_text
query = "Hello"
knn_result = knn_search.knn_search(query = query, model_id= model_id, k=2)
print(f"kNN search results for query '{query}': {knn_result}")
print(f"The 'text' field value from the top hit is: '{knn_result['hits']['hits'][0]['_source']['text']}'")
previous
DocArrayInMemorySearch
next
FAISS
Contents
ElasticSearch
ElasticVectorSearch class
Installation
Example
ElasticKnnSearch Class
Test adding vectors
Test knn search using query vector builder
Test knn search using pre generated vector
Test source option
Test fields option
Test with es client connection rather than cloud_id
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/elasticsearch.html
|
b1adb8fbfb9c-0
|
.ipynb
.pdf
LanceDB
LanceDB#
LanceDB is an open-source database for vector search built with persistent storage, which greatly simplifies retrieval, filtering, and management of embeddings.
This notebook shows how to use functionality related to the LanceDB vector database based on the Lance data format.
!pip install lancedb
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import LanceDB
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
documents = CharacterTextSplitter().split_documents(documents)
embeddings = OpenAIEmbeddings()
import lancedb
db = lancedb.connect('/tmp/lancedb')
table = db.create_table("my_table", data=[
{"vector": embeddings.embed_query("Hello World"), "text": "Hello World", "id": "1"}
], mode="overwrite")
docsearch = LanceDB.from_documents(documents, embeddings, connection=table)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
print(docs[0].page_content)
They were responding to a 9-1-1 call when a man shot and killed them with a stolen gun.
Officer Mora was 27 years old.
Officer Rivera was 22.
Both Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers.
I spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.
I’ve worked on these issues a long time.
I know what works: Investing in crime preventionand community police officers who’ll walk the beat, who’ll know the neighborhood, and who can restore trust and safety.
So let’s not abandon our streets. Or choose between safety and equal justice.
Let’s come together to protect our communities, restore trust, and hold law enforcement accountable.
That’s why the Justice Department required body cameras, banned chokeholds, and restricted no-knock warrants for its officers.
That’s why the American Rescue Plan provided $350 Billion that cities, states, and counties can use to hire more police and invest in proven strategies like community violence interruption—trusted messengers breaking the cycle of violence and trauma and giving young people hope.
We should all agree: The answer is not to Defund the police. The answer is to FUND the police with the resources and training they need to protect our communities.
I ask Democrats and Republicans alike: Pass my budget and keep our neighborhoods safe.
And I will keep doing everything in my power to crack down on gun trafficking and ghost guns you can buy online and make at home—they have no serial numbers and can’t be traced.
And I ask Congress to pass proven measures to reduce gun violence. Pass universal background checks. Why should anyone on a terrorist list be able to purchase a weapon?
Ban assault weapons and high-capacity magazines.
Repeal the liability shield that makes gun manufacturers the only industry in America that can’t be sued.
These laws don’t infringe on the Second Amendment. They save lives.
The most fundamental right in America is the right to vote – and to have it counted. And it’s under assault.
In state after state, new laws have been passed, not only to suppress the vote, but to subvert entire elections.
We cannot let this happen.
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
previous
Hologres
next
MatchingEngine
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/lancedb.html
|
d71ba4b3e4fd-0
|
.ipynb
.pdf
ClickHouse Vector Search
Contents
Setting up environments
Get connection info and data schema
Clickhouse table schema
Filtering
Deleting your data
ClickHouse Vector Search#
ClickHouse is the fastest and most resource-efficient open-source database for real-time apps and analytics, with full SQL support and a wide range of functions to assist users in writing analytical queries. Recently added data structures and distance search functions (like L2Distance), as well as approximate nearest neighbor search indexes, enable ClickHouse to be used as a high-performance and scalable vector database to store and search vectors with SQL.
This notebook shows how to use functionality related to the ClickHouse vector search.
Setting up environments#
Setting up a local ClickHouse server with Docker (optional)
! docker run -d -p 8123:8123 -p9000:9000 --name langchain-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server:23.4.2.11
Set up the ClickHouse client driver
!pip install clickhouse-connect
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
if not os.environ.get('OPENAI_API_KEY'):
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for d in docs:
d.metadata = {'some': 'metadata'}
settings = ClickhouseSettings(table="clickhouse_vector_search_example")
docsearch = Clickhouse.from_documents(docs, embeddings, config=settings)
query = "What did the president say about Ketanji Brown Jackson"
docs = docsearch.similarity_search(query)
Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 2801.49it/s]
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Get connection info and data schema#
print(str(docsearch))
default.clickhouse_vector_search_example @ localhost:8123
username: None
Table Schema:
---------------------------------------------------
|id |Nullable(String) |
|document |Nullable(String) |
|embedding |Array(Float32) |
|metadata |Object('json') |
|uuid |UUID |
---------------------------------------------------
Clickhouse table schema#
The ClickHouse table will be created automatically if it does not already exist. Advanced users can pre-create the table with optimized settings, as sketched below. For a distributed ClickHouse cluster with sharding, the table engine should be configured as Distributed.
print(f"Clickhouse Table DDL:\n\n{docsearch.schema}")
Clickhouse Table DDL:
CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example(
id Nullable(String),
document Nullable(String),
embedding Array(Float32),
metadata JSON,
uuid UUID DEFAULT generateUUIDv4(),
CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,
INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000
) ENGINE = MergeTree ORDER BY uuid SETTINGS index_granularity = 8192
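If you do want to pre-create the table, a rough sketch using the clickhouse-connect client (the DDL mirrors the schema printed above; the engine and index settings are the knobs to tune):
import clickhouse_connect
# Connect to the same local server used in this notebook.
client = clickhouse_connect.get_client(host='localhost', port=8123)
# Pre-create the table LangChain expects. For a sharded cluster, create this table on every
# shard and add a wrapper table with ENGINE = Distributed(cluster, database, table, rand()).
client.command("""
CREATE TABLE IF NOT EXISTS default.clickhouse_vector_search_example (
    id Nullable(String),
    document Nullable(String),
    embedding Array(Float32),
    metadata JSON,
    uuid UUID DEFAULT generateUUIDv4(),
    CONSTRAINT cons_vec_len CHECK length(embedding) = 1536,
    INDEX vec_idx embedding TYPE annoy(100,'L2Distance') GRANULARITY 1000
) ENGINE = MergeTree ORDER BY uuid
""", settings={'allow_experimental_object_type': 1})
# LangChain can then reuse the existing table via ClickhouseSettings(table="clickhouse_vector_search_example").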
Filtering#
You have direct access to the ClickHouse SQL WHERE statement and can write a WHERE clause following standard SQL.
NOTE: Please be aware of SQL injection; this interface must not be called directly by the end user.
If you have customized your column_map in your settings, you can search with a filter like this:
from langchain.vectorstores import Clickhouse, ClickhouseSettings
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
for i, d in enumerate(docs):
d.metadata = {'doc_id': i}
docsearch = Clickhouse.from_documents(docs, embeddings)
Inserting data...: 100%|██████████| 42/42 [00:00<00:00, 6939.56it/s]
meta = docsearch.metadata_column
output = docsearch.similarity_search_with_relevance_scores('What did the president say about Ketanji Brown Jackson?',
k=4, where_str=f"{meta}.doc_id<10")
for d, dist in output:
print(dist, d.metadata, d.page_content[:20] + '...')
0.6779101415357189 {'doc_id': 0} Madam Speaker, Madam...
0.6997970363474885 {'doc_id': 8} And so many families...
0.7044504914336727 {'doc_id': 1} Groups of citizens b...
0.7053558702165094 {'doc_id': 6} And I’m taking robus...
Deleting your data#
docsearch.drop()
previous
Chroma
next
Deep Lake
Contents
Setting up environments
Get connection info and data schema
Clickhouse table schema
Filtering
Deleting your data
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/clickhouse.html
|
19c5dc65898d-0
|
.ipynb
.pdf
Atlas
Atlas#
Atlas is a platform by Nomic for interacting with both small and internet-scale unstructured datasets.
This notebook shows you how to use functionality related to the AtlasDB vectorstore.
!pip install spacy
!python3 -m spacy download en_core_web_sm
!pip install nomic
import time
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import SpacyTextSplitter
from langchain.vectorstores import AtlasDB
from langchain.document_loaders import TextLoader
ATLAS_TEST_API_KEY = '7xDPkYXSYDc1_ErdTPIcoAR9RNd8YDlkS3nVNXcVoIMZ6'
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = SpacyTextSplitter(separator='|')
texts = []
for doc in text_splitter.split_documents(documents):
texts.extend(doc.page_content.split('|'))
texts = [e.strip() for e in texts]
db = AtlasDB.from_texts(texts=texts,
name='test_index_'+str(time.time()), # unique name for your vector store
description='test_index', #a description for your vector store
api_key=ATLAS_TEST_API_KEY,
index_kwargs={'build_topic_model': True})
db.project.wait_for_project_lock()
db.project
test_index_1677255228.136989
A description for your project 508 datums inserted.
1 index built.
Projections
test_index_1677255228.136989_index. Status Completed. view online
Projection ID: db996d77-8981-48a0-897a-ff2c22bbf541
Hide embedded project
Explore on atlas.nomic.ai
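Once the index is built, the store can be queried like any other vector store; a minimal sketch (not part of the original notebook):
# Semantic search over the Atlas project created above.
found = db.similarity_search('What did the president say about Ketanji Brown Jackson', k=4)
print(found[0].page_content)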
previous
Annoy
next
AwaDB
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/atlas.html
|
ff48fb474252-0
|
.ipynb
.pdf
DocArrayHnswSearch
Contents
Setup
Using DocArrayHnswSearch
Similarity search
Similarity search with score
DocArrayHnswSearch#
DocArrayHnswSearch is a lightweight Document Index implementation provided by DocArray that runs fully locally and is best suited for small- to medium-sized datasets. It stores vectors on disk in hnswlib, and stores all other data in SQLite.
This notebook shows how to use functionality related to the DocArrayHnswSearch.
Setup#
Uncomment the cells below to install docarray and get/set your OpenAI API key if you haven't already done so.
# !pip install "docarray[hnswlib]"
# Get an OpenAI token: https://platform.openai.com/account/api-keys
# import os
# from getpass import getpass
# OPENAI_API_KEY = getpass()
# os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY
Using DocArrayHnswSearch#
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import DocArrayHnswSearch
from langchain.document_loaders import TextLoader
documents = TextLoader('../../../state_of_the_union.txt').load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayHnswSearch.from_documents(docs, embeddings, work_dir='hnswlib_store/', n_dim=1536)
Similarity search#
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
print(docs[0].page_content)
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
Similarity search with score#
The returned distance score is cosine distance. Therefore, a lower score is better.
docs = db.similarity_search_with_score(query)
docs[0]
(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={}),
0.36962226)
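Because the score is a cosine distance, it can be converted to a cosine similarity when that is more convenient; a small illustrative snippet:
# docs holds (Document, distance) tuples from similarity_search_with_score above.
doc, distance = docs[0]
print(f"cosine distance: {distance:.4f}, cosine similarity: {1 - distance:.4f}")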
import shutil
# delete the dir
shutil.rmtree('hnswlib_store')
previous
Deep Lake
next
DocArrayInMemorySearch
Contents
Setup
Using DocArrayHnswSearch
Similarity search
Similarity search with score
By Harrison Chase
© Copyright 2023, Harrison Chase.
Last updated on Jun 16, 2023.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/indexes/vectorstores/examples/docarray_hnsw.html
|
aad6f8212517-0
|
.ipynb
.pdf
Tair
Tair#
Tair is a cloud native in-memory database service developed by Alibaba Cloud.
It provides rich data models and enterprise-grade capabilities to support your real-time online scenarios while maintaining full compatibility with open source Redis. Tair also introduces persistent memory-optimized instances that are based on the new non-volatile memory (NVM) storage medium.
This notebook shows how to use functionality related to the Tair vector database.
To run, you should have a Tair instance up and running.
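You will also need the Tair Python client installed; a minimal setup sketch, assuming the client is published on PyPI under the name tair:
# Pip install necessary package (package name assumed)
!pip install tair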
from langchain.embeddings.fake import FakeEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Tair
from langchain.document_loaders import TextLoader
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = FakeEmbeddings(size=128)
Connect to Tair using the TAIR_URL environment variable:
export TAIR_URL="redis://{username}:{password}@{tair_address}:{tair_port}"
or by passing the keyword argument tair_url.
Then store documents and embeddings into Tair.
tair_url = "redis://localhost:6379"
# drop first if index already exists
Tair.drop_index(tair_url=tair_url)
vector_store = Tair.from_documents(
docs,
embeddings,
tair_url=tair_url
)
Query similar documents.
query = "What did the president say about Ketanji Brown Jackson"
docs = vector_store.similarity_search(query)
docs[0]
Document(page_content='We’re going after the criminals who stole billions in relief money meant for small businesses and millions of Americans. \n\nAnd tonight, I’m announcing that the Justice Department will name a chief prosecutor for pandemic fraud. \n\nBy the end of this year, the deficit will be down to less than half what it was before I took office. \n\nThe only president ever to cut the deficit by more than one trillion dollars in a single year. \n\nLowering your costs also means demanding more competition. \n\nI’m a capitalist, but capitalism without competition isn’t capitalism. \n\nIt’s exploitation—and it drives up prices. \n\nWhen corporations don’t have to compete, their profits go up, your prices go up, and small businesses and family farmers and ranchers go under. \n\nWe see it happening with ocean carriers moving goods in and out of America. \n\nDuring the pandemic, these foreign-owned companies raised prices by as much as 1,000% and made record profits.', metadata={'source': '../../../state_of_the_union.txt'})
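You can also append more documents to an existing store; a minimal sketch using the generic add_texts method that LangChain vector stores expose (assumed to behave the same way on Tair; the text and metadata below are placeholders):
vector_store.add_texts(
    ["Tair also supports hybrid workloads alongside vector search."],  # placeholder text
    metadatas=[{"source": "manual-note"}],
)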
PGVector
Contents
Similarity search with score
Similarity Search with Euclidean Distance (Default)
Working with vectorstore in PG
Uploading a vectorstore in PG
Retrieving a vectorstore in PG
PGVector#
PGVector is an open-source vector similarity search extension for Postgres.
It supports:
exact and approximate nearest neighbor search
L2 distance, inner product, and cosine distance
This notebook shows how to use the Postgres vector database (PGVector).
See the installation instructions.
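If you do not already have a Postgres instance with the extension enabled, one quick way to get one locally is Docker; a minimal sketch, assuming the community ankane/pgvector image and throwaway credentials (adjust to your own setup):
docker run --name pgvector-demo -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d ankane/pgvector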
# Pip install necessary package
!pip install pgvector
!pip install openai
!pip install psycopg2-binary
!pip install tiktoken
Requirement already satisfied: pgvector in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.1.8)
Requirement already satisfied: numpy in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from pgvector) (1.24.3)
Requirement already satisfied: openai in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.27.7)
Requirement already satisfied: requests>=2.20 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (2.28.2)
Requirement already satisfied: tqdm in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (4.65.0)
Requirement already satisfied: aiohttp in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from openai) (3.8.4)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.20->openai) (2023.5.7)
Requirement already satisfied: attrs>=17.3.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (23.1.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (6.0.4)
Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (4.0.2)
Requirement already satisfied: yarl<2.0,>=1.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.9.2)
Requirement already satisfied: frozenlist>=1.1.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.3)
Requirement already satisfied: aiosignal>=1.1.2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from aiohttp->openai) (1.3.1)
Requirement already satisfied: psycopg2-binary in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (2.9.6)
Requirement already satisfied: tiktoken in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (0.4.0)
Requirement already satisfied: regex>=2022.1.18 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2023.5.5)
Requirement already satisfied: requests>=2.26.0 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from tiktoken) (2.28.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.1.0)
Requirement already satisfied: idna<4,>=2.5 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in /Users/joyeed/langchain/langchain/.venv/lib/python3.9/site-packages (from requests>=2.26.0->tiktoken) (2023.5.7)
We want to use OpenAIEmbeddings so we have to get the OpenAI API Key.
import os
import getpass
os.environ['OPENAI_API_KEY'] = getpass.getpass('OpenAI API Key:')
OpenAI API Key:········
## Loading Environment Variables
from typing import List, Tuple
from dotenv import load_dotenv
load_dotenv()
False
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.pgvector import PGVector
from langchain.document_loaders import TextLoader
from langchain.docstore.document import Document
loader = TextLoader('../../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
## PGVector needs the connection string to the database.
## We will load it from the environment variables.
import os
CONNECTION_STRING = PGVector.connection_string_from_db_params(
driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
host=os.environ.get("PGVECTOR_HOST", "localhost"),
port=int(os.environ.get("PGVECTOR_PORT", "5432")),
database=os.environ.get("PGVECTOR_DATABASE", "postgres"),
user=os.environ.get("PGVECTOR_USER", "postgres"),
password=os.environ.get("PGVECTOR_PASSWORD", "postgres"),
)
## Example
# postgresql+psycopg2://username:password@localhost:5432/database_name
# ## PGVector needs the connection string to the database.
# ## We will load it from the environment variables.
# import os
# CONNECTION_STRING = PGVector.connection_string_from_db_params(
# driver=os.environ.get("PGVECTOR_DRIVER", "psycopg2"),
# host=os.environ.get("PGVECTOR_HOST", "localhost"),
# port=int(os.environ.get("PGVECTOR_PORT", "5432")),
# database=os.environ.get("PGVECTOR_DATABASE", "rd-embeddings"),
# user=os.environ.get("PGVECTOR_USER", "admin"),
# password=os.environ.get("PGVECTOR_PASSWORD", "password"),
# )
# ## Example
# # postgresql+psycopg2://username:password@localhost:5432/database_name
Similarity search with score#
Similarity Search with Euclidean Distance (Default)#
# The PGVector Module will try to create a table with the name of the collection. So, make sure that the collection name is unique and the user has the
# permission to create a table.
db = PGVector.from_documents(
embedding=embeddings,
documents=docs,
collection_name="state_of_the_union",
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db.similarity_search_with_score(query)
for doc, score in docs_with_score:
print("-" * 80)
print("Score: ", score)
print(doc.page_content)
print("-" * 80)
--------------------------------------------------------------------------------
Score: 0.6076804864602984
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6076804864602984
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.659062774389974
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.659062774389974
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
Working with vectorstore in PG#
Uploading a vectorstore in PG#
from langchain.vectorstores.pgvector import DistanceStrategy
# Reuse the connection string built above and pick a collection name
connection_string = CONNECTION_STRING
collection_name = "state_of_the_union"
api_key = os.environ['OPENAI_API_KEY']
db = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name=collection_name,
    connection_string=connection_string,
    distance_strategy=DistanceStrategy.COSINE,
    openai_api_key=api_key,
    pre_delete_collection=False,
)
Retrieving a vectorstore in PG#
connection_string = CONNECTION_STRING
embedding=embeddings
collection_name="state_of_the_union"
from langchain.vectorstores.pgvector import DistanceStrategy
store = PGVector(
connection_string=connection_string,
embedding_function=embedding,
collection_name=collection_name,
distance_strategy=DistanceStrategy.COSINE
)
retriever = store.as_retriever()
print(retriever)
vectorstore=<langchain.vectorstores.pgvector.PGVector object at 0x7fe9a1b1c670> search_type='similarity' search_kwargs={}
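The retriever can then be queried like any other LangChain retriever, for example:
retriever.get_relevant_documents("What did the president say about Ketanji Brown Jackson")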
# When we have an existing PGVector index
DEFAULT_DISTANCE_STRATEGY = DistanceStrategy.EUCLIDEAN
db1 = PGVector.from_existing_index(
embedding=embeddings,
collection_name="state_of_the_union",
distance_strategy=DEFAULT_DISTANCE_STRATEGY,
pre_delete_collection = False,
connection_string=CONNECTION_STRING,
)
query = "What did the president say about Ketanji Brown Jackson"
docs_with_score: List[Tuple[Document, float]] = db1.similarity_search_with_score(query)
print(docs_with_score)
[(Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections. \n\nTonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service. \n\nOne of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court. \n\nAnd I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6075870262188066), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668), (Document(page_content='A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans. \n\nAnd if we are to advance liberty and justice, we need to secure the Border and fix the immigration system. \n\nWe can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling. \n\nWe’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers. \n\nWe’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster. \n\nWe’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.', metadata={'source': '../../../state_of_the_union.txt'}), 0.6589478388546668)]
for doc, score in docs_with_score:
print("-" * 80)
print("Score: ", score)
print(doc.page_content)
print("-" * 80)
--------------------------------------------------------------------------------
Score: 0.6075870262188066
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6075870262188066
Tonight. I call on the Senate to: Pass the Freedom to Vote Act. Pass the John Lewis Voting Rights Act. And while you’re at it, pass the Disclose Act so Americans can know who is funding our elections.
Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
One of the most serious constitutional responsibilities a President has is nominating someone to serve on the United States Supreme Court.
And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nation’s top legal minds, who will continue Justice Breyer’s legacy of excellence.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6589478388546668
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
--------------------------------------------------------------------------------
Score: 0.6589478388546668
A former top litigator in private practice. A former federal public defender. And from a family of public school educators and police officers. A consensus builder. Since she’s been nominated, she’s received a broad range of support—from the Fraternal Order of Police to former judges appointed by Democrats and Republicans.
And if we are to advance liberty and justice, we need to secure the Border and fix the immigration system.
We can do both. At our border, we’ve installed new technology like cutting-edge scanners to better detect drug smuggling.
We’ve set up joint patrols with Mexico and Guatemala to catch more human traffickers.
We’re putting in place dedicated immigration judges so families fleeing persecution and violence can have their cases heard faster.
We’re securing commitments and supporting partners in South and Central America to host more refugees and secure their own borders.
--------------------------------------------------------------------------------
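The same store can also back a question-answering chain; a minimal sketch, assuming an OpenAI LLM and LangChain's RetrievalQA chain:
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
qa = RetrievalQA.from_chain_type(
    llm=OpenAI(),  # reuses the OPENAI_API_KEY set earlier
    chain_type="stuff",
    retriever=db1.as_retriever(),
)
qa.run("What did the president say about Ketanji Brown Jackson")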