Vector search over the Zep memory#
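The similarity search results below come from the final step of this example, a vector search over the session's chat history. A minimal sketch of the call (assuming the zep_chat_history ZepChatMessageHistory instance created earlier on this page; the query string is illustrative):
search_results = zep_chat_history.search("who were Octavia Butler's contemporaries?")  # illustrative query
for r in search_results:
    print(r.message, r.dist)  # the stored message dict and its similarity score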
{'uuid': '52cfe3e8-b800-4dd8-a7dd-8e9e4764dfc8', 'created_at': '2023-05-25T15:09:41.913856Z', 'role': 'ai', 'content': "Octavia Butler's contemporaries included Ursula K. Le Guin, Samuel R. Delany, and Joanna Russ.", 'token_count': 27} 0.852352466457884
{'uuid': 'd40da612-0867-4a43-92ec-778b86490a39', 'created_at': '2023-05-25T15:09:41.858543Z', 'role': 'human', 'content': 'Who was Octavia Butler?', 'token_count': 8} 0.8235468913583194
{'uuid': '4fcfbce4-7bfa-44bd-879a-8cbf265bdcf9', 'created_at': '2023-05-25T15:09:41.893848Z', 'role': 'ai', 'content': 'Octavia Estelle Butler (June 22, 1947 – February 24, 2006) was an American science fiction author.', 'token_count': 31} 0.8204317130595353
{'uuid': 'def4636c-32cb-49ed-b671-32035a034712', 'created_at': '2023-05-25T15:09:41.919874Z', 'role': 'ai', 'content': 'Octavia Butler won the Hugo Award, the Nebula Award, and the MacArthur Fellowship.', 'token_count': 21} 0.8196714827228725
{'uuid': '862107de-8f6f-43c0-91fa-4441f01b2b3a', 'created_at': '2023-05-25T15:09:41.898149Z', 'role': 'human', 'content': 'Which books of hers were made into movies?', 'token_count': 11} 0.7954322970428519
{'uuid': '97164506-90fe-4c71-9539-69ebcd1d90a2', 'created_at': '2023-05-25T15:09:41.90887Z', 'role': 'human', 'content': 'Who were her contemporaries?', 'token_count': 8} 0.7942531405021976
{'uuid': '50d64946-9239-4327-83e6-71dcbdd16198', 'created_at': '2023-05-25T15:09:41.957437Z', 'role': 'ai', 'content': 'Parable of the Sower is a science fiction novel by Octavia Butler, published in 1993. It follows the story of Lauren Olamina, a young woman living in a dystopian future where society has collapsed due to environmental disasters, poverty, and violence.', 'token_count': 56} 0.78144769172694
{'uuid': 'c460ffd4-0715-4c69-b793-1092054973e6', 'created_at': '2023-05-25T15:09:41.903082Z', 'role': 'ai', 'content': "The most well-known adaptation of Octavia Butler's work is the FX series Kindred, based on her novel of the same name.", 'token_count': 29} 0.7811962820699464
How to add memory to a Multi-Input Chain#
Most memory objects assume a single input. In this notebook, we go over how to add memory to a chain that has multiple inputs. As an example of such a chain, we will add memory to a question/answering chain. This chain takes as inputs both related documents and a user question.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings.cohere import CohereEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores.elastic_vector_search import ElasticVectorSearch
from langchain.vectorstores import Chroma
from langchain.docstore.document import Document
with open('../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_text(state_of_the_union)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_texts(texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))])
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
query = "What did the president say about Justice Breyer"
docs = docsearch.similarity_search(query)
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory
template = """You are a chatbot having a conversation with a human.
Given the following extracted parts of a long document and a question, create a final answer.
{context}
{chat_history}
Human: {human_input}
Chatbot:"""
prompt = PromptTemplate(
input_variables=["chat_history", "human_input", "context"],
template=template
)
memory = ConversationBufferMemory(memory_key="chat_history", input_key="human_input")
chain = load_qa_chain(OpenAI(temperature=0), chain_type="stuff", memory=memory, prompt=prompt)
query = "What did the president say about Justice Breyer"
chain({"input_documents": docs, "human_input": query}, return_only_outputs=True)
{'output_text': ' Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.'}
print(chain.memory.buffer)
Human: What did the president say about Justice Breyer
AI: Tonight, I’d like to honor someone who has dedicated his life to serve this country: Justice Stephen Breyer—an Army veteran, Constitutional scholar, and retiring Justice of the United States Supreme Court. Justice Breyer, thank you for your service.
Callbacks
Contents
Callbacks
How to use callbacks
When do you want to use each of these?
Tags
Using an existing handler
Creating a custom handler
Async Callbacks
Using multiple handlers, passing in handlers
Tracing and Token Counting
Tracing
Token Counting
Callbacks#
LangChain provides a callbacks system that allows you to hook into the various stages of your LLM application. This is useful for logging, monitoring, streaming, and other tasks.
You can subscribe to these events by using the callbacks argument available throughout the API. This argument is a list of handler objects, which are expected to implement one or more of the methods described in more detail below. There are two main callback mechanisms:
Constructor callbacks will be used for all calls made on that object, and will be scoped to that object only, i.e. if you pass a handler to the LLMChain constructor, it will not be used by the model attached to that chain.
Request callbacks will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed through). These are explicitly passed through.
Advanced: When you create a custom chain you can easily set it up to use the same callback system as all the built-in chains.
_call, _generate, _run, and equivalent async methods on Chains / LLMs / Chat Models / Agents / Tools now receive a 2nd argument called run_manager which is bound to that run, and contains the logging methods that can be used by that object (i.e. on_llm_new_token). This is useful when constructing a custom chain. See this guide for more information on how to create custom chains and use callbacks inside them.
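As an illustration, here is a minimal sketch of a custom chain whose _call receives that bound run_manager (the EchoChain class and its input/output keys are hypothetical, not part of the library):
from typing import Any, Dict, List, Optional
from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain
class EchoChain(Chain):
    """Hypothetical chain that forwards its input and logs it via run_manager."""
    @property
    def input_keys(self) -> List[str]:
        return ["text"]
    @property
    def output_keys(self) -> List[str]:
        return ["text"]
    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, str]:
        if run_manager:
            run_manager.on_text(inputs["text"])  # fires the on_text event on the handlers bound to this run
        return {"text": inputs["text"]}
Calling EchoChain()({"text": "hi"}, callbacks=[handler]) would then emit the on_text event to that handler.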
CallbackHandlers are objects that implement the CallbackHandler interface, which has a method for each event that can be subscribed to. The CallbackManager will call the appropriate method on each handler when the event is triggered.
from typing import Any, Dict, List, Union
from langchain.schema import AgentAction, AgentFinish, BaseMessage, LLMResult
class BaseCallbackHandler:
    """Base callback handler that can be used to handle callbacks from langchain."""
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        """Run when LLM starts running."""
    def on_chat_model_start(
        self, serialized: Dict[str, Any], messages: List[List[BaseMessage]], **kwargs: Any
    ) -> Any:
        """Run when Chat Model starts running."""
    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        """Run on new LLM token. Only available when streaming is enabled."""
    def on_llm_end(self, response: LLMResult, **kwargs: Any) -> Any:
        """Run when LLM ends running."""
    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""
    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        """Run when chain starts running."""
    def on_chain_end(self, outputs: Dict[str, Any], **kwargs: Any) -> Any:
        """Run when chain ends running."""
    def on_chain_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when chain errors."""
    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        """Run when tool starts running."""
    def on_tool_end(self, output: str, **kwargs: Any) -> Any:
        """Run when tool ends running."""
    def on_tool_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when tool errors."""
    def on_text(self, text: str, **kwargs: Any) -> Any:
        """Run on arbitrary text."""
    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        """Run on agent action."""
    def on_agent_finish(self, finish: AgentFinish, **kwargs: Any) -> Any:
        """Run on agent end."""
How to use callbacks#
The callbacks argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) in two different places:
Constructor callbacks: defined in the constructor, eg. LLMChain(callbacks=[handler], tags=['a-tag']), which will be used for all calls made on that object, and will be scoped to that object only, eg. if you pass a handler to the LLMChain constructor, it will not be used by the Model attached to that chain.
Request callbacks: defined in the call()/run()/apply() methods used for issuing a request, eg. chain.call(inputs, callbacks=[handler], tags=['a-tag']), which will be used for that specific request only, and all sub-requests that it contains (eg. a call to an LLMChain triggers a call to a Model, which uses the same handler passed in the call() method).
The verbose argument is available on most objects throughout the API (Chains, Models, Tools, Agents, etc.) as a constructor argument, eg. LLMChain(verbose=True), and it is equivalent to passing a ConsoleCallbackHandler to the callbacks argument of that object and all child objects. This is useful for debugging, as it will log all events to the console.
When do you want to use each of these?#
Constructor callbacks are most useful for use cases such as logging, monitoring, etc., which are not specific to a single request, but rather to the entire chain. For example, if you want to log all the requests made to an LLMChain, you would pass a handler to the constructor.
Request callbacks are most useful for use cases such as streaming, where you want to stream the output of a single request to a specific websocket connection, or for other similar use cases. For example, if you want to stream the output of a single request to a websocket, you would pass a handler to the call() method.
Tags#
You can add tags to your callbacks by passing a tags argument to the call()/run()/apply() methods. This is useful for filtering your logs, e.g. if you want to log all requests made to a specific LLMChain, you can add a tag, and then filter your logs by that tag. You can pass tags to both constructor and request callbacks; see the examples above for details. These tags are then passed to the tags argument of the “start” callback methods, i.e. on_llm_start, on_chat_model_start, on_chain_start, on_tool_start, as sketched below.
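As a short sketch of both ways of passing tags (this reuses the llm, prompt, and handler objects constructed in the next section):
chain = LLMChain(llm=llm, prompt=prompt, tags=["my-chain"])  # constructor tags
chain.run(number=2, callbacks=[handler], tags=["my-request"])  # request tags
# Handlers then receive the tags in their "start" methods, e.g.:
# def on_chain_start(self, serialized, inputs, tags=None, **kwargs): ...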
Using an existing handler#
LangChain provides a few built-in handlers that you can use to get started. These are available in the langchain/callbacks module. The most basic handler is the StdOutCallbackHandler, which simply logs all events to stdout. In the future we will add more default handlers to the library.
Note: when the verbose flag on the object is set to true, the StdOutCallbackHandler will be invoked even without being explicitly passed in.
from langchain.callbacks import StdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
handler = StdOutCallbackHandler()
llm = OpenAI()
prompt = PromptTemplate.from_template("1 + {number} = ")
# First, let's explicitly set the StdOutCallbackHandler in `callbacks`
chain = LLMChain(llm=llm, prompt=prompt, callbacks=[handler])
chain.run(number=2)
# Then, let's use the `verbose` flag to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt, verbose=True)
chain.run(number=2)
# Finally, let's use the request `callbacks` to achieve the same result
chain = LLMChain(llm=llm, prompt=prompt)
chain.run(number=2, callbacks=[handler])
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
> Entering new LLMChain chain...
Prompt after formatting:
1 + 2 =
> Finished chain.
'\n\n3'
Creating a custom handler#
You can create a custom handler to set on the object as well. In the example below, we’ll implement streaming with a custom handler.
from langchain.callbacks.base import BaseCallbackHandler
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage
class MyCustomHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"My custom handler, token: {token}")
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomHandler()])
chat([HumanMessage(content="Tell me a joke")])
My custom handler, token:
My custom handler, token: Why
My custom handler, token: did
My custom handler, token: the
My custom handler, token: tomato
My custom handler, token: turn
My custom handler, token: red
My custom handler, token: ?
My custom handler, token: Because
My custom handler, token: it
My custom handler, token: saw
My custom handler, token: the
My custom handler, token: salad
My custom handler, token: dressing
My custom handler, token: !
My custom handler, token:
AIMessage(content='Why did the tomato turn red? Because it saw the salad dressing!', additional_kwargs={})
Async Callbacks#
If you are planning to use the async API, it is recommended to use AsyncCallbackHandler to avoid blocking the runloop.
Advanced: if you use a sync CallbackHandler while using an async method to run your llm/chain/tool/agent, it will still work. However, under the hood, it will be called with run_in_executor, which can cause issues if your CallbackHandler is not thread-safe.
import asyncio
from typing import Any, Dict, List
from langchain.schema import LLMResult
from langchain.callbacks.base import AsyncCallbackHandler
class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")
class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""
    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when the LLM starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]  # name of the LLM class that is starting
        print("Hi! I just woke up. Your llm is starting")
    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when the LLM ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")
# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(max_tokens=25, streaming=True, callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()])
await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
zzzz....
Hi! I just woke up. Your llm is starting
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Why
Sync handler being called in a `thread_pool_executor`: token: don
Sync handler being called in a `thread_pool_executor`: token: 't
Sync handler being called in a `thread_pool_executor`: token: scientists
Sync handler being called in a `thread_pool_executor`: token: trust
Sync handler being called in a `thread_pool_executor`: token: atoms
Sync handler being called in a `thread_pool_executor`: token: ?
Sync handler being called in a `thread_pool_executor`: token: Because
Sync handler being called in a `thread_pool_executor`: token: they
Sync handler being called in a `thread_pool_executor`: token: make
Sync handler being called in a `thread_pool_executor`: token: up
Sync handler being called in a `thread_pool_executor`: token: everything
Sync handler being called in a `thread_pool_executor`: token: !
Sync handler being called in a `thread_pool_executor`: token:
zzzz....
Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Why don't scientists trust atoms?\n\nBecause they make up everything!", generation_info=None, message=AIMessage(content="Why don't scientists trust atoms?\n\nBecause they make up everything!", additional_kwargs={}))]], llm_output={'token_usage': {}, 'model_name': 'gpt-3.5-turbo'})
Using multiple handlers, passing in handlers#
In the previous examples, we passed in callback handlers upon creation of an object by using callbacks=. In this case, the callbacks will be scoped to that particular object.
However, in many cases, it is advantageous to pass in handlers instead when running the object. When we pass in CallbackHandlers using the callbacks keyword arg when executing a run, those callbacks will be issued by all nested objects involved in the execution. For example, when a handler is passed through to an Agent, it will be used for all callbacks related to the agent and all the objects involved in the agent’s execution, in this case, the Tools, LLMChain, and LLM.
This prevents us from having to manually attach the handlers to each individual nested object.
from typing import Dict, Union, Any, List
from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
# First, define custom callback handler implementations
class MyCustomHandlerOne(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start {serialized['name']}")
    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        print(f"on_new_token {token}")
    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""
    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        print(f"on_chain_start {serialized['name']}")
    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        print(f"on_tool_start {serialized['name']}")
    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        print(f"on_agent_action {action}")
class MyCustomHandlerTwo(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start (I'm the second handler!!) {serialized['name']}")
# Instantiate the handlers
handler1 = MyCustomHandlerOne()
handler2 = MyCustomHandlerTwo()
# Set up the agent. Only the `llm` will issue callbacks for handler2
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
# Callbacks for handler1 will be issued by every object involved in the
# Agent execution (llm, llmchain, tool, agent executor)
agent.run("What is 2 raised to the 0.235 power?", callbacks=[handler1])
on_chain_start AgentExecutor
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token need
on_new_token to
on_new_token use
on_new_token a
on_new_token calculator
on_new_token to
on_new_token solve
on_new_token this
on_new_token .
on_new_token
Action
on_new_token :
on_new_token Calculator
on_new_token
Action
on_new_token Input
on_new_token :
on_new_token 2
on_new_token ^
on_new_token 0
on_new_token .
on_new_token 235
on_new_token
on_agent_action AgentAction(tool='Calculator', tool_input='2^0.235', log=' I need to use a calculator to solve this.\nAction: Calculator\nAction Input: 2^0.235')
on_tool_start Calculator
on_chain_start LLMMathChain
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token
on_new_token ```text
on_new_token
on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token
on_new_token ```
on_new_token ...
on_new_token num
on_new_token expr
on_new_token .
on_new_token evaluate
on_new_token ("
on_new_token 2
on_new_token **
on_new_token 0
on_new_token .
on_new_token 235
on_new_token ")
on_new_token ...
on_new_token
on_new_token
on_chain_start LLMChain
on_llm_start OpenAI
on_llm_start (I'm the second handler!!) OpenAI
on_new_token I
on_new_token now
on_new_token know
on_new_token the
on_new_token final
on_new_token answer
on_new_token .
on_new_token
Final
on_new_token Answer
on_new_token :
on_new_token 1
on_new_token .
on_new_token 17
on_new_token 690
on_new_token 67
on_new_token 372
on_new_token 187
on_new_token 674
on_new_token
'1.1769067372187674'
Tracing and Token Counting#
Tracing and token counting are two capabilities we provide that are built on our callbacks mechanism.
Tracing#
There are two recommended ways to trace your LangChains:
Setting the LANGCHAIN_TRACING environment variable to "true".
Using a context manager with tracing_enabled() to trace a particular block of code.
Note: if the environment variable is set, all code will be traced, regardless of whether or not it is within the context manager.
import os
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI
# To run the code, make sure to set OPENAI_API_KEY and SERPAPI_API_KEY
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math", "serpapi"], llm=llm)
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
questions = [
"Who won the US Open men's final in 2019? What is his age raised to the 0.334 power?",
"Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 power?",
"Who won the most recent formula 1 grand prix? What is their age raised to the 0.23 power?",
"Who won the US Open women's final in 2019? What is her age raised to the 0.34 power?",
"Who is Beyonce's husband? What is his age raised to the 0.19 power?",
]
os.environ["LANGCHAIN_TRACING"] = "true"
# Both of the agent runs will be traced because the environment variable is set
agent.run(questions[0])
with tracing_enabled() as session:
    assert session
    agent.run(questions[1])
> Entering new AgentExecutor chain...
I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"
Observation: Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...
Thought: I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"
Observation: 36 years
Thought: I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334
Observation: Answer: 3.3098250249682484
Thought: I now know the final answer
Final Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.
> Finished chain.
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
# Now, we unset the environment variable and use a context manager.
if "LANGCHAIN_TRACING" in os.environ:
del os.environ["LANGCHAIN_TRACING"]
# here, we are writing traces to "my_test_session"
with tracing_enabled("my_test_session") as session:
assert session
agent.run(questions[0]) # this should be traced
agent.run(questions[1]) # this should not be traced
> Entering new AgentExecutor chain...
I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"
Observation: Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ...
Thought: I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"
Observation: 36 years
Thought: I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334
Observation: Answer: 3.3098250249682484
Thought: I now know the final answer
Final Answer: Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484.
> Finished chain.
> Entering new AgentExecutor chain...
I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"
Observation: Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.
Thought: I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age"
Observation: 29 years
Thought: I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23
Observation: Answer: 2.169459462491557
Thought: I now know the final answer.
Final Answer: Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557.
> Finished chain.
"Harry Styles is Olivia Wilde's boyfriend and his current age raised to the 0.23 power is 2.169459462491557."
# The context manager is concurrency safe:
if "LANGCHAIN_TRACING" in os.environ:
    del os.environ["LANGCHAIN_TRACING"]
# start a background task
task = asyncio.create_task(agent.arun(questions[0]))  # this should not be traced
with tracing_enabled() as session:
    assert session
    tasks = [agent.arun(q) for q in questions[1:3]]  # these should be traced
    await asyncio.gather(*tasks)
await task
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
I need to find out who won the grand prix and then calculate their age raised to the 0.23 power.
Action: Search
Action Input: "Formula 1 Grand Prix Winner" I need to find out who won the US Open men's final in 2019 and then calculate his age raised to the 0.334 power.
Action: Search
Action Input: "US Open men's final 2019 winner"Rafael Nadal defeated Daniil Medvedev in the final, 7–5, 6–3, 5–7, 4–6, 6–4 to win the men's singles tennis title at the 2019 US Open. It was his fourth US ... I need to find out who Olivia Wilde's boyfriend is and then calculate his age raised to the 0.23 power.
Action: Search
Action Input: "Olivia Wilde boyfriend"Sudeikis and Wilde's relationship ended in November 2020. Wilde was publicly served with court documents regarding child custody while she was presenting Don't Worry Darling at CinemaCon 2022. In January 2021, Wilde began dating singer Harry Styles after meeting during the filming of Don't Worry Darling.Lewis Hamilton has won 103 Grands Prix during his career. He won 21 races with McLaren and has won 82 with Mercedes. Lewis Hamilton holds the record for the ... I need to find out the age of the winner
Action: Search
Action Input: "Rafael Nadal age"36 years I need to find out Harry Styles' age.
Action: Search
Action Input: "Harry Styles age" I need to find out Lewis Hamilton's age
Action: Search
Action Input: "Lewis Hamilton Age"29 years I need to calculate the age raised to the 0.334 power
Action: Calculator
Action Input: 36^0.334 I need to calculate 29 raised to the 0.23 power.
Action: Calculator
Action Input: 29^0.23Answer: 3.3098250249682484Answer: 2.16945946249155738 years
> Finished chain.
> Finished chain.
I now need to calculate 38 raised to the 0.23 power
Action: Calculator
Action Input: 38^0.23Answer: 2.3086081644669734
> Finished chain.
"Rafael Nadal, aged 36, won the US Open men's final in 2019 and his age raised to the 0.334 power is 3.3098250249682484."
Token Counting#
LangChain offers a context manager that allows you to count tokens.
from langchain.callbacks import get_openai_callback
llm = OpenAI(temperature=0)
with get_openai_callback() as cb:
    llm("What is the square root of 4?")
total_tokens = cb.total_tokens
assert total_tokens > 0
with get_openai_callback() as cb:
    llm("What is the square root of 4?")
    llm("What is the square root of 4?")
assert cb.total_tokens == total_tokens * 2
# You can kick off concurrent runs from within the context manager
with get_openai_callback() as cb:
    await asyncio.gather(
        *[llm.agenerate(["What is the square root of 4?"]) for _ in range(3)]
    )
assert cb.total_tokens == total_tokens * 3
# The context manager is concurrency safe
task = asyncio.create_task(llm.agenerate(["What is the square root of 4?"]))
with get_openai_callback() as cb:
    await llm.agenerate(["What is the square root of 4?"])
await task
assert cb.total_tokens == total_tokens
How-To Guides#
A chain is made up of links, which can be either primitives or other chains.
Primitives can be prompts, models, or arbitrary functions.
The examples here are broken up into three sections:
Generic Functionality
Covers generic chains (useful in a wide variety of applications) as well as generic functionality related to those chains.
Async API for Chain
Creating a custom Chain
Loading from LangChainHub
LLM Chain
Additional ways of running LLM Chain
Parsing the outputs
Initialize from string
Router Chains
Sequential Chains
Serialization
Transformation Chain
Index-related Chains
Chains related to working with indexes.
Analyze Document
Chat Over Documents with Chat History
Graph QA
Hypothetical Document Embeddings
Question Answering with Sources
Question Answering
Summarization
Retrieval Question/Answering
Retrieval Question Answering with Sources
Vector DB Text Generation
All other chains
All other types of chains!
API Chains
Self-Critique Chain with Constitutional AI
Extraction
FLARE
GraphCypherQAChain
NebulaGraphQAChain
BashChain
LLMCheckerChain
LLM Math
LLMRequestsChain
LLMSummarizationCheckerChain
Moderation
Router Chains: Selecting from multiple prompts with MultiPromptChain
Router Chains: Selecting from multiple prompts with MultiRetrievalQAChain
OpenAPI Chain
PAL
SQL Chain example
Tagging
Getting Started
Contents
Why do we need chains?
Quick start: Using LLMChain
Different ways of calling chains
Add memory to chains
Debug Chain
Combine chains with the SequentialChain
Create a custom chain with the Chain class
Getting Started#
In this tutorial, we will learn about creating simple chains in LangChain: how to create a chain, add components to it, and run it. We will cover:
Using a simple LLM chain
Creating sequential chains
Creating a custom chain
Why do we need chains?#
Chains allow us to combine multiple components together to create a single, coherent application. For example, we can create a chain that takes user input, formats it with a PromptTemplate, and then passes the formatted response to an LLM. We can build more complex chains by combining multiple chains together, or by combining chains with other components.
Quick start: Using LLMChain#
The LLMChain is a simple chain that takes in a prompt template, formats it with the user input and returns the response from an LLM.
To use the LLMChain, first create a prompt template.
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
We can now create a very simple chain that will take user input, format the prompt with it, and then send it to the LLM.
from langchain.chains import LLMChain
chain = LLMChain(llm=llm, prompt=prompt)
# Run the chain only specifying the input variable.
print(chain.run("colorful socks"))
Colorful Toes Co.
If there are multiple variables, you can input them all at once using a dictionary.
prompt = PromptTemplate(
input_variables=["company", "product"],
template="What is a good name for {company} that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run({
'company': "ABC Startup",
'product': "colorful socks"
}))
Socktopia Colourful Creations.
You can use a chat model in an LLMChain as well:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
)
human_message_prompt = HumanMessagePromptTemplate(
prompt=PromptTemplate(
template="What is a good name for a company that makes {product}?",
input_variables=["product"],
)
)
chat_prompt_template = ChatPromptTemplate.from_messages([human_message_prompt])
chat = ChatOpenAI(temperature=0.9)
chain = LLMChain(llm=chat, prompt=chat_prompt_template)
print(chain.run("colorful socks"))
Rainbow Socks Co.
Different ways of calling chains#
All classes inheriting from Chain offer a few ways of running chain logic. The most direct one is by using __call__:
chat = ChatOpenAI(temperature=0)
prompt_template = "Tell me a {adjective} joke"
llm_chain = LLMChain(
llm=chat,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain(inputs={"adjective":"corny"})
{'adjective': 'corny',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
By default, __call__ returns both the input and output key values. You can configure it to only return output key values by setting return_only_outputs to True.
llm_chain("corny", return_only_outputs=True)
{'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
If the Chain only outputs one output key (i.e. only has one element in its output_keys), you can use the run method. Note that run returns a string instead of a dictionary.
# llm_chain only has one output key, so we can use run
llm_chain.output_keys
['text']
llm_chain.run({"adjective":"corny"})
'Why did the tomato turn red? Because it saw the salad dressing!'
In the case of one input key, you can input the string directly without specifying the input mapping.
# These two are equivalent
llm_chain.run({"adjective":"corny"})
llm_chain.run("corny")
# These two are also equivalent
llm_chain("corny")
llm_chain({"adjective":"corny"})
{'adjective': 'corny',
'text': 'Why did the tomato turn red? Because it saw the salad dressing!'}
Tip: you can easily integrate a Chain object as a Tool in your Agent via its run method, as sketched below.
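A minimal sketch of that pattern, reusing llm_chain from above (the tool name and description are illustrative):
from langchain.agents import Tool
joke_tool = Tool(
    name="JokeTeller",  # illustrative name
    func=llm_chain.run,  # the chain's run method becomes the tool's function
    description="Tells a joke in the requested style.",
)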
Add memory to chains#
Chain supports taking a BaseMemory object as its memory argument, allowing a Chain object to persist data across multiple calls. In other words, it makes the Chain a stateful object.
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory()
)
conversation.run("Answer briefly. What are the first 3 colors of a rainbow?")
# -> The first three colors of a rainbow are red, orange, and yellow.
conversation.run("And the next 4?")
# -> The next four colors of a rainbow are green, blue, indigo, and violet.
'The next four colors of a rainbow are green, blue, indigo, and violet.'
Essentially, BaseMemory defines the interface for how langchain stores memory. It allows reading stored data through the load_memory_variables method and storing new data through the save_context method. You can learn more about it in the Memory section.
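As a quick sketch of that interface, using the ConversationBufferMemory imported above:
memory = ConversationBufferMemory()
memory.save_context({"input": "hi"}, {"output": "hello"})
memory.load_memory_variables({})
# -> {'history': 'Human: hi\nAI: hello'}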
Debug Chain#
It can be hard to debug a Chain object solely from its output, as most Chain objects involve a fair amount of input prompt preprocessing and LLM output post-processing. Setting verbose to True will print out some internal states of the Chain object while it is being run.
conversation = ConversationChain(
llm=chat,
memory=ConversationBufferMemory(),
verbose=True
)
conversation.run("What is ChatGPT?")
> Entering new ConversationChain chain...
Prompt after formatting:
The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
Current conversation:
Human: What is ChatGPT?
AI:
> Finished chain.
'ChatGPT is an AI language model developed by OpenAI. It is based on the GPT-3 architecture and is capable of generating human-like responses to text prompts. ChatGPT has been trained on a massive amount of text data and can understand and respond to a wide range of topics. It is often used for chatbots, virtual assistants, and other conversational AI applications.'
Combine chains with the SequentialChain#
Often the next step after calling a language model is to make a series of calls to language models. We can do this using sequential chains, which are chains that execute their links in a predefined order. Specifically, we will use the SimpleSequentialChain. This is the simplest type of sequential chain, where each step has a single input/output, and the output of one step is the input to the next.
In this tutorial, our sequential chain will:
First, create a company name for a product. We will reuse the LLMChain we’d previously initialized to create this company name.
Then, create a catchphrase for the product. We will initialize a new LLMChain to create this catchphrase, as shown below.
second_prompt = PromptTemplate(
input_variables=["company_name"],
template="Write a catchphrase for the following company: {company_name}",
)
chain_two = LLMChain(llm=llm, prompt=second_prompt)
Now we can combine the two LLMChains, so that we can create a company name and a catchphrase in a single step.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[chain, chain_two], verbose=True)
# Run the chain specifying only the input variable for the first chain.
catchphrase = overall_chain.run("colorful socks")
print(catchphrase)
> Entering new SimpleSequentialChain chain...
Rainbow Socks Co.
"Put a little rainbow in your step!"
> Finished chain.
"Put a little rainbow in your step!"
Create a custom chain with the Chain class#
LangChain provides many chains out of the box, but sometimes you may want to create a custom chain for your specific use case. For this example, we will create a custom chain that concatenates the outputs of 2 LLMChains.
In order to create a custom chain:
Start by subclassing the Chain class,
Fill out the input_keys and output_keys properties,
Add the _call method that shows how to execute the chain.
These steps are demonstrated in the example below:
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from typing import Dict, List
class ConcatenateChain(Chain):
    chain_1: LLMChain
    chain_2: LLMChain
    @property
    def input_keys(self) -> List[str]:
        # Union of the input keys of the two chains.
        all_input_vars = set(self.chain_1.input_keys).union(set(self.chain_2.input_keys))
        return list(all_input_vars)
    @property
    def output_keys(self) -> List[str]:
        return ['concat_output']
    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        output_1 = self.chain_1.run(inputs)
        output_2 = self.chain_2.run(inputs)
        return {'concat_output': output_1 + output_2}
Now, we can try running the chain that we defined.
prompt_1 = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain_1 = LLMChain(llm=llm, prompt=prompt_1)
prompt_2 = PromptTemplate(
input_variables=["product"],
template="What is a good slogan for a company that makes {product}?",
)
chain_2 = LLMChain(llm=llm, prompt=prompt_2)
concat_chain = ConcatenateChain(chain_1=chain_1, chain_2=chain_2)
concat_output = concat_chain.run("colorful socks")
print(f"Concatenated output:\n{concat_output}")
Concatenated output:
Funky Footwear Company
"Brighten Up Your Day with Our Colorful Socks!"
That’s it! For more details about how to do cool things with Chains, check out the how-to guide for chains.
Serialization
Contents
Saving a chain to disk
Loading a chain from disk
Saving components separately
Serialization#
This notebook covers how to serialize chains to and from disk. The serialization format we use is json or yaml. Currently, only some chains support this type of serialization. We will grow the number of supported chains over time.
Saving a chain to disk#
First, let’s go over how to save a chain to disk. This can be done with the .save method, and specifying a file path with a json or yaml extension.
from langchain import PromptTemplate, OpenAI, LLMChain
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0), verbose=True)
llm_chain.save("llm_chain.json")
Let’s now take a look at what’s inside this saved file
!cat llm_chain.json
{
"memory": null,
"verbose": true,
"prompt": {
"input_variables": [
"question"
],
"output_parser": null,
"template": "Question: {question}\n\nAnswer: Let's think step by step.",
"template_format": "f-string"
},
"llm": {
"model_name": "text-davinci-003",
"temperature": 0.0,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"logit_bias": {},
"_type": "openai"
},
"output_key": "text",
"_type": "llm_chain"
}
Loading a chain from disk#
We can load a chain from disk by using the load_chain method.
from langchain.chains import load_chain
chain = load_chain("llm_chain.json")
chain.run("whats 2 + 2")
> Entering new LLMChain chain...
Prompt after formatting:
Question: whats 2 + 2
Answer: Let's think step by step.
> Finished chain.
' 2 + 2 = 4'
Saving components separately#
In the above example, we can see that the prompt and llm configuration information is saved in the same json as the overall chain. Alternatively, we can split them up and save them separately. This is often useful to make the saved components more modular. In order to do this, we just need to specify llm_path instead of the llm component, and prompt_path instead of the prompt component.
llm_chain.prompt.save("prompt.json")
!cat prompt.json
{
"input_variables": [
"question"
],
"output_parser": null,
"template": "Question: {question}\n\nAnswer: Let's think step by step.",
"template_format": "f-string"
}
llm_chain.llm.save("llm.json")
!cat llm.json
{
"model_name": "text-davinci-003",
"temperature": 0.0,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"n": 1,
"best_of": 1,
"request_timeout": null,
"logit_bias": {},
"_type": "openai"
}
config = {
"memory": None,
"verbose": True,
"prompt_path": "prompt.json",
"llm_path": "llm.json",
"output_key": "text",
"_type": "llm_chain"
}
import json
with open("llm_chain_separate.json", "w") as f:
json.dump(config, f, indent=2)
!cat llm_chain_separate.json
{
"memory": null,
"verbose": true,
"prompt_path": "prompt.json",
"llm_path": "llm.json",
"output_key": "text",
"_type": "llm_chain"
}
We can then load it in the same way
chain = load_chain("llm_chain_separate.json")
chain.run("whats 2 + 2")
> Entering new LLMChain chain...
Prompt after formatting:
Question: whats 2 + 2
Answer: Let's think step by step.
> Finished chain.
' 2 + 2 = 4'
Loading from LangChainHub#
This notebook covers how to load chains from LangChainHub.
from langchain.chains import load_chain
chain = load_chain("lc://chains/llm-math/chain.json")
chain.run("whats 2 raised to .12")
> Entering new LLMMathChain chain...
whats 2 raised to .12
Answer: 1.0791812460476249
> Finished chain.
'Answer: 1.0791812460476249'
Sometimes chains will require extra arguments that were not serialized with the chain. For example, a chain that does question answering over a vector database will require a vector database.
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain import OpenAI, VectorDBQA
from langchain.document_loaders import TextLoader
loader = TextLoader('../../state_of_the_union.txt')
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Chroma.from_documents(texts, embeddings)
Running Chroma using direct local API.
Using DuckDB in-memory for database. Data will be transient.
chain = load_chain("lc://chains/vector-db-qa/stuff/chain.json", vectorstore=vectorstore)
query = "What did the president say about Ketanji Brown Jackson"
chain.run(query)
" The president said that Ketanji Brown Jackson is a Circuit Court of Appeals Judge, one of the nation's top legal minds, a former top litigator in private practice, a former federal public defender, has received a broad range of support from the Fraternal Order of Police to former judges appointed by Democrats and Republicans, and will continue Justice Breyer's legacy of excellence."
Router Chains
Contents
LLMRouterChain
EmbeddingRouterChain
Router Chains#
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the next chain to use for a given input.
Router chains are made up of two components:
The RouterChain itself (responsible for selecting the next chain to call)
destination_chains: chains that the router chain can route to
In this notebook we will focus on the different types of routing chains. We will show these routing chains used in a MultiPromptChain to create a question-answering chain that selects the prompt which is most relevant for a given question, and then answers the question using that prompt.
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.chains.llm import LLMChain
from langchain.prompts import PromptTemplate
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.
Here is a question:
{input}"""
prompt_infos = [
{
"name": "physics",
"description": "Good for answering questions about physics",
"prompt_template": physics_template
},
{
"name": "math",
"description": "Good for answering math questions",
"prompt_template": math_template
}
]
llm = OpenAI()
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    prompt_template = p_info["prompt_template"]
    prompt = PromptTemplate(template=prompt_template, input_variables=["input"])
    chain = LLMChain(llm=llm, prompt=prompt)
    destination_chains[name] = chain
default_chain = ConversationChain(llm=llm, output_key="text")
LLMRouterChain#
This chain uses an LLM to determine how to route things.
from langchain.chains.router.llm_router import LLMRouterChain, RouterOutputParser
from langchain.chains.router.multi_prompt_prompt import MULTI_PROMPT_ROUTER_TEMPLATE
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(
destinations=destinations_str
)
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True)
print(chain.run("What is black body radiation?"))
> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation is the term used to describe the electromagnetic radiation emitted by a “black body”—an object that absorbs all radiation incident upon it. A black body is an idealized physical body that absorbs all incident electromagnetic radiation, regardless of frequency or angle of incidence. It does not reflect, emit or transmit energy. This type of radiation is the result of the thermal motion of the body's atoms and molecules, and it is emitted at all wavelengths. The spectrum of radiation emitted is described by Planck's law and is known as the black body spectrum.
print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))
> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.
?
The answer is 43. One plus 43 is 44 which is divisible by 3.
print(chain.run("What is the name of the type of cloud that rins"))
> Entering new MultiPromptChain chain...
None: {'input': 'What is the name of the type of cloud that rains?'}
> Finished chain.
The type of cloud that rains is called a cumulonimbus cloud. It is a tall and dense cloud that is often accompanied by thunder and lightning.
EmbeddingRouterChain#
The EmbeddingRouterChain uses embeddings and similarity to route between destination chains.
from langchain.chains.router.embedding_router import EmbeddingRouterChain
from langchain.embeddings import CohereEmbeddings
from langchain.vectorstores import Chroma
names_and_descriptions = [
("physics", ["for questions about physics"]),
("math", ["for questions about math"]),
]
router_chain = EmbeddingRouterChain.from_names_and_descriptions(
names_and_descriptions, Chroma, CohereEmbeddings(), routing_keys=["input"]
)
Using embedded DuckDB without persistence: data will be transient
chain = MultiPromptChain(router_chain=router_chain, destination_chains=destination_chains, default_chain=default_chain, verbose=True)
print(chain.run("What is black body radiation?"))
> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation is the emission of energy from an idealized physical body (known as a black body) that is in thermal equilibrium with its environment. It is emitted in a characteristic pattern of frequencies known as a black-body spectrum, which depends only on the temperature of the body. The study of black body radiation is an important part of astrophysics and atmospheric physics, as the thermal radiation emitted by stars and planets can often be approximated as black body radiation.
print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))
> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.
?
Answer: The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43.
Creating a custom Chain#
To implement your own custom chain you can subclass Chain and implement the following methods:
from __future__ import annotations
from typing import Any, Dict, List, Optional
from pydantic import Extra
from langchain.base_language import BaseLanguageModel
from langchain.callbacks.manager import (
AsyncCallbackManagerForChainRun,
CallbackManagerForChainRun,
)
from langchain.chains.base import Chain
from langchain.prompts.base import BasePromptTemplate
class MyCustomChain(Chain):
"""
An example of a custom chain.
"""
prompt: BasePromptTemplate
"""Prompt object to use."""
llm: BaseLanguageModel
output_key: str = "text" #: :meta private:
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
arbitrary_types_allowed = True
@property
def input_keys(self) -> List[str]:
"""Will be whatever keys the prompt expects.
:meta private:
"""
return self.prompt.input_variables
@property
def output_keys(self) -> List[str]:
"""Will always return text key.
:meta private:
"""
return [self.output_key]
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
# Your custom chain logic goes here
# This is just an example that mimics LLMChain
prompt_value = self.prompt.format_prompt(**inputs)
# Whenever you call a language model, or another chain, you should pass
# a callback manager to it. This allows the inner run to be tracked by
# any callbacks that are registered on the outer run.
# You can always obtain a callback manager for this by calling
# `run_manager.get_child()` as shown below.
response = self.llm.generate_prompt(
[prompt_value],
callbacks=run_manager.get_child() if run_manager else None
)
# If you want to log something about this run, you can do so by calling
# methods on the `run_manager`, as shown below. This will trigger any
# callbacks that are registered for that event.
if run_manager:
run_manager.on_text("Log something about this run")
return {self.output_key: response.generations[0][0].text}
async def _acall(
self,
inputs: Dict[str, Any],
run_manager: Optional[AsyncCallbackManagerForChainRun] = None,
) -> Dict[str, str]:
# Your custom chain logic goes here
# This is just an example that mimics LLMChain
prompt_value = self.prompt.format_prompt(**inputs)
# Whenever you call a language model, or another chain, you should pass
# a callback manager to it. This allows the inner run to be tracked by
# any callbacks that are registered on the outer run.
# You can always obtain a callback manager for this by calling
# `run_manager.get_child()` as shown below.
response = await self.llm.agenerate_prompt(
[prompt_value],
callbacks=run_manager.get_child() if run_manager else None
)
# If you want to log something about this run, you can do so by calling
# methods on the `run_manager`, as shown below. This will trigger any
# callbacks that are registered for that event.
if run_manager:
await run_manager.on_text("Log something about this run")
return {self.output_key: response.generations[0][0].text}
@property
def _chain_type(self) -> str:
return "my_custom_chain"
from langchain.callbacks.stdout import StdOutCallbackHandler
from langchain.chat_models.openai import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
chain = MyCustomChain(
prompt=PromptTemplate.from_template('tell us a joke about {topic}'),
llm=ChatOpenAI()
)
chain.run({'topic': 'callbacks'}, callbacks=[StdOutCallbackHandler()])
> Entering new MyCustomChain chain...
Log something about this run
> Finished chain.
'Why did the callback function feel lonely? Because it was always waiting for someone to call it back!'
Transformation Chain#
This notebook showcases using a generic transformation chain.
As an example, we will create a dummy transformation that takes in a super long text, filters the text to only the first 3 paragraphs, and then passes that into an LLMChain to summarize those.
from langchain.chains import TransformChain, LLMChain, SimpleSequentialChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
with open("../../state_of_the_union.txt") as f:
state_of_the_union = f.read()
def transform_func(inputs: dict) -> dict:
text = inputs["text"]
shortened_text = "\n\n".join(text.split("\n\n")[:3])
return {"output_text": shortened_text}
transform_chain = TransformChain(input_variables=["text"], output_variables=["output_text"], transform=transform_func)
template = """Summarize this text:
{output_text}
Summary:"""
prompt = PromptTemplate(input_variables=["output_text"], template=template)
llm_chain = LLMChain(llm=OpenAI(), prompt=prompt)
sequential_chain = SimpleSequentialChain(chains=[transform_chain, llm_chain])
sequential_chain.run(state_of_the_union)
' The speaker addresses the nation, noting that while last year they were kept apart due to COVID-19, this year they are together again. They are reminded that regardless of their political affiliations, they are all Americans.'
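The transformation step can also be exercised on its own; a minimal sketch (illustrative):
# Run just the transform step: returns the first three paragraphs, per transform_func
transform_chain.run(state_of_the_union)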
Sequential Chains#
The next step after calling a language model is to make a series of calls to a language model. This is particularly useful when you want to take the output from one call and use it as the input to another.
In this notebook we will walk through some examples of how to do this using sequential chains. Sequential chains are defined as a series of chains, called in deterministic order. There are two types of sequential chains:
SimpleSequentialChain: The simplest form of sequential chains, where each step has a singular input/output, and the output of one step is the input to the next.
SequentialChain: A more general form of sequential chains, allowing for multiple inputs/outputs.
SimpleSequentialChain#
In this series of chains, each individual chain has a single input and a single output, and the output of one step is used as input to the next.
Let’s walk through a toy example of doing this, where the first chain takes in the title of an imaginary play and then generates a synopsis for that title, and the second chain takes in the synopsis of that play and generates an imaginary review for that play.
from langchain.llms import OpenAI
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
# This is an LLMChain to write a synopsis given a title of a play.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play, it is your job to write a synopsis for that title.
Title: {title}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title"], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template)
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SimpleSequentialChain
overall_chain = SimpleSequentialChain(chains=[synopsis_chain, review_chain], verbose=True)
review = overall_chain.run("Tragedy at sunset on the beach")
> Entering new SimpleSequentialChain chain...
Tragedy at Sunset on the Beach is a story of a young couple, Jack and Sarah, who are in love and looking forward to their future together. On the night of their anniversary, they decide to take a walk on the beach at sunset. As they are walking, they come across a mysterious figure, who tells them that their love will be tested in the near future.
The figure then tells the couple that the sun will soon set, and with it, a tragedy will strike. If Jack and Sarah can stay together and pass the test, they will be granted everlasting love. However, if they fail, their love will be lost forever.
The play follows the couple as they struggle to stay together and battle the forces that threaten to tear them apart. Despite the tragedy that awaits them, they remain devoted to one another and fight to keep their love alive. In the end, the couple must decide whether to take a chance on their future together or succumb to the tragedy of the sunset.
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
> Finished chain.
print(review)
Tragedy at Sunset on the Beach is an emotionally gripping story of love, hope, and sacrifice. Through the story of Jack and Sarah, the audience is taken on a journey of self-discovery and the power of love to overcome even the greatest of obstacles.
The play's talented cast brings the characters to life, allowing us to feel the depths of their emotion and the intensity of their struggle. With its compelling story and captivating performances, this play is sure to draw in audiences and leave them on the edge of their seats.
The play's setting of the beach at sunset adds a touch of poignancy and romanticism to the story, while the mysterious figure serves to keep the audience enthralled. Overall, Tragedy at Sunset on the Beach is an engaging and thought-provoking play that is sure to leave audiences feeling inspired and hopeful.
Sequential Chain#
Of course, not all sequential chains will be as simple as passing a single string as an argument and getting a single string as output for all steps in the chain. In this next example, we will experiment with more complex chains that involve multiple inputs, and where there are also multiple final outputs.
Of particular importance is how we name the input/output variables. In the above example we didn’t have to think about that because we were just passing the output of one chain directly as input to the next, but here we do have to worry about it because we have multiple inputs.
# This is an LLMChain to write a synopsis given a title of a play and the era it is set in.
llm = OpenAI(temperature=.7)
template = """You are a playwright. Given the title of play and the era it is set in, it is your job to write a synopsis for that title.
Title: {title}
Era: {era}
Playwright: This is a synopsis for the above play:"""
prompt_template = PromptTemplate(input_variables=["title", 'era'], template=template)
synopsis_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="synopsis")
# This is an LLMChain to write a review of a play given a synopsis.
llm = OpenAI(temperature=.7)
template = """You are a play critic from the New York Times. Given the synopsis of play, it is your job to write a review for that play.
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:"""
prompt_template = PromptTemplate(input_variables=["synopsis"], template=template)
review_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="review")
# This is the overall chain where we run these two chains in sequence.
from langchain.chains import SequentialChain
overall_chain = SequentialChain(
chains=[synopsis_chain, review_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["synopsis", "review"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'synopsis': "\n\nThe play follows the story of John, a young man from a wealthy Victorian family, who dreams of a better life for himself. He soon meets a beautiful young woman named Mary, who shares his dream. The two fall in love and decide to elope and start a new life together.\n\nOn their journey, they make their way to a beach at sunset, where they plan to exchange their vows of love. Unbeknownst to them, their plans are overheard by John's father, who has been tracking them. He follows them to the beach and, in a fit of rage, confronts them. \n\nA physical altercation ensues, and in the struggle, John's father accidentally stabs Mary in the chest with his sword. The two are left in shock and disbelief as Mary dies in John's arms, her last words being a declaration of her love for him.\n\nThe tragedy of the play comes to a head when John, broken and with no hope of a future, chooses to take his own life by jumping off the cliffs into the sea below. \n\nThe play is a powerful story of love, hope, and loss set against the backdrop of 19th century England.",
'review': "\n\nThe latest production from playwright X is a powerful and heartbreaking story of love and loss set against the backdrop of 19th century England. The play follows John, a young man from a wealthy Victorian family, and Mary, a beautiful young woman with whom he falls in love. The two decide to elope and start a new life together, and the audience is taken on a journey of hope and optimism for the future.\n\nUnfortunately, their dreams are cut short when John's father discovers them and in a fit of rage, fatally stabs Mary. The tragedy of the play is further compounded when John, broken and without hope, takes his own life. The storyline is not only realistic, but also emotionally compelling, drawing the audience in from start to finish.\n\nThe acting was also commendable, with the actors delivering believable and nuanced performances. The playwright and director have successfully crafted a timeless tale of love and loss that will resonate with audiences for years to come. Highly recommended."}
Memory in Sequential Chains#
Sometimes you may want to pass along some context to use in each step of the chain or in a later part of the chain, but maintaining and chaining together the input/output variables can quickly get messy. Using SimpleMemory is a convenient way to manage this and clean up your chains.
For example, using the previous playwright SequentialChain, let’s say you wanted to include some context about the date, time, and location of the play, and, using the generated synopsis and review, create some social media post text. You could add these new context variables as input_variables, or you can add a SimpleMemory to the chain to manage this context:
from langchain.chains import SequentialChain
from langchain.memory import SimpleMemory
llm = OpenAI(temperature=.7)
template = """You are a social media manager for a theater company. Given the title of play, the era it is set in, the date,time and location, the synopsis of the play, and the review of the play, it is your job to write a social media post for that play.
Here is some context about the time and location of the play:
Date and Time: {time}
Location: {location}
Play Synopsis:
{synopsis}
Review from a New York Times play critic of the above play:
{review}
Social Media Post:
"""
prompt_template = PromptTemplate(input_variables=["synopsis", "review", "time", "location"], template=template)
social_chain = LLMChain(llm=llm, prompt=prompt_template, output_key="social_post_text")
overall_chain = SequentialChain(
memory=SimpleMemory(memories={"time": "December 25th, 8pm PST", "location": "Theater in the Park"}),
chains=[synopsis_chain, review_chain, social_chain],
input_variables=["era", "title"],
# Here we return multiple variables
output_variables=["social_post_text"],
verbose=True)
overall_chain({"title":"Tragedy at sunset on the beach", "era": "Victorian England"})
> Entering new SequentialChain chain...
> Finished chain.
{'title': 'Tragedy at sunset on the beach',
'era': 'Victorian England',
'time': 'December 25th, 8pm PST',
'location': 'Theater in the Park',
'social_post_text': "\nSpend your Christmas night with us at Theater in the Park and experience the heartbreaking story of love and loss that is 'A Walk on the Beach'. Set in Victorian England, this romantic tragedy follows the story of Frances and Edward, a young couple whose love is tragically cut short. Don't miss this emotional and thought-provoking production that is sure to leave you in tears. #AWalkOnTheBeach #LoveAndLoss #TheaterInThePark #VictorianEngland"}
Async API for Chain#
LangChain provides async support for Chains by leveraging the asyncio library.
Async methods are currently supported in LLMChain (through arun, apredict, and acall), LLMMathChain (through arun and acall), ChatVectorDBChain, and QA chains. Async support for other chains is on the roadmap.
import asyncio
import time
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
def generate_serially():
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
for _ in range(5):
resp = chain.run(product="toothpaste")
print(resp)
async def async_generate(chain):
resp = await chain.arun(product="toothpaste")
print(resp)
async def generate_concurrently():
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["product"],
template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
tasks = [async_generate(chain) for _ in range(5)]
await asyncio.gather(*tasks)
s = time.perf_counter()
# If running this outside of Jupyter, use asyncio.run(generate_concurrently())
await generate_concurrently()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Concurrent executed in {elapsed:0.2f} seconds." + '\033[0m')
s = time.perf_counter()
generate_serially()
elapsed = time.perf_counter() - s
print('\033[1m' + f"Serial executed in {elapsed:0.2f} seconds." + '\033[0m')
BrightSmile Toothpaste Company
BrightSmile Toothpaste Co.
BrightSmile Toothpaste
Gleaming Smile Inc.
SparkleSmile Toothpaste
Concurrent executed in 1.54 seconds.
BrightSmile Toothpaste Co.
MintyFresh Toothpaste Co.
SparkleSmile Toothpaste.
Pearly Whites Toothpaste Co.
BrightSmile Toothpaste.
Serial executed in 6.38 seconds.
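The arun method used above has async siblings for the other calling styles; a minimal sketch, assuming an LLMChain named chain like the one built in generate_concurrently (as with the await above, run this inside an event loop such as a Jupyter cell):
# Keyword-style inputs, the async counterpart of predict
resp = await chain.apredict(product="toothpaste")
# Dict in, dict out, the async counterpart of __call__
result = await chain.acall({"product": "toothpaste"})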
LLM Chain#
LLMChain is perhaps one of the most popular ways of querying an LLM object. It formats the prompt template using the input key values provided (and also memory key values, if available), passes the formatted string to the LLM, and returns the LLM output. Below we show additional functionality of the LLMChain class.
from langchain import PromptTemplate, OpenAI, LLMChain
prompt_template = "What is a good name for a company that makes {product}?"
llm = OpenAI(temperature=0)
llm_chain = LLMChain(
llm=llm,
prompt=PromptTemplate.from_template(prompt_template)
)
llm_chain("colorful socks")
{'product': 'colorful socks', 'text': '\n\nSocktastic!'}
Additional ways of running LLM Chain#
Aside from the __call__ and run methods shared by all Chain objects (see Getting Started to learn more), LLMChain offers a few more ways of calling the chain logic:
apply allows you to run the chain against a list of inputs:
input_list = [
{"product": "socks"},
{"product": "computer"},
{"product": "shoes"}
]
llm_chain.apply(input_list)
[{'text': '\n\nSocktastic!'},
{'text': '\n\nTechCore Solutions.'},
{'text': '\n\nFootwear Factory.'}]
generate is similar to apply, except it returns an LLMResult instead of a string. An LLMResult often contains useful generation info, such as token usage and the finish reason.
llm_chain.generate(input_list)
LLMResult(generations=[[Generation(text='\n\nSocktastic!', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nTechCore Solutions.', generation_info={'finish_reason': 'stop', 'logprobs': None})], [Generation(text='\n\nFootwear Factory.', generation_info={'finish_reason': 'stop', 'logprobs': None})]], llm_output={'token_usage': {'prompt_tokens': 36, 'total_tokens': 55, 'completion_tokens': 19}, 'model_name': 'text-davinci-003'})
predict is similar to the run method, except that the input keys are specified as keyword arguments instead of a Python dict.
# Single input example
llm_chain.predict(product="colorful socks")
'\n\nSocktastic!'
# Multiple inputs example
template = """Tell me a {adjective} joke about {subject}."""
prompt = PromptTemplate(template=template, input_variables=["adjective", "subject"])
llm_chain = LLMChain(prompt=prompt, llm=OpenAI(temperature=0))
llm_chain.predict(adjective="sad", subject="ducks")
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
Parsing the outputs#
By default, LLMChain does not parse the output even if the underlying prompt object has an output parser. If you would like to apply that output parser on the LLM output, use predict_and_parse instead of predict and apply_and_parse instead of apply.
With predict:
from langchain.output_parsers import CommaSeparatedListOutputParser
output_parser = CommaSeparatedListOutputParser()
template = """List all the colors in a rainbow"""
prompt = PromptTemplate(template=template, input_variables=[], output_parser=output_parser)
llm_chain = LLMChain(prompt=prompt, llm=llm)
llm_chain.predict()
'\n\nRed, orange, yellow, green, blue, indigo, violet'
With predict_and_parse:
llm_chain.predict_and_parse()
['Red', 'orange', 'yellow', 'green', 'blue', 'indigo', 'violet']
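apply_and_parse is the batch counterpart of predict_and_parse; a minimal sketch using the same zero-input prompt (the empty dicts stand in for per-call inputs):
# Batch version: each generation is run through the prompt's output parser
llm_chain.apply_and_parse([{}, {}])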
Initialize from string#
You can also construct an LLMChain from a string template directly.
template = """Tell me a {adjective} joke about {subject}."""
llm_chain = LLMChain.from_string(llm=llm, template=template)
llm_chain.predict(adjective="sad", subject="ducks")
'\n\nQ: What did the duck say when his friend died?\nA: Quack, quack, goodbye.'
Router Chains: Selecting from multiple prompts with MultiPromptChain#
This notebook demonstrates how to use the RouterChain paradigm to create a chain that dynamically selects the prompt to use for a given input. Specifically, we show how to use the MultiPromptChain to create a question-answering chain that selects the prompt most relevant to a given question, and then answers the question using that prompt.
from langchain.chains.router import MultiPromptChain
from langchain.llms import OpenAI
physics_template = """You are a very smart physics professor. \
You are great at answering questions about physics in a concise and easy to understand manner. \
When you don't know the answer to a question you admit that you don't know.
Here is a question:
{input}"""
math_template = """You are a very good mathematician. You are great at answering math questions. \
You are so good because you are able to break down hard problems into their component parts, \
answer the component parts, and then put them together to answer the broader question.
Here is a question:
{input}"""
prompt_infos = [
{
"name": "physics",
"description": "Good for answering questions about physics",
"prompt_template": physics_template
},
{
"name": "math",
"description": "Good for answering math questions",
"prompt_template": math_template
}
]
chain = MultiPromptChain.from_prompts(OpenAI(), prompt_infos, verbose=True)
print(chain.run("What is black body radiation?"))
> Entering new MultiPromptChain chain...
physics: {'input': 'What is black body radiation?'}
> Finished chain.
Black body radiation is the emission of electromagnetic radiation from a body due to its temperature. It is a type of thermal radiation that is emitted from the surface of all objects that are at a temperature above absolute zero. It is a spectrum of radiation that is influenced by the temperature of the body and is independent of the composition of the emitting material.
print(chain.run("What is the first prime number greater than 40 such that one plus the prime number is divisible by 3"))
> Entering new MultiPromptChain chain...
math: {'input': 'What is the first prime number greater than 40 such that one plus the prime number is divisible by 3'}
> Finished chain.
?
The first prime number greater than 40 such that one plus the prime number is divisible by 3 is 43. To solve this problem, we can break down the question into two parts: finding the first prime number greater than 40, and then finding a number that is divisible by 3.
The first step is to find the first prime number greater than 40. A prime number is a number that is only divisible by 1 and itself. The next prime number after 40 is 41.
The second step is to find a number that is divisible by 3. To do this, we can add 1 to 41, which gives us 42. Now, we can check if 42 is divisible by 3. 42 divided by 3 is 14, so 42 is divisible by 3.
Therefore, the answer to the question is 43.
print(chain.run("What is the name of the type of cloud that rins"))
> Entering new MultiPromptChain chain...
None: {'input': 'What is the name of the type of cloud that rains?'}
> Finished chain.
The type of cloud that typically produces rain is called a cumulonimbus cloud. This type of cloud is characterized by its large vertical extent and can produce thunderstorms and heavy precipitation. Is there anything else you'd like to know?
PAL#
Implements Program-Aided Language Models (PAL), as in https://arxiv.org/pdf/2211.10435.pdf.
from langchain.chains import PALChain
from langchain import OpenAI
llm = OpenAI(temperature=0, max_tokens=512)
Math Prompt#
pal_chain = PALChain.from_math_prompt(llm, verbose=True)
question = "Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"
pal_chain.run(question)
> Entering new PALChain chain...
def solution():
"""Jan has three times the number of pets as Marcia. Marcia has two more pets than Cindy. If Cindy has four pets, how many total pets do the three have?"""
cindy_pets = 4
marcia_pets = cindy_pets + 2
jan_pets = marcia_pets * 3
total_pets = cindy_pets + marcia_pets + jan_pets
result = total_pets
return result
> Finished chain.
'28'
Colored Objects#
pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True)
question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"
pal_chain.run(question)
> Entering new PALChain chain...
# Put objects into a list to record ordering
objects = []
objects += [('booklet', 'blue')] * 2
objects += [('booklet', 'purple')] * 2
objects += [('sunglasses', 'yellow')] * 2
# Remove all pairs of sunglasses
objects = [object for object in objects if object[0] != 'sunglasses']
# Count number of purple objects
num_purple = len([object for object in objects if object[1] == 'purple'])
answer = num_purple
> Finished PALChain chain.
'2'
Intermediate Steps#
You can also use the intermediate steps flag to return the code executed that generates the answer.
pal_chain = PALChain.from_colored_object_prompt(llm, verbose=True, return_intermediate_steps=True)
question = "On the desk, you see two blue booklets, two purple booklets, and two yellow pairs of sunglasses. If I remove all the pairs of sunglasses from the desk, how many purple items remain on it?"
result = pal_chain({"question": question})
> Entering new PALChain chain...
# Put objects into a list to record ordering
objects = []
objects += [('booklet', 'blue')] * 2
objects += [('booklet', 'purple')] * 2
objects += [('sunglasses', 'yellow')] * 2
# Remove all pairs of sunglasses
objects = [object for object in objects if object[0] != 'sunglasses']
# Count number of purple objects
num_purple = len([object for object in objects if object[1] == 'purple'])
answer = num_purple
> Finished chain.
result['intermediate_steps']
"# Put objects into a list to record ordering\nobjects = []\nobjects += [('booklet', 'blue')] * 2\nobjects += [('booklet', 'purple')] * 2\nobjects += [('sunglasses', 'yellow')] * 2\n\n# Remove all pairs of sunglasses\nobjects = [object for object in objects if object[0] != 'sunglasses']\n\n# Count number of purple objects\nnum_purple = len([object for object in objects if object[1] == 'purple'])\nanswer = num_purple"
FLARE#
This notebook is an implementation of Forward-Looking Active REtrieval augmented generation (FLARE).
Please see the original repo here.
The basic idea is:
Start answering a question
If the model starts generating tokens it is uncertain about, look up relevant documents
Use those documents to continue generating
Repeat until finished
There is a lot of cool detail in how the lookup of relevant documents is done.
Basically, the tokens that the model is uncertain about are highlighted, and then an LLM is called to generate a question that would lead to that answer. For example, if the generated text is Joe Biden went to Harvard, and the token the model was uncertain about was Harvard, then a good generated question would be where did Joe Biden go to college. This generated question is then used in a retrieval step to fetch relevant documents, as outlined in the sketch below.
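Putting those pieces together, the control flow looks roughly like the following. This is schematic pseudocode, not the actual FlareChain implementation; answer_llm, question_llm, and the retriever here are hypothetical callables.
# Schematic FLARE loop (hypothetical helpers; illustrative only)
def flare_answer(question, answer_llm, question_llm, retriever,
                 max_generation_len=164, min_prob=0.3):
    response = ""
    # The response prompt asks the LLM to emit FINISHED when it is done
    while "FINISHED" not in response:
        # 1. Tentatively generate the next chunk, with per-token probabilities
        tokens, probs = answer_llm(question, response, max_generation_len, [])
        low_conf = [t for t, p in zip(tokens, probs) if p < min_prob]
        if not low_conf:
            response += "".join(tokens)  # confident: accept the chunk and continue
            continue
        # 2. For each uncertain span, generate a question whose answer is that span,
        #    then retrieve documents for it
        docs = []
        for span in low_conf:
            lookup_question = question_llm(question, response, span)
            docs.extend(retriever.get_relevant_documents(lookup_question))
        # 3. Regenerate the chunk, this time grounded in the retrieved documents
        tokens, probs = answer_llm(question, response, max_generation_len, docs)
        response += "".join(tokens)
    return response.replace("FINISHED", "").strip()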
In order to set up this chain, we will need three things:
An LLM to generate the answer
An LLM to generate hypothetical questions to use in retrieval
A retriever to look up relevant documents
The LLM that we use to generate the answer needs to return logprobs so we can identify uncertain tokens. For that reason, we HIGHLY recommend that you use the OpenAI wrapper (NB: not the ChatOpenAI wrapper, as that does not return logprobs).
The LLM we use to generate hypothetical questions to use in retrieval can be anything. In this notebook we will use ChatOpenAI because it is fast and cheap.
The retriever can be anything. In this notebook we will use the Serper search engine, because it is cheap.
Other important parameters to understand:
max_generation_len: The maximum number of tokens to generate before stopping to check if any are uncertain
min_prob: Any tokens generated with probability below this will be considered uncertain
Imports#
import os
os.environ["SERPER_API_KEY"] = ""
import re
import numpy as np
from langchain.schema import BaseRetriever
from langchain.utilities import GoogleSerperAPIWrapper
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
from langchain.schema import Document
Retriever#
class SerperSearchRetriever(BaseRetriever):
def __init__(self, search):
self.search = search
def get_relevant_documents(self, query: str):
return [Document(page_content=self.search.run(query))]
async def aget_relevant_documents(self, query: str):
raise NotImplementedError
retriever = SerperSearchRetriever(GoogleSerperAPIWrapper())
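With a real SERPER_API_KEY set above, the retriever can be sanity-checked on its own (illustrative):
# Quick smoke test: fetch search results as a single Document
docs = retriever.get_relevant_documents("What is the LangChain framework?")
print(docs[0].page_content[:200])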
FLARE Chain#
# We set this so we can see what exactly is going on
import langchain
langchain.verbose = True
from langchain.chains import FlareChain
flare = FlareChain.from_llm(
ChatOpenAI(temperature=0),
retriever=retriever,
max_generation_len=164,
min_prob=.3,
)
query = "explain in great detail the difference between the langchain framework and baby agi"
flare.run(query)
> Entering new FlareChain chain...
Current Response:
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> RESPONSE:
> Entering new QuestionGeneratorChain chain...
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " decentralized platform for natural language processing" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " uses a blockchain" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " distributed ledger to" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " process data, allowing for secure and transparent data sharing." is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " set of tools" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " help developers create" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " create an AI system" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> EXISTING PARTIAL RESPONSE:
The Langchain Framework is a decentralized platform for natural language processing (NLP) applications. It uses a blockchain-based distributed ledger to store and process data, allowing for secure and transparent data sharing. The Langchain Framework also provides a set of tools and services to help developers create and deploy NLP applications.
Baby AGI, on the other hand, is an artificial general intelligence (AGI) platform. It uses a combination of deep learning and reinforcement learning to create an AI system that can learn and adapt to new tasks. Baby AGI is designed to be a general-purpose AI system that can be used for a variety of applications, including natural language processing.
In summary, the Langchain Framework is a platform for NLP applications, while Baby AGI is an AI system designed for
The question to which the answer is the term/entity/phrase " NLP applications" is:
> Finished chain.
Generated Questions: ['What is the Langchain Framework?', 'What technology does the Langchain Framework use to store and process data for secure and transparent data sharing?', 'What technology does the Langchain Framework use to store and process data?', 'What does the Langchain Framework use a blockchain-based distributed ledger for?', 'What does the Langchain Framework provide in addition to a decentralized platform for natural language processing applications?', 'What set of tools and services does the Langchain Framework provide?', 'What is the purpose of Baby AGI?', 'What type of applications is the Langchain Framework designed for?']
> Entering new _OpenAIResponseChain chain...
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT: LangChain: Software. LangChain is a software development framework designed to simplify the creation of applications using large language models. LangChain Initial release date: October 2022. LangChain Programming languages: Python and JavaScript. LangChain Developer(s): Harrison Chase. LangChain License: MIT License. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... Type: Software framework. At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. LangChain is a powerful tool that can be used to work with Large Language Models (LLMs). LLMs are very general in nature, which means that while they can ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. LangChain is a software development framework designed to simplify the creation of applications using large language models (LLMs). Written in: Python and JavaScript. Initial release: October 2022. LangChain - The A.I-native developer toolkit We started LangChain with the intent to build a modular and flexible framework for developing A.I- ... LangChain explained in 3 minutes - LangChain is a ... Duration: 3:03. Posted: Apr 13, 2023. LangChain is a framework built to help you build LLM-powered applications more easily by providing you with the following:. LangChain is a framework that enables quick and easy development of applications that make use of Large Language Models, for example, GPT-3. LangChain is a powerful open-source framework for developing applications powered by language models. It connects to the AI models you want to ...
LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Missing: secure | Must include:secure. Blockchain is the best way to secure the data of the shared community. Utilizing the capabilities of the blockchain nobody can read or interfere ... This modern technology consists of a chain of blocks that allows to securely store all committed transactions using shared and distributed ... A Blockchain network is used in the healthcare system to preserve and exchange patient data through hospitals, diagnostic laboratories, pharmacy firms, and ... In this article, I will walk you through the process of using the LangChain.js library with Google Cloud Functions, helping you leverage the ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: transparent | Must include:transparent. This technology keeps a distributed ledger on each blockchain node, making it more secure and transparent. The blockchain network can operate smart ... blockchain technology can offer a highly secured health data ledger to ... framework can be employed to store encrypted healthcare data in a ... In a simplified way, Blockchain is a data structure that stores transactions in an ordered way and linked to the previous block, serving as a ... Blockchain technology is a decentralized, distributed ledger that stores the record of ownership of digital assets. Missing: Langchain | Must include:Langchain.
LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. This documentation covers the steps to integrate Pinecone, a high-performance vector database, with LangChain, a framework for building applications powered ... The ability to connect to any model, ingest any custom database, and build upon a framework that can take action provides numerous use cases for ... With LangChain, developers can use a framework that abstracts the core building blocks of LLM applications. LangChain empowers developers to ... Build a question-answering tool based on financial data with LangChain & Deep Lake's unified & streamable data store. Browse applications built on LangChain technology. Explore PoC and MVP applications created by our community and discover innovative use cases for LangChain ... LangChain is a great framework that can be used for developing applications powered by LLMs. When you intend to enhance your application ... In this blog, we'll introduce you to LangChain and Ray Serve and how to use them to build a search engine using LLM embeddings and a vector ... The LinkChain Framework simplifies embedding creation and storage using Pinecone and Chroma, with code that loads files, splits documents, and creates embedding ... Missing: technology | Must include:technology.
Blockchain is one type of a distributed ledger. Distributed ledgers use independent computers (referred to as nodes) to record, share and ... Missing: Langchain | Must include:Langchain. Blockchain is used in distributed storage software where huge data is broken down into chunks. This is available in encrypted data across a ... People sometimes use the terms 'Blockchain' and 'Distributed Ledger' interchangeably. This post aims to analyze the features of each. A distributed ledger ... Missing: Framework | Must include:Framework. Think of a “distributed ledger” that uses cryptography to allow each participant in the transaction to add to the ledger in a secure way without ... In this paper, we provide an overview of the history of trade settlement and discuss this nascent technology that may now transform traditional ... Missing: Langchain | Must include:Langchain. LangChain is a blockchain-based language education platform that aims to revolutionize the way people learn languages. Missing: Framework | Must include:Framework. It uses the distributed ledger technology framework and Smart contract engine for building scalable Business Blockchain applications. The fabric ... It looks at the assets the use case is handling, the different parties conducting transactions, and the smart contract, distributed ... Are you curious to know how Blockchain and Distributed ... Duration: 44:31. Posted: May 4, 2021. A blockchain is a distributed and immutable ledger to transfer ownership, record transactions, track assets, and ensure transparency, security, trust and value ... Missing: Langchain | Must include:Langchain.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-12
|
LangChain is an intuitive framework created to assist in developing applications driven by a language model, such as OpenAI or Hugging Face. Missing: decentralized | Must include:decentralized. LangChain, created by Harrison Chase, is a Python library that provides out-of-the-box support to build NLP applications using LLMs. Missing: decentralized | Must include:decentralized. LangChain provides a standard interface for chains, enabling developers to create sequences of calls that go beyond a single LLM call. Chains ... Missing: decentralized platform natural. LangChain is a powerful framework that simplifies the process of building advanced language model applications. Missing: platform | Must include:platform. Are your language models ignoring previous instructions ... Duration: 32:23. Posted: Feb 21, 2023. LangChain is a framework that enables quick and easy development of applications ... Prompting is the new way of programming NLP models. Missing: decentralized platform. It then uses natural language processing and machine learning algorithms to search ... Summarization is handled via cohere, QnA is handled via langchain, ... LangChain is a framework for developing applications powered by language models. ... There are several main modules that LangChain provides support for. Missing: decentralized platform. In the healthcare-chain system, blockchain provides an appreciated secure ... The entire process of adding new and previous block data is performed based on ... ChatGPT is a large language model developed by OpenAI, ... tool for a wide range of applications, including natural language processing, ...
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-13
|
LangChain is a powerful tool that can be used to work with Large Language ... If an API key has been provided, create an OpenAI language model instance At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. A tutorial of the six core modules of the LangChain Python package covering models, prompts, chains, agents, indexes, and memory with OpenAI ... LangChain's collection of tools refers to a set of tools provided by the LangChain framework for developing applications powered by language models. LangChain is a framework for developing applications powered by language models. We believe that the most powerful and differentiated applications will not only ... LangChain is an open-source library that provides developers with the tools to build applications powered by large language models (LLMs). LangChain is a framework for including AI from large language models inside data pipelines and applications. This tutorial provides an overview of what you ... Plan-and-Execute Agents · Feature Stores and LLMs · Structured Tools · Auto-Evaluator Opportunities · Callbacks Improvements · Unleashing the power ... Tool: A function that performs a specific duty. This can be things like: Google Search, Database lookup, Python REPL, other chains. · LLM: The language model ... LangChain provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications.
Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. This system is exploring and demonstrating to us the potential of large language models, such as GPT and how it can autonomously perform tasks. Apr 17, 2023
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-14
|
At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
>>> USER INPUT: explain in great detail the difference between the langchain framework and baby agi
>>> RESPONSE:
> Finished chain.
> Finished chain.
' LangChain is a framework for developing applications powered by language models. It provides a standard interface for chains, lots of integrations with other tools, and end-to-end chains for common applications. On the other hand, Baby AGI is an AI system that is exploring and demonstrating the potential of large language models, such as GPT, and how it can autonomously perform tasks. Baby AGI has the ability to complete tasks, generate new tasks based on previous results, and prioritize tasks in real-time. '
llm = OpenAI()
llm(query)
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-15
|
llm = OpenAI()
llm(query)
'\n\nThe Langchain framework and Baby AGI are both artificial intelligence (AI) frameworks that are used to create intelligent agents. The Langchain framework is a supervised learning system that is based on the concept of “language chains”. It uses a set of rules to map natural language inputs to specific outputs. It is a general-purpose AI framework and can be used to build applications such as natural language processing (NLP), chatbots, and more.\n\nBaby AGI, on the other hand, is an unsupervised learning system that uses neural networks and reinforcement learning to learn from its environment. It is used to create intelligent agents that can adapt to changing environments. It is a more advanced AI system and can be used to build more complex applications such as game playing, robotic vision, and more.\n\nThe main difference between the two is that the Langchain framework uses supervised learning while Baby AGI uses unsupervised learning. The Langchain framework is a general-purpose AI framework that can be used for various applications, while Baby AGI is a more advanced AI system that can be used to create more complex applications.'
flare.run("how are the origin stories of langchain and bitcoin similar or different?")
> Entering new FlareChain chain...
Current Response:
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
>>> CONTEXT:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> RESPONSE:
> Entering new QuestionGeneratorChain chain...
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-16
|
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
The question to which the answer is the term/entity/phrase " very different origin" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
The question to which the answer is the term/entity/phrase " 2020 by a" is:
Prompt after formatting:
Given a user input and an existing partial response as context, ask a question to which the answer is the given term/entity/phrase:
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> EXISTING PARTIAL RESPONSE:
Langchain and Bitcoin have very different origin stories. Bitcoin was created by the mysterious Satoshi Nakamoto in 2008 as a decentralized digital currency. Langchain, on the other hand, was created in 2020 by a team of developers as a platform for creating and managing decentralized language learning applications.
FINISHED
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-17
|
FINISHED
The question to which the answer is the term/entity/phrase " developers as a platform for creating and managing decentralized language learning applications." is:
> Finished chain.
Generated Questions: ['How would you describe the origin stories of Langchain and Bitcoin in terms of their similarities or differences?', 'When was Langchain created and by whom?', 'What was the purpose of creating Langchain?']
> Entering new _OpenAIResponseChain chain...
Prompt after formatting:
Respond to the user message using any relevant context. If context is provided, you should ground your answer in that context. Once you're done responding return FINISHED.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-18
|
>>> CONTEXT: Bitcoin and Ethereum have many similarities but different long-term visions and limitations. Ethereum changed from proof of work to proof of ... Bitcoin will be around for many years and examining its white paper origins is a great exercise in understanding why. Satoshi Nakamoto's blueprint describes ... Bitcoin is a new currency that was created in 2009 by an unknown person using the alias Satoshi Nakamoto. Transactions are made with no middle men – meaning, no ... Missing: Langchain | Must include:Langchain. By comparison, Bitcoin transaction speeds are tremendously lower. ... learn about its history and its role in the emergence of the Bitcoin ... LangChain is a powerful framework that simplifies the process of ... tasks like document retrieval, clustering, and similarity comparisons. Key terms: Bitcoin System, Blockchain Technology, ... Furthermore, the research paper will discuss and compare the five payment. Blockchain first appeared in Nakamoto's Bitcoin white paper that describes a new decentralized cryptocurrency [1]. Bitcoin takes the blockchain technology ... Missing: stories | Must include:stories. A score of 0 means there were not enough data for this term. Google trends was accessed on 5 November 2018 with searches for bitcoin, euro, gold ... Contracts, transactions, and records of them provide critical structure in our economic system, but they haven't kept up with the world's digital ... Missing: Langchain | Must include:Langchain. Of course, traders try to make a profit on their portfolio in this way.The difference between investing and trading is the regularity with which ...
After all these giant leaps forward in the LLM space, OpenAI released ChatGPT — thrusting LLMs into the spotlight. LangChain appeared around the same time. Its creator, Harrison Chase, made the first commit in late October 2022. Leaving a short couple of months of development before getting caught in the LLM wave.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
808b16fa5fc3-19
|
At its core, LangChain is a framework built around LLMs. We can use it for chatbots, Generative Question-Answering (GQA), summarization, and much more. The core idea of the library is that we can “chain” together different components to create more advanced use cases around LLMs.
>>> USER INPUT: how are the origin stories of langchain and bitcoin similar or different?
>>> RESPONSE:
> Finished chain.
> Finished chain.
' The origin stories of LangChain and Bitcoin are quite different. Bitcoin was created in 2009 by an unknown person using the alias Satoshi Nakamoto. LangChain was created in late October 2022 by Harrison Chase. Bitcoin is a decentralized cryptocurrency, while LangChain is a framework built around LLMs. '
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/flare.html
|
91cf89266cf6-0
|
GraphCypherQAChain
Contents
Seeding the database
Refresh graph schema information
Querying the graph
Limit the number of results
Return intermediate results
Return direct results
GraphCypherQAChain#
This notebook shows how to use LLMs to provide a natural language interface to a graph database you can query with the Cypher query language.
You will need to have a running Neo4j instance. One option is to create a free Neo4j database instance in their Aura cloud service. You can also run the database locally using the Neo4j Desktop application, or by running a Docker container.
You can run a local Docker container by executing the following script:
docker run \
--name neo4j \
-p 7474:7474 -p 7687:7687 \
-d \
-e NEO4J_AUTH=neo4j/pleaseletmein \
-e NEO4J_PLUGINS=\[\"apoc\"\] \
neo4j:latest
If you are using the Docker container, you need to wait a couple of seconds for the database to start.
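Rather than sleeping for an arbitrary interval, you can poll the Bolt port until it accepts connections. Below is a minimal sketch using only the Python standard library; the host, port, and timeout are assumptions matching the docker run command above, and an open port is only a proxy for full readiness:
import socket
import time
def wait_for_neo4j(host="localhost", port=7687, timeout=60):
    """Poll the Bolt port until the Neo4j container accepts TCP connections."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return  # port is open; the server is accepting connections
        except OSError:
            time.sleep(1)  # not up yet; retry shortly
    raise TimeoutError(f"Neo4j did not become available on {host}:{port}")
wait_for_neo4j()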
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url="bolt://localhost:7687", username="neo4j", password="pleaseletmein"
)
Seeding the database#
Assuming your database is empty, you can populate it using the Cypher query language. The following Cypher statement is idempotent, which means the database information will be the same whether you run it once or multiple times.
graph.query(
"""
MERGE (m:Movie {name:"Top Gun"})
WITH m
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/graph_cypher_qa.html
|
91cf89266cf6-1
|
"""
MERGE (m:Movie {name:"Top Gun"})
WITH m
UNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)
[]
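Because MERGE matches existing nodes and relationships before creating new ones, re-running the statement leaves the graph unchanged. As a quick sanity check, you could count nodes before and after a second run; the count query below is an illustrative assumption, not part of the original notebook:
before = graph.query("MATCH (n) RETURN count(n) AS n")
graph.query(
    """
MERGE (m:Movie {name:"Top Gun"})
WITH m
UNWIND ["Tom Cruise", "Val Kilmer", "Anthony Edwards", "Meg Ryan"] AS actor
MERGE (a:Actor {name:actor})
MERGE (a)-[:ACTED_IN]->(m)
"""
)
after = graph.query("MATCH (n) RETURN count(n) AS n")
assert before == after  # MERGE created nothing new on the second run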
Refresh graph schema information#
If the schema of the database changes, you can refresh the schema information needed to generate Cypher statements.
graph.refresh_schema()
print(graph.get_schema)
Node properties are the following:
[{'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Movie'}, {'properties': [{'property': 'name', 'type': 'STRING'}], 'labels': 'Actor'}]
Relationship properties are the following:
[]
The relationships are the following:
['(:Actor)-[:ACTED_IN]->(:Movie)']
Querying the graph#
We can now use the graph Cypher QA chain to ask questions of the graph.
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0), graph=graph, verbose=True
)
chain.run("Who played in Top Gun?")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})
RETURN a.name
Full Context:
[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]
> Finished chain.
'Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.'
Limit the number of results#
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/graph_cypher_qa.html
|
91cf89266cf6-2
|
Limit the number of results#
You can limit the number of results from the Cypher QA Chain using the top_k parameter.
The default is 10.
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0), graph=graph, verbose=True, top_k=2
)
chain.run("Who played in Top Gun?")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})
RETURN a.name
Full Context:
[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}]
> Finished chain.
'Val Kilmer and Anthony Edwards played in Top Gun.'
Return intermediate results#
You can return intermediate steps from the Cypher QA Chain using the return_intermediate_steps parameter.
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0), graph=graph, verbose=True, return_intermediate_steps=True
)
result = chain("Who played in Top Gun?")
print(f"Intermediate steps: {result['intermediate_steps']}")
print(f"Final answer: {result['result']}")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})
RETURN a.name
Full Context:
[{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]
> Finished chain.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/graph_cypher_qa.html
|
91cf89266cf6-3
|
> Finished chain.
Intermediate steps: [{'query': "MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})\nRETURN a.name"}, {'context': [{'a.name': 'Val Kilmer'}, {'a.name': 'Anthony Edwards'}, {'a.name': 'Meg Ryan'}, {'a.name': 'Tom Cruise'}]}]
Final answer: Val Kilmer, Anthony Edwards, Meg Ryan, and Tom Cruise played in Top Gun.
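As the printed structure shows, intermediate_steps is a list whose first entry holds the generated Cypher and whose second holds the raw query context. A small sketch for extracting them, e.g. for logging or auditing generated queries; the key names follow the output above:
generated_cypher = result["intermediate_steps"][0]["query"]
retrieved_context = result["intermediate_steps"][1]["context"]
print(generated_cypher)
print(retrieved_context)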
Return direct results#
You can return direct results from the Cypher QA Chain using the return_direct parameter.
chain = GraphCypherQAChain.from_llm(
ChatOpenAI(temperature=0), graph=graph, verbose=True, return_direct=True
)
chain.run("Who played in Top Gun?")
> Entering new GraphCypherQAChain chain...
Generated Cypher:
MATCH (a:Actor)-[:ACTED_IN]->(m:Movie {name: 'Top Gun'})
RETURN a.name
> Finished chain.
[{'a.name': 'Val Kilmer'},
{'a.name': 'Anthony Edwards'},
{'a.name': 'Meg Ryan'},
{'a.name': 'Tom Cruise'}]
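With return_direct=True the chain hands back the raw query rows instead of a natural-language answer, so any formatting is left to you. A minimal sketch turning the rows above into a comma-separated string; the 'a.name' key matches the RETURN clause of the generated Cypher:
rows = chain.run("Who played in Top Gun?")
actors = [row["a.name"] for row in rows]
print(", ".join(actors))  # Val Kilmer, Anthony Edwards, Meg Ryan, Tom Cruise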
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/graph_cypher_qa.html
|
13efa52e14c6-0
|
SQL Chain example
Contents
Use Query Checker
Customize Prompt
Return Intermediate Steps
Choosing how to limit the number of rows returned
Adding example rows from each table
Custom Table Info
SQLDatabaseSequentialChain
Using Local Language Models
SQL Chain example#
This example demonstrates the use of the SQLDatabaseChain for answering questions over a database.
Under the hood, LangChain uses SQLAlchemy to connect to SQL databases. The SQLDatabaseChain can therefore be used with any SQL dialect supported by SQLAlchemy, such as MS SQL, MySQL, MariaDB, PostgreSQL, Oracle SQL, Databricks and SQLite. Please refer to the SQLAlchemy documentation for more information about requirements for connecting to your database. For example, a connection to MySQL requires an appropriate connector such as PyMySQL. A URI for a MySQL connection might look like: mysql+pymysql://user:pass@some_mysql_db_address/db_name.
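As an illustration, here is a hedged sketch of connecting to MySQL rather than SQLite; the host, credentials, and database name are placeholders, and the PyMySQL connector must be installed separately:
from langchain import SQLDatabase
# Placeholder credentials -- substitute your own MySQL host, user, and database.
db = SQLDatabase.from_uri("mysql+pymysql://user:pass@some_mysql_db_address/db_name")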
This demonstration uses SQLite and the example Chinook database.
To set it up, follow the instructions on https://database.guide/2-sample-databases-sqlite/, placing the .db file in a notebooks folder at the root of this repository.
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
db = SQLDatabase.from_uri("sqlite:///../../../../notebooks/Chinook.db")
llm = OpenAI(temperature=0, verbose=True)
NOTE: For data-sensitive projects, you can specify return_direct=True in the SQLDatabaseChain initialization to directly return the output of the SQL query without any additional formatting. This prevents the LLM from seeing any contents within the database. Note, however, that the LLM still has access to the database schema (i.e. dialect, table and key names) by default.
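A minimal sketch of that data-sensitive configuration; with return_direct=True the chain returns the raw result of the SQL query rather than passing it back through the LLM:
# The LLM writes the query, but the raw result is returned without LLM post-processing.
safe_chain = SQLDatabaseChain.from_llm(llm, db, return_direct=True)
safe_chain.run("How many employees are there?")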
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
db_chain.run("How many employees are there?")
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-1
|
db_chain.run("How many employees are there?")
> Entering new SQLDatabaseChain chain...
How many employees are there?
SQLQuery:
/workspace/langchain/langchain/sql_database.py:191: SAWarning: Dialect sqlite+pysqlite does *not* support Decimal objects natively, and SQLAlchemy must convert from floating point - rounding errors and other issues may occur. Please consider storing Decimal numbers as strings or integers on this platform for lossless storage.
sample_rows = connection.execute(command)
SELECT COUNT(*) FROM "Employee";
SQLResult: [(8,)]
Answer:There are 8 employees.
> Finished chain.
'There are 8 employees.'
Use Query Checker#
Sometimes the language model generates invalid SQL with small mistakes. These can often be self-corrected by having the LLM check and fix its own query, the same technique used by the SQL Database Agent. Simply specify this option when creating the chain:
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True, use_query_checker=True)
db_chain.run("How many albums by Aerosmith?")
> Entering new SQLDatabaseChain chain...
How many albums by Aerosmith?
SQLQuery:SELECT COUNT(*) FROM Album WHERE ArtistId = 3;
SQLResult: [(1,)]
Answer:There is 1 album by Aerosmith.
> Finished chain.
'There is 1 album by Aerosmith.'
Customize Prompt#
You can also customize the prompt that is used. Here is an example prompting the chain to understand that foobar is the same as the Employee table:
from langchain.prompts.prompt import PromptTemplate
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question: "Question here"
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-2
|
Use the following format:
Question: "Question here"
SQLQuery: "SQL Query to run"
SQLResult: "Result of the SQLQuery"
Answer: "Final answer here"
Only use the following tables:
{table_info}
If someone asks for the table foobar, they really mean the employee table.
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True)
db_chain.run("How many employees are there in the foobar table?")
> Entering new SQLDatabaseChain chain...
How many employees are there in the foobar table?
SQLQuery:SELECT COUNT(*) FROM Employee;
SQLResult: [(8,)]
Answer:There are 8 employees in the foobar table.
> Finished chain.
'There are 8 employees in the foobar table.'
Return Intermediate Steps#
You can also return the intermediate steps of the SQLDatabaseChain. This allows you to access the SQL statement that was generated, as well as the result of running it against the SQL database.
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, use_query_checker=True, return_intermediate_steps=True)
result = db_chain("How many employees are there in the foobar table?")
result["intermediate_steps"]
> Entering new SQLDatabaseChain chain...
How many employees are there in the foobar table?
SQLQuery:SELECT COUNT(*) FROM Employee;
SQLResult: [(8,)]
Answer:There are 8 employees in the foobar table.
> Finished chain.
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-3
|
Answer:There are 8 employees in the foobar table.
> Finished chain.
[{'input': 'How many employees are there in the foobar table?\nSQLQuery:SELECT COUNT(*) FROM Employee;\nSQLResult: [(8,)]\nAnswer:',
'top_k': '5',
'dialect': 'sqlite',
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-4
|
'table_info': '\nCREATE TABLE "Artist" (\n\t"ArtistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("ArtistId")\n)\n\n/*\n3 rows from Artist table:\nArtistId\tName\n1\tAC/DC\n2\tAccept\n3\tAerosmith\n*/\n\n\nCREATE TABLE "Employee" (\n\t"EmployeeId" INTEGER NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"FirstName" NVARCHAR(20) NOT NULL, \n\t"Title" NVARCHAR(30), \n\t"ReportsTo" INTEGER, \n\t"BirthDate" DATETIME, \n\t"HireDate" DATETIME, \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60), \n\tPRIMARY KEY ("EmployeeId"), \n\tFOREIGN KEY("ReportsTo") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Employee table:\nEmployeeId\tLastName\tFirstName\tTitle\tReportsTo\tBirthDate\tHireDate\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\n1\tAdams\tAndrew\tGeneral Manager\tNone\t1962-02-18 00:00:00\t2002-08-14 00:00:00\t11120 Jasper Ave NW\tEdmonton\tAB\tCanada\tT5K 2N1\t+1 (780)
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-5
|
2N1\t+1 (780) 428-9482\t+1 (780) 428-3457\tandrew@chinookcorp.com\n2\tEdwards\tNancy\tSales Manager\t1\t1958-12-08 00:00:00\t2002-05-01 00:00:00\t825 8 Ave SW\tCalgary\tAB\tCanada\tT2P 2T3\t+1 (403) 262-3443\t+1 (403) 262-3322\tnancy@chinookcorp.com\n3\tPeacock\tJane\tSales Support Agent\t2\t1973-08-29 00:00:00\t2002-04-01 00:00:00\t1111 6 Ave SW\tCalgary\tAB\tCanada\tT2P 5M5\t+1 (403) 262-3443\t+1 (403) 262-6712\tjane@chinookcorp.com\n*/\n\n\nCREATE TABLE "Genre" (\n\t"GenreId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("GenreId")\n)\n\n/*\n3 rows from Genre table:\nGenreId\tName\n1\tRock\n2\tJazz\n3\tMetal\n*/\n\n\nCREATE TABLE "MediaType" (\n\t"MediaTypeId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("MediaTypeId")\n)\n\n/*\n3 rows from MediaType table:\nMediaTypeId\tName\n1\tMPEG audio file\n2\tProtected AAC audio file\n3\tProtected MPEG-4 video file\n*/\n\n\nCREATE TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL,
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-6
|
TABLE "Playlist" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(120), \n\tPRIMARY KEY ("PlaylistId")\n)\n\n/*\n3 rows from Playlist table:\nPlaylistId\tName\n1\tMusic\n2\tMovies\n3\tTV Shows\n*/\n\n\nCREATE TABLE "Album" (\n\t"AlbumId" INTEGER NOT NULL, \n\t"Title" NVARCHAR(160) NOT NULL, \n\t"ArtistId" INTEGER NOT NULL, \n\tPRIMARY KEY ("AlbumId"), \n\tFOREIGN KEY("ArtistId") REFERENCES "Artist" ("ArtistId")\n)\n\n/*\n3 rows from Album table:\nAlbumId\tTitle\tArtistId\n1\tFor Those About To Rock We Salute You\t1\n2\tBalls to the Wall\t2\n3\tRestless and Wild\t2\n*/\n\n\nCREATE TABLE "Customer" (\n\t"CustomerId" INTEGER NOT NULL, \n\t"FirstName" NVARCHAR(40) NOT NULL, \n\t"LastName" NVARCHAR(20) NOT NULL, \n\t"Company" NVARCHAR(80), \n\t"Address" NVARCHAR(70), \n\t"City" NVARCHAR(40), \n\t"State" NVARCHAR(40), \n\t"Country" NVARCHAR(40), \n\t"PostalCode" NVARCHAR(10), \n\t"Phone" NVARCHAR(24), \n\t"Fax" NVARCHAR(24), \n\t"Email" NVARCHAR(60) NOT NULL, \n\t"SupportRepId" INTEGER, \n\tPRIMARY KEY ("CustomerId"), \n\tFOREIGN KEY("SupportRepId") REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-7
|
REFERENCES "Employee" ("EmployeeId")\n)\n\n/*\n3 rows from Customer table:\nCustomerId\tFirstName\tLastName\tCompany\tAddress\tCity\tState\tCountry\tPostalCode\tPhone\tFax\tEmail\tSupportRepId\n1\tLuís\tGonçalves\tEmbraer - Empresa Brasileira de Aeronáutica S.A.\tAv. Brigadeiro Faria Lima, 2170\tSão José dos Campos\tSP\tBrazil\t12227-000\t+55 (12) 3923-5555\t+55 (12) 3923-5566\tluisg@embraer.com.br\t3\n2\tLeonie\tKöhler\tNone\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t+49 0711 2842222\tNone\tleonekohler@surfeu.de\t5\n3\tFrançois\tTremblay\tNone\t1498 rue Bélanger\tMontréal\tQC\tCanada\tH2G 1A7\t+1 (514) 721-4711\tNone\tftremblay@gmail.com\t3\n*/\n\n\nCREATE TABLE "Invoice" (\n\t"InvoiceId" INTEGER NOT NULL, \n\t"CustomerId" INTEGER NOT NULL, \n\t"InvoiceDate" DATETIME NOT NULL, \n\t"BillingAddress" NVARCHAR(70), \n\t"BillingCity" NVARCHAR(40), \n\t"BillingState" NVARCHAR(40), \n\t"BillingCountry" NVARCHAR(40), \n\t"BillingPostalCode" NVARCHAR(10), \n\t"Total" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("InvoiceId"), \n\tFOREIGN KEY("CustomerId") REFERENCES "Customer"
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-8
|
KEY ("InvoiceId"), \n\tFOREIGN KEY("CustomerId") REFERENCES "Customer" ("CustomerId")\n)\n\n/*\n3 rows from Invoice table:\nInvoiceId\tCustomerId\tInvoiceDate\tBillingAddress\tBillingCity\tBillingState\tBillingCountry\tBillingPostalCode\tTotal\n1\t2\t2009-01-01 00:00:00\tTheodor-Heuss-Straße 34\tStuttgart\tNone\tGermany\t70174\t1.98\n2\t4\t2009-01-02 00:00:00\tUllevålsveien 14\tOslo\tNone\tNorway\t0171\t3.96\n3\t8\t2009-01-03 00:00:00\tGrétrystraat 63\tBrussels\tNone\tBelgium\t1000\t5.94\n*/\n\n\nCREATE TABLE "Track" (\n\t"TrackId" INTEGER NOT NULL, \n\t"Name" NVARCHAR(200) NOT NULL, \n\t"AlbumId" INTEGER, \n\t"MediaTypeId" INTEGER NOT NULL, \n\t"GenreId" INTEGER, \n\t"Composer" NVARCHAR(220), \n\t"Milliseconds" INTEGER NOT NULL, \n\t"Bytes" INTEGER, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\tPRIMARY KEY ("TrackId"), \n\tFOREIGN KEY("MediaTypeId") REFERENCES "MediaType" ("MediaTypeId"), \n\tFOREIGN KEY("GenreId") REFERENCES "Genre" ("GenreId"), \n\tFOREIGN KEY("AlbumId") REFERENCES "Album" ("AlbumId")\n)\n\n/*\n3 rows from Track table:\nTrackId\tName\tAlbumId\tMediaTypeId\tGenreId\tComposer\tMilliseconds\tBytes\tUnitPrice\n1\tFor
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-9
|
Those About To Rock (We Salute You)\t1\t1\t1\tAngus Young, Malcolm Young, Brian Johnson\t343719\t11170334\t0.99\n2\tBalls to the Wall\t2\t2\t1\tNone\t342562\t5510424\t0.99\n3\tFast As a Shark\t3\t2\t1\tF. Baltes, S. Kaufman, U. Dirkscneider & W. Hoffman\t230619\t3990994\t0.99\n*/\n\n\nCREATE TABLE "InvoiceLine" (\n\t"InvoiceLineId" INTEGER NOT NULL, \n\t"InvoiceId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\t"UnitPrice" NUMERIC(10, 2) NOT NULL, \n\t"Quantity" INTEGER NOT NULL, \n\tPRIMARY KEY ("InvoiceLineId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("InvoiceId") REFERENCES "Invoice" ("InvoiceId")\n)\n\n/*\n3 rows from InvoiceLine table:\nInvoiceLineId\tInvoiceId\tTrackId\tUnitPrice\tQuantity\n1\t1\t2\t0.99\t1\n2\t1\t4\t0.99\t1\n3\t2\t6\t0.99\t1\n*/\n\n\nCREATE TABLE "PlaylistTrack" (\n\t"PlaylistId" INTEGER NOT NULL, \n\t"TrackId" INTEGER NOT NULL, \n\tPRIMARY KEY ("PlaylistId", "TrackId"), \n\tFOREIGN KEY("TrackId") REFERENCES "Track" ("TrackId"), \n\tFOREIGN KEY("PlaylistId") REFERENCES "Playlist" ("PlaylistId")\n)\n\n/*\n3 rows from PlaylistTrack
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|
13efa52e14c6-10
|
"Playlist" ("PlaylistId")\n)\n\n/*\n3 rows from PlaylistTrack table:\nPlaylistId\tTrackId\n1\t3402\n1\t3389\n1\t3390\n*/',
|
rtdocs_stable/api.python.langchain.com/en/stable/modules/chains/examples/sqlite.html
|