Construct a sql agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) β
agent_type (langchain.agents.agent_types.AgentType) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
suffix (Optional[str]) β
format_instructions (str) β
input_variables (Optional[List[str]]) β
top_k (int) β
max_iterations (Optional[int]) β
max_execution_time (Optional[float]) β
early_stopping_method (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
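Example (a minimal, illustrative sketch of create_sql_agent; the SQLite URI, model choice, and question are assumptions, not part of the reference):
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

llm = OpenAI(temperature=0)
db = SQLDatabase.from_uri("sqlite:///example.db")  # hypothetical database file
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, top_k=10, verbose=True)
agent_executor.run("How many rows are in the users table?")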
langchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source]ο
Construct a vectorstore router agent from an LLM and tools.
Parameters
llm (langchain.base_language.BaseLanguageModel) β
toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) β
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β
prefix (str) β
verbose (bool) β
agent_executor_kwargs (Optional[Dict[str, Any]]) β
kwargs (Dict[str, Any]) β
Return type
langchain.agents.agent.AgentExecutor
langchain.agents.get_all_tool_names()[source]ο
Get a list of all possible tool names.
Return type
List[str]
langchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, **kwargs)[source]ο
Load an agent executor given tools and LLM.
Parameters
tools (Sequence[langchain.tools.base.BaseTool]) β List of tools this agent has access to.
llm (langchain.base_language.BaseLanguageModel) β Language model to use as the agent.
agent (Optional[langchain.agents.agent_types.AgentType]) β Agent type to use. If None and agent_path is also None, will default to
AgentType.ZERO_SHOT_REACT_DESCRIPTION.
callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) β CallbackManager to use. Global callback manager is used if
not provided. Defaults to None.
agent_path (Optional[str]) β Path to serialized agent to use.
agent_kwargs (Optional[dict]) β Additional key word arguments to pass to the underlying agent
tags (Optional[Sequence[str]]) β Tags to apply to the traced runs.
**kwargs β Additional key word arguments passed to the agent executor
kwargs (Any) β
Returns
An agent executor
Return type
langchain.agents.agent.AgentExecutor
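Example (an illustrative sketch of initialize_agent assuming an OpenAI key is configured; the search_fn helper is a hypothetical stand-in for a real tool):
from langchain.agents import AgentType, Tool, initialize_agent
from langchain.llms import OpenAI

def search_fn(query: str) -> str:
    """Hypothetical stand-in for a real search backend."""
    return f"results for {query}"

llm = OpenAI(temperature=0)
tools = [Tool(name="search", func=search_fn, description="Useful for answering questions.")]
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # the default when agent and agent_path are None
    verbose=True,
)
agent_executor.run("What is LangChain?")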
langchain.agents.load_agent(path, **kwargs)[source]ο
Unified method for loading an agent from LangChainHub or local fs.
Parameters
path (Union[str, pathlib.Path]) β
kwargs (Any) β
Return type
Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent]
langchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source]ο
Loads a tool from the HuggingFace Hub.
Parameters
task_or_repo_id (str) β Task or model repo id.
model_repo_id (Optional[str]) β Optional model repo id.
token (Optional[str]) β Optional token.
remote (bool) β Optional remote. Defaults to False.
**kwargs β
kwargs (Any) β
Returns
A tool.
Return type
langchain.tools.base.BaseTool
langchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source]ο
Load tools based on their name.
Parameters
tool_names (List[str]) β name of tools to load.
llm (Optional[langchain.base_language.BaseLanguageModel]) β Optional language model, may be needed to initialize certain tools.
callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) β Optional callback manager or list of callback handlers.
If not provided, default global callback manager will be used.
kwargs (Any) β
Returns
List of tools.
Return type
List[langchain.tools.base.BaseTool]
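Example (an illustrative sketch; which tool names are available depends on the installed extras, and "llm-math" needs an LLM):
from langchain.agents import get_all_tool_names, load_tools
from langchain.llms import OpenAI

print(get_all_tool_names())  # names accepted by load_tools, e.g. "llm-math"
tools = load_tools(["llm-math"], llm=OpenAI(temperature=0))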
langchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source]ο
Make tools out of functions; can be used with or without arguments.
Parameters
*args β The arguments to the tool.
return_direct (bool) β Whether to return directly from the tool rather
than continuing the agent loop.
args_schema (Optional[Type[pydantic.main.BaseModel]]) β optional argument schema for user to specify
infer_schema (bool) β Whether to infer the schema of the arguments from
the functionβs signature. This also makes the resultant tool
accept a dictionary input to its run() function.
args (Union[str, Callable]) β
Return type
Callable
Requires:
Function must be of type (str) -> str
Function must have a docstring
Examples
@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"

@tool("search", return_direct=True)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "results"
Document Loadersο
All different types of document loaders.
class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads AZLyrics webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirbyteJSONLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads local airbyte json files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for Airtable tables.
Parameters
api_token (str) β
table_id (str) β
base_id (str) β
lazy_load()[source]ο
Lazy load records from table.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load Table.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Logic for loading documents from Apify datasets.
Parameters
dataset_id (str) β
dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) β
Return type
None
attribute apify_client: Any = Noneο
attribute dataset_id: str [Required]ο
The ID of the dataset on the Apify platform.
attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required]ο
A custom function that takes a single dictionary (an Apify dataset item)
and converts it to an instance of the Document class.
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
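Example (an illustrative sketch; the dataset id and the item fields "text" and "url" are assumptions about the Apify dataset):
from langchain.document_loaders import ApifyDatasetLoader
from langchain.schema import Document

loader = ApifyDatasetLoader(
    dataset_id="your-dataset-id",  # hypothetical dataset id
    dataset_mapping_function=lambda item: Document(
        page_content=item.get("text", ""), metadata={"source": item.get("url", "")}
    ),
)
docs = loader.load()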
class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from arxiv.org into a list of Documents.
Each document represents one Document.
The loader converts the original PDF format into the text.
Parameters
query (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from Azure Blob Storage.
Parameters
conn_str (str) β
container (str) β
blob_name (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse HTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a bibtex file into a list of Documents.
Each document represents one entry from the bibtex file.
If a PDF file is present in the file bibtex field, the original PDF
is loaded into the document text. If no such file entry is present,
the abstract field is used instead.
Parameters
file_path (str) β
parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) β
max_docs (Optional[int]) β
max_content_chars (Optional[int]) β
load_extra_metadata (bool) β
file_pattern (str) β
lazy_load()[source]ο
Load bibtex file using bibtexparser and get the article texts plus the
article metadata.
See https://bibtexparser.readthedocs.io/en/master/
Returns
a list of documents with the document.page_content in text format
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load bibtex file documents from the given bibtex file path.
See https://bibtexparser.readthedocs.io/en/master/
Parameters
file_path β the path to the bibtex file
Returns
a list of documents with the document.page_content in text format
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from BigQuery into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
project (Optional[str]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
credentials (Optional[Credentials]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
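Example (an illustrative sketch showing how page_content_columns and metadata_columns split a row; the project, table, and column names are assumptions):
from langchain.document_loaders import BigQueryLoader

query = "SELECT title, body, author FROM `my_project.my_dataset.articles` LIMIT 100"
loader = BigQueryLoader(
    query,
    project="my_project",
    page_content_columns=["title", "body"],  # written into page_content
    metadata_columns=["author"],  # written into metadata
)
docs = loader.load()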
class langchain.document_loaders.BiliBiliLoader(video_urls)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads bilibili transcripts.
Parameters
video_urls (List[str]) β
load()[source]ο
Load from bilibili url.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads all documents from a Blackboard course.
This loader is not compatible with all Blackboard courses. It is only
compatible with courses that use the new Blackboard interface.
To use this loader, you must have the BbRouter cookie. You can get this
cookie by logging into the course and then copying the value of the
BbRouter cookie from the browserβs developer tools.
Example
from langchain.document_loaders import BlackboardLoader
loader = BlackboardLoader(
blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1",
bbrouter="expires:12345...",
)
documents = loader.load()
Parameters
blackboard_course_url (str) β
bbrouter (str) β
load_all_recursively (bool) β
basic_auth (Optional[Tuple[str, str]]) β
cookies (Optional[dict]) β
folder_path: strο
base_url: strο
load_all_recursively: boolο
check_bs4()[source]ο
Check if BeautifulSoup4 is installed.
Raises
ImportError β If BeautifulSoup4 is not installed.
Return type
None
load()[source]ο
Load data into document objects.
Returns
List of documents.
Return type
List[langchain.schema.Document]
download(path)[source]ο
Download a file from a url.
Parameters
path (str) β Path to the file.
Return type
None
parse_filename(url)[source]ο
Parse the filename from a url.
Parameters
url (str) β Url to parse the filename from.
Returns
The filename.
Return type
str
class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source]ο
Bases: pydantic.main.BaseModel
A blob is used to represent raw data by either reference or value.
Provides an interface to materialize the blob in different representations, and
help to decouple the development of data loaders from the downstream parsing of
the raw data.
Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob
Parameters
data (Optional[Union[bytes, str]]) β
mimetype (Optional[str]) β
encoding (str) β
path (Optional[Union[str, pathlib.PurePath]]) β
Return type
None
attribute data: Optional[Union[bytes, str]] = Noneο
attribute encoding: str = 'utf-8'ο
attribute mimetype: Optional[str] = Noneο
attribute path: Optional[Union[str, pathlib.PurePath]] = Noneο
as_bytes()[source]ο
Read data as bytes.
Return type
bytes
as_bytes_io()[source]ο
Read data as a byte stream.
Return type
Generator[Union[_io.BytesIO, _io.BufferedReader], None, None]
as_string()[source]ο
Read data as a string.
Return type
str
classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source]ο
Initialize the blob from in-memory data.
Parameters
data (Union[str, bytes]) β the in-memory data associated with the blob
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
path (Optional[str]) β if provided, will be set as the source from which the data came
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source]ο
Load the blob from a path like object.
Parameters
path (Union[str, pathlib.PurePath]) β path like object to file to be read
encoding (str) β Encoding to use if decoding the bytes into a string
mime_type (Optional[str]) β if provided, will be set as the mime-type of the data
guess_type (bool) β If True, the mimetype will be guessed from the file extension,
if a mime-type was not provided
Returns
Blob instance
Return type
langchain.document_loaders.blob_loaders.schema.Blob
property source: Optional[str]ο
The source location of the blob as string if known otherwise none.
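Example (an illustrative sketch of building a Blob by value and by reference; notes.txt is a hypothetical file):
from langchain.document_loaders import Blob

in_memory = Blob.from_data("hello world", mime_type="text/plain")
print(in_memory.as_string())  # "hello world"
on_disk = Blob.from_path("notes.txt")  # mimetype guessed from the extension
print(on_disk.as_bytes()[:10])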
class langchain.document_loaders.BlobLoader[source]ο
Bases: abc.ABC
Abstract interface for blob loaders implementation.
Implementer should be able to load raw content from a storage system according
to some criteria and return the raw content lazily as a stream of blobs.
abstract yield_blobs()[source]ο
A lazy loader for raw data represented by LangChainβs Blob object.
Returns
A generator over blobs
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads elements from a blockchain smart contract into Langchain documents.
The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet,
Polygon mainnet, and Polygon Mumbai testnet.
If no BlockchainType is specified, the default is Ethereum mainnet.
The Loader uses the Alchemy API to interact with the blockchain.
ALCHEMY_API_KEY environment variable must be set to use this loader.
The API returns 100 NFTs per request and can be paginated using the
startToken parameter.
If get_all_tokens is set to True, the loader will get all tokens
on the contract. Note that for contracts with a large number of tokens,
this may take a long time (e.g. 10k tokens is 100 requests).
Default value is false for this reason.
The max_execution_time (sec) can be set to limit the execution time
of the loader.
Future versions of this loader can:
Support additional Alchemy APIs (e.g. getTransactions, etc.)
Support additional blockchain APIs (e.g. Infura, Opensea, etc.)
Parameters
contract_address (str) β
blockchainType (langchain.document_loaders.blockchain.BlockchainType) β
api_key (str) β
startToken (str) β
get_all_tokens (bool) β
max_execution_time (Optional[int]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
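Example (an illustrative sketch; the contract address is arbitrary and ALCHEMY_API_KEY must be set in the environment):
from langchain.document_loaders import BlockchainDocumentLoader
from langchain.document_loaders.blockchain import BlockchainType

loader = BlockchainDocumentLoader(
    contract_address="0x1a92f7381b9f03921564a437210bb9396471050c",  # illustrative NFT contract
    blockchainType=BlockchainType.ETH_MAINNET,
    get_all_tokens=False,
    max_execution_time=60,
)
docs = loader.load()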
class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a CSV file into a list of documents.
Each document represents one row of the CSV file. Every row is converted into a
key/value pair and outputted to a new line in the documentβs page_content.
The source for each document loaded from csv is set to the value of the
file_path argument for all documents by default.
You can override this by setting the source_column argument to the
name of a column in the CSV file.
The source of each document will then be set to the value of the column
with the name specified in source_column.
Output Example:column1: value1
column2: value2
column3: value3
Parameters
file_path (str) β
source_column (Optional[str]) β
csv_args (Optional[Dict]) β
encoding (Optional[str]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
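Example (an illustrative sketch; the file name, delimiter, and column names are assumptions):
from langchain.document_loaders import CSVLoader

loader = CSVLoader(
    file_path="data/products.csv",
    source_column="product_url",  # overrides the default file_path source
    csv_args={"delimiter": ";"},
)
docs = loader.load()
print(docs[0].page_content)  # one "column: value" line per column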
class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=-1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads conversations from exported ChatGPT data.
Parameters
log_file (str) β
num_logs (int) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CoNLLULoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load CoNLL-U files.
Parameters
file_path (str) β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads College Confidential webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Confluence pages. Port of https://llamahub.ai/l/confluence
This currently supports username/api_key, Oauth2 login or personal access token
authentication.
Specify a list page_ids and/or space_key to load in the corresponding pages into
Document objects, if both are specified the union of both sets will be returned.
You can also specify a boolean include_attachments to include attachments, this
is set to False by default, if set to True all attachments will be downloaded and
ConfluenceReader will extract the text from the attachments and add it to the
Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG,
SVG, Word and Excel.
The Confluence API supports different formats of page content. The storage format is the
raw XML representation for storage. The view format is the HTML representation for
viewing, with macros rendered as they would be viewed by users. You can pass
an enum content_format argument to load() to specify the content format; this is
set to ContentFormat.STORAGE by default.
Hint: space_key and page_id can both be found in the URL of a page in Confluence
- https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id>
Example
from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="https://yoursite.atlassian.com/wiki",
username="me",
api_key="12345"
)
documents = loader.load(space_key="SPACE",limit=50)
Parameters
url (str) β _description_
api_key (str, optional) β _description_, defaults to None
username (str, optional) β _description_, defaults to None
oauth2 (dict, optional) β _description_, defaults to {}
token (str, optional) β _description_, defaults to None
cloud (bool, optional) β _description_, defaults to True
number_of_retries (Optional[int], optional) β How many times to retry, defaults to 3
min_retry_seconds (Optional[int], optional) β defaults to 2
max_retry_seconds (Optional[int], optional) β defaults to 10
confluence_kwargs (dict, optional) β additional kwargs to initialize confluence with
Raises
ValueError β Errors while validating input
ImportError β Required dependencies not installed.
static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source]ο
Validates proper combinations of init arguments
Parameters
url (Optional[str]) β
api_key (Optional[str]) β
username (Optional[str]) β
oauth2 (Optional[dict]) β
token (Optional[str]) β
Return type
Optional[List]
load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source]ο
Parameters
space_key (Optional[str], optional) β Space key retrieved from a confluence URL, defaults to None
page_ids (Optional[List[str]], optional) β List of specific page IDs to load, defaults to None
label (Optional[str], optional) β Get all pages with this label, defaults to None
cql (Optional[str], optional) β CQL Expression, defaults to None
include_restricted_content (bool, optional) β defaults to False
include_archived_content (bool, optional) β Whether to include archived content,
defaults to False
include_attachments (bool, optional) β defaults to False
include_comments (bool, optional) β defaults to False
content_format (ContentFormat) β Specify content format, defaults to ContentFormat.STORAGE
limit (int, optional) β Maximum number of pages to retrieve per request, defaults to 50
max_pages (int, optional) β Maximum number of pages to retrieve in total, defaults 1000
ocr_languages (str, optional) β The languages to use for the Tesseract agent. To use a
language, youβll first need to install the appropriate
Tesseract language pack.
Raises
ValueError β _description_
ImportError β _description_
Returns
_description_
Return type
List[Document]
paginate_request(retrieval_method, **kwargs)[source]ο
Paginate the various methods to retrieve groups of pages.
Unfortunately, due to page size, sometimes the Confluence API
doesnβt match the limit value. If limit is >100 confluence
seems to cap the response to 100. Also, due to the Atlassian Python
package, we donβt get the βnextβ values from the β_linksβ key because
they only return the value from the results key. So here, the pagination
starts from 0 and goes until the max_pages, getting the limit number
of pages with each request. We have to manually check if there
are more docs based on the length of the returned list of pages, rather than
just checking for the presence of a next key in the response like this page
would have you do:
https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/
Parameters
retrieval_method (callable) β Function used to retrieve docs
kwargs (Any) β
Returns
List of documents
Return type
List
is_public_page(page)[source]ο
Check if a page is publicly accessible.
Parameters
page (dict) β
Return type
bool
process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Process a list of pages into a list of documents.
Parameters
pages (List[dict]) β
include_restricted_content (bool) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
List[langchain.schema.Document]
process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source]ο
Parameters
page (dict) β
include_attachments (bool) β
include_comments (bool) β
content_format (langchain.document_loaders.confluence.ContentFormat) β
ocr_languages (Optional[str]) β
Return type
langchain.schema.Document
process_attachment(page_id, ocr_languages=None)[source]ο
Parameters
page_id (str) β
ocr_languages (Optional[str]) β
Return type
List[str]
process_pdf(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_image(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
process_doc(link)[source]ο
Parameters
link (str) β
Return type
str
process_xls(link)[source]ο
Parameters
link (str) β
Return type
str
process_svg(link, ocr_languages=None)[source]ο
Parameters
link (str) β
ocr_languages (Optional[str]) β
Return type
str
class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Pandas DataFrames.
Parameters
data_frame (Any) β
page_content_column (str) β
lazy_load()[source]ο
Lazy load records from dataframe.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load full dataframe.
Return type
List[langchain.schema.Document]
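Example (an illustrative sketch; the DataFrame and its columns are made up, and any column other than page_content_column ends up in metadata):
import pandas as pd
from langchain.document_loaders import DataFrameLoader

df = pd.DataFrame({"text": ["first doc", "second doc"], "topic": ["intro", "details"]})
loader = DataFrameLoader(df, page_content_column="text")
docs = loader.load()  # "topic" becomes metadata on each Document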
class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Diffbot file json.
Parameters
api_token (str) β
urls (List[str]) β
continue_on_failure (bool) β
load()[source]ο
Extract text from Diffbot on all the URLs and return Document instances
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from a directory.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β
loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) β
loader_kwargs (Optional[dict]) β
recursive (bool) β
show_progress (bool) β
use_multithreading (bool) β
max_concurrency (int) β
load_file(item, path, docs, pbar)[source]ο
Parameters
item (pathlib.Path) β
path (pathlib.Path) β
docs (List[langchain.schema.Document]) β
pbar (Optional[Any]) β
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
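Example (an illustrative sketch; the directory, glob pattern, and choice of TextLoader are assumptions):
from langchain.document_loaders import DirectoryLoader, TextLoader

loader = DirectoryLoader(
    "docs/",
    glob="**/*.md",
    loader_cls=TextLoader,
    show_progress=True,
    use_multithreading=True,
    max_concurrency=8,
)
docs = loader.load()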
class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load Discord chat logs.
Parameters
chat_log (pd.DataFrame) β
user_id_col (str) β
load()[source]ο
Load all chat messages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads processed docs from Docugami.
To use, you should have the lxml python package installed.
Parameters
api (str) β
access_token (Optional[str]) β
docset_id (Optional[str]) β
document_ids (Optional[Sequence[str]]) β
file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) β
min_chunk_size (int) β
Return type
None
attribute access_token: Optional[str] = Noneο
attribute api: str = 'https://api.docugami.com/v1preview1'ο
attribute docset_id: Optional[str] = Noneο
attribute document_ids: Optional[Sequence[str]] = Noneο
attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = Noneο
attribute min_chunk_size: int = 32ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.Docx2txtLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, abc.ABC
Loads a DOCX with docx2txt and chunks at character level.
Defaults to checking for a local file, but if the file is a web path, it will download it
to a temporary file, use that, and then clean up the temporary file after completion.
Parameters
file_path (str) β
load()[source]ο
Load given path as single page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from DuckDB into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
database (str) β
read_only (bool) β
config (Optional[Dict[str, str]]) β
page_content_columns (Optional[List[str]]) β
metadata_columns (Optional[List[str]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser
Wrapper around embaasβs document byte loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader()
blob = Blob.from_path(path="example.mp3")
documents = loader.parse(blob=blob)
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasBlobLoader
loader = EmbaasBlobLoader(
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
blob = Blob.from_path(path="example.pdf")
documents = loader.parse(blob=blob)
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
Return type
None
lazy_parse(blob)[source]ο
Lazy parsing interface.
Subclasses are required to implement this method.
Parameters
blob (langchain.document_loaders.blob_loaders.schema.Blob) β Blob instance
Returns
Generator of documents
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source]ο
Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader
Wrapper around embaasβs document loader service.
To use, you should have the
environment variable EMBAAS_API_KEY set with your API key, or pass
it as a named parameter to the constructor.
Example
# Default parsing
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(file_path="example.mp3")
documents = loader.load()
# Custom api parameters (create embeddings automatically)
from langchain.document_loaders.embaas import EmbaasLoader
loader = EmbaasLoader(
file_path="example.pdf",
params={
"should_embed": True,
"model": "e5-large-v2",
"chunk_size": 256, | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
e84f0fb94639-19 | "chunk_size": 256,
"chunk_splitter": "CharacterTextSplitter"
}
)
documents = loader.load()
Parameters
embaas_api_key (Optional[str]) β
api_url (str) β
params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) β
file_path (str) β
blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) β
Return type
None
attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = Noneο
The blob loader to use. If not provided, a default one will be created.
attribute file_path: str [Required]ο
The path to the file to load.
lazy_load()[source]ο
Load the documents from the file path lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
load_and_split(text_splitter=None)[source]ο
Load documents and split into chunks.
Parameters
text_splitter (Optional[langchain.text_splitter.TextSplitter]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
EverNote Loader.
Loads an EverNote notebook export file e.g. my_notebook.enex into Documents.
Instructions on producing this file can be found at
https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML
Currently only the plain text in the note is extracted and stored as the contents
of the Document, any non content metadata (e.g. βauthorβ, βcreatedβ, βupdatedβ etc.
but not βcontent-rawβ or βresourceβ) tags on the note will be extracted and stored
as metadata on the Document.
Parameters
file_path (str) β The path to the notebook export with a .enex extension
load_single_document (bool) β Whether or not to concatenate the content of all
notes into a single long Document. If this is set to True, the only metadata on the
document will be the βsourceβ, which contains the file name of the export.
load()[source]ο
Load documents from EverNote export file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FacebookChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Facebook messages json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
FaunaDB Loader.
Parameters
query (str) β
page_content_field (str) β
secret (str) β
metadata_fields (Optional[Sequence[str]]) β
queryο
The FQL query string to execute.
Type
str
page_content_fieldο
The field that contains the content of each page.
Type
str
secretο
The secret key for authenticating to FaunaDB.
Type
str
metadata_fieldsο
Optional list of field names to include in metadata.
Type
Optional[Sequence[str]]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
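Example (an illustrative sketch; the FQL query, field names, and secret are placeholders):
from langchain.document_loaders import FaunaLoader

loader = FaunaLoader(
    query="Item.all()",  # hypothetical FQL query
    page_content_field="text",
    secret="<your-fauna-secret>",
    metadata_fields=["title"],
)
docs = loader.load()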
class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Figma file json.
Parameters
access_token (str) β
ids (str) β
key (str) β
load()[source]ο
Load file
Return type
List[langchain.schema.Document]
class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Blob loader for the local file system.
Example:
from langchain.document_loaders.blob_loaders import FileSystemBlobLoader
loader = FileSystemBlobLoader("/path/to/directory")
for blob in loader.yield_blobs():
print(blob)
Parameters
path (Union[str, pathlib.Path]) β
glob (str) β
suffixes (Optional[Sequence[str]]) β
show_progress (bool) β
Return type
None
yield_blobs()[source]ο
Yield blobs that match the requested pattern.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
count_matching_files()[source]ο
Count files that match the pattern without loading them.
Return type
int
class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from GCS.
Parameters
project_name (str) β
bucket (str) β
blob (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source]ο
Bases: langchain.document_loaders.github.BaseGitHubLoader
Parameters
repo (str) β
access_token (str) β
include_prs (bool) β
milestone (Optional[Union[int, Literal['*', 'none']]]) β
state (Optional[Literal['open', 'closed', 'all']]) β
assignee (Optional[str]) β
creator (Optional[str]) β
mentioned (Optional[str]) β
labels (Optional[List[str]]) β
sort (Optional[Literal['created', 'updated', 'comments']]) β
direction (Optional[Literal['asc', 'desc']]) β
since (Optional[str]) β
Return type
None
attribute assignee: Optional[str] = Noneο
Filter on assigned user. Pass βnoneβ for no user and β*β for any user.
attribute creator: Optional[str] = Noneο
Filter on the user that created the issue.
attribute direction: Optional[Literal['asc', 'desc']] = Noneο
The direction to sort the results by. Can be one of: βascβ, βdescβ.
attribute include_prs: bool = Trueο
If True include Pull Requests in results, otherwise ignore them.
attribute labels: Optional[List[str]] = Noneο
Label names to filter on. Example: bug,ui,@high.
attribute mentioned: Optional[str] = Noneο
Filter on a user thatβs mentioned in the issue.
attribute milestone: Optional[Union[int, Literal['*', 'none']]] = Noneο
If integer is passed, it should be a milestoneβs number field.
If the string β*β is passed, issues with any milestone are accepted.
If the string βnoneβ is passed, issues without milestones are returned.
attribute since: Optional[str] = Noneο
Only show notifications updated after the given time.
This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ.
attribute sort: Optional[Literal['created', 'updated', 'comments']] = Noneο
What to sort results by. Can be one of: βcreatedβ, βupdatedβ, βcommentsβ.
Default is βcreatedβ.
attribute state: Optional[Literal['open', 'closed', 'all']] = Noneο
Filter on issue state. Can be one of: βopenβ, βclosedβ, βallβ.
lazy_load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
load()[source]ο
Get issues of a GitHub repository.
Returns
page_content
metadata
url
title
creator
created_at
last_update_time
closed_time
number of comments
state
labels
assignee
assignees
milestone
locked
number
is_pull_request
Return type
A list of Documents with attributes
parse_issue(issue)[source]ο
Create Document objects from a list of GitHub issues.
Parameters
issue (dict) β
Return type
langchain.schema.Document
property query_params: strο
property url: strο
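Example (an illustrative sketch; the repository name and the environment variable holding the token are assumptions):
import os
from langchain.document_loaders import GitHubIssuesLoader

loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",
    access_token=os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"],
    include_prs=False,  # skip pull requests
    state="closed",
    labels=["bug"],
)
docs = loader.load()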
class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads files from a Git repository into a list of documents.
Repository can be local on disk available at repo_path,
or remote at clone_url that will be cloned to repo_path.
Currently supports only text files.
Each document represents one file in the repository. The path points to
the local Git repository, and the branch specifies the branch to load
files from. By default, it loads from the main branch.
Parameters
repo_path (str) β
clone_url (Optional[str]) β
branch (Optional[str]) β
file_filter (Optional[Callable[[str], bool]]) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
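Example (an illustrative sketch; the repository URL, branch, and filter are assumptions):
from langchain.document_loaders import GitLoader

loader = GitLoader(
    repo_path="./example_repo",
    clone_url="https://github.com/hwchase17/langchain",  # cloned here if not already on disk
    branch="master",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()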
class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load GitBook data.
load from either a single page, or
load all (relative) paths in the navbar.
Parameters
web_page (str) β
load_all_paths (bool) β
base_url (Optional[str]) β
content_selector (str) β
load()[source]ο
Fetch text from one single GitBook page.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source]ο
Bases: object
A Generic Google Api Client.
To use, you should have the google_auth_oauthlib,youtube_transcript_api,google
python package installed.
As the google api expects credentials you need to set up a google account and
register your Service. βhttps://developers.google.com/docs/api/quickstart/pythonβ
Example
from langchain.document_loaders import GoogleApiClient
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
Parameters
credentials_path (pathlib.Path) β
service_account_path (pathlib.Path) β
token_path (pathlib.Path) β
Return type
None
credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either folder_id or document_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all Videos from a Channel
To use, you should have the googleapiclient,youtube_transcript_api
python package installed.
As the service needs a google_api_client, you first have to initialize
the GoogleApiClient.
Additionally you have to either provide a channel name or a list of videoids
βhttps://developers.google.com/docs/api/quickstart/pythonβ
Example
from langchain.document_loaders import GoogleApiClient
from langchain.document_loaders import GoogleApiYoutubeLoader
google_api_client = GoogleApiClient(
service_account_path=Path("path_to_your_sec_file.json")
)
loader = GoogleApiYoutubeLoader(
google_api_client=google_api_client,
channel_name = "CodeAesthetic"
)
loader.load()
Parameters
google_api_client (langchain.document_loaders.youtube.GoogleApiClient) β
channel_name (Optional[str]) β
video_ids (Optional[List[str]]) β
add_video_info (bool) β
captions_language (str) β
continue_on_failure (bool) β
Return type
None
google_api_client: langchain.document_loaders.youtube.GoogleApiClientο
channel_name: Optional[str] = Noneο
video_ids: Optional[List[str]] = Noneο
add_video_info: bool = Trueο
captions_language: str = 'en'ο
continue_on_failure: bool = Falseο
classmethod validate_channel_or_videoIds_is_set(values)[source]ο
Validate that either folder_id or document_ids is set, but not both.
Parameters
values (Dict[str, Any]) β
Return type
Dict[str, Any]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Loader that loads Google Docs from Google Drive.
Parameters
service_account_key (pathlib.Path) β
credentials_path (pathlib.Path) β
token_path (pathlib.Path) β
folder_id (Optional[str]) β
document_ids (Optional[List[str]]) β
file_ids (Optional[List[str]]) β
recursive (bool) β
file_types (Optional[Sequence[str]]) β
load_trashed_files (bool) β
file_loader_cls (Any) β
file_loader_kwargs (Dict[str, Any]) β
Return type
None
attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')ο
attribute document_ids: Optional[List[str]] = Noneο
attribute file_ids: Optional[List[str]] = Noneο
attribute file_loader_cls: Any = Noneο
attribute file_loader_kwargs: Dict[str, Any] = {}ο
attribute file_types: Optional[Sequence[str]] = Noneο
attribute folder_id: Optional[str] = Noneο
attribute load_trashed_files: bool = Falseο
attribute recursive: bool = Falseο
attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json')ο
attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json')ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.GutenbergLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib to load .txt web files.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Load Hacker News data from either main page results or the comments page.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Get important HN webpage information.
Components are:
title
content
source url,
time of post
author of the post
number of comments
rank of the post
Return type
List[langchain.schema.Document]
load_comments(soup_info)[source]ο
Load comments from a HN post.
Parameters
soup_info (Any) β
Return type
List[langchain.schema.Document]
load_results(soup)[source]ο
Load items from an HN page.
Parameters
soup (Any) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for loading documents from the Hugging Face Hub.
Parameters
path (str) β
page_content_column (str) β
name (Optional[str]) β
data_dir (Optional[str]) β
data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) β
cache_dir (Optional[str]) β
keep_in_memory (Optional[bool]) β
save_infos (bool) β
use_auth_token (Optional[Union[bool, str]]) β
num_proc (Optional[int]) β
lazy_load()[source]ο
Load documents lazily.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IFixitLoader(web_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load iFixit repair guides, device wikis and answers.
iFixit is the largest, open repair community on the web. The site contains nearly
100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is
licensed under CC-BY.
This loader will allow you to download the text of a repair guide, text of Q&Aβs
and wikis from devices on iFixit using their open APIs and web scraping.
Parameters
web_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
static load_suggestions(query='', doc_type='all')[source]ο
Parameters
query (str) β
doc_type (str) β
Return type
List[langchain.schema.Document]
load_questions_and_answers(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
load_device(url_override=None, include_guides=True)[source]ο
Parameters
url_override (Optional[str]) β
include_guides (bool) β
Return type
List[langchain.schema.Document]
load_guide(url_override=None)[source]ο
Parameters
url_override (Optional[str]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that loads IMSDb webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
load()[source]ο
Load webpage.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads the captions of an image
Parameters
path_images (Union[str, List[str]]) β
blip_processor (str) β
blip_model (str) β
load()[source]ο
Load from a list of image files
Return type
List[langchain.schema.Document]
class langchain.document_loaders.IuguLoader(resource, api_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from IUGU.
Parameters
resource (str) β
api_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a JSON file, using a provided jq schema to extract the text into
documents.
Example
[{βtextβ: β¦}, {βtextβ: β¦}, {βtextβ: β¦}] -> schema = .[].text
{βkeyβ: [{βtextβ: β¦}, {βtextβ: β¦}, {βtextβ: β¦}]} -> schema = .key[].text
[ββ, ββ, ββ] -> schema = .[]
Parameters | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
file_path (Union[str, pathlib.Path]) β
jq_schema (str) β
content_key (Optional[str]) β
metadata_func (Optional[Callable[[Dict, Dict], Dict]]) β
text_content (bool) β
load()[source]ο
Load and return documents from the JSON file.
Return type
List[langchain.schema.Document]
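A minimal usage sketch showing the jq schemas above in practice; data.json is a hypothetical file, and the jq package must be installed for the schema to be evaluated.
from langchain.document_loaders import JSONLoader
# [{"text": ...}, {"text": ...}] -> pull every "text" field out of a top-level array
loader = JSONLoader(file_path="data.json", jq_schema=".[].text")
docs = loader.load()
for doc in docs:
    print(doc.page_content, doc.metadata)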
class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches notes from Joplin.
In order to use this loader, you need to have Joplin running with the
Web Clipper enabled (look for βWeb Clipperβ in the app settings).
To get the access token, you need to go to the Web Clipper options and
under βAdvanced Optionsβ you will find the access token.
You can find more information about the Web Clipper service here:
https://joplinapp.org/clipper/
Parameters
access_token (Optional[str]) β
port (int) β
host (str) β
Return type
None
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.LarkSuiteDocLoader(domain, access_token, document_id)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads LarkSuite (FeiShu) document.
Parameters
domain (str) β
access_token (str) β
document_id (str) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
lazy_load()[source]ο
Lazy load LarkSuite (FeiShu) document.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load LarkSuite (FeiShu) document.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load MediaWiki dump from XML file
Example
from langchain.document_loaders import MWDumpLoader
loader = MWDumpLoader(
file_path="myWiki.xml",
encoding="utf8"
)
docs = loader.load()
from langchain.text_splitter import RecursiveCharacterTextSplitter
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=1000, chunk_overlap=0
)
texts = text_splitter.split_documents(docs)
Parameters
file_path (str) β XML local file path
encoding (str, optional) β Charset encoding, defaults to βutf8β
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Mastodon toots loader.
Parameters
mastodon_accounts (Sequence[str]) β
number_toots (Optional[int]) β
exclude_replies (bool) β
access_token (Optional[str]) β
api_base_url (str) β
load()[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Load toots into documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Parameters
file_path (str) β
processed_file_format (str) β
max_wait_time_seconds (int) β
should_clean_pdf (bool) β
kwargs (Any) β
Return type
None
property headers: dictο
property url: strο
property data: dictο
send_pdf()[source]ο
Return type
str
wait_for_processing(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
None
get_processed_pdf(pdf_id)[source]ο
Parameters
pdf_id (str) β
Return type
str
clean_pdf(contents)[source]ο
Parameters
contents (str) β
Return type
str
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from Alibaba Cloud MaxCompute table into documents.
Parameters
query (str) β
api_wrapper (MaxComputeAPIWrapper) β
page_content_columns (Optional[Sequence[str]]) β
metadata_columns (Optional[Sequence[str]]) β
classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Convenience constructor that builds the MaxCompute API wrapper from given parameters.
Parameters
query (str) β SQL query to execute.
endpoint (str) β MaxCompute endpoint.
project (str) β A project is a basic organizational unit of MaxCompute, which is
similar to a database.
access_id (Optional[str]) β MaxCompute access ID. Should be passed in directly or set as the
environment variable MAX_COMPUTE_ACCESS_ID.
secret_access_key (Optional[str]) β MaxCompute secret access key. Should be passed in
directly or set as the environment variable
MAX_COMPUTE_SECRET_ACCESS_KEY.
kwargs (Any) β
Return type
langchain.document_loaders.max_compute.MaxComputeLoader
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.MergedDataLoader(loaders)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Merge documents from a list of loaders
Parameters
loaders (List) β
lazy_load()[source]ο
Lazy load docs from each individual loader.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load docs.
Return type
List[langchain.schema.Document]
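A minimal usage sketch combining two loaders from this module; the file path and URL are placeholders.
from langchain.document_loaders import MergedDataLoader, TextLoader, WebBaseLoader
# Documents from every wrapped loader are returned in one flat list
loader = MergedDataLoader(loaders=[TextLoader("notes.txt"), WebBaseLoader("https://example.com")])
docs = loader.load()  # lazy_load() iterates over each loader in turn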
class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses beautiful soup to parse MHTML files.
Parameters
file_path (str) β
open_encoding (Optional[str]) β
bs_kwargs (Optional[dict]) β
get_text_separator (str) β
Return type
None | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Modern Treasury.
Parameters
resource (str) β
organization_id (Optional[str]) β
api_key (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads .ipynb notebook files.
Parameters
path (str) β
include_outputs (bool) β
max_output_length (int) β
remove_newline (bool) β
traceback (bool) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Notion DB Loader.
Reads content from pages within a Notion Database.
:param integration_token: Notion integration token.
:type integration_token: str
:param database_id: Notion database id.
:type database_id: str
:param request_timeout_sec: Timeout for Notion requests in seconds.
:type request_timeout_sec: int
Parameters
integration_token (str) β
database_id (str) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
request_timeout_sec (Optional[int]) β
Return type
None
load()[source]ο
Load documents from the Notion database.
:returns: List of documents.
:rtype: List[Document]
Return type
List[langchain.schema.Document]
load_page(page_summary)[source]ο
Read a page.
Parameters
page_summary (Dict[str, Any]) β
Return type
langchain.schema.Document
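A minimal usage sketch, assuming the integration token and database id are supplied via environment variables; the variable names are illustrative.
import os
from langchain.document_loaders import NotionDBLoader
loader = NotionDBLoader(
    integration_token=os.environ["NOTION_TOKEN"],   # hypothetical variable name
    database_id=os.environ["NOTION_DATABASE_ID"],   # hypothetical variable name
    request_timeout_sec=30,
)
docs = loader.load()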
class langchain.document_loaders.NotionDirectoryLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Notion directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Obsidian files from disk.
Parameters
path (str) β
encoding (str) β
collect_metadata (bool) β
FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL)ο
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OneDriveFileLoader(*, file)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
file (File) β
Return type
None
attribute file: File [Required]ο
load()[source]ο
Load Documents
Return type
List[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel
Parameters
settings (langchain.document_loaders.onedrive._OneDriveSettings) β
drive_id (str) β
folder_path (Optional[str]) β
object_ids (Optional[List[str]]) β
auth_with_token (bool) β
Return type
None
attribute auth_with_token: bool = Falseο
attribute drive_id: str [Required]ο
attribute folder_path: Optional[str] = Noneο
attribute object_ids: Optional[List[str]] = Noneο
attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional]ο
load()[source]ο
Loads all supported document files from the specified OneDrive drive and returns a list of Document objects.
Returns
A list of Document objects
representing the loaded documents.
Return type
List[Document]
Raises
ValueError β If the specified drive ID
does not correspond to a drive in the OneDrive storage.
class langchain.document_loaders.OnlinePDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that loads online PDFs.
Parameters
file_path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OutlookMessageLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Outlook Message files using extract_msg.
https://github.com/TeamMsgExtractor/msg-extractor | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Parameters
file_path (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Open city data.
Parameters
city_id (str) β
dataset_id (str) β
limit (int) β
lazy_load()[source]ο
Lazy load records.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load records.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFMinerLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files.
Parameters
file_path (str) β
Return type
None
load()[source]ο
Eagerly load the content.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load documents.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PDFMiner to load PDF files as HTML content.
Parameters
file_path (str) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Loader that uses pdfplumber to load PDF files.
Parameters
file_path (str) β
text_kwargs (Optional[Mapping[str, Any]]) β
Return type
None
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
langchain.document_loaders.PagedPDFSplitterο
alias of langchain.document_loaders.pdf.PyPDFLoader
class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Playwright to load a page and unstructured to parse the html.
This is useful for loading pages that require javascript to render.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
headless (bool) β
remove_selectors (Optional[List[str]]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
headlessο
If True, the browser will run in headless mode.
Type
bool
load()[source]ο
Load the specified URLs using Playwright and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
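A minimal usage sketch; the URLs and selectors are placeholders, and both playwright (with installed browser binaries) and unstructured are required.
from langchain.document_loaders import PlaywrightURLLoader
urls = ["https://example.com", "https://example.org/docs"]
# remove_selectors strips matching elements (e.g. navigation) before text extraction
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"])
docs = loader.load()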
class langchain.document_loaders.PsychicLoader(api_key, account_id, connector_id=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads documents from Psychic.dev.
Parameters
api_key (str) β
account_id (str) β
connector_id (Optional[str]) β
load()[source]ο
Load documents. | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyMuPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loader that uses PyMuPDF to load PDF files.
Parameters
file_path (str) β
Return type
None
load(**kwargs)[source]ο
Load file.
Parameters
kwargs (Optional[Any]) β
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a directory with PDF files with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Parameters
path (str) β
glob (str) β
silent_errors (bool) β
load_hidden (bool) β
recursive (bool) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PyPDFLoader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdf and chunks at character level.
Loader also stores page numbers in metadatas.
Parameters
file_path (str) β
Return type
None
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
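A minimal usage sketch, assuming a local example.pdf; each returned Document is one page, with the page number stored in its metadata.
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("example.pdf")  # path is illustrative
pages = loader.load()
print(len(pages), pages[0].metadata)  # metadata carries the page number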
class langchain.document_loaders.PyPDFium2Loader(file_path)[source]ο
Bases: langchain.document_loaders.pdf.BasePDFLoader
Loads a PDF with pypdfium2 and chunks at character level.
Parameters
file_path (str) β
load()[source]ο
Load given path as pages.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazy load given path as pages.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load PySpark DataFrames
Parameters
spark_session (Optional[SparkSession]) β
df (Optional[Any]) β
page_content_column (str) β
fraction_of_memory (float) β
get_num_rows()[source]ο
Gets the number of βfeasibleβ rows for the DataFrame
Return type
Tuple[int, int]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load from the dataframe.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.PythonLoader(file_path)[source]ο
Bases: langchain.document_loaders.text.TextLoader
Load Python files, respecting any non-default encoding if specified.
Parameters
file_path (str) β
class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads ReadTheDocs documentation directory dump.
Parameters
path (Union[str, pathlib.Path]) β
encoding (Optional[str]) β
errors (Optional[str]) β
custom_html_tag (Optional[Tuple[str, dict]]) β
kwargs (Optional[Any]) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads all child links from a given url.
Parameters
url (str) β
exclude_dirs (Optional[str]) β
Return type
None
get_child_links_recursive(url, visited=None)[source]ο
Recursively get all child links starting with the path of the input URL.
Parameters
url (str) β
visited (Optional[Set[str]]) β
Return type
Set[str]
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load web pages.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Reddit posts loader.
Read posts on a subreddit.
First you need to go to
https://www.reddit.com/prefs/apps/
and create your application
Parameters
client_id (str) β
client_secret (str) β
user_agent (str) β
search_queries (Sequence[str]) β
mode (str) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
categories (Sequence[str]) β
number_posts (Optional[int]) β
load()[source]ο
Load reddits.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.RoamLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Roam files from disk.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for documents from s3.
Parameters
bucket (str) β
prefix (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.S3FileLoader(bucket, key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for documents from s3.
Parameters
bucket (str) β
key (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SRTLoader(file_path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for .srt (subtitle) files.
Parameters
file_path (str) β
load()[source]ο
Load using pysrt file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses Selenium to load a page and unstructured to parse the html.
This is useful for loading pages that require javascript to render.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
browser (Literal['chrome', 'firefox']) β
binary_location (Optional[str]) β
executable_path (Optional[str]) β
headless (bool) β
arguments (List[str]) β
urlsο
List of URLs to load.
Type
List[str]
continue_on_failureο
If True, continue loading other URLs on failure.
Type
bool
browserο
The browser to use, either βchromeβ or βfirefoxβ.
Type
str
binary_locationο
The location of the browser binary.
Type
Optional[str]
executable_pathο
The path to the browser executable.
Type
Optional[str]
headlessο
If True, the browser will run in headless mode.
Type
bool
argumentsο
List of arguments to pass to the browser.
Type
List[str]
load()[source]ο
Load the specified URLs using Selenium and create Document instances.
Returns
A list of Document instances with loaded content.
Return type
List[Document]
class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source]ο
Bases: langchain.document_loaders.web_base.WebBaseLoader
Loader that fetches a sitemap and loads those URLs.
Parameters
web_path (str) β
filter_urls (Optional[List[str]]) β
parsing_function (Optional[Callable]) β
blocksize (Optional[int]) β
blocknum (int) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
meta_function (Optional[Callable]) β
is_local (bool) β
parse_sitemap(soup)[source]ο
Parse sitemap xml and load into a list of dicts.
Parameters
soup (Any) β
Return type
List[dict]
load()[source]ο
Load sitemap.
Return type
List[langchain.schema.Document]
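A minimal usage sketch; the sitemap URL and the filter pattern are placeholders.
from langchain.document_loaders import SitemapLoader
loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",
    filter_urls=["https://example.com/blog/"],  # only load URLs matching these patterns
)
docs = loader.load()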
class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader for loading documents from a Slack directory dump.
Parameters
zip_path (str) β
workspace_url (Optional[str]) β
load()[source]ο
Load and return documents from the Slack directory dump.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from Snowflake into a list of documents.
Each document represents one row of the result. The page_content_columns
are written into the page_content of the document. The metadata_columns
are written into the metadata of the document. By default, all columns
are written into the page_content and none into the metadata.
Parameters
query (str) β
user (str) β
password (str) β
account (str) β
warehouse (str) β
role (str) β
database (str) β
schema (str) β
parameters (Optional[Dict[str, Any]]) β
page_content_columns (Optional[List[str]]) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
metadata_columns (Optional[List[str]]) β
lazy_load()[source]ο
A lazy loader for document content.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Spreedly API.
Parameters
access_token (str) β
resource (str) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.StripeLoader(resource, access_token=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that fetches data from Stripe.
Parameters
resource (str) β
access_token (Optional[str]) β
Return type
None
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TencentCOSDirectoryLoader(conf, bucket, prefix='')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for documents from Tencent Cloud COS.
Parameters
conf (Any) β
bucket (str) β
prefix (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Load documents.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.TencentCOSFileLoader(conf, bucket, key)[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Bases: langchain.document_loaders.base.BaseLoader
Loading logic for documents from Tencent Cloud COS.
Parameters
conf (Any) β
bucket (str) β
key (str) β
load()[source]ο
Load data into document objects.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Load documents.
Return type
Iterator[langchain.schema.Document]
class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
chat_entity (Optional[EntityLike]) β
api_id (Optional[int]) β
api_hash (Optional[str]) β
username (Optional[str]) β
file_path (str) β
async fetch_data_from_telegram()[source]ο
Fetch data from Telegram API and save it as a JSON file.
Return type
None
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TelegramChatFileLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Telegram chat json directory dump.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
langchain.document_loaders.TelegramChatLoaderο
alias of langchain.document_loaders.telegram.TelegramChatFileLoader
class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Load text files.
Parameters | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
file_path (str) β Path to the file to load.
encoding (Optional[str]) β File encoding to use. If None, the file will be loaded
with the default system encoding.
autodetect_encoding (bool) β Whether to try to autodetect the file encoding
if the specified encoding fails.
load()[source]ο
Load from file path.
Return type
List[langchain.schema.Document]
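A minimal usage sketch; the file path is a placeholder.
from langchain.document_loaders import TextLoader
# Fall back to encoding autodetection if the declared encoding fails
loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()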
class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads HTML to markdown using 2markdown.
Parameters
url (str) β
api_key (str) β
lazy_load()[source]ο
Lazily load the file.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.TomlLoader(source)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
A TOML document loader that inherits from the BaseLoader class.
This class can be initialized with either a single source file or a source
directory containing TOML files.
Parameters
source (Union[str, pathlib.Path]) β
load()[source]ο
Load and return all documents.
Return type
List[langchain.schema.Document]
lazy_load()[source]ο
Lazily load the TOML documents from the source file or directory.
Return type
Iterator[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Trello loader. Reads all cards from a Trello board.
Parameters
client (TrelloClient) β
board_name (str) β
include_card_name (bool) β
include_comments (bool) β
include_checklist (bool) β
card_filter (Literal['closed', 'open', 'all']) β
extra_metadata (Tuple[str, ...]) β
classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source]ο
Convenience constructor that builds TrelloClient init param for you.
Parameters
board_name (str) β The name of the Trello board.
api_key (Optional[str]) β Trello API key. Can also be specified as environment variable
TRELLO_API_KEY.
token (Optional[str]) β Trello token. Can also be specified as environment variable
TRELLO_TOKEN.
include_card_name β Whether to include the name of the card in the document.
include_comments β Whether to include the comments on the card in the
document.
include_checklist β Whether to include the checklist on the card in the
document.
card_filter β Filter on card status. Valid values are βclosedβ, βopenβ,
βallβ.
extra_metadata β List of additional metadata fields to include as document
metadata. Valid values are βdue_dateβ, βlabelsβ, βlistβ, βclosedβ.
kwargs (Any) β
Return type
langchain.document_loaders.trello.TrelloLoader | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
load()[source]ο
Loads all cards from the specified Trello board.
You can filter the cards, metadata and text included by using the optional
parameters.
Returns: A list of documents, one for each card in the board.
Return type
List[langchain.schema.Document]
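A minimal usage sketch using the from_credentials constructor above; the board name is a placeholder and the credentials are read from the TRELLO_API_KEY and TRELLO_TOKEN environment variables.
from langchain.document_loaders import TrelloLoader
loader = TrelloLoader.from_credentials(
    "My Board",                        # board name is illustrative
    card_filter="open",                # only include open cards
    extra_metadata=("due_date", "labels"),
)
docs = loader.load()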
class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Twitter tweets loader.
Read tweets of user twitter handle.
First you need to go to
https://developer.twitter.com/en/docs/twitter-api
/getting-started/getting-access-to-the-twitter-api
to get your token. And create a v2 version of the app.
Parameters
auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
load()[source]ο
Load tweets.
Return type
List[langchain.schema.Document]
classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from OAuth2 bearer token.
Parameters
oauth2_bearer_token (str) β
twitter_users (Sequence[str]) β
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source]ο
Create a TwitterTweetLoader from access tokens and secrets.
Parameters
access_token (str) β
access_token_secret (str) β
consumer_key (str) β
consumer_secret (str) β
twitter_users (Sequence[str]) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
number_tweets (Optional[int]) β
Return type
langchain.document_loaders.twitter.TwitterTweetLoader
class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader
Loader that uses the unstructured web API to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses the unstructured web API to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
url (str) β
api_key (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load CSV files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]ο | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load epub files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load email files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load Microsoft Excel files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load file IO objects.
Parameters
file (Union[IO, Sequence[IO]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader
Loader that uses unstructured to load files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load HTML files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load image files, such as PNGs and JPGs.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load markdown files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load open office ODT files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredOrgModeLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
Loader that uses unstructured to load Org-Mode files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load PDF files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load powerpoint files.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load RST files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load rtf files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses unstructured to load HTML files.
Parameters
urls (List[str]) β
continue_on_failure (bool) β
mode (str) β
show_progress_bar (bool) β
unstructured_kwargs (Any) β
load()[source]ο
Load file.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load word documents.
Parameters
file_path (Union[str, List[str]]) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source]ο
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
Loader that uses unstructured to load XML files.
Parameters
file_path (str) β
mode (str) β
unstructured_kwargs (Any) β
class langchain.document_loaders.WeatherDataLoader(client, places)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Weather Reader.
Reads the forecast & current weather of any location using OpenWeatherMapβs free
API. Check out βhttps://openweathermap.org/appidβ for more on how to generate a free
OpenWeatherMap API key.
Parameters
client (OpenWeatherMapAPIWrapper) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
places (Sequence[str]) β
Return type
None
classmethod from_params(places, *, openweathermap_api_key=None)[source]ο
Parameters
places (Sequence[str]) β
openweathermap_api_key (Optional[str]) β
Return type
langchain.document_loaders.weather.WeatherDataLoader
lazy_load()[source]ο
Lazily load weather data for the given locations.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load weather data for the given locations.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True, proxies=None)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that uses urllib and beautiful soup to load webpages.
Parameters
web_path (Union[str, List[str]]) β
header_template (Optional[dict]) β
verify (Optional[bool]) β
proxies (Optional[dict]) β
requests_per_second: int = 2ο
Max number of concurrent requests to make.
default_parser: str = 'html.parser'ο
Default parser to use for BeautifulSoup.
requests_kwargs: Dict[str, Any] = {}ο
kwargs for requests
raise_for_status: bool = Falseο
Raise an exception if http status code denotes an error.
bs_get_text_kwargs: Dict[str, Any] = {}ο
kwargs for beautifulsoup4 get_text
web_paths: List[str]ο
property web_path: strο
async fetch_all(urls)[source]ο
Fetch all urls concurrently with rate limiting.
Parameters
urls (List[str]) β
Return type
Any | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
scrape_all(urls, parser=None)[source]ο
Fetch all urls, then return soups for all results.
Parameters
urls (List[str]) β
parser (Optional[str]) β
Return type
List[Any]
scrape(parser=None)[source]ο
Scrape data from webpage and return it in BeautifulSoup format.
Parameters
parser (Optional[str]) β
Return type
Any
lazy_load()[source]ο
Lazy load text from the url(s) in web_path.
Return type
Iterator[langchain.schema.Document]
load()[source]ο
Load text from the url(s) in web_path.
Return type
List[langchain.schema.Document]
aload()[source]ο
Load text from the urls in web_path async into Documents.
Return type
List[langchain.schema.Document]
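A minimal usage sketch; the URLs are placeholders.
from langchain.document_loaders import WebBaseLoader
loader = WebBaseLoader(["https://example.com", "https://example.org"])
loader.requests_per_second = 1  # throttle concurrent requests
docs = loader.load()            # loader.aload() fetches the pages asynchronously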
class langchain.document_loaders.WhatsAppChatLoader(path)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads WhatsApp messages text file.
Parameters
path (str) β
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document]
class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loads a query result from www.wikipedia.org into a list of Documents.
The hard limit on the number of downloaded Documents is 300 for now.
Each wiki page represents one Document.
Parameters
query (str) β
lang (str) β
load_max_docs (Optional[int]) β
load_all_available_meta (Optional[bool]) β | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
doc_content_chars_max (Optional[int]) β
load()[source]ο
Loads the query result from Wikipedia into a list of Documents.
Returns
A list of Document objects representing the loaded Wikipedia pages.
Return type
List[Document]
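A minimal usage sketch; the query is illustrative and the wikipedia package is required.
from langchain.document_loaders import WikipediaLoader
loader = WikipediaLoader(query="Machine learning", lang="en", load_max_docs=2)
docs = loader.load()
print(docs[0].metadata)  # per-page metadata such as the title and source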
class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source]ο
Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader
Load YouTube urls as audio file(s).
Parameters
urls (List[str]) β
save_dir (str) β
yield_blobs()[source]ο
Yield audio blobs for each url.
Return type
Iterable[langchain.document_loaders.blob_loaders.schema.Blob]
class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source]ο
Bases: langchain.document_loaders.base.BaseLoader
Loader that loads Youtube transcripts.
Parameters
video_id (str) β
add_video_info (bool) β
language (Union[str, Sequence[str]]) β
translation (str) β
continue_on_failure (bool) β
static extract_video_id(youtube_url)[source]ο
Extract video id from common YT urls.
Parameters
youtube_url (str) β
Return type
str
classmethod from_youtube_url(youtube_url, **kwargs)[source]ο
Given youtube URL, load video.
Parameters
youtube_url (str) β
kwargs (Any) β
Return type
langchain.document_loaders.youtube.YoutubeLoader
load()[source]ο
Load documents.
Return type
List[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/document_loaders.html |
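A minimal usage sketch using the from_youtube_url constructor; the URL is a placeholder, the youtube-transcript-api package is needed for transcripts, and add_video_info pulls extra metadata (which typically requires pytube as well).
from langchain.document_loaders import YoutubeLoader
loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=XXXXXXXXXXX",  # URL is illustrative
    add_video_info=True,
)
docs = loader.load()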
Document Transformersο
Transform documents
langchain.document_transformers.get_stateful_documents(documents)[source]ο
Convert a list of documents to a list of documents with state.
Parameters
documents (Sequence[langchain.schema.Document]) β The documents to convert.
Returns
A list of documents with state.
Return type
Sequence[langchain.document_transformers._DocumentWithState]
class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings, similarity_fn=<function cosine_similarity>, similarity_threshold=0.95)[source]ο
Bases: langchain.schema.BaseDocumentTransformer, pydantic.main.BaseModel
Filter that drops redundant documents by comparing their embeddings.
Parameters
embeddings (langchain.embeddings.base.Embeddings) β
similarity_fn (Callable) β
similarity_threshold (float) β
Return type
None
attribute embeddings: langchain.embeddings.base.Embeddings [Required]ο
Embeddings to use for embedding document contents.
attribute similarity_fn: Callable = <function cosine_similarity>ο
Similarity function for comparing documents. Function expected to take as input
two matrices (List[List[float]]) and return a matrix of scores where higher values
indicate greater similarity.
attribute similarity_threshold: float = 0.95ο
Threshold for determining when two documents are similar enough
to be considered redundant.
async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a list of documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
transform_documents(documents, **kwargs)[source]ο
Filter down documents.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document] | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
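A minimal usage sketch, assuming an OpenAI API key is configured; any Embeddings implementation could be substituted, and the input documents are illustrative.
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document
docs = [Document(page_content="LangChain is a framework."), Document(page_content="LangChain is a framework!")]
redundant_filter = EmbeddingsRedundantFilter(embeddings=OpenAIEmbeddings(), similarity_threshold=0.95)
unique_docs = redundant_filter.transform_documents(docs)  # near-duplicates are dropped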
Text Splittersο
Functionality for splitting text.
class langchain.text_splitter.TextSplitter(chunk_size=4000, chunk_overlap=200, length_function=<built-in function len>, keep_separator=False, add_start_index=False)[source]ο
Bases: langchain.schema.BaseDocumentTransformer, abc.ABC
Interface for splitting text into chunks.
Parameters
chunk_size (int) β
chunk_overlap (int) β
length_function (Callable[[str], int]) β
keep_separator (bool) β
add_start_index (bool) β
Return type
None
abstract split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
create_documents(texts, metadatas=None)[source]ο
Create documents from a list of texts.
Parameters
texts (List[str]) β
metadatas (Optional[List[dict]]) β
Return type
List[langchain.schema.Document]
split_documents(documents)[source]ο
Split documents.
Parameters
documents (Iterable[langchain.schema.Document]) β
Return type
List[langchain.schema.Document]
classmethod from_huggingface_tokenizer(tokenizer, **kwargs)[source]ο
Text splitter that uses HuggingFace tokenizer to count length.
Parameters
tokenizer (Any) β
kwargs (Any) β
Return type
langchain.text_splitter.TextSplitter
classmethod from_tiktoken_encoder(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]ο
Text splitter that uses tiktoken encoder to count length.
Parameters
encoding_name (str) β
model_name (Optional[str]) β | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], typing.Collection[str]]) β
kwargs (Any) β
Return type
langchain.text_splitter.TS
transform_documents(documents, **kwargs)[source]ο
Transform sequence of documents by splitting them.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
async atransform_documents(documents, **kwargs)[source]ο
Asynchronously transform a sequence of documents by splitting them.
Parameters
documents (Sequence[langchain.schema.Document]) β
kwargs (Any) β
Return type
Sequence[langchain.schema.Document]
class langchain.text_splitter.CharacterTextSplitter(separator='\n\n', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at characters.
Parameters
separator (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str]
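A minimal usage sketch; the text and chunk sizes are illustrative.
from langchain.text_splitter import CharacterTextSplitter
long_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=40, chunk_overlap=0)
chunks = splitter.split_text(long_text)        # List[str]
docs = splitter.create_documents([long_text])  # List[Document] with the same chunking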
class langchain.text_splitter.LineType[source]ο
Bases: TypedDict
Line type as typed dict.
metadata: Dict[str, str]ο
content: strο
class langchain.text_splitter.HeaderType[source]ο
Bases: TypedDict
Header type as typed dict.
level: intο
name: strο
data: strο
class langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on, return_each_line=False)[source]ο
Bases: object | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
Implementation of splitting markdown files based on specified headers.
Parameters
headers_to_split_on (List[Tuple[str, str]]) β
return_each_line (bool) β
aggregate_lines_to_chunks(lines)[source]ο
Combine lines with common metadata into chunks
:param lines: Line of text / associated header metadata
Parameters
lines (List[langchain.text_splitter.LineType]) β
Return type
List[langchain.schema.Document]
split_text(text)[source]ο
Split markdown file
:param text: Markdown file
Parameters
text (str) β
Return type
List[langchain.schema.Document]
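A minimal usage sketch; the header mapping and markdown text are illustrative.
from langchain.text_splitter import MarkdownHeaderTextSplitter
markdown_text = "# Title\n\nIntro text.\n\n## Section\n\nSection body."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = splitter.split_text(markdown_text)
print(docs[-1].metadata)  # the matched headers end up in each chunk's metadata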
class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source]ο
Bases: object
Parameters
chunk_overlap (int) β
tokens_per_chunk (int) β
decode (Callable[[list[int]], str]) β
encode (Callable[[str], List[int]]) β
Return type
None
chunk_overlap: intο
tokens_per_chunk: intο
decode: Callable[[list[int]], str]ο
encode: Callable[[str], List[int]]ο
langchain.text_splitter.split_text_on_tokens(*, text, tokenizer)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
tokenizer (langchain.text_splitter.Tokenizer) β
Return type
List[str]
class langchain.text_splitter.TokenTextSplitter(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at tokens.
Parameters | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
encoding_name (str) β
model_name (Optional[str]) β
allowed_special (Union[Literal['all'], AbstractSet[str]]) β
disallowed_special (Union[Literal['all'], Collection[str]]) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap=50, model_name='sentence-transformers/all-mpnet-base-v2', tokens_per_chunk=None, **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at tokens.
Parameters
chunk_overlap (int) β
model_name (str) β
tokens_per_chunk (Optional[int]) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
count_tokens(*, text)[source]ο
Parameters
text (str) β
Return type
int
class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source]ο
Bases: str, enum.Enum
CPP = 'cpp'ο
GO = 'go'ο
JAVA = 'java'ο
JS = 'js'ο
PHP = 'php'ο
PROTO = 'proto'ο
PYTHON = 'python'ο
RST = 'rst'ο
RUBY = 'ruby'ο
RUST = 'rust'ο | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
SCALA = 'scala'ο
SWIFT = 'swift'ο
MARKDOWN = 'markdown'ο
LATEX = 'latex'ο
HTML = 'html'ο
SOL = 'sol'ο
class langchain.text_splitter.RecursiveCharacterTextSplitter(separators=None, keep_separator=True, **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
Parameters
separators (Optional[List[str]]) β
keep_separator (bool) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split text into multiple components.
Parameters
text (str) β
Return type
List[str]
classmethod from_language(language, **kwargs)[source]ο
Parameters
language (langchain.text_splitter.Language) β
kwargs (Any) β
Return type
langchain.text_splitter.RecursiveCharacterTextSplitter
static get_separators_for_language(language)[source]ο
Parameters
language (langchain.text_splitter.Language) β
Return type
List[str]
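A minimal usage sketch using the from_language constructor above; the code snippet and chunk sizes are illustrative.
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter
python_code = "def hello():\n    print('hello')\n\nclass Greeter:\n    pass\n"
splitter = RecursiveCharacterTextSplitter.from_language(Language.PYTHON, chunk_size=60, chunk_overlap=0)
chunks = splitter.split_text(python_code)  # splits along Python-aware separators first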
class langchain.text_splitter.NLTKTextSplitter(separator='\n\n', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at sentences using NLTK.
Parameters
separator (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str] | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
class langchain.text_splitter.SpacyTextSplitter(separator='\n\n', pipeline='en_core_web_sm', **kwargs)[source]ο
Bases: langchain.text_splitter.TextSplitter
Implementation of splitting text that looks at sentences using Spacy.
Parameters
separator (str) β
pipeline (str) β
kwargs (Any) β
Return type
None
split_text(text)[source]ο
Split incoming text and return chunks.
Parameters
text (str) β
Return type
List[str]
class langchain.text_splitter.PythonCodeTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Python syntax.
Parameters
kwargs (Any) β
Return type
None
class langchain.text_splitter.MarkdownTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Markdown-formatted headings.
Parameters
kwargs (Any) β
Return type
None
class langchain.text_splitter.LatexTextSplitter(**kwargs)[source]ο
Bases: langchain.text_splitter.RecursiveCharacterTextSplitter
Attempts to split the text along Latex-formatted layout elements.
Parameters
kwargs (Any) β
Return type
None | https://api.python.langchain.com/en/latest/modules/document_transformers.html |
All modules for which code is available
langchain.agents.agent
langchain.agents.agent_toolkits.azure_cognitive_services.toolkit
langchain.agents.agent_toolkits.csv.base
langchain.agents.agent_toolkits.file_management.toolkit
langchain.agents.agent_toolkits.gmail.toolkit
langchain.agents.agent_toolkits.jira.toolkit
langchain.agents.agent_toolkits.json.base
langchain.agents.agent_toolkits.json.toolkit
langchain.agents.agent_toolkits.nla.toolkit
langchain.agents.agent_toolkits.openapi.base
langchain.agents.agent_toolkits.openapi.toolkit
langchain.agents.agent_toolkits.pandas.base
langchain.agents.agent_toolkits.playwright.toolkit
langchain.agents.agent_toolkits.powerbi.base
langchain.agents.agent_toolkits.powerbi.chat_base
langchain.agents.agent_toolkits.powerbi.toolkit
langchain.agents.agent_toolkits.python.base
langchain.agents.agent_toolkits.spark.base
langchain.agents.agent_toolkits.spark_sql.base
langchain.agents.agent_toolkits.spark_sql.toolkit
langchain.agents.agent_toolkits.sql.base
langchain.agents.agent_toolkits.sql.toolkit
langchain.agents.agent_toolkits.vectorstore.base
langchain.agents.agent_toolkits.vectorstore.toolkit
langchain.agents.agent_toolkits.zapier.toolkit
langchain.agents.agent_types
langchain.agents.conversational.base
langchain.agents.conversational_chat.base
langchain.agents.initialize
langchain.agents.load_tools
langchain.agents.loading
langchain.agents.mrkl.base
langchain.agents.openai_functions_agent.base
langchain.agents.react.base
langchain.agents.self_ask_with_search.base
langchain.agents.structured_chat.base
langchain.callbacks.aim_callback
langchain.callbacks.argilla_callback
langchain.callbacks.arize_callback | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.callbacks.clearml_callback
langchain.callbacks.comet_ml_callback
langchain.callbacks.file
langchain.callbacks.human
langchain.callbacks.infino_callback
langchain.callbacks.manager
langchain.callbacks.mlflow_callback
langchain.callbacks.openai_info
langchain.callbacks.stdout
langchain.callbacks.streaming_aiter
langchain.callbacks.streaming_stdout
langchain.callbacks.streaming_stdout_final_only
langchain.callbacks.streamlit
langchain.callbacks.streamlit.streamlit_callback_handler
langchain.callbacks.wandb_callback
langchain.callbacks.whylabs_callback
langchain.chains.api.base
langchain.chains.api.openapi.chain
langchain.chains.combine_documents.base
langchain.chains.combine_documents.map_reduce
langchain.chains.combine_documents.map_rerank
langchain.chains.combine_documents.refine
langchain.chains.combine_documents.stuff
langchain.chains.constitutional_ai.base
langchain.chains.conversation.base
langchain.chains.conversational_retrieval.base
langchain.chains.flare.base
langchain.chains.graph_qa.base
langchain.chains.graph_qa.cypher
langchain.chains.graph_qa.kuzu
langchain.chains.graph_qa.nebulagraph
langchain.chains.hyde.base
langchain.chains.llm
langchain.chains.llm_bash.base
langchain.chains.llm_checker.base
langchain.chains.llm_math.base
langchain.chains.llm_requests
langchain.chains.llm_summarization_checker.base
langchain.chains.loading
langchain.chains.mapreduce
langchain.chains.moderation
langchain.chains.natbot.base
langchain.chains.openai_functions.citation_fuzzy_match
langchain.chains.openai_functions.extraction
langchain.chains.openai_functions.qa_with_structure | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.chains.openai_functions.tagging
langchain.chains.pal.base
langchain.chains.qa_generation.base
langchain.chains.qa_with_sources.base
langchain.chains.qa_with_sources.retrieval
langchain.chains.qa_with_sources.vector_db
langchain.chains.retrieval_qa.base
langchain.chains.router.base
langchain.chains.router.llm_router
langchain.chains.router.multi_prompt
langchain.chains.router.multi_retrieval_qa
langchain.chains.sequential
langchain.chains.sql_database.base
langchain.chains.transform
langchain.chat_models.anthropic
langchain.chat_models.azure_openai
langchain.chat_models.fake
langchain.chat_models.google_palm
langchain.chat_models.openai
langchain.chat_models.promptlayer_openai
langchain.chat_models.vertexai
langchain.document_loaders.acreom
langchain.document_loaders.airbyte_json
langchain.document_loaders.airtable
langchain.document_loaders.apify_dataset
langchain.document_loaders.arxiv
langchain.document_loaders.azlyrics
langchain.document_loaders.azure_blob_storage_container
langchain.document_loaders.azure_blob_storage_file
langchain.document_loaders.bibtex
langchain.document_loaders.bigquery
langchain.document_loaders.bilibili
langchain.document_loaders.blackboard
langchain.document_loaders.blob_loaders.file_system
langchain.document_loaders.blob_loaders.schema
langchain.document_loaders.blob_loaders.youtube_audio
langchain.document_loaders.blockchain
langchain.document_loaders.chatgpt
langchain.document_loaders.college_confidential
langchain.document_loaders.confluence
langchain.document_loaders.conllu
langchain.document_loaders.csv_loader
langchain.document_loaders.dataframe
langchain.document_loaders.diffbot | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.document_loaders.directory
langchain.document_loaders.discord
langchain.document_loaders.docugami
langchain.document_loaders.duckdb_loader
langchain.document_loaders.email
langchain.document_loaders.embaas
langchain.document_loaders.epub
langchain.document_loaders.evernote
langchain.document_loaders.excel
langchain.document_loaders.facebook_chat
langchain.document_loaders.fauna
langchain.document_loaders.figma
langchain.document_loaders.gcs_directory
langchain.document_loaders.gcs_file
langchain.document_loaders.git
langchain.document_loaders.gitbook
langchain.document_loaders.github
langchain.document_loaders.googledrive
langchain.document_loaders.gutenberg
langchain.document_loaders.hn
langchain.document_loaders.html
langchain.document_loaders.html_bs
langchain.document_loaders.hugging_face_dataset
langchain.document_loaders.ifixit
langchain.document_loaders.image
langchain.document_loaders.image_captions
langchain.document_loaders.imsdb
langchain.document_loaders.iugu
langchain.document_loaders.joplin
langchain.document_loaders.json_loader
langchain.document_loaders.larksuite
langchain.document_loaders.markdown
langchain.document_loaders.mastodon
langchain.document_loaders.max_compute
langchain.document_loaders.mediawikidump
langchain.document_loaders.merge
langchain.document_loaders.mhtml
langchain.document_loaders.modern_treasury
langchain.document_loaders.notebook
langchain.document_loaders.notion
langchain.document_loaders.notiondb
langchain.document_loaders.obsidian
langchain.document_loaders.odt
langchain.document_loaders.onedrive
langchain.document_loaders.onedrive_file | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.document_loaders.open_city_data
langchain.document_loaders.org_mode
langchain.document_loaders.pdf
langchain.document_loaders.powerpoint
langchain.document_loaders.psychic
langchain.document_loaders.pyspark_dataframe
langchain.document_loaders.python
langchain.document_loaders.readthedocs
langchain.document_loaders.recursive_url_loader
langchain.document_loaders.reddit
langchain.document_loaders.roam
langchain.document_loaders.rst
langchain.document_loaders.rtf
langchain.document_loaders.s3_directory
langchain.document_loaders.s3_file
langchain.document_loaders.sitemap
langchain.document_loaders.slack_directory
langchain.document_loaders.snowflake_loader
langchain.document_loaders.spreedly
langchain.document_loaders.srt
langchain.document_loaders.stripe
langchain.document_loaders.telegram
langchain.document_loaders.tencent_cos_directory
langchain.document_loaders.tencent_cos_file
langchain.document_loaders.text
langchain.document_loaders.tomarkdown
langchain.document_loaders.toml
langchain.document_loaders.trello
langchain.document_loaders.twitter
langchain.document_loaders.unstructured
langchain.document_loaders.url
langchain.document_loaders.url_playwright
langchain.document_loaders.url_selenium
langchain.document_loaders.weather
langchain.document_loaders.web_base
langchain.document_loaders.whatsapp_chat
langchain.document_loaders.wikipedia
langchain.document_loaders.word_document
langchain.document_loaders.xml
langchain.document_loaders.youtube
langchain.document_transformers
langchain.embeddings.aleph_alpha
langchain.embeddings.bedrock
langchain.embeddings.cohere
langchain.embeddings.dashscope
langchain.embeddings.deepinfra
langchain.embeddings.elasticsearch | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.embeddings.embaas
langchain.embeddings.fake
langchain.embeddings.huggingface
langchain.embeddings.huggingface_hub
langchain.embeddings.llamacpp
langchain.embeddings.minimax
langchain.embeddings.modelscope_hub
langchain.embeddings.mosaicml
langchain.embeddings.openai
langchain.embeddings.sagemaker_endpoint
langchain.embeddings.self_hosted
langchain.embeddings.self_hosted_hugging_face
langchain.embeddings.tensorflow_hub
langchain.experimental.autonomous_agents.autogpt.agent
langchain.experimental.autonomous_agents.baby_agi.baby_agi
langchain.experimental.generative_agents.generative_agent
langchain.experimental.generative_agents.memory
langchain.llms.ai21
langchain.llms.aleph_alpha
langchain.llms.amazon_api_gateway
langchain.llms.anthropic
langchain.llms.anyscale
langchain.llms.aviary
langchain.llms.azureml_endpoint
langchain.llms.bananadev
langchain.llms.baseten
langchain.llms.beam
langchain.llms.bedrock
langchain.llms.cerebriumai
langchain.llms.clarifai
langchain.llms.cohere
langchain.llms.ctransformers
langchain.llms.databricks
langchain.llms.deepinfra
langchain.llms.fake
langchain.llms.forefrontai
langchain.llms.google_palm
langchain.llms.gooseai
langchain.llms.gpt4all
langchain.llms.huggingface_endpoint
langchain.llms.huggingface_hub
langchain.llms.huggingface_pipeline
langchain.llms.huggingface_text_gen_inference
langchain.llms.human
langchain.llms.llamacpp
langchain.llms.manifest | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.llms.modal
langchain.llms.mosaicml
langchain.llms.nlpcloud
langchain.llms.octoai_endpoint
langchain.llms.openai
langchain.llms.openllm
langchain.llms.openlm
langchain.llms.petals
langchain.llms.pipelineai
langchain.llms.predictionguard
langchain.llms.promptlayer_openai
langchain.llms.replicate
langchain.llms.rwkv
langchain.llms.sagemaker_endpoint
langchain.llms.self_hosted
langchain.llms.self_hosted_hugging_face
langchain.llms.stochasticai
langchain.llms.textgen
langchain.llms.vertexai
langchain.llms.writer
langchain.memory.buffer
langchain.memory.buffer_window
langchain.memory.chat_message_histories.cassandra
langchain.memory.chat_message_histories.cosmos_db
langchain.memory.chat_message_histories.dynamodb
langchain.memory.chat_message_histories.file
langchain.memory.chat_message_histories.in_memory
langchain.memory.chat_message_histories.momento
langchain.memory.chat_message_histories.mongodb
langchain.memory.chat_message_histories.postgres
langchain.memory.chat_message_histories.redis
langchain.memory.chat_message_histories.sql
langchain.memory.chat_message_histories.zep
langchain.memory.combined
langchain.memory.entity
langchain.memory.kg
langchain.memory.motorhead_memory
langchain.memory.readonly
langchain.memory.simple
langchain.memory.summary
langchain.memory.summary_buffer
langchain.memory.token_buffer
langchain.memory.vectorstore
langchain.output_parsers.boolean
langchain.output_parsers.combining
langchain.output_parsers.datetime
langchain.output_parsers.enum
langchain.output_parsers.fix
langchain.output_parsers.list | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.output_parsers.pydantic
langchain.output_parsers.rail_parser
langchain.output_parsers.regex
langchain.output_parsers.regex_dict
langchain.output_parsers.retry
langchain.output_parsers.structured
langchain.prompts.base
langchain.prompts.chat
langchain.prompts.example_selector.length_based
langchain.prompts.example_selector.ngram_overlap
langchain.prompts.example_selector.semantic_similarity
langchain.prompts.few_shot
langchain.prompts.few_shot_with_templates
langchain.prompts.loading
langchain.prompts.pipeline
langchain.prompts.prompt
langchain.requests
langchain.retrievers.arxiv
langchain.retrievers.azure_cognitive_search
langchain.retrievers.chatgpt_plugin_retriever
langchain.retrievers.contextual_compression
langchain.retrievers.databerry
langchain.retrievers.docarray
langchain.retrievers.document_compressors.base
langchain.retrievers.document_compressors.chain_extract
langchain.retrievers.document_compressors.chain_filter
langchain.retrievers.document_compressors.cohere_rerank
langchain.retrievers.document_compressors.embeddings_filter
langchain.retrievers.elastic_search_bm25
langchain.retrievers.kendra
langchain.retrievers.knn
langchain.retrievers.llama_index
langchain.retrievers.merger_retriever
langchain.retrievers.metal
langchain.retrievers.milvus
langchain.retrievers.multi_query
langchain.retrievers.pinecone_hybrid_search
langchain.retrievers.pupmed
langchain.retrievers.remote_retriever
langchain.retrievers.self_query.base
langchain.retrievers.svm
langchain.retrievers.tfidf
langchain.retrievers.time_weighted_retriever | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.retrievers.vespa_retriever
langchain.retrievers.weaviate_hybrid_search
langchain.retrievers.wikipedia
langchain.retrievers.zep
langchain.retrievers.zilliz
langchain.schema
langchain.text_splitter
langchain.tools.arxiv.tool
langchain.tools.azure_cognitive_services.form_recognizer
langchain.tools.azure_cognitive_services.image_analysis
langchain.tools.azure_cognitive_services.speech2text
langchain.tools.azure_cognitive_services.text2speech
langchain.tools.base
langchain.tools.bing_search.tool
langchain.tools.brave_search.tool
langchain.tools.convert_to_openai
langchain.tools.ddg_search.tool
langchain.tools.file_management.copy
langchain.tools.file_management.delete
langchain.tools.file_management.file_search
langchain.tools.file_management.list_dir
langchain.tools.file_management.move
langchain.tools.file_management.read
langchain.tools.file_management.write
langchain.tools.gmail.create_draft
langchain.tools.gmail.get_message
langchain.tools.gmail.get_thread
langchain.tools.gmail.search
langchain.tools.gmail.send_message
langchain.tools.google_places.tool
langchain.tools.google_search.tool
langchain.tools.google_serper.tool
langchain.tools.graphql.tool
langchain.tools.human.tool
langchain.tools.ifttt
langchain.tools.interaction.tool
langchain.tools.jira.tool
langchain.tools.json.tool
langchain.tools.metaphor_search.tool
langchain.tools.openapi.utils.api_models
langchain.tools.openweathermap.tool
langchain.tools.playwright.click
langchain.tools.playwright.current_page
langchain.tools.playwright.extract_hyperlinks
langchain.tools.playwright.extract_text
langchain.tools.playwright.get_elements
langchain.tools.playwright.navigate
langchain.tools.playwright.navigate_back
langchain.tools.plugin | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.tools.powerbi.tool
langchain.tools.pubmed.tool
langchain.tools.python.tool
langchain.tools.requests.tool
langchain.tools.scenexplain.tool
langchain.tools.searx_search.tool
langchain.tools.shell.tool
langchain.tools.sleep.tool
langchain.tools.spark_sql.tool
langchain.tools.sql_database.tool
langchain.tools.steamship_image_generation.tool
langchain.tools.vectorstore.tool
langchain.tools.wikipedia.tool
langchain.tools.wolfram_alpha.tool
langchain.tools.youtube.search
langchain.tools.zapier.tool
langchain.utilities.apify
langchain.utilities.arxiv
langchain.utilities.awslambda
langchain.utilities.bash
langchain.utilities.bibtex
langchain.utilities.bing_search
langchain.utilities.brave_search
langchain.utilities.duckduckgo_search
langchain.utilities.google_places_api
langchain.utilities.google_search
langchain.utilities.google_serper
langchain.utilities.graphql
langchain.utilities.jira
langchain.utilities.max_compute
langchain.utilities.metaphor_search
langchain.utilities.openapi
langchain.utilities.openweathermap
langchain.utilities.powerbi
langchain.utilities.pupmed
langchain.utilities.python
langchain.utilities.scenexplain
langchain.utilities.searx_search
langchain.utilities.serpapi
langchain.utilities.spark_sql
langchain.utilities.twilio
langchain.utilities.wikipedia
langchain.utilities.wolfram_alpha
langchain.utilities.zapier
langchain.vectorstores.alibabacloud_opensearch
langchain.vectorstores.analyticdb
langchain.vectorstores.annoy
langchain.vectorstores.atlas
langchain.vectorstores.awadb
langchain.vectorstores.azuresearch
langchain.vectorstores.base
langchain.vectorstores.cassandra
langchain.vectorstores.chroma
langchain.vectorstores.clarifai | https://api.python.langchain.com/en/latest/_modules/index.html |
langchain.vectorstores.clickhouse
langchain.vectorstores.deeplake
langchain.vectorstores.docarray.hnsw
langchain.vectorstores.docarray.in_memory
langchain.vectorstores.elastic_vector_search
langchain.vectorstores.faiss
langchain.vectorstores.hologres
langchain.vectorstores.lancedb
langchain.vectorstores.matching_engine
langchain.vectorstores.milvus
langchain.vectorstores.mongodb_atlas
langchain.vectorstores.myscale
langchain.vectorstores.opensearch_vector_search
langchain.vectorstores.pinecone
langchain.vectorstores.qdrant
langchain.vectorstores.redis
langchain.vectorstores.rocksetdb
langchain.vectorstores.singlestoredb
langchain.vectorstores.sklearn
langchain.vectorstores.starrocks
langchain.vectorstores.supabase
langchain.vectorstores.tair
langchain.vectorstores.tigris
langchain.vectorstores.typesense
langchain.vectorstores.vectara
langchain.vectorstores.weaviate
langchain.vectorstores.zilliz
pydantic.config
pydantic.main | https://api.python.langchain.com/en/latest/_modules/index.html |
Source code for langchain.text_splitter
"""Functionality for splitting text."""
from __future__ import annotations
import copy
import logging
import re
from abc import ABC, abstractmethod
from dataclasses import dataclass
from enum import Enum
from typing import (
AbstractSet,
Any,
Callable,
Collection,
Dict,
Iterable,
List,
Literal,
Optional,
Sequence,
Tuple,
Type,
TypedDict,
TypeVar,
Union,
cast,
)
from langchain.docstore.document import Document
from langchain.schema import BaseDocumentTransformer
logger = logging.getLogger(__name__)
TS = TypeVar("TS", bound="TextSplitter")
def _split_text_with_regex(
text: str, separator: str, keep_separator: bool
) -> List[str]:
# Now that we have the separator, split the text
if separator:
if keep_separator:
# The parentheses in the pattern keep the delimiters in the result.
_splits = re.split(f"({separator})", text)
splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)]
if len(_splits) % 2 == 0:
splits += _splits[-1:]
splits = [_splits[0]] + splits
else:
splits = text.split(separator)
else:
splits = list(text)
return [s for s in splits if s != ""]
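# Illustrative example (not part of the library source): splitting on the
# literal separator "\n\n".
#   _split_text_with_regex("foo\n\nbar\n\nbaz", "\n\n", keep_separator=False)
#       returns ["foo", "bar", "baz"]
#   _split_text_with_regex("foo\n\nbar\n\nbaz", "\n\n", keep_separator=True)
#       returns ["foo", "\n\nbar", "\n\nbaz"]
# With keep_separator=True each separator stays attached to the front of the
# piece that follows it, so no characters are lost when chunks are later merged.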
[docs]class TextSplitter(BaseDocumentTransformer, ABC):
"""Interface for splitting text into chunks."""
def __init__(
self, | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
chunk_size: int = 4000,
chunk_overlap: int = 200,
length_function: Callable[[str], int] = len,
keep_separator: bool = False,
add_start_index: bool = False,
) -> None:
"""Create a new TextSplitter.
Args:
chunk_size: Maximum size of chunks to return
chunk_overlap: Overlap in characters between chunks
length_function: Function that measures the length of given chunks
keep_separator: Whether or not to keep the separator in the chunks
add_start_index: If `True`, includes chunk's start index in metadata
"""
if chunk_overlap > chunk_size:
raise ValueError(
f"Got a larger chunk overlap ({chunk_overlap}) than chunk size "
f"({chunk_size}), should be smaller."
)
self._chunk_size = chunk_size
self._chunk_overlap = chunk_overlap
self._length_function = length_function
self._keep_separator = keep_separator
self._add_start_index = add_start_index
[docs] @abstractmethod
def split_text(self, text: str) -> List[str]:
"""Split text into multiple components."""
[docs] def create_documents(
self, texts: List[str], metadatas: Optional[List[dict]] = None
) -> List[Document]:
"""Create documents from a list of texts."""
_metadatas = metadatas or [{}] * len(texts)
documents = []
for i, text in enumerate(texts):
index = -1
for chunk in self.split_text(text):
metadata = copy.deepcopy(_metadatas[i]) | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
if self._add_start_index:
index = text.find(chunk, index + 1)
metadata["start_index"] = index
new_doc = Document(page_content=chunk, metadata=metadata)
documents.append(new_doc)
return documents
[docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]:
"""Split documents."""
texts, metadatas = [], []
for doc in documents:
texts.append(doc.page_content)
metadatas.append(doc.metadata)
return self.create_documents(texts, metadatas=metadatas)
def _join_docs(self, docs: List[str], separator: str) -> Optional[str]:
text = separator.join(docs)
text = text.strip()
if text == "":
return None
else:
return text
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
# We now want to combine these smaller pieces into medium size
# chunks to send to the LLM.
separator_len = self._length_function(separator)
docs = []
current_doc: List[str] = []
total = 0
for d in splits:
_len = self._length_function(d)
if (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
):
if total > self._chunk_size:
logger.warning(
f"Created a chunk of size {total}, "
f"which is longer than the specified {self._chunk_size}"
)
if len(current_doc) > 0: | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
# Keep on popping if:
# - we have a larger chunk than in the chunk overlap
# - or if we still have any chunks and the length is long
while total > self._chunk_overlap or (
total + _len + (separator_len if len(current_doc) > 0 else 0)
> self._chunk_size
and total > 0
):
total -= self._length_function(current_doc[0]) + (
separator_len if len(current_doc) > 1 else 0
)
current_doc = current_doc[1:]
current_doc.append(d)
total += _len + (separator_len if len(current_doc) > 1 else 0)
doc = self._join_docs(current_doc, separator)
if doc is not None:
docs.append(doc)
return docs
[docs] @classmethod
def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter:
"""Text splitter that uses HuggingFace tokenizer to count length."""
try:
from transformers import PreTrainedTokenizerBase
if not isinstance(tokenizer, PreTrainedTokenizerBase):
raise ValueError(
"Tokenizer received was not an instance of PreTrainedTokenizerBase"
)
def _huggingface_tokenizer_length(text: str) -> int:
return len(tokenizer.encode(text))
except ImportError:
raise ValueError(
"Could not import transformers python package. "
"Please install it with `pip install transformers`."
) | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
return cls(length_function=_huggingface_tokenizer_length, **kwargs)
[docs] @classmethod
def from_tiktoken_encoder(
cls: Type[TS],
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> TS:
"""Text splitter that uses tiktoken encoder to count length."""
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to calculate max_tokens_for_prompt. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
def _tiktoken_encoder(text: str) -> int:
return len(
enc.encode(
text,
allowed_special=allowed_special,
disallowed_special=disallowed_special,
)
)
if issubclass(cls, TokenTextSplitter):
extra_kwargs = {
"encoding_name": encoding_name,
"model_name": model_name,
"allowed_special": allowed_special,
"disallowed_special": disallowed_special,
}
kwargs = {**kwargs, **extra_kwargs}
return cls(length_function=_tiktoken_encoder, **kwargs)
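# Illustrative usage sketch (not part of the library source):
#     splitter = CharacterTextSplitter.from_tiktoken_encoder(
#         encoding_name="gpt2", chunk_size=100, chunk_overlap=0
#     )
# Here chunk_size is measured in tiktoken tokens rather than characters, and
# the optional `tiktoken` dependency must be installed.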
[docs] def transform_documents( | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Transform sequence of documents by splitting them."""
return self.split_documents(list(documents))
[docs] async def atransform_documents(
self, documents: Sequence[Document], **kwargs: Any
) -> Sequence[Document]:
"""Asynchronously transform a sequence of documents by splitting them."""
raise NotImplementedError
[docs]class CharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters."""
def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
self._separator = separator
[docs] def split_text(self, text: str) -> List[str]:
"""Split incoming text and return chunks."""
# First we naively split the large input into a bunch of smaller ones.
splits = _split_text_with_regex(text, self._separator, self._keep_separator)
_separator = "" if self._keep_separator else self._separator
return self._merge_splits(splits, _separator)
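# Illustrative usage sketch (not part of the library source); the chunk sizes,
# sample text, and metadata below are arbitrary.
def _example_character_text_splitter() -> List[Document]:
    splitter = CharacterTextSplitter(
        separator="\n\n", chunk_size=100, chunk_overlap=20, add_start_index=True
    )
    # Each returned Document keeps the supplied metadata plus a "start_index" key.
    return splitter.create_documents(
        ["First paragraph.\n\nSecond paragraph.\n\nThird paragraph."],
        metadatas=[{"source": "example.txt"}],
    )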
[docs]class LineType(TypedDict):
"""Line type as typed dict."""
metadata: Dict[str, str]
content: str
[docs]class HeaderType(TypedDict):
"""Header type as typed dict."""
level: int
name: str
data: str
[docs]class MarkdownHeaderTextSplitter:
"""Implementation of splitting markdown files based on specified headers."""
def __init__(
self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False
): | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
"""Create a new MarkdownHeaderTextSplitter.
Args:
headers_to_split_on: Headers we want to track
return_each_line: Return each line w/ associated headers
"""
# Output line-by-line or aggregated into chunks w/ common headers
self.return_each_line = return_each_line
# Given the headers we want to split on,
# (e.g., "#, ##, etc") order by length
self.headers_to_split_on = sorted(
headers_to_split_on, key=lambda split: len(split[0]), reverse=True
)
[docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]:
"""Combine lines with common metadata into chunks
Args:
lines: Line of text / associated header metadata
"""
aggregated_chunks: List[LineType] = []
for line in lines:
if (
aggregated_chunks
and aggregated_chunks[-1]["metadata"] == line["metadata"]
):
# If the last line in the aggregated list
# has the same metadata as the current line,
# append the current content to the last line's content
aggregated_chunks[-1]["content"] += " \n" + line["content"]
else:
# Otherwise, append the current line to the aggregated list
aggregated_chunks.append(line)
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in aggregated_chunks
]
[docs] def split_text(self, text: str) -> List[Document]:
"""Split markdown file
Args:
text: Markdown file"""
# Split the input text by newline character ("\n").
lines = text.split("\n")
# Final output | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
lines_with_metadata: List[LineType] = []
# Content and metadata of the chunk currently being processed
current_content: List[str] = []
current_metadata: Dict[str, str] = {}
# Keep track of the nested header structure
# header_stack: List[Dict[str, Union[int, str]]] = []
header_stack: List[HeaderType] = []
initial_metadata: Dict[str, str] = {}
for line in lines:
stripped_line = line.strip()
# Check each line against each of the header types (e.g., #, ##)
for sep, name in self.headers_to_split_on:
# Check if line starts with a header that we intend to split on
if stripped_line.startswith(sep) and (
# Header with no text OR header is followed by space
# Both are valid conditions that sep is being used as a header
len(stripped_line) == len(sep)
or stripped_line[len(sep)] == " "
):
# Ensure we are tracking the header as metadata
if name is not None:
# Get the current header level
current_header_level = sep.count("#")
# Pop out headers of lower or same level from the stack
while (
header_stack
and header_stack[-1]["level"] >= current_header_level
):
# We have encountered a new header
# at the same or higher level
popped_header = header_stack.pop()
# Clear the metadata for the
# popped header in initial_metadata
if popped_header["name"] in initial_metadata:
initial_metadata.pop(popped_header["name"])
# Push the current header to the stack | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
header: HeaderType = {
"level": current_header_level,
"name": name,
"data": stripped_line[len(sep) :].strip(),
}
header_stack.append(header)
# Update initial_metadata with the current header
initial_metadata[name] = header["data"]
# Add the previous line to the lines_with_metadata
# only if current_content is not empty
if current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
break
else:
if stripped_line:
current_content.append(stripped_line)
elif current_content:
lines_with_metadata.append(
{
"content": "\n".join(current_content),
"metadata": current_metadata.copy(),
}
)
current_content.clear()
current_metadata = initial_metadata.copy()
if current_content:
lines_with_metadata.append(
{"content": "\n".join(current_content), "metadata": current_metadata}
)
# lines_with_metadata has each line with associated header metadata
# aggregate these into chunks based on common metadata
if not self.return_each_line:
return self.aggregate_lines_to_chunks(lines_with_metadata)
else:
return [
Document(page_content=chunk["content"], metadata=chunk["metadata"])
for chunk in lines_with_metadata
]
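# Illustrative usage sketch (not part of the library source); the header map
# and sample markdown below are arbitrary.
def _example_markdown_header_text_splitter() -> List[Document]:
    headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]
    splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
    # Each returned Document's metadata records the headers it falls under,
    # e.g. {"Header 1": "Intro", "Header 2": "Details"}.
    return splitter.split_text("# Intro\n\nSome text.\n\n## Details\n\nMore text.")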
# should be in newer Python versions (3.10+)
# @dataclass(frozen=True, kw_only=True, slots=True)
[docs]@dataclass(frozen=True)
class Tokenizer:
chunk_overlap: int | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
tokens_per_chunk: int
decode: Callable[[list[int]], str]
encode: Callable[[str], List[int]]
[docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]:
"""Split incoming text and return chunks."""
splits: List[str] = []
input_ids = tokenizer.encode(text)
start_idx = 0
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
while start_idx < len(input_ids):
splits.append(tokenizer.decode(chunk_ids))
start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap
cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids))
chunk_ids = input_ids[start_idx:cur_idx]
return splits
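# Illustrative example (not part of the library source): a toy character-level
# "tokenizer" showing the sliding-window behaviour of split_text_on_tokens.
def _example_split_text_on_tokens() -> List[str]:
    toy_tokenizer = Tokenizer(
        chunk_overlap=2,
        tokens_per_chunk=10,
        decode=lambda ids: "".join(chr(i) for i in ids),
        encode=lambda s: [ord(c) for c in s],
    )
    # Windows of 10 "tokens" (characters here) that overlap by 2.
    return split_text_on_tokens(text="abcdefghijklmnopqrstuvwxy", tokenizer=toy_tokenizer)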
[docs]class TokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
encoding_name: str = "gpt2",
model_name: Optional[str] = None,
allowed_special: Union[Literal["all"], AbstractSet[str]] = set(),
disallowed_special: Union[Literal["all"], Collection[str]] = "all",
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs)
try:
import tiktoken
except ImportError:
raise ImportError(
"Could not import tiktoken python package. "
"This is needed in order to for TokenTextSplitter. "
"Please install it with `pip install tiktoken`."
)
if model_name is not None: | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
94e2996c717b-10 | )
if model_name is not None:
enc = tiktoken.encoding_for_model(model_name)
else:
enc = tiktoken.get_encoding(encoding_name)
self._tokenizer = enc
self._allowed_special = allowed_special
self._disallowed_special = disallowed_special
[docs] def split_text(self, text: str) -> List[str]:
def _encode(_text: str) -> List[int]:
return self._tokenizer.encode(
_text,
allowed_special=self._allowed_special,
disallowed_special=self._disallowed_special,
)
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self._chunk_size,
decode=self._tokenizer.decode,
encode=_encode,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
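# Illustrative usage sketch (not part of the library source); requires the
# optional `tiktoken` dependency and uses an arbitrary sample string.
def _example_token_text_splitter() -> List[str]:
    splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
    return splitter.split_text("LangChain splits long documents into token-sized chunks.")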
[docs]class SentenceTransformersTokenTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at tokens."""
def __init__(
self,
chunk_overlap: int = 50,
model_name: str = "sentence-transformers/all-mpnet-base-v2",
tokens_per_chunk: Optional[int] = None,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(**kwargs, chunk_overlap=chunk_overlap)
try:
from sentence_transformers import SentenceTransformer
except ImportError:
raise ImportError(
"Could not import sentence_transformer python package. "
"This is needed in order to for SentenceTransformersTokenTextSplitter. "
"Please install it with `pip install sentence-transformers`."
)
self.model_name = model_name | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
self._model = SentenceTransformer(self.model_name)
self.tokenizer = self._model.tokenizer
self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk)
def _initialize_chunk_configuration(
self, *, tokens_per_chunk: Optional[int]
) -> None:
self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length)
if tokens_per_chunk is None:
self.tokens_per_chunk = self.maximum_tokens_per_chunk
else:
self.tokens_per_chunk = tokens_per_chunk
if self.tokens_per_chunk > self.maximum_tokens_per_chunk:
raise ValueError(
f"The token limit of the models '{self.model_name}'"
f" is: {self.maximum_tokens_per_chunk}."
f" Argument tokens_per_chunk={self.tokens_per_chunk}"
f" > maximum token limit."
)
[docs] def split_text(self, text: str) -> List[str]:
def encode_strip_start_and_stop_token_ids(text: str) -> List[int]:
return self._encode(text)[1:-1]
tokenizer = Tokenizer(
chunk_overlap=self._chunk_overlap,
tokens_per_chunk=self.tokens_per_chunk,
decode=self.tokenizer.decode,
encode=encode_strip_start_and_stop_token_ids,
)
return split_text_on_tokens(text=text, tokenizer=tokenizer)
[docs] def count_tokens(self, *, text: str) -> int:
return len(self._encode(text))
_max_length_equal_32_bit_integer = 2**32
def _encode(self, text: str) -> List[int]:
token_ids_with_start_and_end_token_ids = self.tokenizer.encode(
text, | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
max_length=self._max_length_equal_32_bit_integer,
truncation="do_not_truncate",
)
return token_ids_with_start_and_end_token_ids
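# Illustrative usage sketch (not part of the library source); requires the
# optional `sentence-transformers` dependency and downloads the default model
# on first use. The chunk settings below are arbitrary.
def _example_sentence_transformers_token_text_splitter() -> List[str]:
    splitter = SentenceTransformersTokenTextSplitter(chunk_overlap=10, tokens_per_chunk=64)
    return splitter.split_text("LangChain splits long documents into token-sized chunks.")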
[docs]class Language(str, Enum):
CPP = "cpp"
GO = "go"
JAVA = "java"
JS = "js"
PHP = "php"
PROTO = "proto"
PYTHON = "python"
RST = "rst"
RUBY = "ruby"
RUST = "rust"
SCALA = "scala"
SWIFT = "swift"
MARKDOWN = "markdown"
LATEX = "latex"
HTML = "html"
SOL = "sol"
[docs]class RecursiveCharacterTextSplitter(TextSplitter):
"""Implementation of splitting text that looks at characters.
Recursively tries to split by different characters to find one
that works.
"""
def __init__(
self,
separators: Optional[List[str]] = None,
keep_separator: bool = True,
**kwargs: Any,
) -> None:
"""Create a new TextSplitter."""
super().__init__(keep_separator=keep_separator, **kwargs)
self._separators = separators or ["\n\n", "\n", " ", ""]
def _split_text(self, text: str, separators: List[str]) -> List[str]:
"""Split incoming text and return chunks."""
final_chunks = []
# Get appropriate separator to use
separator = separators[-1]
new_separators = []
for i, _s in enumerate(separators): | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
if _s == "":
separator = _s
break
if re.search(_s, text):
separator = _s
new_separators = separators[i + 1 :]
break
splits = _split_text_with_regex(text, separator, self._keep_separator)
# Now go merging things, recursively splitting longer texts.
_good_splits = []
_separator = "" if self._keep_separator else separator
for s in splits:
if self._length_function(s) < self._chunk_size:
_good_splits.append(s)
else:
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
_good_splits = []
if not new_separators:
final_chunks.append(s)
else:
other_info = self._split_text(s, new_separators)
final_chunks.extend(other_info)
if _good_splits:
merged_text = self._merge_splits(_good_splits, _separator)
final_chunks.extend(merged_text)
return final_chunks
[docs] def split_text(self, text: str) -> List[str]:
return self._split_text(text, self._separators)
[docs] @classmethod
def from_language(
cls, language: Language, **kwargs: Any
) -> RecursiveCharacterTextSplitter:
separators = cls.get_separators_for_language(language)
return cls(separators=separators, **kwargs)
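# Illustrative usage sketch (not part of the library source):
#     python_splitter = RecursiveCharacterTextSplitter.from_language(
#         Language.PYTHON, chunk_size=200, chunk_overlap=0
#     )
#     chunks = python_splitter.split_text(some_python_source_code)
# `some_python_source_code` is a placeholder for any Python source string.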
[docs] @staticmethod
def get_separators_for_language(language: Language) -> List[str]:
if language == Language.CPP:
return [ | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
# Split along class definitions
"\nclass ",
# Split along function definitions
"\nvoid ",
"\nint ",
"\nfloat ",
"\ndouble ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.GO:
return [
# Split along function definitions
"\nfunc ",
"\nvar ",
"\nconst ",
"\ntype ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JAVA:
return [
# Split along class definitions
"\nclass ",
# Split along method definitions
"\npublic ",
"\nprotected ",
"\nprivate ",
"\nstatic ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.JS:
return [
# Split along function definitions
"\nfunction ",
"\nconst ",
"\nlet ", | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
"\nvar ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nswitch ",
"\ncase ",
"\ndefault ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PHP:
return [
# Split along function definitions
"\nfunction ",
# Split along class definitions
"\nclass ",
# Split along control flow statements
"\nif ",
"\nforeach ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PROTO:
return [
# Split along message definitions
"\nmessage ",
# Split along service definitions
"\nservice ",
# Split along enum definitions
"\nenum ",
# Split along option definitions
"\noption ",
# Split along import statements
"\nimport ",
# Split along syntax declarations
"\nsyntax ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.PYTHON:
return [
# First, try to split along class definitions
"\nclass ",
"\ndef ",
"\n\tdef ",
# Now split by the normal type of lines
"\n\n", | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
"\n",
" ",
"",
]
elif language == Language.RST:
return [
# Split along section titles
"\n=+\n",
"\n-+\n",
"\n\*+\n",
# Split along directive markers
"\n\n.. *\n\n",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUBY:
return [
# Split along method definitions
"\ndef ",
"\nclass ",
# Split along control flow statements
"\nif ",
"\nunless ",
"\nwhile ",
"\nfor ",
"\ndo ",
"\nbegin ",
"\nrescue ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.RUST:
return [
# Split along function definitions
"\nfn ",
"\nconst ",
"\nlet ",
# Split along control flow statements
"\nif ",
"\nwhile ",
"\nfor ",
"\nloop ",
"\nmatch ",
"\nconst ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SCALA:
return [
# Split along class definitions
"\nclass ",
"\nobject ",
# Split along method definitions
"\ndef ", | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |
"\nval ",
"\nvar ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\nmatch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.SWIFT:
return [
# Split along function definitions
"\nfunc ",
# Split along class definitions
"\nclass ",
"\nstruct ",
"\nenum ",
# Split along control flow statements
"\nif ",
"\nfor ",
"\nwhile ",
"\ndo ",
"\nswitch ",
"\ncase ",
# Split by the normal type of lines
"\n\n",
"\n",
" ",
"",
]
elif language == Language.MARKDOWN:
return [
# First, try to split along Markdown headings (starting with level 2)
"\n#{1,6} ",
# Note the alternative syntax for headings (below) is not handled here
# Heading level 2
# ---------------
# End of code block
"```\n",
# Horizontal lines
"\n\*\*\*+\n",
"\n---+\n",
"\n___+\n",
# Note that horizontal rules (three or more of ***, ---, or ___) are only
# matched by the patterns above when they appear on their own line
"\n\n",
"\n",
" ",
"",
] | https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html |