Dataset columns: id (string, length 14–16), text (string, length 31–2.41k), source (string, length 53–121)
f5ed643c3227-52
Construct a sql agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.sql.toolkit.SQLDatabaseToolkit) – agent_type (langchain.agents.agent_types.AgentType) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – suffix (Optional[str]) – format_instructions (str) – input_variables (Optional[List[str]]) – top_k (int) – max_iterations (Optional[int]) – max_execution_time (Optional[float]) – early_stopping_method (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_vectorstore_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions about sets of documents.\nYou have access to tools for interacting with the documents, and the inputs to the tools are questions.\nSometimes, you will be asked to provide sources for your questions, in which case you should use the appropriate tool to do so.\nIf the question does not seem relevant to any of the tools provided, just return "I don\'t know" as the answer.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) –
https://api.python.langchain.com/en/latest/modules/agents.html
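A minimal sketch of wiring up create_sql_agent with an SQLDatabaseToolkit, assuming a local SQLite file named example.db and an OpenAI API key in the environment; the database path and the question are placeholders.

from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.llms import OpenAI
from langchain.sql_database import SQLDatabase

# Connect to a local SQLite database (placeholder path).
db = SQLDatabase.from_uri("sqlite:///example.db")
llm = OpenAI(temperature=0)

# The toolkit bundles the SQL tools (query, schema inspection, query checker) the agent can call.
toolkit = SQLDatabaseToolkit(db=db, llm=llm)

agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("How many rows are in the users table?")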
f5ed643c3227-53
prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.create_vectorstore_router_agent(llm, toolkit, callback_manager=None, prefix='You are an agent designed to answer questions.\nYou have access to tools for interacting with different sources, and the inputs to the tools are questions.\nYour main task is to decide which of the tools is relevant for answering question at hand.\nFor complex questions, you can break the question down into sub questions and use tools to answers the sub questions.\n', verbose=False, agent_executor_kwargs=None, **kwargs)[source] Construct a vectorstore router agent from an LLM and tools. Parameters llm (langchain.base_language.BaseLanguageModel) – toolkit (langchain.agents.agent_toolkits.vectorstore.toolkit.VectorStoreRouterToolkit) – callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – prefix (str) – verbose (bool) – agent_executor_kwargs (Optional[Dict[str, Any]]) – kwargs (Dict[str, Any]) – Return type langchain.agents.agent.AgentExecutor langchain.agents.get_all_tool_names()[source] Get a list of all possible tool names. Return type List[str] langchain.agents.initialize_agent(tools, llm, agent=None, callback_manager=None, agent_path=None, agent_kwargs=None, *, tags=None, **kwargs)[source] Load an agent executor given tools and LLM. Parameters tools (Sequence[langchain.tools.base.BaseTool]) – List of tools this agent has access to.
https://api.python.langchain.com/en/latest/modules/agents.html
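A sketch of create_vectorstore_agent usage, assuming an OpenAI API key and the faiss package; the example text, the VectorStoreInfo name, and its description are placeholders.

from langchain.agents import create_vectorstore_agent
from langchain.agents.agent_toolkits import VectorStoreInfo, VectorStoreToolkit
from langchain.embeddings import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS

llm = OpenAI(temperature=0)

# Build a tiny in-memory vector store to stand in for a real document index.
store = FAISS.from_texts(["New hires receive a laptop on day one."], OpenAIEmbeddings())

vectorstore_info = VectorStoreInfo(
    name="company_docs",
    description="Internal company documents",
    vectorstore=store,
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)

agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor.run("What does the onboarding document say about laptops?")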
f5ed643c3227-54
llm (langchain.base_language.BaseLanguageModel) – Language model to use as the agent. agent (Optional[langchain.agents.agent_types.AgentType]) – Agent type to use. If None and agent_path is also None, will default to AgentType.ZERO_SHOT_REACT_DESCRIPTION. callback_manager (Optional[langchain.callbacks.base.BaseCallbackManager]) – CallbackManager to use. Global callback manager is used if not provided. Defaults to None. agent_path (Optional[str]) – Path to serialized agent to use. agent_kwargs (Optional[dict]) – Additional keyword arguments to pass to the underlying agent. tags (Optional[Sequence[str]]) – Tags to apply to the traced runs. **kwargs – Additional keyword arguments passed to the agent executor. kwargs (Any) – Returns An agent executor Return type langchain.agents.agent.AgentExecutor langchain.agents.load_agent(path, **kwargs)[source] Unified method for loading an agent from LangChainHub or local fs. Parameters path (Union[str, pathlib.Path]) – kwargs (Any) – Return type Union[langchain.agents.agent.BaseSingleActionAgent, langchain.agents.agent.BaseMultiActionAgent] langchain.agents.load_huggingface_tool(task_or_repo_id, model_repo_id=None, token=None, remote=False, **kwargs)[source] Loads a tool from the HuggingFace Hub. Parameters task_or_repo_id (str) – Task or model repo id. model_repo_id (Optional[str]) – Optional model repo id. token (Optional[str]) – Optional token. remote (bool) – Optional remote. Defaults to False. **kwargs – kwargs (Any) – Returns A tool. Return type langchain.tools.base.BaseTool
https://api.python.langchain.com/en/latest/modules/agents.html
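A minimal sketch of initialize_agent together with load_tools, assuming an OpenAI API key in the environment; the tool list and the question are placeholders.

from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)

# load_tools resolves tool names to BaseTool instances; llm-math needs the llm argument.
tools = load_tools(["llm-math"], llm=llm)

# With agent=None this would default to ZERO_SHOT_REACT_DESCRIPTION; here it is explicit.
agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
agent.run("What is 13 raised to the 0.5 power?")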
f5ed643c3227-55
Returns A tool. Return type langchain.tools.base.BaseTool langchain.agents.load_tools(tool_names, llm=None, callbacks=None, **kwargs)[source] Load tools based on their name. Parameters tool_names (List[str]) – name of tools to load. llm (Optional[langchain.base_language.BaseLanguageModel]) – Optional language model, may be needed to initialize certain tools. callbacks (Optional[Union[List[langchain.callbacks.base.BaseCallbackHandler], langchain.callbacks.base.BaseCallbackManager]]) – Optional callback manager or list of callback handlers. If not provided, default global callback manager will be used. kwargs (Any) – Returns List of tools. Return type List[langchain.tools.base.BaseTool] langchain.agents.tool(*args, return_direct=False, args_schema=None, infer_schema=True)[source] Make tools out of functions, can be used with or without arguments. Parameters *args – The arguments to the tool. return_direct (bool) – Whether to return directly from the tool rather than continuing the agent loop. args_schema (Optional[Type[pydantic.main.BaseModel]]) – optional argument schema for user to specify infer_schema (bool) – Whether to infer the schema of the arguments from the function’s signature. This also makes the resultant tool accept a dictionary input to its run() function. args (Union[str, Callable]) – Return type Callable Requires: Function must be of type (str) -> str Function must have a docstring Examples @tool def search_api(query: str) -> str: # Searches the API for the query. return @tool("search", return_direct=True) def search_api(query: str) -> str: # Searches the API for the query. return
https://api.python.langchain.com/en/latest/modules/agents.html
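A sketch of both forms of the @tool decorator shown above, with docstrings added since the decorator requires one; the function bodies are stand-ins rather than real API calls.

from langchain.agents import tool

@tool
def search_api(query: str) -> str:
    """Searches the API for the query."""
    # A real implementation would call an external service; this return value is a stand-in.
    return f"results for {query}"

@tool("search", return_direct=True)
def search_direct(query: str) -> str:
    """Searches the API for the query and returns the result directly."""
    return f"results for {query}"

print(search_api.run("langchain"))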
e84f0fb94639-0
Document Loaders All different types of document loaders. class langchain.document_loaders.AcreomLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.AZLyricsLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads AZLyrics webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.AirbyteJSONLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads local airbyte json files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.AirtableLoader(api_token, table_id, base_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for Airtable tables. Parameters
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-1
Loader for Airtable tables. Parameters api_token (str) – table_id (str) – base_id (str) – lazy_load()[source] Lazy load records from table. Return type Iterator[langchain.schema.Document] load()[source] Load Table. Return type List[langchain.schema.Document] class langchain.document_loaders.ApifyDatasetLoader(dataset_id, dataset_mapping_function)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Logic for loading documents from Apify datasets. Parameters dataset_id (str) – dataset_mapping_function (Callable[[Dict], langchain.schema.Document]) – Return type None attribute apify_client: Any = None attribute dataset_id: str [Required] The ID of the dataset on the Apify platform. attribute dataset_mapping_function: Callable[[Dict], langchain.schema.Document] [Required] A custom function that takes a single dictionary (an Apify dataset item) and converts it to an instance of the Document class. load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ArxivLoader(query, load_max_docs=100, load_all_available_meta=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from arxiv.org into a list of Documents. Each document represents one Document. The loader converts the original PDF format into the text. Parameters query (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
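A minimal sketch of ArxivLoader, assuming the arxiv python package is installed; the query string is a placeholder arXiv identifier.

from langchain.document_loaders import ArxivLoader

# The query can be an arXiv id or a free-text search; "1605.08386" is only a placeholder.
loader = ArxivLoader(query="1605.08386", load_max_docs=2)
docs = loader.load()
print(docs[0].metadata)            # title, authors, summary, etc.
print(docs[0].page_content[:200])  # extracted text of the PDF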
e84f0fb94639-2
Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.AzureBlobStorageContainerLoader(conn_str, container, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Azure Blob Storage. Parameters conn_str (str) – container (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.AzureBlobStorageFileLoader(conn_str, container, blob_name)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Azure Blob Storage. Parameters conn_str (str) – container (str) – blob_name (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.BSHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses beautiful soup to parse HTML files. Parameters file_path (str) – open_encoding (Optional[str]) – bs_kwargs (Optional[dict]) – get_text_separator (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.BibtexLoader(file_path, *, parser=None, max_docs=None, max_content_chars=4000, load_extra_metadata=False, file_pattern='[^:]+\\.pdf')[source] Bases: langchain.document_loaders.base.BaseLoader Loads a bibtex file into a list of Documents.
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-3
Loads a bibtex file into a list of Documents. Each document represents one entry from the bibtex file. If a PDF file is present in the file bibtex field, the original PDF is loaded into the document text. If no such file entry is present, the abstract field is used instead. Parameters file_path (str) – parser (Optional[langchain.utilities.bibtex.BibtexparserWrapper]) – max_docs (Optional[int]) – max_content_chars (Optional[int]) – load_extra_metadata (bool) – file_pattern (str) – lazy_load()[source] Load bibtex file using bibtexparser and get the article texts plus the article metadata. See https://bibtexparser.readthedocs.io/en/master/ Returns a list of documents with the document.page_content in text format Return type Iterator[langchain.schema.Document] load()[source] Load bibtex file documents from the given bibtex file path. See https://bibtexparser.readthedocs.io/en/master/ Parameters file_path – the path to the bibtex file Returns a list of documents with the document.page_content in text format Return type List[langchain.schema.Document] class langchain.document_loaders.BigQueryLoader(query, project=None, page_content_columns=None, metadata_columns=None, credentials=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from BigQuery into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters
https://api.python.langchain.com/en/latest/modules/document_loaders.html
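A sketch of BigQueryLoader using the page_content_columns / metadata_columns split described above, assuming google-cloud-bigquery is installed and credentials come from the environment; the project, dataset, table, and column names are placeholders.

from langchain.document_loaders import BigQueryLoader

# Table and column names are placeholders; credentials default to the environment
# (e.g. GOOGLE_APPLICATION_CREDENTIALS) unless passed explicitly.
query = "SELECT id, title, body FROM `my_project.my_dataset.articles` LIMIT 10"
loader = BigQueryLoader(
    query,
    page_content_columns=["title", "body"],  # written into page_content
    metadata_columns=["id"],                 # written into metadata
)
docs = loader.load()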
e84f0fb94639-4
are written into the page_content and none into the metadata. Parameters query (str) – project (Optional[str]) – page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – credentials (Optional[Credentials]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.BiliBiliLoader(video_urls)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads bilibili transcripts. Parameters video_urls (List[str]) – load()[source] Load from bilibili url. Return type List[langchain.schema.Document] class langchain.document_loaders.BlackboardLoader(blackboard_course_url, bbrouter, load_all_recursively=True, basic_auth=None, cookies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads all documents from a Blackboard course. This loader is not compatible with all Blackboard courses. It is only compatible with courses that use the new Blackboard interface. To use this loader, you must have the BbRouter cookie. You can get this cookie by logging into the course and then copying the value of the BbRouter cookie from the browser’s developer tools. Example from langchain.document_loaders import BlackboardLoader loader = BlackboardLoader( blackboard_course_url="https://blackboard.example.com/webapps/blackboard/execute/announcement?method=search&context=course_entry&course_id=_123456_1", bbrouter="expires:12345...", ) documents = loader.load() Parameters blackboard_course_url (str) – bbrouter (str) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-5
blackboard_course_url (str) – bbrouter (str) – load_all_recursively (bool) – basic_auth (Optional[Tuple[str, str]]) – cookies (Optional[dict]) – folder_path: str base_url: str load_all_recursively: bool check_bs4()[source] Check if BeautifulSoup4 is installed. Raises ImportError – If BeautifulSoup4 is not installed. Return type None load()[source] Load data into document objects. Returns List of documents. Return type List[langchain.schema.Document] download(path)[source] Download a file from a url. Parameters path (str) – Path to the file. Return type None parse_filename(url)[source] Parse the filename from a url. Parameters url (str) – Url to parse the filename from. Returns The filename. Return type str class langchain.document_loaders.Blob(*, data=None, mimetype=None, encoding='utf-8', path=None)[source] Bases: pydantic.main.BaseModel A blob is used to represent raw data by either reference or value. Provides an interface to materialize the blob in different representations, and help to decouple the development of data loaders from the downstream parsing of the raw data. Inspired by: https://developer.mozilla.org/en-US/docs/Web/API/Blob Parameters data (Optional[Union[bytes, str]]) – mimetype (Optional[str]) – encoding (str) – path (Optional[Union[str, pathlib.PurePath]]) – Return type None attribute data: Optional[Union[bytes, str]] = None
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-6
None attribute data: Optional[Union[bytes, str]] = None attribute encoding: str = 'utf-8' attribute mimetype: Optional[str] = None attribute path: Optional[Union[str, pathlib.PurePath]] = None as_bytes()[source] Read data as bytes. Return type bytes as_bytes_io()[source] Read data as a byte stream. Return type Generator[Union[_io.BytesIO, _io.BufferedReader], None, None] as_string()[source] Read data as a string. Return type str classmethod from_data(data, *, encoding='utf-8', mime_type=None, path=None)[source] Initialize the blob from in-memory data. Parameters data (Union[str, bytes]) – the in-memory data associated with the blob encoding (str) – Encoding to use if decoding the bytes into a string mime_type (Optional[str]) – if provided, will be set as the mime-type of the data path (Optional[str]) – if provided, will be set as the source from which the data came Returns Blob instance Return type langchain.document_loaders.blob_loaders.schema.Blob classmethod from_path(path, *, encoding='utf-8', mime_type=None, guess_type=True)[source] Load the blob from a path like object. Parameters path (Union[str, pathlib.PurePath]) – path like object to file to be read encoding (str) – Encoding to use if decoding the bytes into a string mime_type (Optional[str]) – if provided, will be set as the mime-type of the data guess_type (bool) – If True, the mimetype will be guessed from the file extension, if a mime-type was not provided Returns Blob instance Return type
https://api.python.langchain.com/en/latest/modules/document_loaders.html
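A minimal sketch of creating a Blob from a file and from in-memory data; the file path is a placeholder.

from langchain.document_loaders import Blob

# From an on-disk file (placeholder path); the mimetype is guessed from the extension.
blob = Blob.from_path("reports/example.pdf")
print(blob.mimetype, blob.source)
raw = blob.as_bytes()

# From in-memory data.
text_blob = Blob.from_data("hello world", mime_type="text/plain")
print(text_blob.as_string())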
e84f0fb94639-7
if a mime-type was not provided Returns Blob instance Return type langchain.document_loaders.blob_loaders.schema.Blob property source: Optional[str] The source location of the blob as string if known otherwise none. class langchain.document_loaders.BlobLoader[source] Bases: abc.ABC Abstract interface for blob loaders implementation. Implementer should be able to load raw content from a storage system according to some criteria and return the raw content lazily as a stream of blobs. abstract yield_blobs()[source] A lazy loader for raw data represented by LangChain’s Blob object. Returns A generator over blobs Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] class langchain.document_loaders.BlockchainDocumentLoader(contract_address, blockchainType=BlockchainType.ETH_MAINNET, api_key='docs-demo', startToken='', get_all_tokens=False, max_execution_time=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads elements from a blockchain smart contract into Langchain documents. The supported blockchains are: Ethereum mainnet, Ethereum Goerli testnet, Polygon mainnet, and Polygon Mumbai testnet. If no BlockchainType is specified, the default is Ethereum mainnet. The Loader uses the Alchemy API to interact with the blockchain. ALCHEMY_API_KEY environment variable must be set to use this loader. The API returns 100 NFTs per request and can be paginated using the startToken parameter. If get_all_tokens is set to True, the loader will get all tokens on the contract. Note that for contracts with a large number of tokens, this may take a long time (e.g. 10k tokens is 100 requests). Default value is false for this reason.
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-8
Default value is False for this reason. The max_execution_time (sec) can be set to limit the execution time of the loader. Future versions of this loader can: Support additional Alchemy APIs (e.g. getTransactions, etc.) Support additional blockchain APIs (e.g. Infura, Opensea, etc.) Parameters contract_address (str) – blockchainType (langchain.document_loaders.blockchain.BlockchainType) – api_key (str) – startToken (str) – get_all_tokens (bool) – max_execution_time (Optional[int]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.CSVLoader(file_path, source_column=None, csv_args=None, encoding=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a CSV file into a list of documents. Each document represents one row of the CSV file. Every row is converted into a key/value pair and output to a new line in the document's page_content. The source for each document loaded from the CSV is set to the value of the file_path argument for all documents by default. You can override this by setting the source_column argument to the name of a column in the CSV file. The source of each document will then be set to the value of the column with the name specified in source_column. Output Example: column1: value1 column2: value2 column3: value3 Parameters file_path (str) – source_column (Optional[str]) – csv_args (Optional[Dict]) – encoding (Optional[str]) – load()[source] Load data into document objects. Return type
https://api.python.langchain.com/en/latest/modules/document_loaders.html
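A minimal sketch of CSVLoader, assuming a local data.csv that contains a url column; the file path, column name, and csv_args are placeholders.

from langchain.document_loaders import CSVLoader

# csv_args is passed through to csv.DictReader; source_column overrides the default source.
loader = CSVLoader(
    file_path="data.csv",
    source_column="url",          # use this column as each document's source
    csv_args={"delimiter": ";"},
)
docs = loader.load()
print(docs[0].page_content)  # "column1: value1\ncolumn2: value2\n..."
print(docs[0].metadata)      # {"source": <url value>, "row": 0}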
e84f0fb94639-9
load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.ChatGPTLoader(log_file, num_logs=- 1)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads conversations from exported ChatGPT data. Parameters log_file (str) – num_logs (int) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.CoNLLULoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Load CoNLL-U files. Parameters file_path (str) – load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.CollegeConfidentialLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads College Confidential webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.ConfluenceLoader(url, api_key=None, username=None, oauth2=None, token=None, cloud=True, number_of_retries=3, min_retry_seconds=2, max_retry_seconds=10, confluence_kwargs=None)[source] Bases: langchain.document_loaders.base.BaseLoader Load Confluence pages. Port of https://llamahub.ai/l/confluence
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-10
Load Confluence pages. Port of https://llamahub.ai/l/confluence. This currently supports username/api_key, OAuth2 login, or personal access token authentication. Specify a list of page_ids and/or a space_key to load the corresponding pages into Document objects; if both are specified, the union of both sets will be returned. You can also specify a boolean include_attachments to include attachments; this is set to False by default. If set to True, all attachments will be downloaded and ConfluenceReader will extract the text from the attachments and add it to the Document object. Currently supported attachment types are: PDF, PNG, JPEG/JPG, SVG, Word and Excel. The Confluence API supports different formats of page content. The storage format is the raw XML representation for storage. The view format is the HTML representation for viewing, with macros rendered as they appear to users. You can pass an enum content_format argument to load() to specify the content format; this is set to ContentFormat.STORAGE by default. Hint: space_key and page_id can both be found in the URL of a page in Confluence - https://yoursite.atlassian.com/wiki/spaces/<space_key>/pages/<page_id> Example from langchain.document_loaders import ConfluenceLoader loader = ConfluenceLoader( url="https://yoursite.atlassian.com/wiki", username="me", api_key="12345" ) documents = loader.load(space_key="SPACE", limit=50) Parameters url (str) – _description_ api_key (str, optional) – _description_, defaults to None username (str, optional) – _description_, defaults to None oauth2 (dict, optional) – _description_, defaults to {} token (str, optional) – _description_, defaults to None
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-11
token (str, optional) – _description_, defaults to None cloud (bool, optional) – _description_, defaults to True number_of_retries (Optional[int], optional) – How many times to retry, defaults to 3 min_retry_seconds (Optional[int], optional) – defaults to 2 max_retry_seconds (Optional[int], optional) – defaults to 10 confluence_kwargs (dict, optional) – additional kwargs to initialize confluence with Raises ValueError – Errors while validating input ImportError – Required dependencies not installed. static validate_init_args(url=None, api_key=None, username=None, oauth2=None, token=None)[source] Validates proper combinations of init arguments Parameters url (Optional[str]) – api_key (Optional[str]) – username (Optional[str]) – oauth2 (Optional[dict]) – token (Optional[str]) – Return type Optional[List] load(space_key=None, page_ids=None, label=None, cql=None, include_restricted_content=False, include_archived_content=False, include_attachments=False, include_comments=False, content_format=ContentFormat.STORAGE, limit=50, max_pages=1000, ocr_languages=None)[source] Parameters space_key (Optional[str], optional) – Space key retrieved from a confluence URL, defaults to None page_ids (Optional[List[str]], optional) – List of specific page IDs to load, defaults to None label (Optional[str], optional) – Get all pages with this label, defaults to None cql (Optional[str], optional) – CQL Expression, defaults to None include_restricted_content (bool, optional) – defaults to False include_archived_content (bool, optional) – Whether to include archived content, defaults to False include_attachments (bool, optional) – defaults to False
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-12
defaults to False include_attachments (bool, optional) – defaults to False include_comments (bool, optional) – defaults to False content_format (ContentFormat) – Specify content format, defaults to ContentFormat.STORAGE limit (int, optional) – Maximum number of pages to retrieve per request, defaults to 50 max_pages (int, optional) – Maximum number of pages to retrieve in total, defaults to 1000 ocr_languages (str, optional) – The languages to use for the Tesseract agent. To use a language, you'll first need to install the appropriate Tesseract language pack. Raises ValueError – _description_ ImportError – _description_ Returns _description_ Return type List[Document] paginate_request(retrieval_method, **kwargs)[source] Paginate the various methods to retrieve groups of pages. Unfortunately, due to page size, sometimes the Confluence API doesn't match the limit value. If limit is >100, Confluence seems to cap the response to 100. Also, due to the Atlassian Python package, we don't get the "next" values from the "_links" key because they only return the value from the results key. So here, the pagination starts from 0 and goes until max_pages, getting the limit number of pages with each request. We have to manually check if there are more docs based on the length of the returned list of pages, rather than just checking for the presence of a next key in the response like this page would have you do: https://developer.atlassian.com/server/confluence/pagination-in-the-rest-api/ Parameters retrieval_method (callable) – Function used to retrieve docs kwargs (Any) – Returns List of documents Return type List is_public_page(page)[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-13
List of documents Return type List is_public_page(page)[source] Check if a page is publicly accessible. Parameters page (dict) – Return type bool process_pages(pages, include_restricted_content, include_attachments, include_comments, content_format, ocr_languages=None)[source] Process a list of pages into a list of documents. Parameters pages (List[dict]) – include_restricted_content (bool) – include_attachments (bool) – include_comments (bool) – content_format (langchain.document_loaders.confluence.ContentFormat) – ocr_languages (Optional[str]) – Return type List[langchain.schema.Document] process_page(page, include_attachments, include_comments, content_format, ocr_languages=None)[source] Parameters page (dict) – include_attachments (bool) – include_comments (bool) – content_format (langchain.document_loaders.confluence.ContentFormat) – ocr_languages (Optional[str]) – Return type langchain.schema.Document process_attachment(page_id, ocr_languages=None)[source] Parameters page_id (str) – ocr_languages (Optional[str]) – Return type List[str] process_pdf(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str process_image(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str process_doc(link)[source] Parameters link (str) – Return type str process_xls(link)[source] Parameters link (str) – Return type str
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-14
Parameters link (str) – Return type str process_svg(link, ocr_languages=None)[source] Parameters link (str) – ocr_languages (Optional[str]) – Return type str class langchain.document_loaders.DataFrameLoader(data_frame, page_content_column='text')[source] Bases: langchain.document_loaders.base.BaseLoader Load Pandas DataFrames. Parameters data_frame (Any) – page_content_column (str) – lazy_load()[source] Lazy load records from dataframe. Return type Iterator[langchain.schema.Document] load()[source] Load full dataframe. Return type List[langchain.schema.Document] class langchain.document_loaders.DiffbotLoader(api_token, urls, continue_on_failure=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Diffbot file json. Parameters api_token (str) – urls (List[str]) – continue_on_failure (bool) – load()[source] Extract text from Diffbot on all the URLs and return Document instances Return type List[langchain.schema.Document] class langchain.document_loaders.DirectoryLoader(path, glob='**/[!.]*', silent_errors=False, load_hidden=False, loader_cls=<class 'langchain.document_loaders.unstructured.UnstructuredFileLoader'>, loader_kwargs=None, recursive=False, show_progress=False, use_multithreading=False, max_concurrency=4)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from a directory. Parameters path (str) – glob (str) – silent_errors (bool) – load_hidden (bool) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-15
silent_errors (bool) – load_hidden (bool) – loader_cls (Union[Type[langchain.document_loaders.unstructured.UnstructuredFileLoader], Type[langchain.document_loaders.text.TextLoader], Type[langchain.document_loaders.html_bs.BSHTMLLoader]]) – loader_kwargs (Optional[dict]) – recursive (bool) – show_progress (bool) – use_multithreading (bool) – max_concurrency (int) – load_file(item, path, docs, pbar)[source] Parameters item (pathlib.Path) – path (pathlib.Path) – docs (List[langchain.schema.Document]) – pbar (Optional[Any]) – Return type None load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.DiscordChatLoader(chat_log, user_id_col='ID')[source] Bases: langchain.document_loaders.base.BaseLoader Load Discord chat logs. Parameters chat_log (pd.DataFrame) – user_id_col (str) – load()[source] Load all chat messages. Return type List[langchain.schema.Document] class langchain.document_loaders.DocugamiLoader(*, api='https://api.docugami.com/v1preview1', access_token=None, docset_id=None, document_ids=None, file_paths=None, min_chunk_size=32)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Loader that loads processed docs from Docugami. To use, you should have the lxml python package installed. Parameters api (str) – access_token (Optional[str]) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
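A sketch of the DirectoryLoader described above using TextLoader as the loader_cls; the directory path and glob are placeholders, and show_progress needs the tqdm package.

from langchain.document_loaders import DirectoryLoader, TextLoader

# Recursively load every .md file under ./docs (placeholder path) with TextLoader,
# skipping files that fail to load instead of raising.
loader = DirectoryLoader(
    "./docs",
    glob="**/*.md",
    loader_cls=TextLoader,
    loader_kwargs={"encoding": "utf-8"},
    recursive=True,
    silent_errors=True,
    show_progress=True,  # requires tqdm
)
docs = loader.load()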
e84f0fb94639-16
Parameters api (str) – access_token (Optional[str]) – docset_id (Optional[str]) – document_ids (Optional[Sequence[str]]) – file_paths (Optional[Sequence[Union[pathlib.Path, str]]]) – min_chunk_size (int) – Return type None attribute access_token: Optional[str] = None attribute api: str = 'https://api.docugami.com/v1preview1' attribute docset_id: Optional[str] = None attribute document_ids: Optional[Sequence[str]] = None attribute file_paths: Optional[Sequence[Union[pathlib.Path, str]]] = None attribute min_chunk_size: int = 32 load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.Docx2txtLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader, abc.ABC Loads a DOCX with docx2txt and chunks at character level. Defaults to check for local file, but if the file is a web path, it will download it to a temporary file, and use that, then clean up the temporary file after completion Parameters file_path (str) – load()[source] Load given path as single page. Return type List[langchain.schema.Document] class langchain.document_loaders.DuckDBLoader(query, database=':memory:', read_only=False, config=None, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from DuckDB into a list of documents. Each document represents one row of the result. The page_content_columns
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-17
Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters query (str) – database (str) – read_only (bool) – config (Optional[Dict[str, str]]) – page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.EmbaasBlobLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={})[source] Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseBlobParser Wrapper around embaas’s document byte loader service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Default parsing from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader() blob = Blob.from_path(path="example.mp3") documents = loader.parse(blob=blob) # Custom api parameters (create embeddings automatically) from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader( params={ "should_embed": True, "model": "e5-large-v2", "chunk_size": 256, "chunk_splitter": "CharacterTextSplitter" } )
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-18
"chunk_splitter": "CharacterTextSplitter" } ) blob = Blob.from_path(path="example.pdf") documents = loader.parse(blob=blob) Parameters embaas_api_key (Optional[str]) – api_url (str) – params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) – Return type None lazy_parse(blob)[source] Lazy parsing interface. Subclasses are required to implement this method. Parameters blob (langchain.document_loaders.blob_loaders.schema.Blob) – Blob instance Returns Generator of documents Return type Iterator[langchain.schema.Document] class langchain.document_loaders.EmbaasLoader(*, embaas_api_key=None, api_url='https://api.embaas.io/v1/document/extract-text/bytes/', params={}, file_path, blob_loader=None)[source] Bases: langchain.document_loaders.embaas.BaseEmbaasLoader, langchain.document_loaders.base.BaseLoader Wrapper around embaas’s document loader service. To use, you should have the environment variable EMBAAS_API_KEY set with your API key, or pass it as a named parameter to the constructor. Example # Default parsing from langchain.document_loaders.embaas import EmbaasLoader loader = EmbaasLoader(file_path="example.mp3") documents = loader.load() # Custom api parameters (create embeddings automatically) from langchain.document_loaders.embaas import EmbaasBlobLoader loader = EmbaasBlobLoader( file_path="example.pdf", params={ "should_embed": True, "model": "e5-large-v2", "chunk_size": 256,
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-19
"chunk_size": 256, "chunk_splitter": "CharacterTextSplitter" } ) documents = loader.load() Parameters embaas_api_key (Optional[str]) – api_url (str) – params (langchain.document_loaders.embaas.EmbaasDocumentExtractionParameters) – file_path (str) – blob_loader (Optional[langchain.document_loaders.embaas.EmbaasBlobLoader]) – Return type None attribute blob_loader: Optional[langchain.document_loaders.embaas.EmbaasBlobLoader] = None The blob loader to use. If not provided, a default one will be created. attribute file_path: str [Required] The path to the file to load. lazy_load()[source] Load the documents from the file path lazily. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] load_and_split(text_splitter=None)[source] Load documents and split into chunks. Parameters text_splitter (Optional[langchain.text_splitter.TextSplitter]) – Return type List[langchain.schema.Document] class langchain.document_loaders.EverNoteLoader(file_path, load_single_document=True)[source] Bases: langchain.document_loaders.base.BaseLoader EverNote Loader. Loads an EverNote notebook export file e.g. my_notebook.enex into Documents. Instructions on producing this file can be found at https://help.evernote.com/hc/en-us/articles/209005557-Export-notes-and-notebooks-as-ENEX-or-HTML Currently only the plain text in the note is extracted and stored as the contents
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-20
Currently only the plain text in the note is extracted and stored as the contents of the Document; any non-content metadata (e.g. 'author', 'created', 'updated', etc., but not 'content-raw' or 'resource') tags on the note will be extracted and stored as metadata on the Document. Parameters file_path (str) – The path to the notebook export with a .enex extension load_single_document (bool) – Whether or not to concatenate the content of all notes into a single long Document. If this is set to True, the only metadata on the document will be the 'source', which contains the file name of the export. load()[source] Load documents from EverNote export file. Return type List[langchain.schema.Document] class langchain.document_loaders.FacebookChatLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Facebook messages json directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.FaunaLoader(query, page_content_field, secret, metadata_fields=None)[source] Bases: langchain.document_loaders.base.BaseLoader FaunaDB Loader. Parameters query (str) – page_content_field (str) – secret (str) – metadata_fields (Optional[Sequence[str]]) – query The FQL query string to execute. Type str page_content_field The field that contains the content of each page. Type str secret The secret key for authenticating to FaunaDB. Type str metadata_fields Optional list of field names to include in metadata. Type Optional[Sequence[str]]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-21
Optional list of field names to include in metadata. Type Optional[Sequence[str]] load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.FigmaFileLoader(access_token, ids, key)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Figma file json. Parameters access_token (str) – ids (str) – key (str) – load()[source] Load file Return type List[langchain.schema.Document] class langchain.document_loaders.FileSystemBlobLoader(path, *, glob='**/[!.]*', suffixes=None, show_progress=False)[source] Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader Blob loader for the local file system. Example: from langchain.document_loaders.blob_loaders import FileSystemBlobLoader loader = FileSystemBlobLoader("/path/to/directory") for blob in loader.yield_blobs(): print(blob) Parameters path (Union[str, pathlib.Path]) – glob (str) – suffixes (Optional[Sequence[str]]) – show_progress (bool) – Return type None yield_blobs()[source] Yield blobs that match the requested pattern. Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] count_matching_files()[source] Count files that match the pattern without loading them. Return type int class langchain.document_loaders.GCSDirectoryLoader(project_name, bucket, prefix='')[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-22
Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GCSFileLoader(project_name, bucket, blob)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from GCS. Parameters project_name (str) – bucket (str) – blob (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GitHubIssuesLoader(*, repo, access_token, include_prs=True, milestone=None, state=None, assignee=None, creator=None, mentioned=None, labels=None, sort=None, direction=None, since=None)[source] Bases: langchain.document_loaders.github.BaseGitHubLoader Parameters repo (str) – access_token (str) – include_prs (bool) – milestone (Optional[Union[int, Literal['*', 'none']]]) – state (Optional[Literal['open', 'closed', 'all']]) – assignee (Optional[str]) – creator (Optional[str]) – mentioned (Optional[str]) – labels (Optional[List[str]]) – sort (Optional[Literal['created', 'updated', 'comments']]) – direction (Optional[Literal['asc', 'desc']]) – since (Optional[str]) – Return type None attribute assignee: Optional[str] = None
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-23
Return type None attribute assignee: Optional[str] = None Filter on assigned user. Pass 'none' for no user and '*' for any user. attribute creator: Optional[str] = None Filter on the user that created the issue. attribute direction: Optional[Literal['asc', 'desc']] = None The direction to sort the results by. Can be one of: 'asc', 'desc'. attribute include_prs: bool = True If True, include Pull Requests in results; otherwise ignore them. attribute labels: Optional[List[str]] = None Label names to filter on. Example: bug,ui,@high. attribute mentioned: Optional[str] = None Filter on a user that's mentioned in the issue. attribute milestone: Optional[Union[int, Literal['*', 'none']]] = None If an integer is passed, it should be a milestone's number field. If the string '*' is passed, issues with any milestone are accepted. If the string 'none' is passed, issues without milestones are returned. attribute since: Optional[str] = None Only show notifications updated after the given time. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ. attribute sort: Optional[Literal['created', 'updated', 'comments']] = None What to sort results by. Can be one of: 'created', 'updated', 'comments'. Default is 'created'. attribute state: Optional[Literal['open', 'closed', 'all']] = None Filter on issue state. Can be one of: 'open', 'closed', 'all'. lazy_load()[source] Get issues of a GitHub repository. Returns page_content metadata url title creator
https://api.python.langchain.com/en/latest/modules/document_loaders.html
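A minimal sketch of GitHubIssuesLoader using the filters described above; the repository name is a placeholder and the personal access token is read from an assumed environment variable.

import os
from langchain.document_loaders import GitHubIssuesLoader

# repo is a placeholder; GITHUB_PERSONAL_ACCESS_TOKEN is an assumed environment variable.
loader = GitHubIssuesLoader(
    repo="hwchase17/langchain",
    access_token=os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"],
    include_prs=False,   # issues only, skip pull requests
    state="closed",
    labels=["bug"],
    sort="created",
    direction="desc",
)
docs = loader.load()
print(docs[0].metadata["title"], docs[0].metadata["url"])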
e84f0fb94639-24
Returns page_content metadata url title creator created_at last_update_time closed_time number of comments state labels assignee assignees milestone locked number is_pull_request Return type A list of Documents with attributes load()[source] Get issues of a GitHub repository. Returns page_content metadata url title creator created_at last_update_time closed_time number of comments state labels assignee assignees milestone locked number is_pull_request Return type A list of Documents with attributes parse_issue(issue)[source] Create Document objects from a list of GitHub issues. Parameters issue (dict) – Return type langchain.schema.Document property query_params: str property url: str class langchain.document_loaders.GitLoader(repo_path, clone_url=None, branch='main', file_filter=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads files from a Git repository into a list of documents. Repository can be local on disk available at repo_path, or remote at clone_url that will be cloned to repo_path. Currently supports only text files. Each document represents one file in the repository. The path points to the local Git repository, and the branch specifies the branch to load files from. By default, it loads from the main branch. Parameters repo_path (str) – clone_url (Optional[str]) – branch (Optional[str]) – file_filter (Optional[Callable[[str], bool]]) – load()[source] Load data into document objects. Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
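A minimal sketch of GitLoader cloning a remote repository and filtering files by extension; the clone URL, local path, and branch are placeholders.

from langchain.document_loaders import GitLoader

# Clone the placeholder repository into ./example_repo and load only Python files.
loader = GitLoader(
    repo_path="./example_repo",
    clone_url="https://github.com/your-org/your-repo",
    branch="main",
    file_filter=lambda file_path: file_path.endswith(".py"),
)
docs = loader.load()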
e84f0fb94639-25
Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.GitbookLoader(web_page, load_all_paths=False, base_url=None, content_selector='main')[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load GitBook data. load from either a single page, or load all (relative) paths in the navbar. Parameters web_page (str) – load_all_paths (bool) – base_url (Optional[str]) – content_selector (str) – load()[source] Fetch text from one single GitBook page. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleApiClient(credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), service_account_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'))[source] Bases: object A Generic Google Api Client. To use, you should have the google_auth_oauthlib,youtube_transcript_api,google python package installed. As the google api expects credentials you need to set up a google account and register your Service. β€œhttps://developers.google.com/docs/api/quickstart/python” Example from langchain.document_loaders import GoogleApiClient google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) Parameters credentials_path (pathlib.Path) – service_account_path (pathlib.Path) – token_path (pathlib.Path) – Return type None credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json')
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-26
service_account_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either folder_id or document_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] class langchain.document_loaders.GoogleApiYoutubeLoader(google_api_client, channel_name=None, video_ids=None, add_video_info=True, captions_language='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads all videos from a channel. To use, you should have the googleapiclient and youtube_transcript_api python packages installed. As the service needs a google_api_client, you first have to initialize the GoogleApiClient. Additionally, you have to provide either a channel name or a list of video ids. See https://developers.google.com/docs/api/quickstart/python Example from langchain.document_loaders import GoogleApiClient from langchain.document_loaders import GoogleApiYoutubeLoader google_api_client = GoogleApiClient( service_account_path=Path("path_to_your_sec_file.json") ) loader = GoogleApiYoutubeLoader( google_api_client=google_api_client, channel_name="CodeAesthetic" ) loader.load() Parameters google_api_client (langchain.document_loaders.youtube.GoogleApiClient) – channel_name (Optional[str]) – video_ids (Optional[List[str]]) – add_video_info (bool) – captions_language (str) – continue_on_failure (bool) – Return type None google_api_client: langchain.document_loaders.youtube.GoogleApiClient
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-27
Return type None google_api_client: langchain.document_loaders.youtube.GoogleApiClient channel_name: Optional[str] = None video_ids: Optional[List[str]] = None add_video_info: bool = True captions_language: str = 'en' continue_on_failure: bool = False classmethod validate_channel_or_videoIds_is_set(values)[source] Validate that either folder_id or document_ids is set, but not both. Parameters values (Dict[str, Any]) – Return type Dict[str, Any] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GoogleDriveLoader(*, service_account_key=PosixPath('/home/docs/.credentials/keys.json'), credentials_path=PosixPath('/home/docs/.credentials/credentials.json'), token_path=PosixPath('/home/docs/.credentials/token.json'), folder_id=None, document_ids=None, file_ids=None, recursive=False, file_types=None, load_trashed_files=False, file_loader_cls=None, file_loader_kwargs={})[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Loader that loads Google Docs from Google Drive. Parameters service_account_key (pathlib.Path) – credentials_path (pathlib.Path) – token_path (pathlib.Path) – folder_id (Optional[str]) – document_ids (Optional[List[str]]) – file_ids (Optional[List[str]]) – recursive (bool) – file_types (Optional[Sequence[str]]) – load_trashed_files (bool) – file_loader_cls (Any) – file_loader_kwargs (Dict[str, Any]) – Return type None
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-28
file_loader_kwargs (Dict[str, Any]) – Return type None attribute credentials_path: pathlib.Path = PosixPath('/home/docs/.credentials/credentials.json') attribute document_ids: Optional[List[str]] = None attribute file_ids: Optional[List[str]] = None attribute file_loader_cls: Any = None attribute file_loader_kwargs: Dict[str, Any] = {} attribute file_types: Optional[Sequence[str]] = None attribute folder_id: Optional[str] = None attribute load_trashed_files: bool = False attribute recursive: bool = False attribute service_account_key: pathlib.Path = PosixPath('/home/docs/.credentials/keys.json') attribute token_path: pathlib.Path = PosixPath('/home/docs/.credentials/token.json') load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.GutenbergLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib to load .txt web files. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.HNLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Load Hacker News data from either main page results or the comments page. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Get important HN webpage information. Components are: title content
https://api.python.langchain.com/en/latest/modules/document_loaders.html
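A sketch of GoogleDriveLoader with the parameters listed above, assuming the Google credentials and token files sit at the default paths shown; the folder_id is a placeholder.

from langchain.document_loaders import GoogleDriveLoader

# folder_id is a placeholder; credentials_path and token_path fall back to the defaults above.
loader = GoogleDriveLoader(
    folder_id="1x2y3z-placeholder-folder-id",
    recursive=False,
)
docs = loader.load()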
e84f0fb94639-29
Get important HN webpage information. Components are: title content source url, time of post author of the post number of comments rank of the post Return type List[langchain.schema.Document] load_comments(soup_info)[source] Load comments from a HN post. Parameters soup_info (Any) – Return type List[langchain.schema.Document] load_results(soup)[source] Load items from an HN page. Parameters soup (Any) – Return type List[langchain.schema.Document] class langchain.document_loaders.HuggingFaceDatasetLoader(path, page_content_column='text', name=None, data_dir=None, data_files=None, cache_dir=None, keep_in_memory=None, save_infos=False, use_auth_token=None, num_proc=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from the Hugging Face Hub. Parameters path (str) – page_content_column (str) – name (Optional[str]) – data_dir (Optional[str]) – data_files (Optional[Union[str, Sequence[str], Mapping[str, Union[str, Sequence[str]]]]]) – cache_dir (Optional[str]) – keep_in_memory (Optional[bool]) – save_infos (bool) – use_auth_token (Optional[Union[bool, str]]) – num_proc (Optional[int]) – lazy_load()[source] Load documents lazily. Return type Iterator[langchain.schema.Document] load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.IFixitLoader(web_path)[source] Bases: langchain.document_loaders.base.BaseLoader
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-30
Bases: langchain.document_loaders.base.BaseLoader Load iFixit repair guides, device wikis and answers. iFixit is the largest, open repair community on the web. The site contains nearly 100k repair manuals, 200k Questions & Answers on 42k devices, and all the data is licensed under CC-BY. This loader will allow you to download the text of a repair guide, text of Q&A’s and wikis from devices on iFixit using their open APIs and web scraping. Parameters web_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] static load_suggestions(query='', doc_type='all')[source] Parameters query (str) – doc_type (str) – Return type List[langchain.schema.Document] load_questions_and_answers(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] load_device(url_override=None, include_guides=True)[source] Parameters url_override (Optional[str]) – include_guides (bool) – Return type List[langchain.schema.Document] load_guide(url_override=None)[source] Parameters url_override (Optional[str]) – Return type List[langchain.schema.Document] class langchain.document_loaders.IMSDbLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that loads IMSDb webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-31
header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – load()[source] Load webpage. Return type List[langchain.schema.Document] class langchain.document_loaders.ImageCaptionLoader(path_images, blip_processor='Salesforce/blip-image-captioning-base', blip_model='Salesforce/blip-image-captioning-base')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads the captions of an image Parameters path_images (Union[str, List[str]]) – blip_processor (str) – blip_model (str) – load()[source] Load from a list of image files Return type List[langchain.schema.Document] class langchain.document_loaders.IuguLoader(resource, api_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from IUGU. Parameters resource (str) – api_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.JSONLoader(file_path, jq_schema, content_key=None, metadata_func=None, text_content=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a JSON file and references a jq schema provided to load the text into documents. Example [{β€œtext”: …}, {β€œtext”: …}, {β€œtext”: …}] -> schema = .[].text {β€œkey”: [{β€œtext”: …}, {β€œtext”: …}, {β€œtext”: …}]} -> schema = .key[].text [β€œβ€, β€œβ€, β€œβ€] -> schema = .[] Parameters
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-32
[β€œβ€, β€œβ€, β€œβ€] -> schema = .[] Parameters file_path (Union[str, pathlib.Path]) – jq_schema (str) – content_key (Optional[str]) – metadata_func (Optional[Callable[[Dict, Dict], Dict]]) – text_content (bool) – load()[source] Load and return documents from the JSON file. Return type List[langchain.schema.Document] class langchain.document_loaders.JoplinLoader(access_token=None, port=41184, host='localhost')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches notes from Joplin. In order to use this loader, you need to have Joplin running with the Web Clipper enabled (look for β€œWeb Clipper” in the app settings). To get the access token, you need to go to the Web Clipper options and under β€œAdvanced Options” you will find the access token. You can find more information about the Web Clipper service here: https://joplinapp.org/clipper/ Parameters access_token (Optional[str]) – port (int) – host (str) – Return type None lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.LarkSuiteDocLoader(domain, access_token, document_id)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads LarkSuite (FeiShu) document. Parameters domain (str) – access_token (str) – document_id (str) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
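A sketch of JSONLoader using the first schema shown above. The file name is a placeholder, and the jq Python bindings are assumed to be installed.

from langchain.document_loaders import JSONLoader

# Assumes records.json looks like: [{"text": "..."}, {"text": "..."}]
loader = JSONLoader(
    file_path="records.json",  # placeholder path
    jq_schema=".[].text",      # schema from the example above
)
docs = loader.load()
for doc in docs:
    print(doc.page_content)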
e84f0fb94639-33
access_token (str) – document_id (str) – lazy_load()[source] Lazy load LarkSuite (FeiShu) document. Return type Iterator[langchain.schema.Document] load()[source] Load LarkSuite (FeiShu) document. Return type List[langchain.schema.Document] class langchain.document_loaders.MWDumpLoader(file_path, encoding='utf8')[source] Bases: langchain.document_loaders.base.BaseLoader Load MediaWiki dump from XML file. Example: from langchain.document_loaders import MWDumpLoader loader = MWDumpLoader( file_path="myWiki.xml", encoding="utf8" ) docs = loader.load() from langchain.text_splitter import RecursiveCharacterTextSplitter text_splitter = RecursiveCharacterTextSplitter( chunk_size=1000, chunk_overlap=0 ) texts = text_splitter.split_documents(docs) Parameters file_path (str) – XML local file path encoding (str, optional) – Charset encoding, defaults to "utf8" load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.MastodonTootsLoader(mastodon_accounts, number_toots=100, exclude_replies=False, access_token=None, api_base_url='https://mastodon.social')[source] Bases: langchain.document_loaders.base.BaseLoader Mastodon toots loader. Parameters mastodon_accounts (Sequence[str]) – number_toots (Optional[int]) – exclude_replies (bool) – access_token (Optional[str]) – api_base_url (str) – load()[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-34
api_base_url (str) – load()[source] Load toots into documents. Return type List[langchain.schema.Document] class langchain.document_loaders.MathpixPDFLoader(file_path, processed_file_format='mmd', max_wait_time_seconds=500, should_clean_pdf=False, **kwargs)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Parameters file_path (str) – processed_file_format (str) – max_wait_time_seconds (int) – should_clean_pdf (bool) – kwargs (Any) – Return type None property headers: dict property url: str property data: dict send_pdf()[source] Return type str wait_for_processing(pdf_id)[source] Parameters pdf_id (str) – Return type None get_processed_pdf(pdf_id)[source] Parameters pdf_id (str) – Return type str clean_pdf(contents)[source] Parameters contents (str) – Return type str load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MaxComputeLoader(query, api_wrapper, *, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from Alibaba Cloud MaxCompute table into documents. Parameters query (str) – api_wrapper (MaxComputeAPIWrapper) – page_content_columns (Optional[Sequence[str]]) – metadata_columns (Optional[Sequence[str]]) – classmethod from_params(query, endpoint, project, *, access_id=None, secret_access_key=None, **kwargs)[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-35
Convenience constructor that builds the MaxCompute API wrapper from the given parameters. Parameters query (str) – SQL query to execute. endpoint (str) – MaxCompute endpoint. project (str) – A project is a basic organizational unit of MaxCompute, which is similar to a database. access_id (Optional[str]) – MaxCompute access ID. Should be passed in directly or set as the environment variable MAX_COMPUTE_ACCESS_ID. secret_access_key (Optional[str]) – MaxCompute secret access key. Should be passed in directly or set as the environment variable MAX_COMPUTE_SECRET_ACCESS_KEY. kwargs (Any) – Return type langchain.document_loaders.max_compute.MaxComputeLoader lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.MergedDataLoader(loaders)[source] Bases: langchain.document_loaders.base.BaseLoader Merge documents from a list of loaders Parameters loaders (List) – lazy_load()[source] Lazy load docs from each individual loader. Return type Iterator[langchain.schema.Document] load()[source] Load docs. Return type List[langchain.schema.Document] class langchain.document_loaders.MHTMLLoader(file_path, open_encoding=None, bs_kwargs=None, get_text_separator='')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses beautiful soup to parse HTML files. Parameters file_path (str) – open_encoding (Optional[str]) – bs_kwargs (Optional[dict]) – get_text_separator (str) – Return type None
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-36
get_text_separator (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.ModernTreasuryLoader(resource, organization_id=None, api_key=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Modern Treasury. Parameters resource (str) – organization_id (Optional[str]) – api_key (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.NotebookLoader(path, include_outputs=False, max_output_length=10, remove_newline=False, traceback=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads .ipynb notebook files. Parameters path (str) – include_outputs (bool) – max_output_length (int) – remove_newline (bool) – traceback (bool) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.NotionDBLoader(integration_token, database_id, request_timeout_sec=10)[source] Bases: langchain.document_loaders.base.BaseLoader Notion DB Loader. Reads content from pages within a Notion Database. :param integration_token: Notion integration token. :type integration_token: str :param database_id: Notion database id. :type database_id: str :param request_timeout_sec: Timeout for Notion requests in seconds. :type request_timeout_sec: int Parameters integration_token (str) – database_id (str) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-37
Parameters integration_token (str) – database_id (str) – request_timeout_sec (Optional[int]) – Return type None load()[source] Load documents from the Notion database. :returns: List of documents. :rtype: List[Document] Return type List[langchain.schema.Document] load_page(page_summary)[source] Read a page. Parameters page_summary (Dict[str, Any]) – Return type langchain.schema.Document class langchain.document_loaders.NotionDirectoryLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Notion directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.ObsidianLoader(path, encoding='UTF-8', collect_metadata=True)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Obsidian files from disk. Parameters path (str) – encoding (str) – collect_metadata (bool) – FRONT_MATTER_REGEX = re.compile('^---\\n(.*?)\\n---\\n', re.MULTILINE|re.DOTALL) load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveFileLoader(*, file)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters file (File) – Return type None attribute file: File [Required] load()[source] Load Documents Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
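A sketch for NotionDBLoader; the environment variable name and database id are placeholders for credentials you create in your own Notion workspace.

import os

from langchain.document_loaders import NotionDBLoader

loader = NotionDBLoader(
    integration_token=os.environ["NOTION_INTEGRATION_TOKEN"],  # placeholder env var name
    database_id="00000000-0000-0000-0000-000000000000",        # placeholder database id
    request_timeout_sec=30,
)
docs = loader.load()  # one Document per page in the database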
e84f0fb94639-38
Load Documents Return type List[langchain.schema.Document] class langchain.document_loaders.OneDriveLoader(*, settings=None, drive_id, folder_path=None, object_ids=None, auth_with_token=False)[source] Bases: langchain.document_loaders.base.BaseLoader, pydantic.main.BaseModel Parameters settings (langchain.document_loaders.onedrive._OneDriveSettings) – drive_id (str) – folder_path (Optional[str]) – object_ids (Optional[List[str]]) – auth_with_token (bool) – Return type None attribute auth_with_token: bool = False attribute drive_id: str [Required] attribute folder_path: Optional[str] = None attribute object_ids: Optional[List[str]] = None attribute settings: langchain.document_loaders.onedrive._OneDriveSettings [Optional] load()[source] Loads all supported document files from the specified OneDrive drive and returns a list of Document objects. Returns A list of Document objects representing the loaded documents. Return type List[Document] Raises ValueError – If the specified drive ID does not correspond to a drive in the OneDrive storage. – class langchain.document_loaders.OnlinePDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that loads online PDFs. Parameters file_path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.OutlookMessageLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Outlook Message files using extract_msg. https://github.com/TeamMsgExtractor/msg-extractor
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-39
https://github.com/TeamMsgExtractor/msg-extractor Parameters file_path (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.OpenCityDataLoader(city_id, dataset_id, limit)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Open city data. Parameters city_id (str) – dataset_id (str) – limit (int) – lazy_load()[source] Lazy load records. Return type Iterator[langchain.schema.Document] load()[source] Load records. Return type List[langchain.schema.Document] class langchain.document_loaders.PDFMinerLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files. Parameters file_path (str) – Return type None load()[source] Eagerly load the content. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PDFMinerPDFasHTMLLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PDFMiner to load PDF files as HTML content. Parameters file_path (str) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.PDFPlumberLoader(file_path, text_kwargs=None)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-40
Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses pdfplumber to load PDF files. Parameters file_path (str) – text_kwargs (Optional[Mapping[str, Any]]) – Return type None load()[source] Load file. Return type List[langchain.schema.Document] langchain.document_loaders.PagedPDFSplitter alias of langchain.document_loaders.pdf.PyPDFLoader class langchain.document_loaders.PlaywrightURLLoader(urls, continue_on_failure=True, headless=True, remove_selectors=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses Playwright to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – headless (bool) – remove_selectors (Optional[List[str]]) – urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool headless If True, the browser will run in headless mode. Type bool load()[source] Load the specified URLs using Playwright and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.PsychicLoader(api_key, account_id, connector_id=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads documents from Psychic.dev. Parameters api_key (str) – account_id (str) – connector_id (Optional[str]) – load()[source] Load documents.
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-41
connector_id (Optional[str]) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.PyMuPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loader that uses PyMuPDF to load PDF files. Parameters file_path (str) – Return type None load(**kwargs)[source] Load file. Parameters kwargs (Optional[Any]) – Return type List[langchain.schema.Document] class langchain.document_loaders.PyPDFDirectoryLoader(path, glob='**/[!.]*.pdf', silent_errors=False, load_hidden=False, recursive=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a directory with PDF files with pypdf and chunks at character level. Loader also stores page numbers in metadatas. Parameters path (str) – glob (str) – silent_errors (bool) – load_hidden (bool) – recursive (bool) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.PyPDFLoader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdf and chunks at character level. Loader also stores page numbers in metadatas. Parameters file_path (str) – Return type None load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
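A sketch of PyPDFLoader's eager and lazy loading paths, assuming a local example.pdf; each Document corresponds to one page, with the page number stored in metadata.

from langchain.document_loaders import PyPDFLoader

loader = PyPDFLoader("example.pdf")  # placeholder local file

pages = loader.load()            # eager: List[Document], one per page
print(pages[0].metadata)         # includes the page number

for page in loader.lazy_load():  # lazy: Iterator[Document]
    pass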
e84f0fb94639-42
Lazy load given path as pages. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PyPDFium2Loader(file_path)[source] Bases: langchain.document_loaders.pdf.BasePDFLoader Loads a PDF with pypdfium2 and chunks at character level. Parameters file_path (str) – load()[source] Load given path as pages. Return type List[langchain.schema.Document] lazy_load()[source] Lazy load given path as pages. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.PySparkDataFrameLoader(spark_session=None, df=None, page_content_column='text', fraction_of_memory=0.1)[source] Bases: langchain.document_loaders.base.BaseLoader Load PySpark DataFrames. Parameters spark_session (Optional[SparkSession]) – df (Optional[Any]) – page_content_column (str) – fraction_of_memory (float) – get_num_rows()[source] Gets the number of "feasible" rows for the DataFrame Return type Tuple[int, int] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load from the dataframe. Return type List[langchain.schema.Document] class langchain.document_loaders.PythonLoader(file_path)[source] Bases: langchain.document_loaders.text.TextLoader Load Python files, respecting any non-default encoding if specified. Parameters file_path (str) – class langchain.document_loaders.ReadTheDocsLoader(path, encoding=None, errors=None, custom_html_tag=None, **kwargs)[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-43
Bases: langchain.document_loaders.base.BaseLoader Loader that loads ReadTheDocs documentation directory dump. Parameters path (Union[str, pathlib.Path]) – encoding (Optional[str]) – errors (Optional[str]) – custom_html_tag (Optional[Tuple[str, dict]]) – kwargs (Optional[Any]) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.RecursiveUrlLoader(url, exclude_dirs=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads all child links from a given url. Parameters url (str) – exclude_dirs (Optional[str]) – Return type None get_child_links_recursive(url, visited=None)[source] Recursively get all child links starting with the path of the input URL. Parameters url (str) – visited (Optional[Set[str]]) – Return type Set[str] lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load web pages. Return type List[langchain.schema.Document] class langchain.document_loaders.RedditPostsLoader(client_id, client_secret, user_agent, search_queries, mode, categories=['new'], number_posts=10)[source] Bases: langchain.document_loaders.base.BaseLoader Reddit posts loader. Read posts on a subreddit. First you need to go to https://www.reddit.com/prefs/apps/ and create your application Parameters client_id (str) – client_secret (str) – user_agent (str) – search_queries (Sequence[str]) – mode (str) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
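A sketch of RecursiveUrlLoader; the starting URL is illustrative, and how many pages come back depends entirely on the site's link structure.

from langchain.document_loaders import RecursiveUrlLoader

# Child links under this path are followed recursively.
loader = RecursiveUrlLoader(url="https://docs.python.org/3.9/", exclude_dirs=None)
docs = loader.load()
print(len(docs))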
e84f0fb94639-44
search_queries (Sequence[str]) – mode (str) – categories (Sequence[str]) – number_posts (Optional[int]) – load()[source] Load reddits. Return type List[langchain.schema.Document] class langchain.document_loaders.RoamLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Roam files from disk. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.S3DirectoryLoader(bucket, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from s3. Parameters bucket (str) – prefix (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.S3FileLoader(bucket, key)[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from s3. Parameters bucket (str) – key (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.SRTLoader(file_path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for .srt (subtitle) files. Parameters file_path (str) – load()[source] Load using pysrt file. Return type List[langchain.schema.Document] class langchain.document_loaders.SeleniumURLLoader(urls, continue_on_failure=True, browser='chrome', binary_location=None, executable_path=None, headless=True, arguments=[])[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-45
Bases: langchain.document_loaders.base.BaseLoader Loader that uses Selenium to load a page and unstructured to load the html. This is useful for loading pages that require javascript to render. Parameters urls (List[str]) – continue_on_failure (bool) – browser (Literal['chrome', 'firefox']) – binary_location (Optional[str]) – executable_path (Optional[str]) – headless (bool) – arguments (List[str]) – urls List of URLs to load. Type List[str] continue_on_failure If True, continue loading other URLs on failure. Type bool browser The browser to use, either 'chrome' or 'firefox'. Type str binary_location The location of the browser binary. Type Optional[str] executable_path The path to the browser executable. Type Optional[str] headless If True, the browser will run in headless mode. Type bool arguments [List[str]] List of arguments to pass to the browser. load()[source] Load the specified URLs using Selenium and create Document instances. Returns A list of Document instances with loaded content. Return type List[Document] class langchain.document_loaders.SitemapLoader(web_path, filter_urls=None, parsing_function=None, blocksize=None, blocknum=0, meta_function=None, is_local=False)[source] Bases: langchain.document_loaders.web_base.WebBaseLoader Loader that fetches a sitemap and loads those URLs. Parameters web_path (str) – filter_urls (Optional[List[str]]) – parsing_function (Optional[Callable]) – blocksize (Optional[int]) – blocknum (int) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
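A sketch of SeleniumURLLoader; it assumes selenium, unstructured, and a matching Chrome driver are installed, and the URL is a placeholder.

from langchain.document_loaders import SeleniumURLLoader

urls = ["https://example.com"]  # placeholder URL list
loader = SeleniumURLLoader(urls=urls, browser="chrome", headless=True)
docs = loader.load()  # one Document per successfully rendered URL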
e84f0fb94639-46
blocksize (Optional[int]) – blocknum (int) – meta_function (Optional[Callable]) – is_local (bool) – parse_sitemap(soup)[source] Parse sitemap xml and load into a list of dicts. Parameters soup (Any) – Return type List[dict] load()[source] Load sitemap. Return type List[langchain.schema.Document] class langchain.document_loaders.SlackDirectoryLoader(zip_path, workspace_url=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader for loading documents from a Slack directory dump. Parameters zip_path (str) – workspace_url (Optional[str]) – load()[source] Load and return documents from the Slack directory dump. Return type List[langchain.schema.Document] class langchain.document_loaders.SnowflakeLoader(query, user, password, account, warehouse, role, database, schema, parameters=None, page_content_columns=None, metadata_columns=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from Snowflake into a list of documents. Each document represents one row of the result. The page_content_columns are written into the page_content of the document. The metadata_columns are written into the metadata of the document. By default, all columns are written into the page_content and none into the metadata. Parameters query (str) – user (str) – password (str) – account (str) – warehouse (str) – role (str) – database (str) – schema (str) – parameters (Optional[Dict[str, Any]]) – page_content_columns (Optional[List[str]]) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
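A sketch of SitemapLoader; the sitemap URL is a placeholder, and filter_urls is treated here as a list of URL patterns to keep, which is an assumption based on the parameter name.

from langchain.document_loaders import SitemapLoader

loader = SitemapLoader(
    web_path="https://example.com/sitemap.xml",  # placeholder sitemap URL
    filter_urls=["https://example.com/docs/"],   # assumed: only matching URLs are loaded
)
docs = loader.load()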
e84f0fb94639-47
page_content_columns (Optional[List[str]]) – metadata_columns (Optional[List[str]]) – lazy_load()[source] A lazy loader for document content. Return type Iterator[langchain.schema.Document] load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.SpreedlyLoader(access_token, resource)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Spreedly API. Parameters access_token (str) – resource (str) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.StripeLoader(resource, access_token=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that fetches data from Stripe. Parameters resource (str) – access_token (Optional[str]) – Return type None load()[source] Load data into document objects. Return type List[langchain.schema.Document] class langchain.document_loaders.TencentCOSDirectoryLoader(conf, bucket, prefix='')[source] Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Tencent Cloud COS. Parameters conf (Any) – bucket (str) – prefix (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] Load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TencentCOSFileLoader(conf, bucket, key)[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-48
Bases: langchain.document_loaders.base.BaseLoader Loading logic for loading documents from Tencent Cloud COS. Parameters conf (Any) – bucket (str) – key (str) – load()[source] Load data into document objects. Return type List[langchain.schema.Document] lazy_load()[source] Load documents. Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TelegramChatApiLoader(chat_entity=None, api_id=None, api_hash=None, username=None, file_path='telegram_data.json')[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters chat_entity (Optional[EntityLike]) – api_id (Optional[int]) – api_hash (Optional[str]) – username (Optional[str]) – file_path (str) – async fetch_data_from_telegram()[source] Fetch data from Telegram API and save it as a JSON file. Return type None load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.TelegramChatFileLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Telegram chat json directory dump. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] langchain.document_loaders.TelegramChatLoader alias of langchain.document_loaders.telegram.TelegramChatFileLoader class langchain.document_loaders.TextLoader(file_path, encoding=None, autodetect_encoding=False)[source] Bases: langchain.document_loaders.base.BaseLoader Load text files. Parameters
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-49
Bases: langchain.document_loaders.base.BaseLoader Load text files. Parameters file_path (str) – Path to the file to load. encoding (Optional[str]) – File encoding to use. If None, the file will be loaded with the default system encoding. autodetect_encoding (bool) – Whether to try to autodetect the file encoding if the specified encoding fails. load()[source] Load from file path. Return type List[langchain.schema.Document] class langchain.document_loaders.ToMarkdownLoader(url, api_key)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads HTML to markdown using 2markdown. Parameters url (str) – api_key (str) – lazy_load()[source] Lazily load the file. Return type Iterator[langchain.schema.Document] load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.TomlLoader(source)[source] Bases: langchain.document_loaders.base.BaseLoader A TOML document loader that inherits from the BaseLoader class. This class can be initialized with either a single source file or a source directory containing TOML files. Parameters source (Union[str, pathlib.Path]) – load()[source] Load and return all documents. Return type List[langchain.schema.Document] lazy_load()[source] Lazily load the TOML documents from the source file or directory. Return type Iterator[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
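A sketch of TextLoader showing the encoding fallback; the file path is a placeholder.

from langchain.document_loaders import TextLoader

# Fall back to encoding detection if utf-8 decoding fails.
loader = TextLoader("notes.txt", encoding="utf-8", autodetect_encoding=True)
docs = loader.load()  # a single Document containing the file contents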
e84f0fb94639-50
Return type Iterator[langchain.schema.Document] class langchain.document_loaders.TrelloLoader(client, board_name, *, include_card_name=True, include_comments=True, include_checklist=True, card_filter='all', extra_metadata=('due_date', 'labels', 'list', 'closed'))[source] Bases: langchain.document_loaders.base.BaseLoader Trello loader. Reads all cards from a Trello board. Parameters client (TrelloClient) – board_name (str) – include_card_name (bool) – include_comments (bool) – include_checklist (bool) – card_filter (Literal['closed', 'open', 'all']) – extra_metadata (Tuple[str, ...]) – classmethod from_credentials(board_name, *, api_key=None, token=None, **kwargs)[source] Convenience constructor that builds TrelloClient init param for you. Parameters board_name (str) – The name of the Trello board. api_key (Optional[str]) – Trello API key. Can also be specified as environment variable TRELLO_API_KEY. token (Optional[str]) – Trello token. Can also be specified as environment variable TRELLO_TOKEN. include_card_name – Whether to include the name of the card in the document. include_comments – Whether to include the comments on the card in the document. include_checklist – Whether to include the checklist on the card in the document. card_filter – Filter on card status. Valid values are "closed", "open", "all". extra_metadata – List of additional metadata fields to include as document metadata. Valid values are "due_date", "labels", "list", "closed". kwargs (Any) – Return type langchain.document_loaders.trello.TrelloLoader
https://api.python.langchain.com/en/latest/modules/document_loaders.html
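A sketch of the from_credentials constructor documented above; the board name is a placeholder and the environment variable names are the ones listed in the parameter descriptions.

import os

from langchain.document_loaders import TrelloLoader

loader = TrelloLoader.from_credentials(
    board_name="My Board",                     # placeholder board name
    api_key=os.environ.get("TRELLO_API_KEY"),  # documented env var
    token=os.environ.get("TRELLO_TOKEN"),      # documented env var
    card_filter="open",
)
docs = loader.load()  # one Document per card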
e84f0fb94639-51
Return type langchain.document_loaders.trello.TrelloLoader load()[source] Loads all cards from the specified Trello board. You can filter the cards, metadata and text included by using the optional parameters. Returns: A list of documents, one for each card in the board. Return type List[langchain.schema.Document] class langchain.document_loaders.TwitterTweetLoader(auth_handler, twitter_users, number_tweets=100)[source] Bases: langchain.document_loaders.base.BaseLoader Twitter tweets loader. Read tweets of a user's Twitter handle. First you need to go to https://developer.twitter.com/en/docs/twitter-api/getting-started/getting-access-to-the-twitter-api to get your token, and create a v2 version of the app. Parameters auth_handler (Union[OAuthHandler, OAuth2BearerHandler]) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – load()[source] Load tweets. Return type List[langchain.schema.Document] classmethod from_bearer_token(oauth2_bearer_token, twitter_users, number_tweets=100)[source] Create a TwitterTweetLoader from OAuth2 bearer token. Parameters oauth2_bearer_token (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader classmethod from_secrets(access_token, access_token_secret, consumer_key, consumer_secret, twitter_users, number_tweets=100)[source] Create a TwitterTweetLoader from access tokens and secrets. Parameters access_token (str) – access_token_secret (str) – consumer_key (str) – consumer_secret (str) – twitter_users (Sequence[str]) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
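A sketch of the from_bearer_token constructor; the environment variable name and Twitter handle are placeholders.

import os

from langchain.document_loaders import TwitterTweetLoader

loader = TwitterTweetLoader.from_bearer_token(
    oauth2_bearer_token=os.environ["TWITTER_BEARER_TOKEN"],  # placeholder env var name
    twitter_users=["hwchase17"],                             # placeholder handle
    number_tweets=50,
)
docs = loader.load()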
e84f0fb94639-52
consumer_secret (str) – twitter_users (Sequence[str]) – number_tweets (Optional[int]) – Return type langchain.document_loaders.twitter.TwitterTweetLoader class langchain.document_loaders.UnstructuredAPIFileIOLoader(file, mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileIOLoader Loader that uses the unstructured web API to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredAPIFileLoader(file_path='', mode='single', url='https://api.unstructured.io/general/v0/general', api_key='', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses the unstructured web API to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – url (str) – api_key (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredCSVLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load CSV files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredEPubLoader(file_path, mode='single', **unstructured_kwargs)[source]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-53
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load epub files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredEmailLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load email files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredExcelLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load Microsoft Excel files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileIOLoader(file, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load file IO objects. Parameters file (Union[IO, Sequence[IO]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredFileLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredBaseLoader Loader that uses unstructured to load files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
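A sketch of UnstructuredFileLoader; the file path is a placeholder, and mode="elements" is used here on the assumption that, as in the unstructured library, it yields one Document per detected element instead of a single Document.

from langchain.document_loaders import UnstructuredFileLoader

loader = UnstructuredFileLoader("report.docx", mode="elements")  # placeholder file
docs = loader.load()
print(docs[0].metadata)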
e84f0fb94639-54
mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredHTMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load HTML files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredImageLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load image files, such as PNGs and JPGs. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredMarkdownLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load markdown files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredODTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load open office ODT files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredOrgModeLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-55
Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load Org-Mode files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredPDFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load PDF files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredPowerPointLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load powerpoint files. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRSTLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load RST files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredRTFLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load rtf files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-56
mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredURLLoader(urls, continue_on_failure=True, mode='single', show_progress_bar=False, **unstructured_kwargs)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses unstructured to load HTML files. Parameters urls (List[str]) – continue_on_failure (bool) – mode (str) – show_progress_bar (bool) – unstructured_kwargs (Any) – load()[source] Load file. Return type List[langchain.schema.Document] class langchain.document_loaders.UnstructuredWordDocumentLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load word documents. Parameters file_path (Union[str, List[str]]) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.UnstructuredXMLLoader(file_path, mode='single', **unstructured_kwargs)[source] Bases: langchain.document_loaders.unstructured.UnstructuredFileLoader Loader that uses unstructured to load XML files. Parameters file_path (str) – mode (str) – unstructured_kwargs (Any) – class langchain.document_loaders.WeatherDataLoader(client, places)[source] Bases: langchain.document_loaders.base.BaseLoader Weather Reader. Reads the forecast & current weather of any location using OpenWeatherMap's free API. Check out 'https://openweathermap.org/appid' for more on how to generate a free OpenWeatherMap API key. Parameters client (OpenWeatherMapAPIWrapper) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-57
OpenWeatherMap API key. Parameters client (OpenWeatherMapAPIWrapper) – places (Sequence[str]) – Return type None classmethod from_params(places, *, openweathermap_api_key=None)[source] Parameters places (Sequence[str]) – openweathermap_api_key (Optional[str]) – Return type langchain.document_loaders.weather.WeatherDataLoader lazy_load()[source] Lazily load weather data for the given locations. Return type Iterator[langchain.schema.Document] load()[source] Load weather data for the given locations. Return type List[langchain.schema.Document] class langchain.document_loaders.WebBaseLoader(web_path, header_template=None, verify=True, proxies=None)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that uses urllib and beautiful soup to load webpages. Parameters web_path (Union[str, List[str]]) – header_template (Optional[dict]) – verify (Optional[bool]) – proxies (Optional[dict]) – requests_per_second: int = 2 Max number of concurrent requests to make. default_parser: str = 'html.parser' Default parser to use for BeautifulSoup. requests_kwargs: Dict[str, Any] = {} kwargs for requests raise_for_status: bool = False Raise an exception if http status code denotes an error. bs_get_text_kwargs: Dict[str, Any] = {} kwargs for beautifulsoup4 get_text web_paths: List[str] property web_path: str async fetch_all(urls)[source] Fetch all urls concurrently with rate limiting. Parameters urls (List[str]) – Return type Any
https://api.python.langchain.com/en/latest/modules/document_loaders.html
e84f0fb94639-58
Parameters urls (List[str]) – Return type Any scrape_all(urls, parser=None)[source] Fetch all urls, then return soups for all results. Parameters urls (List[str]) – parser (Optional[str]) – Return type List[Any] scrape(parser=None)[source] Scrape data from webpage and return it in BeautifulSoup format. Parameters parser (Optional[str]) – Return type Any lazy_load()[source] Lazy load text from the url(s) in web_path. Return type Iterator[langchain.schema.Document] load()[source] Load text from the url(s) in web_path. Return type List[langchain.schema.Document] aload()[source] Load text from the urls in web_path async into Documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WhatsAppChatLoader(path)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads WhatsApp messages text file. Parameters path (str) – load()[source] Load documents. Return type List[langchain.schema.Document] class langchain.document_loaders.WikipediaLoader(query, lang='en', load_max_docs=100, load_all_available_meta=False, doc_content_chars_max=4000)[source] Bases: langchain.document_loaders.base.BaseLoader Loads a query result from www.wikipedia.org into a list of Documents. The hard limit on the number of downloaded Documents is 300 for now. Each wiki page represents one Document. Parameters query (str) – lang (str) – load_max_docs (Optional[int]) – load_all_available_meta (Optional[bool]) –
https://api.python.langchain.com/en/latest/modules/document_loaders.html
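A sketch of WebBaseLoader using the attributes documented above; the URLs are placeholders.

from langchain.document_loaders import WebBaseLoader

loader = WebBaseLoader(["https://example.com", "https://example.org"])  # placeholder URLs
loader.requests_per_second = 1  # throttle concurrent requests
loader.raise_for_status = True  # raise on HTTP error status codes

docs = loader.load()     # sequential loading
# docs = loader.aload()  # concurrent fetching of all URLs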
e84f0fb94639-59
load_all_available_meta (Optional[bool]) – doc_content_chars_max (Optional[int]) – load()[source] Loads the query result from Wikipedia into a list of Documents. Returns A list of Document objects representing the loaded Wikipedia pages. Return type List[Document] class langchain.document_loaders.YoutubeAudioLoader(urls, save_dir)[source] Bases: langchain.document_loaders.blob_loaders.schema.BlobLoader Load YouTube urls as audio file(s). Parameters urls (List[str]) – save_dir (str) – yield_blobs()[source] Yield audio blobs for each url. Return type Iterable[langchain.document_loaders.blob_loaders.schema.Blob] class langchain.document_loaders.YoutubeLoader(video_id, add_video_info=False, language='en', translation='en', continue_on_failure=False)[source] Bases: langchain.document_loaders.base.BaseLoader Loader that loads Youtube transcripts. Parameters video_id (str) – add_video_info (bool) – language (Union[str, Sequence[str]]) – translation (str) – continue_on_failure (bool) – static extract_video_id(youtube_url)[source] Extract video id from common YT urls. Parameters youtube_url (str) – Return type str classmethod from_youtube_url(youtube_url, **kwargs)[source] Given youtube URL, load video. Parameters youtube_url (str) – kwargs (Any) – Return type langchain.document_loaders.youtube.YoutubeLoader load()[source] Load documents. Return type List[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_loaders.html
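A sketch of YoutubeLoader via its from_youtube_url constructor; the video URL is a placeholder and the youtube-transcript-api package is assumed to be installed.

from langchain.document_loaders import YoutubeLoader

loader = YoutubeLoader.from_youtube_url(
    "https://www.youtube.com/watch?v=dQw4w9WgXcQ",  # placeholder video URL
    add_video_info=False,
    language="en",
)
docs = loader.load()  # the transcript as Document(s)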
47c6ebf9eed9-0
Document Transformers Transform documents langchain.document_transformers.get_stateful_documents(documents)[source] Convert a list of documents to a list of documents with state. Parameters documents (Sequence[langchain.schema.Document]) – The documents to convert. Returns A list of documents with state. Return type Sequence[langchain.document_transformers._DocumentWithState] class langchain.document_transformers.EmbeddingsRedundantFilter(*, embeddings, similarity_fn=<function cosine_similarity>, similarity_threshold=0.95)[source] Bases: langchain.schema.BaseDocumentTransformer, pydantic.main.BaseModel Filter that drops redundant documents by comparing their embeddings. Parameters embeddings (langchain.embeddings.base.Embeddings) – similarity_fn (Callable) – similarity_threshold (float) – Return type None attribute embeddings: langchain.embeddings.base.Embeddings [Required] Embeddings to use for embedding document contents. attribute similarity_fn: Callable = <function cosine_similarity> Similarity function for comparing documents. Function expected to take as input two matrices (List[List[float]]) and return a matrix of scores where higher values indicate greater similarity. attribute similarity_threshold: float = 0.95 Threshold for determining when two documents are similar enough to be considered redundant. async atransform_documents(documents, **kwargs)[source] Asynchronously transform a list of documents. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] transform_documents(documents, **kwargs)[source] Filter down documents. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document]
https://api.python.langchain.com/en/latest/modules/document_transformers.html
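A sketch of EmbeddingsRedundantFilter; OpenAIEmbeddings is used purely as an illustrative Embeddings implementation and assumes the corresponding API key is configured.

from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.embeddings import OpenAIEmbeddings
from langchain.schema import Document

docs = [
    Document(page_content="LangChain ships many document loaders."),
    Document(page_content="LangChain ships many document loaders."),  # near-duplicate
    Document(page_content="Text splitters chunk documents for retrieval."),
]
redundant_filter = EmbeddingsRedundantFilter(
    embeddings=OpenAIEmbeddings(),  # assumed available and configured
    similarity_threshold=0.95,
)
unique_docs = redundant_filter.transform_documents(docs)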
47c6ebf9eed9-1
kwargs (Any) – Return type Sequence[langchain.schema.Document] Text Splitters Functionality for splitting text. class langchain.text_splitter.TextSplitter(chunk_size=4000, chunk_overlap=200, length_function=<built-in function len>, keep_separator=False, add_start_index=False)[source] Bases: langchain.schema.BaseDocumentTransformer, abc.ABC Interface for splitting text into chunks. Parameters chunk_size (int) – chunk_overlap (int) – length_function (Callable[[str], int]) – keep_separator (bool) – add_start_index (bool) – Return type None abstract split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] create_documents(texts, metadatas=None)[source] Create documents from a list of texts. Parameters texts (List[str]) – metadatas (Optional[List[dict]]) – Return type List[langchain.schema.Document] split_documents(documents)[source] Split documents. Parameters documents (Iterable[langchain.schema.Document]) – Return type List[langchain.schema.Document] classmethod from_huggingface_tokenizer(tokenizer, **kwargs)[source] Text splitter that uses HuggingFace tokenizer to count length. Parameters tokenizer (Any) – kwargs (Any) – Return type langchain.text_splitter.TextSplitter classmethod from_tiktoken_encoder(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source] Text splitter that uses tiktoken encoder to count length. Parameters encoding_name (str) – model_name (Optional[str]) –
https://api.python.langchain.com/en/latest/modules/document_transformers.html
47c6ebf9eed9-2
Parameters encoding_name (str) – model_name (Optional[str]) – allowed_special (Union[Literal['all'], typing.AbstractSet[str]]) – disallowed_special (Union[Literal['all'], typing.Collection[str]]) – kwargs (Any) – Return type langchain.text_splitter.TS transform_documents(documents, **kwargs)[source] Transform sequence of documents by splitting them. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] async atransform_documents(documents, **kwargs)[source] Asynchronously transform a sequence of documents by splitting them. Parameters documents (Sequence[langchain.schema.Document]) – kwargs (Any) – Return type Sequence[langchain.schema.Document] class langchain.text_splitter.CharacterTextSplitter(separator='\n\n', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at characters. Parameters separator (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str] class langchain.text_splitter.LineType[source] Bases: TypedDict Line type as typed dict. metadata: Dict[str, str] content: str class langchain.text_splitter.HeaderType[source] Bases: TypedDict Header type as typed dict. level: int name: str data: str class langchain.text_splitter.MarkdownHeaderTextSplitter(headers_to_split_on, return_each_line=False)[source] Bases: object
https://api.python.langchain.com/en/latest/modules/document_transformers.html
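A sketch of CharacterTextSplitter; chunk_size and chunk_overlap are forwarded to the TextSplitter base class documented above, and the sample text is invented.

from langchain.text_splitter import CharacterTextSplitter

long_text = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."

splitter = CharacterTextSplitter(separator="\n\n", chunk_size=50, chunk_overlap=0)
chunks = splitter.split_text(long_text)  # List[str]
docs = splitter.create_documents([long_text], metadatas=[{"source": "example"}])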
47c6ebf9eed9-3
Bases: object Implementation of splitting markdown files based on specified headers. Parameters headers_to_split_on (List[Tuple[str, str]]) – return_each_line (bool) – aggregate_lines_to_chunks(lines)[source] Combine lines with common metadata into chunks :param lines: Line of text / associated header metadata Parameters lines (List[langchain.text_splitter.LineType]) – Return type List[langchain.schema.Document] split_text(text)[source] Split markdown file :param text: Markdown file Parameters text (str) – Return type List[langchain.schema.Document] class langchain.text_splitter.Tokenizer(chunk_overlap: 'int', tokens_per_chunk: 'int', decode: 'Callable[[list[int]], str]', encode: 'Callable[[str], List[int]]')[source] Bases: object Parameters chunk_overlap (int) – tokens_per_chunk (int) – decode (Callable[[list[int]], str]) – encode (Callable[[str], List[int]]) – Return type None chunk_overlap: int tokens_per_chunk: int decode: Callable[[list[int]], str] encode: Callable[[str], List[int]] langchain.text_splitter.split_text_on_tokens(*, text, tokenizer)[source] Split incoming text and return chunks. Parameters text (str) – tokenizer (langchain.text_splitter.Tokenizer) – Return type List[str] class langchain.text_splitter.TokenTextSplitter(encoding_name='gpt2', model_name=None, allowed_special={}, disallowed_special='all', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at tokens. Parameters
https://api.python.langchain.com/en/latest/modules/document_transformers.html
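A sketch of MarkdownHeaderTextSplitter; the metadata key names ("Header 1", "Header 2") are chosen for this example, and split_text returns Documents carrying those keys.

from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown = "# Title\n\nIntro text.\n\n## Section\n\nSection text."
headers_to_split_on = [("#", "Header 1"), ("##", "Header 2")]  # (header marker, metadata key)

splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
docs = splitter.split_text(markdown)  # List[Document] with header metadata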
47c6ebf9eed9-4
Implementation of splitting text that looks at tokens. Parameters encoding_name (str) – model_name (Optional[str]) – allowed_special (Union[Literal['all'], AbstractSet[str]]) – disallowed_special (Union[Literal['all'], Collection[str]]) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] class langchain.text_splitter.SentenceTransformersTokenTextSplitter(chunk_overlap=50, model_name='sentence-transformers/all-mpnet-base-v2', tokens_per_chunk=None, **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at tokens. Parameters chunk_overlap (int) – model_name (str) – tokens_per_chunk (Optional[int]) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] count_tokens(*, text)[source] Parameters text (str) – Return type int class langchain.text_splitter.Language(value, names=None, *, module=None, qualname=None, type=None, start=1, boundary=None)[source] Bases: str, enum.Enum CPP = 'cpp' GO = 'go' JAVA = 'java' JS = 'js' PHP = 'php' PROTO = 'proto' PYTHON = 'python' RST = 'rst' RUBY = 'ruby' RUST = 'rust'
https://api.python.langchain.com/en/latest/modules/document_transformers.html
47c6ebf9eed9-5
RUBY = 'ruby' RUST = 'rust' SCALA = 'scala' SWIFT = 'swift' MARKDOWN = 'markdown' LATEX = 'latex' HTML = 'html' SOL = 'sol' class langchain.text_splitter.RecursiveCharacterTextSplitter(separators=None, keep_separator=True, **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works. Parameters separators (Optional[List[str]]) – keep_separator (bool) – kwargs (Any) – Return type None split_text(text)[source] Split text into multiple components. Parameters text (str) – Return type List[str] classmethod from_language(language, **kwargs)[source] Parameters language (langchain.text_splitter.Language) – kwargs (Any) – Return type langchain.text_splitter.RecursiveCharacterTextSplitter static get_separators_for_language(language)[source] Parameters language (langchain.text_splitter.Language) – Return type List[str] class langchain.text_splitter.NLTKTextSplitter(separator='\n\n', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at sentences using NLTK. Parameters separator (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str]
https://api.python.langchain.com/en/latest/modules/document_transformers.html
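A sketch of the from_language constructor and get_separators_for_language helper documented above; the code sample and chunk size are arbitrary.

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

python_code = "def hello():\n    print('hello')\n\n\nclass Greeter:\n    pass\n"

splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=60, chunk_overlap=0
)
docs = splitter.create_documents([python_code])
print(RecursiveCharacterTextSplitter.get_separators_for_language(Language.PYTHON))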
47c6ebf9eed9-6
Parameters text (str) – Return type List[str] class langchain.text_splitter.SpacyTextSplitter(separator='\n\n', pipeline='en_core_web_sm', **kwargs)[source] Bases: langchain.text_splitter.TextSplitter Implementation of splitting text that looks at sentences using Spacy. Parameters separator (str) – pipeline (str) – kwargs (Any) – Return type None split_text(text)[source] Split incoming text and return chunks. Parameters text (str) – Return type List[str] class langchain.text_splitter.PythonCodeTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Python syntax. Parameters kwargs (Any) – Return type None class langchain.text_splitter.MarkdownTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Markdown-formatted headings. Parameters kwargs (Any) – Return type None class langchain.text_splitter.LatexTextSplitter(**kwargs)[source] Bases: langchain.text_splitter.RecursiveCharacterTextSplitter Attempts to split the text along Latex-formatted layout elements. Parameters kwargs (Any) – Return type None
https://api.python.langchain.com/en/latest/modules/document_transformers.html
32e5ce03a7a6-0
All modules for which code is available langchain.agents.agent langchain.agents.agent_toolkits.azure_cognitive_services.toolkit langchain.agents.agent_toolkits.csv.base langchain.agents.agent_toolkits.file_management.toolkit langchain.agents.agent_toolkits.gmail.toolkit langchain.agents.agent_toolkits.jira.toolkit langchain.agents.agent_toolkits.json.base langchain.agents.agent_toolkits.json.toolkit langchain.agents.agent_toolkits.nla.toolkit langchain.agents.agent_toolkits.openapi.base langchain.agents.agent_toolkits.openapi.toolkit langchain.agents.agent_toolkits.pandas.base langchain.agents.agent_toolkits.playwright.toolkit langchain.agents.agent_toolkits.powerbi.base langchain.agents.agent_toolkits.powerbi.chat_base langchain.agents.agent_toolkits.powerbi.toolkit langchain.agents.agent_toolkits.python.base langchain.agents.agent_toolkits.spark.base langchain.agents.agent_toolkits.spark_sql.base langchain.agents.agent_toolkits.spark_sql.toolkit langchain.agents.agent_toolkits.sql.base langchain.agents.agent_toolkits.sql.toolkit langchain.agents.agent_toolkits.vectorstore.base langchain.agents.agent_toolkits.vectorstore.toolkit langchain.agents.agent_toolkits.zapier.toolkit langchain.agents.agent_types langchain.agents.conversational.base langchain.agents.conversational_chat.base langchain.agents.initialize langchain.agents.load_tools langchain.agents.loading langchain.agents.mrkl.base langchain.agents.openai_functions_agent.base langchain.agents.react.base langchain.agents.self_ask_with_search.base langchain.agents.structured_chat.base langchain.callbacks.aim_callback langchain.callbacks.argilla_callback langchain.callbacks.arize_callback
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-1
langchain.callbacks.argilla_callback langchain.callbacks.arize_callback langchain.callbacks.clearml_callback langchain.callbacks.comet_ml_callback langchain.callbacks.file langchain.callbacks.human langchain.callbacks.infino_callback langchain.callbacks.manager langchain.callbacks.mlflow_callback langchain.callbacks.openai_info langchain.callbacks.stdout langchain.callbacks.streaming_aiter langchain.callbacks.streaming_stdout langchain.callbacks.streaming_stdout_final_only langchain.callbacks.streamlit langchain.callbacks.streamlit.streamlit_callback_handler langchain.callbacks.wandb_callback langchain.callbacks.whylabs_callback langchain.chains.api.base langchain.chains.api.openapi.chain langchain.chains.combine_documents.base langchain.chains.combine_documents.map_reduce langchain.chains.combine_documents.map_rerank langchain.chains.combine_documents.refine langchain.chains.combine_documents.stuff langchain.chains.constitutional_ai.base langchain.chains.conversation.base langchain.chains.conversational_retrieval.base langchain.chains.flare.base langchain.chains.graph_qa.base langchain.chains.graph_qa.cypher langchain.chains.graph_qa.kuzu langchain.chains.graph_qa.nebulagraph langchain.chains.hyde.base langchain.chains.llm langchain.chains.llm_bash.base langchain.chains.llm_checker.base langchain.chains.llm_math.base langchain.chains.llm_requests langchain.chains.llm_summarization_checker.base langchain.chains.loading langchain.chains.mapreduce langchain.chains.moderation langchain.chains.natbot.base langchain.chains.openai_functions.citation_fuzzy_match langchain.chains.openai_functions.extraction langchain.chains.openai_functions.qa_with_structure
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-2
langchain.chains.openai_functions.qa_with_structure langchain.chains.openai_functions.tagging langchain.chains.pal.base langchain.chains.qa_generation.base langchain.chains.qa_with_sources.base langchain.chains.qa_with_sources.retrieval langchain.chains.qa_with_sources.vector_db langchain.chains.retrieval_qa.base langchain.chains.router.base langchain.chains.router.llm_router langchain.chains.router.multi_prompt langchain.chains.router.multi_retrieval_qa langchain.chains.sequential langchain.chains.sql_database.base langchain.chains.transform langchain.chat_models.anthropic langchain.chat_models.azure_openai langchain.chat_models.fake langchain.chat_models.google_palm langchain.chat_models.openai langchain.chat_models.promptlayer_openai langchain.chat_models.vertexai langchain.document_loaders.acreom langchain.document_loaders.airbyte_json langchain.document_loaders.airtable langchain.document_loaders.apify_dataset langchain.document_loaders.arxiv langchain.document_loaders.azlyrics langchain.document_loaders.azure_blob_storage_container langchain.document_loaders.azure_blob_storage_file langchain.document_loaders.bibtex langchain.document_loaders.bigquery langchain.document_loaders.bilibili langchain.document_loaders.blackboard langchain.document_loaders.blob_loaders.file_system langchain.document_loaders.blob_loaders.schema langchain.document_loaders.blob_loaders.youtube_audio langchain.document_loaders.blockchain langchain.document_loaders.chatgpt langchain.document_loaders.college_confidential langchain.document_loaders.confluence langchain.document_loaders.conllu langchain.document_loaders.csv_loader langchain.document_loaders.dataframe langchain.document_loaders.diffbot
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-3
langchain.document_loaders.dataframe langchain.document_loaders.diffbot langchain.document_loaders.directory langchain.document_loaders.discord langchain.document_loaders.docugami langchain.document_loaders.duckdb_loader langchain.document_loaders.email langchain.document_loaders.embaas langchain.document_loaders.epub langchain.document_loaders.evernote langchain.document_loaders.excel langchain.document_loaders.facebook_chat langchain.document_loaders.fauna langchain.document_loaders.figma langchain.document_loaders.gcs_directory langchain.document_loaders.gcs_file langchain.document_loaders.git langchain.document_loaders.gitbook langchain.document_loaders.github langchain.document_loaders.googledrive langchain.document_loaders.gutenberg langchain.document_loaders.hn langchain.document_loaders.html langchain.document_loaders.html_bs langchain.document_loaders.hugging_face_dataset langchain.document_loaders.ifixit langchain.document_loaders.image langchain.document_loaders.image_captions langchain.document_loaders.imsdb langchain.document_loaders.iugu langchain.document_loaders.joplin langchain.document_loaders.json_loader langchain.document_loaders.larksuite langchain.document_loaders.markdown langchain.document_loaders.mastodon langchain.document_loaders.max_compute langchain.document_loaders.mediawikidump langchain.document_loaders.merge langchain.document_loaders.mhtml langchain.document_loaders.modern_treasury langchain.document_loaders.notebook langchain.document_loaders.notion langchain.document_loaders.notiondb langchain.document_loaders.obsidian langchain.document_loaders.odt langchain.document_loaders.onedrive langchain.document_loaders.onedrive_file
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-4
langchain.document_loaders.onedrive langchain.document_loaders.onedrive_file langchain.document_loaders.open_city_data langchain.document_loaders.org_mode langchain.document_loaders.pdf langchain.document_loaders.powerpoint langchain.document_loaders.psychic langchain.document_loaders.pyspark_dataframe langchain.document_loaders.python langchain.document_loaders.readthedocs langchain.document_loaders.recursive_url_loader langchain.document_loaders.reddit langchain.document_loaders.roam langchain.document_loaders.rst langchain.document_loaders.rtf langchain.document_loaders.s3_directory langchain.document_loaders.s3_file langchain.document_loaders.sitemap langchain.document_loaders.slack_directory langchain.document_loaders.snowflake_loader langchain.document_loaders.spreedly langchain.document_loaders.srt langchain.document_loaders.stripe langchain.document_loaders.telegram langchain.document_loaders.tencent_cos_directory langchain.document_loaders.tencent_cos_file langchain.document_loaders.text langchain.document_loaders.tomarkdown langchain.document_loaders.toml langchain.document_loaders.trello langchain.document_loaders.twitter langchain.document_loaders.unstructured langchain.document_loaders.url langchain.document_loaders.url_playwright langchain.document_loaders.url_selenium langchain.document_loaders.weather langchain.document_loaders.web_base langchain.document_loaders.whatsapp_chat langchain.document_loaders.wikipedia langchain.document_loaders.word_document langchain.document_loaders.xml langchain.document_loaders.youtube langchain.document_transformers langchain.embeddings.aleph_alpha langchain.embeddings.bedrock langchain.embeddings.cohere langchain.embeddings.dashscope langchain.embeddings.deepinfra langchain.embeddings.elasticsearch
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-5
langchain.embeddings.deepinfra langchain.embeddings.elasticsearch langchain.embeddings.embaas langchain.embeddings.fake langchain.embeddings.huggingface langchain.embeddings.huggingface_hub langchain.embeddings.llamacpp langchain.embeddings.minimax langchain.embeddings.modelscope_hub langchain.embeddings.mosaicml langchain.embeddings.openai langchain.embeddings.sagemaker_endpoint langchain.embeddings.self_hosted langchain.embeddings.self_hosted_hugging_face langchain.embeddings.tensorflow_hub langchain.experimental.autonomous_agents.autogpt.agent langchain.experimental.autonomous_agents.baby_agi.baby_agi langchain.experimental.generative_agents.generative_agent langchain.experimental.generative_agents.memory langchain.llms.ai21 langchain.llms.aleph_alpha langchain.llms.amazon_api_gateway langchain.llms.anthropic langchain.llms.anyscale langchain.llms.aviary langchain.llms.azureml_endpoint langchain.llms.bananadev langchain.llms.baseten langchain.llms.beam langchain.llms.bedrock langchain.llms.cerebriumai langchain.llms.clarifai langchain.llms.cohere langchain.llms.ctransformers langchain.llms.databricks langchain.llms.deepinfra langchain.llms.fake langchain.llms.forefrontai langchain.llms.google_palm langchain.llms.gooseai langchain.llms.gpt4all langchain.llms.huggingface_endpoint langchain.llms.huggingface_hub langchain.llms.huggingface_pipeline langchain.llms.huggingface_text_gen_inference langchain.llms.human langchain.llms.llamacpp langchain.llms.manifest
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-6
langchain.llms.llamacpp langchain.llms.manifest langchain.llms.modal langchain.llms.mosaicml langchain.llms.nlpcloud langchain.llms.octoai_endpoint langchain.llms.openai langchain.llms.openllm langchain.llms.openlm langchain.llms.petals langchain.llms.pipelineai langchain.llms.predictionguard langchain.llms.promptlayer_openai langchain.llms.replicate langchain.llms.rwkv langchain.llms.sagemaker_endpoint langchain.llms.self_hosted langchain.llms.self_hosted_hugging_face langchain.llms.stochasticai langchain.llms.textgen langchain.llms.vertexai langchain.llms.writer langchain.memory.buffer langchain.memory.buffer_window langchain.memory.chat_message_histories.cassandra langchain.memory.chat_message_histories.cosmos_db langchain.memory.chat_message_histories.dynamodb langchain.memory.chat_message_histories.file langchain.memory.chat_message_histories.in_memory langchain.memory.chat_message_histories.momento langchain.memory.chat_message_histories.mongodb langchain.memory.chat_message_histories.postgres langchain.memory.chat_message_histories.redis langchain.memory.chat_message_histories.sql langchain.memory.chat_message_histories.zep langchain.memory.combined langchain.memory.entity langchain.memory.kg langchain.memory.motorhead_memory langchain.memory.readonly langchain.memory.simple langchain.memory.summary langchain.memory.summary_buffer langchain.memory.token_buffer langchain.memory.vectorstore langchain.output_parsers.boolean langchain.output_parsers.combining langchain.output_parsers.datetime langchain.output_parsers.enum langchain.output_parsers.fix langchain.output_parsers.list
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-7
langchain.output_parsers.fix langchain.output_parsers.list langchain.output_parsers.pydantic langchain.output_parsers.rail_parser langchain.output_parsers.regex langchain.output_parsers.regex_dict langchain.output_parsers.retry langchain.output_parsers.structured langchain.prompts.base langchain.prompts.chat langchain.prompts.example_selector.length_based langchain.prompts.example_selector.ngram_overlap langchain.prompts.example_selector.semantic_similarity langchain.prompts.few_shot langchain.prompts.few_shot_with_templates langchain.prompts.loading langchain.prompts.pipeline langchain.prompts.prompt langchain.requests langchain.retrievers.arxiv langchain.retrievers.azure_cognitive_search langchain.retrievers.chatgpt_plugin_retriever langchain.retrievers.contextual_compression langchain.retrievers.databerry langchain.retrievers.docarray langchain.retrievers.document_compressors.base langchain.retrievers.document_compressors.chain_extract langchain.retrievers.document_compressors.chain_filter langchain.retrievers.document_compressors.cohere_rerank langchain.retrievers.document_compressors.embeddings_filter langchain.retrievers.elastic_search_bm25 langchain.retrievers.kendra langchain.retrievers.knn langchain.retrievers.llama_index langchain.retrievers.merger_retriever langchain.retrievers.metal langchain.retrievers.milvus langchain.retrievers.multi_query langchain.retrievers.pinecone_hybrid_search langchain.retrievers.pupmed langchain.retrievers.remote_retriever langchain.retrievers.self_query.base langchain.retrievers.svm langchain.retrievers.tfidf langchain.retrievers.time_weighted_retriever
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-8
langchain.retrievers.tfidf langchain.retrievers.time_weighted_retriever langchain.retrievers.vespa_retriever langchain.retrievers.weaviate_hybrid_search langchain.retrievers.wikipedia langchain.retrievers.zep langchain.retrievers.zilliz langchain.schema langchain.text_splitter langchain.tools.arxiv.tool langchain.tools.azure_cognitive_services.form_recognizer langchain.tools.azure_cognitive_services.image_analysis langchain.tools.azure_cognitive_services.speech2text langchain.tools.azure_cognitive_services.text2speech langchain.tools.base langchain.tools.bing_search.tool langchain.tools.brave_search.tool langchain.tools.convert_to_openai langchain.tools.ddg_search.tool langchain.tools.file_management.copy langchain.tools.file_management.delete langchain.tools.file_management.file_search langchain.tools.file_management.list_dir langchain.tools.file_management.move langchain.tools.file_management.read langchain.tools.file_management.write langchain.tools.gmail.create_draft langchain.tools.gmail.get_message langchain.tools.gmail.get_thread langchain.tools.gmail.search langchain.tools.gmail.send_message langchain.tools.google_places.tool langchain.tools.google_search.tool langchain.tools.google_serper.tool langchain.tools.graphql.tool langchain.tools.human.tool langchain.tools.ifttt langchain.tools.interaction.tool langchain.tools.jira.tool langchain.tools.json.tool langchain.tools.metaphor_search.tool langchain.tools.openapi.utils.api_models langchain.tools.openweathermap.tool langchain.tools.playwright.click langchain.tools.playwright.current_page langchain.tools.playwright.extract_hyperlinks langchain.tools.playwright.extract_text langchain.tools.playwright.get_elements langchain.tools.playwright.navigate langchain.tools.playwright.navigate_back langchain.tools.plugin
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-9
langchain.tools.playwright.navigate langchain.tools.playwright.navigate_back langchain.tools.plugin langchain.tools.powerbi.tool langchain.tools.pubmed.tool langchain.tools.python.tool langchain.tools.requests.tool langchain.tools.scenexplain.tool langchain.tools.searx_search.tool langchain.tools.shell.tool langchain.tools.sleep.tool langchain.tools.spark_sql.tool langchain.tools.sql_database.tool langchain.tools.steamship_image_generation.tool langchain.tools.vectorstore.tool langchain.tools.wikipedia.tool langchain.tools.wolfram_alpha.tool langchain.tools.youtube.search langchain.tools.zapier.tool langchain.utilities.apify langchain.utilities.arxiv langchain.utilities.awslambda langchain.utilities.bash langchain.utilities.bibtex langchain.utilities.bing_search langchain.utilities.brave_search langchain.utilities.duckduckgo_search langchain.utilities.google_places_api langchain.utilities.google_search langchain.utilities.google_serper langchain.utilities.graphql langchain.utilities.jira langchain.utilities.max_compute langchain.utilities.metaphor_search langchain.utilities.openapi langchain.utilities.openweathermap langchain.utilities.powerbi langchain.utilities.pupmed langchain.utilities.python langchain.utilities.scenexplain langchain.utilities.searx_search langchain.utilities.serpapi langchain.utilities.spark_sql langchain.utilities.twilio langchain.utilities.wikipedia langchain.utilities.wolfram_alpha langchain.utilities.zapier langchain.vectorstores.alibabacloud_opensearch langchain.vectorstores.analyticdb langchain.vectorstores.annoy langchain.vectorstores.atlas langchain.vectorstores.awadb langchain.vectorstores.azuresearch langchain.vectorstores.base langchain.vectorstores.cassandra langchain.vectorstores.chroma langchain.vectorstores.clarifai
https://api.python.langchain.com/en/latest/_modules/index.html
32e5ce03a7a6-10
langchain.vectorstores.chroma langchain.vectorstores.clarifai langchain.vectorstores.clickhouse langchain.vectorstores.deeplake langchain.vectorstores.docarray.hnsw langchain.vectorstores.docarray.in_memory langchain.vectorstores.elastic_vector_search langchain.vectorstores.faiss langchain.vectorstores.hologres langchain.vectorstores.lancedb langchain.vectorstores.matching_engine langchain.vectorstores.milvus langchain.vectorstores.mongodb_atlas langchain.vectorstores.myscale langchain.vectorstores.opensearch_vector_search langchain.vectorstores.pinecone langchain.vectorstores.qdrant langchain.vectorstores.redis langchain.vectorstores.rocksetdb langchain.vectorstores.singlestoredb langchain.vectorstores.sklearn langchain.vectorstores.starrocks langchain.vectorstores.supabase langchain.vectorstores.tair langchain.vectorstores.tigris langchain.vectorstores.typesense langchain.vectorstores.vectara langchain.vectorstores.weaviate langchain.vectorstores.zilliz pydantic.config pydantic.main
https://api.python.langchain.com/en/latest/_modules/index.html
94e2996c717b-0
Source code for langchain.text_splitter """Functionality for splitting text.""" from __future__ import annotations import copy import logging import re from abc import ABC, abstractmethod from dataclasses import dataclass from enum import Enum from typing import ( AbstractSet, Any, Callable, Collection, Dict, Iterable, List, Literal, Optional, Sequence, Tuple, Type, TypedDict, TypeVar, Union, cast, ) from langchain.docstore.document import Document from langchain.schema import BaseDocumentTransformer logger = logging.getLogger(__name__) TS = TypeVar("TS", bound="TextSplitter") def _split_text_with_regex( text: str, separator: str, keep_separator: bool ) -> List[str]: # Now that we have the separator, split the text if separator: if keep_separator: # The parentheses in the pattern keep the delimiters in the result. _splits = re.split(f"({separator})", text) splits = [_splits[i] + _splits[i + 1] for i in range(1, len(_splits), 2)] if len(_splits) % 2 == 0: splits += _splits[-1:] splits = [_splits[0]] + splits else: splits = text.split(separator) else: splits = list(text) return [s for s in splits if s != ""] [docs]class TextSplitter(BaseDocumentTransformer, ABC): """Interface for splitting text into chunks.""" def __init__( self,
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
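The _split_text_with_regex helper above is private, but a small sketch makes its keep_separator behaviour concrete; importing a leading-underscore name is done here purely for illustration and the sample text is invented.

from langchain.text_splitter import _split_text_with_regex  # private helper, imported only for this demo

text = "para one\n\npara two\n\npara three"
# keep_separator=False: a plain str.split on the literal separator
print(_split_text_with_regex(text, "\n\n", keep_separator=False))
# ['para one', 'para two', 'para three']
# keep_separator=True: a regex split with a capturing group; each separator is re-attached to the piece that follows it
print(_split_text_with_regex(text, "\n\n", keep_separator=True))
# ['para one', '\n\npara two', '\n\npara three']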
94e2996c717b-1
"""Interface for splitting text into chunks.""" def __init__( self, chunk_size: int = 4000, chunk_overlap: int = 200, length_function: Callable[[str], int] = len, keep_separator: bool = False, add_start_index: bool = False, ) -> None: """Create a new TextSplitter. Args: chunk_size: Maximum size of chunks to return chunk_overlap: Overlap in characters between chunks length_function: Function that measures the length of given chunks keep_separator: Whether or not to keep the separator in the chunks add_start_index: If `True`, includes chunk's start index in metadata """ if chunk_overlap > chunk_size: raise ValueError( f"Got a larger chunk overlap ({chunk_overlap}) than chunk size " f"({chunk_size}), should be smaller." ) self._chunk_size = chunk_size self._chunk_overlap = chunk_overlap self._length_function = length_function self._keep_separator = keep_separator self._add_start_index = add_start_index [docs] @abstractmethod def split_text(self, text: str) -> List[str]: """Split text into multiple components.""" [docs] def create_documents( self, texts: List[str], metadatas: Optional[List[dict]] = None ) -> List[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] for i, text in enumerate(texts): index = -1 for chunk in self.split_text(text): metadata = copy.deepcopy(_metadatas[i])
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
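To make the constructor arguments above concrete, here is a minimal sketch using the concrete CharacterTextSplitter defined later in this module; the file name in the metadata is invented for the example. Note that __init__ raises a ValueError when chunk_overlap is larger than chunk_size.

from langchain.text_splitter import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=200,        # maximum chunk length as measured by length_function
    chunk_overlap=20,      # must be smaller than chunk_size
    length_function=len,
    add_start_index=True,  # records each chunk's offset under metadata["start_index"]
)
docs = splitter.create_documents(
    ["First paragraph.\n\nSecond paragraph.\n\nThird paragraph."],
    metadatas=[{"source": "example.txt"}],  # hypothetical file name
)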
94e2996c717b-2
metadata = copy.deepcopy(_metadatas[i]) if self._add_start_index: index = text.find(chunk, index + 1) metadata["start_index"] = index new_doc = Document(page_content=chunk, metadata=metadata) documents.append(new_doc) return documents [docs] def split_documents(self, documents: Iterable[Document]) -> List[Document]: """Split documents.""" texts, metadatas = [], [] for doc in documents: texts.append(doc.page_content) metadatas.append(doc.metadata) return self.create_documents(texts, metadatas=metadatas) def _join_docs(self, docs: List[str], separator: str) -> Optional[str]: text = separator.join(docs) text = text.strip() if text == "": return None else: return text def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]: # We now want to combine these smaller pieces into medium size # chunks to send to the LLM. separator_len = self._length_function(separator) docs = [] current_doc: List[str] = [] total = 0 for d in splits: _len = self._length_function(d) if ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size ): if total > self._chunk_size: logger.warning( f"Created a chunk of size {total}, " f"which is longer than the specified {self._chunk_size}" ) if len(current_doc) > 0:
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
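split_documents, shown above, is a thin wrapper around create_documents that carries each input Document's metadata onto every chunk produced from it; a small sketch with an invented source path:

from langchain.docstore.document import Document
from langchain.text_splitter import CharacterTextSplitter

docs = [Document(page_content="one\n\ntwo\n\nthree", metadata={"source": "notes/a.txt"})]
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=5, chunk_overlap=0)
for chunk in splitter.split_documents(docs):
    # every chunk repeats {'source': 'notes/a.txt'} in its metadata
    print(chunk.page_content, chunk.metadata)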
94e2996c717b-3
) if len(current_doc) > 0: doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) # Keep popping splits off the front of current_doc while: # - the accumulated total is larger than the chunk overlap, or # - adding the next split would still exceed the chunk size and there is content left to drop while total > self._chunk_overlap or ( total + _len + (separator_len if len(current_doc) > 0 else 0) > self._chunk_size and total > 0 ): total -= self._length_function(current_doc[0]) + ( separator_len if len(current_doc) > 1 else 0 ) current_doc = current_doc[1:] current_doc.append(d) total += _len + (separator_len if len(current_doc) > 1 else 0) doc = self._join_docs(current_doc, separator) if doc is not None: docs.append(doc) return docs [docs] @classmethod def from_huggingface_tokenizer(cls, tokenizer: Any, **kwargs: Any) -> TextSplitter: """Text splitter that uses HuggingFace tokenizer to count length.""" try: from transformers import PreTrainedTokenizerBase if not isinstance(tokenizer, PreTrainedTokenizerBase): raise ValueError( "Tokenizer received was not an instance of PreTrainedTokenizerBase" ) def _huggingface_tokenizer_length(text: str) -> int: return len(tokenizer.encode(text)) except ImportError: raise ValueError( "Could not import transformers python package. " "Please install it with `pip install transformers`." )
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
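A sketch of from_huggingface_tokenizer, assuming the transformers package is installed and the gpt2 tokenizer can be fetched; chunk lengths are then measured in GPT-2 tokens rather than characters, and the input text is invented.

from transformers import AutoTokenizer  # assumes `pip install transformers`
from langchain.text_splitter import CharacterTextSplitter

hf_tokenizer = AutoTokenizer.from_pretrained("gpt2")  # downloads the tokenizer on first use
splitter = CharacterTextSplitter.from_huggingface_tokenizer(
    hf_tokenizer, separator="\n\n", chunk_size=100, chunk_overlap=0
)
chunks = splitter.split_text("some long text\n\nwith several paragraphs ...")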
94e2996c717b-4
"Please install it with `pip install transformers`." ) return cls(length_function=_huggingface_tokenizer_length, **kwargs) [docs] @classmethod def from_tiktoken_encoder( cls: Type[TS], encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ) -> TS: """Text splitter that uses tiktoken encoder to count length.""" try: import tiktoken except ImportError: raise ImportError( "Could not import tiktoken python package. " "This is needed in order to calculate max_tokens_for_prompt. " "Please install it with `pip install tiktoken`." ) if model_name is not None: enc = tiktoken.encoding_for_model(model_name) else: enc = tiktoken.get_encoding(encoding_name) def _tiktoken_encoder(text: str) -> int: return len( enc.encode( text, allowed_special=allowed_special, disallowed_special=disallowed_special, ) ) if issubclass(cls, TokenTextSplitter): extra_kwargs = { "encoding_name": encoding_name, "model_name": model_name, "allowed_special": allowed_special, "disallowed_special": disallowed_special, } kwargs = {**kwargs, **extra_kwargs} return cls(length_function=_tiktoken_encoder, **kwargs) [docs] def transform_documents(
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
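from_tiktoken_encoder works the same way but counts length with a tiktoken encoding; a sketch, assuming tiktoken is installed. When it is called on TokenTextSplitter itself, the issubclass branch above also forwards the encoding parameters into the constructor.

from langchain.text_splitter import CharacterTextSplitter, TokenTextSplitter  # assumes `pip install tiktoken`

# Character-based splitting, but chunk_size measured in cl100k_base tokens
char_splitter = CharacterTextSplitter.from_tiktoken_encoder(
    encoding_name="cl100k_base", chunk_size=256, chunk_overlap=32
)

# Token-based splitting; model_name is resolved via tiktoken.encoding_for_model
token_splitter = TokenTextSplitter.from_tiktoken_encoder(
    model_name="gpt-3.5-turbo", chunk_size=256, chunk_overlap=32
)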
94e2996c717b-5
[docs] def transform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Transform sequence of documents by splitting them.""" return self.split_documents(list(documents)) [docs] async def atransform_documents( self, documents: Sequence[Document], **kwargs: Any ) -> Sequence[Document]: """Asynchronously transform a sequence of documents by splitting them.""" raise NotImplementedError [docs]class CharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters.""" def __init__(self, separator: str = "\n\n", **kwargs: Any) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs) self._separator = separator [docs] def split_text(self, text: str) -> List[str]: """Split incoming text and return chunks.""" # First we naively split the large input into a bunch of smaller ones. splits = _split_text_with_regex(text, self._separator, self._keep_separator) _separator = "" if self._keep_separator else self._separator return self._merge_splits(splits, _separator) [docs]class LineType(TypedDict): """Line type as typed dict.""" metadata: Dict[str, str] content: str [docs]class HeaderType(TypedDict): """Header type as typed dict.""" level: int name: str data: str [docs]class MarkdownHeaderTextSplitter: """Implementation of splitting markdown files based on specified headers.""" def __init__( self, headers_to_split_on: List[Tuple[str, str]], return_each_line: bool = False ):
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
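CharacterTextSplitter, completed above, first splits on a single separator and then re-merges the pieces up to chunk_size; a small sketch with invented text:

from langchain.text_splitter import CharacterTextSplitter

text = "Paragraph one.\n\nParagraph two.\n\nParagraph three."
splitter = CharacterTextSplitter(separator="\n\n", chunk_size=40, chunk_overlap=0)
chunks = splitter.split_text(text)
# With these sizes the first two paragraphs are merged into one chunk and the third becomes its own chunk.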
94e2996c717b-6
): """Create a new MarkdownHeaderTextSplitter. Args: headers_to_split_on: Headers we want to track return_each_line: Return each line w/ associated headers """ # Output line-by-line or aggregated into chunks w/ common headers self.return_each_line = return_each_line # Given the headers we want to split on, # (e.g., "#, ##, etc") order by length self.headers_to_split_on = sorted( headers_to_split_on, key=lambda split: len(split[0]), reverse=True ) [docs] def aggregate_lines_to_chunks(self, lines: List[LineType]) -> List[Document]: """Combine lines with common metadata into chunks Args: lines: Line of text / associated header metadata """ aggregated_chunks: List[LineType] = [] for line in lines: if ( aggregated_chunks and aggregated_chunks[-1]["metadata"] == line["metadata"] ): # If the last line in the aggregated list # has the same metadata as the current line, # append the current content to the last lines's content aggregated_chunks[-1]["content"] += " \n" + line["content"] else: # Otherwise, append the current line to the aggregated list aggregated_chunks.append(line) return [ Document(page_content=chunk["content"], metadata=chunk["metadata"]) for chunk in aggregated_chunks ] [docs] def split_text(self, text: str) -> List[Document]: """Split markdown file Args: text: Markdown file""" # Split the input text by newline character ("\n"). lines = text.split("\n") # Final output
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-7
lines = text.split("\n") # Final output lines_with_metadata: List[LineType] = [] # Content and metadata of the chunk currently being processed current_content: List[str] = [] current_metadata: Dict[str, str] = {} # Keep track of the nested header structure # header_stack: List[Dict[str, Union[int, str]]] = [] header_stack: List[HeaderType] = [] initial_metadata: Dict[str, str] = {} for line in lines: stripped_line = line.strip() # Check each line against each of the header types (e.g., #, ##) for sep, name in self.headers_to_split_on: # Check if line starts with a header that we intend to split on if stripped_line.startswith(sep) and ( # Header with no text OR header is followed by space # Both are valid conditions that sep is being used a header len(stripped_line) == len(sep) or stripped_line[len(sep)] == " " ): # Ensure we are tracking the header as metadata if name is not None: # Get the current header level current_header_level = sep.count("#") # Pop out headers of lower or same level from the stack while ( header_stack and header_stack[-1]["level"] >= current_header_level ): # We have encountered a new header # at the same or higher level popped_header = header_stack.pop() # Clear the metadata for the # popped header in initial_metadata if popped_header["name"] in initial_metadata: initial_metadata.pop(popped_header["name"]) # Push the current header to the stack
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-8
# Push the current header to the stack header: HeaderType = { "level": current_header_level, "name": name, "data": stripped_line[len(sep) :].strip(), } header_stack.append(header) # Update initial_metadata with the current header initial_metadata[name] = header["data"] # Add the previous line to the lines_with_metadata # only if current_content is not empty if current_content: lines_with_metadata.append( { "content": "\n".join(current_content), "metadata": current_metadata.copy(), } ) current_content.clear() break else: if stripped_line: current_content.append(stripped_line) elif current_content: lines_with_metadata.append( { "content": "\n".join(current_content), "metadata": current_metadata.copy(), } ) current_content.clear() current_metadata = initial_metadata.copy() if current_content: lines_with_metadata.append( {"content": "\n".join(current_content), "metadata": current_metadata} ) # lines_with_metadata has each line with associated header metadata # aggregate these into chunks based on common metadata if not self.return_each_line: return self.aggregate_lines_to_chunks(lines_with_metadata) else: return [ Document(page_content=chunk["content"], metadata=chunk["metadata"]) for chunk in lines_with_metadata ] # should be in newer Python versions (3.10+) # @dataclass(frozen=True, kw_only=True, slots=True) [docs]@dataclass(frozen=True) class Tokenizer: chunk_overlap: int
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
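Putting MarkdownHeaderTextSplitter together, the sketch below splits a small made-up document on "#" and "##" headings; each returned Document carries the active headers as metadata.

from langchain.text_splitter import MarkdownHeaderTextSplitter

markdown = "# Title\n\n## Section A\nText under A.\n\n## Section B\nText under B."
splitter = MarkdownHeaderTextSplitter(
    headers_to_split_on=[("#", "Header 1"), ("##", "Header 2")]
)
for doc in splitter.split_text(markdown):
    print(doc.metadata, "->", doc.page_content)
# {'Header 1': 'Title', 'Header 2': 'Section A'} -> Text under A.
# {'Header 1': 'Title', 'Header 2': 'Section B'} -> Text under B.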
94e2996c717b-9
class Tokenizer: chunk_overlap: int tokens_per_chunk: int decode: Callable[[list[int]], str] encode: Callable[[str], List[int]] [docs]def split_text_on_tokens(*, text: str, tokenizer: Tokenizer) -> List[str]: """Split incoming text and return chunks.""" splits: List[str] = [] input_ids = tokenizer.encode(text) start_idx = 0 cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] while start_idx < len(input_ids): splits.append(tokenizer.decode(chunk_ids)) start_idx += tokenizer.tokens_per_chunk - tokenizer.chunk_overlap cur_idx = min(start_idx + tokenizer.tokens_per_chunk, len(input_ids)) chunk_ids = input_ids[start_idx:cur_idx] return splits [docs]class TokenTextSplitter(TextSplitter): """Implementation of splitting text that looks at tokens.""" def __init__( self, encoding_name: str = "gpt2", model_name: Optional[str] = None, allowed_special: Union[Literal["all"], AbstractSet[str]] = set(), disallowed_special: Union[Literal["all"], Collection[str]] = "all", **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs) try: import tiktoken except ImportError: raise ImportError( "Could not import tiktoken python package. " "This is needed for TokenTextSplitter. " "Please install it with `pip install tiktoken`." ) if model_name is not None:
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
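Tokenizer and split_text_on_tokens, defined above, implement the sliding token window shared by the token-based splitters. The toy whitespace "tokenizer" below exists only to make the window and overlap visible; real use goes through TokenTextSplitter or SentenceTransformersTokenTextSplitter.

from typing import Dict, List

from langchain.text_splitter import Tokenizer, split_text_on_tokens

vocab: Dict[str, int] = {}
words: Dict[int, str] = {}

def encode(text: str) -> List[int]:
    ids = []
    for word in text.split():
        idx = vocab.setdefault(word, len(vocab))
        words[idx] = word
        ids.append(idx)
    return ids

def decode(ids: List[int]) -> str:
    return " ".join(words[i] for i in ids)

toy = Tokenizer(chunk_overlap=2, tokens_per_chunk=5, decode=decode, encode=encode)
print(split_text_on_tokens(text="one two three four five six seven eight", tokenizer=toy))
# Windows of 5 "tokens" stepping forward by 3, so consecutive chunks share 2 words.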
94e2996c717b-10
) if model_name is not None: enc = tiktoken.encoding_for_model(model_name) else: enc = tiktoken.get_encoding(encoding_name) self._tokenizer = enc self._allowed_special = allowed_special self._disallowed_special = disallowed_special [docs] def split_text(self, text: str) -> List[str]: def _encode(_text: str) -> List[int]: return self._tokenizer.encode( _text, allowed_special=self._allowed_special, disallowed_special=self._disallowed_special, ) tokenizer = Tokenizer( chunk_overlap=self._chunk_overlap, tokens_per_chunk=self._chunk_size, decode=self._tokenizer.decode, encode=_encode, ) return split_text_on_tokens(text=text, tokenizer=tokenizer) [docs]class SentenceTransformersTokenTextSplitter(TextSplitter): """Implementation of splitting text that looks at tokens.""" def __init__( self, chunk_overlap: int = 50, model_name: str = "sentence-transformers/all-mpnet-base-v2", tokens_per_chunk: Optional[int] = None, **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(**kwargs, chunk_overlap=chunk_overlap) try: from sentence_transformers import SentenceTransformer except ImportError: raise ImportError( "Could not import sentence_transformers python package. " "This is needed for SentenceTransformersTokenTextSplitter. " "Please install it with `pip install sentence-transformers`." ) self.model_name = model_name
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
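TokenTextSplitter, completed above, simply wraps a tiktoken encoding in that Tokenizer and delegates to split_text_on_tokens; a sketch, assuming tiktoken is installed and using an invented input string:

from langchain.text_splitter import TokenTextSplitter

splitter = TokenTextSplitter(encoding_name="gpt2", chunk_size=10, chunk_overlap=2)
chunks = splitter.split_text("a long passage of text to be cut into token windows ...")
# Each chunk decodes to at most 10 GPT-2 tokens, and consecutive chunks overlap by 2 tokens.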
94e2996c717b-11
) self.model_name = model_name self._model = SentenceTransformer(self.model_name) self.tokenizer = self._model.tokenizer self._initialize_chunk_configuration(tokens_per_chunk=tokens_per_chunk) def _initialize_chunk_configuration( self, *, tokens_per_chunk: Optional[int] ) -> None: self.maximum_tokens_per_chunk = cast(int, self._model.max_seq_length) if tokens_per_chunk is None: self.tokens_per_chunk = self.maximum_tokens_per_chunk else: self.tokens_per_chunk = tokens_per_chunk if self.tokens_per_chunk > self.maximum_tokens_per_chunk: raise ValueError( f"The token limit of the models '{self.model_name}'" f" is: {self.maximum_tokens_per_chunk}." f" Argument tokens_per_chunk={self.tokens_per_chunk}" f" > maximum token limit." ) [docs] def split_text(self, text: str) -> List[str]: def encode_strip_start_and_stop_token_ids(text: str) -> List[int]: return self._encode(text)[1:-1] tokenizer = Tokenizer( chunk_overlap=self._chunk_overlap, tokens_per_chunk=self.tokens_per_chunk, decode=self.tokenizer.decode, encode=encode_strip_start_and_stop_token_ids, ) return split_text_on_tokens(text=text, tokenizer=tokenizer) [docs] def count_tokens(self, *, text: str) -> int: return len(self._encode(text)) _max_length_equal_32_bit_integer = 2**32 def _encode(self, text: str) -> List[int]: token_ids_with_start_and_end_token_ids = self.tokenizer.encode( text,
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-12
token_ids_with_start_and_end_token_ids = self.tokenizer.encode( text, max_length=self._max_length_equal_32_bit_integer, truncation="do_not_truncate", ) return token_ids_with_start_and_end_token_ids [docs]class Language(str, Enum): CPP = "cpp" GO = "go" JAVA = "java" JS = "js" PHP = "php" PROTO = "proto" PYTHON = "python" RST = "rst" RUBY = "ruby" RUST = "rust" SCALA = "scala" SWIFT = "swift" MARKDOWN = "markdown" LATEX = "latex" HTML = "html" SOL = "sol" [docs]class RecursiveCharacterTextSplitter(TextSplitter): """Implementation of splitting text that looks at characters. Recursively tries to split by different characters to find one that works. """ def __init__( self, separators: Optional[List[str]] = None, keep_separator: bool = True, **kwargs: Any, ) -> None: """Create a new TextSplitter.""" super().__init__(keep_separator=keep_separator, **kwargs) self._separators = separators or ["\n\n", "\n", " ", ""] def _split_text(self, text: str, separators: List[str]) -> List[str]: """Split incoming text and return chunks.""" final_chunks = [] # Get appropriate separator to use separator = separators[-1] new_separators = [] for i, _s in enumerate(separators):
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
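SentenceTransformersTokenTextSplitter sizes chunks with the tokenizer of a sentence-transformers model so that each chunk fits the model's sequence limit; a sketch, assuming sentence-transformers is installed and the model can be downloaded, with an invented input text.

from langchain.text_splitter import SentenceTransformersTokenTextSplitter

splitter = SentenceTransformersTokenTextSplitter(
    model_name="sentence-transformers/all-mpnet-base-v2",  # the default model
    chunk_overlap=20,
    # tokens_per_chunk defaults to the model's max_seq_length; larger values raise ValueError
)
print(splitter.count_tokens(text="How many tokens does this sentence use?"))
chunks = splitter.split_text("a long document to embed ...")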
94e2996c717b-13
for i, _s in enumerate(separators): if _s == "": separator = _s break if re.search(_s, text): separator = _s new_separators = separators[i + 1 :] break splits = _split_text_with_regex(text, separator, self._keep_separator) # Now go merging things, recursively splitting longer texts. _good_splits = [] _separator = "" if self._keep_separator else separator for s in splits: if self._length_function(s) < self._chunk_size: _good_splits.append(s) else: if _good_splits: merged_text = self._merge_splits(_good_splits, _separator) final_chunks.extend(merged_text) _good_splits = [] if not new_separators: final_chunks.append(s) else: other_info = self._split_text(s, new_separators) final_chunks.extend(other_info) if _good_splits: merged_text = self._merge_splits(_good_splits, _separator) final_chunks.extend(merged_text) return final_chunks [docs] def split_text(self, text: str) -> List[str]: return self._split_text(text, self._separators) [docs] @classmethod def from_language( cls, language: Language, **kwargs: Any ) -> RecursiveCharacterTextSplitter: separators = cls.get_separators_for_language(language) return cls(separators=separators, **kwargs) [docs] @staticmethod def get_separators_for_language(language: Language) -> List[str]: if language == Language.CPP: return [
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
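RecursiveCharacterTextSplitter, shown above, walks its separator list from coarsest to finest until the pieces fit, and from_language plugs in the per-language separator lists that follow; a short sketch with invented inputs:

from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

# Default separators: ["\n\n", "\n", " ", ""]
splitter = RecursiveCharacterTextSplitter(chunk_size=100, chunk_overlap=20)
chunks = splitter.split_text("a long plain-text document ...")

# Language-aware splitting uses get_separators_for_language under the hood
py_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=200, chunk_overlap=0
)
code_docs = py_splitter.create_documents(["def f():\n    return 1\n\ndef g():\n    return 2\n"])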
94e2996c717b-14
if language == Language.CPP: return [ # Split along class definitions "\nclass ", # Split along function definitions "\nvoid ", "\nint ", "\nfloat ", "\ndouble ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.GO: return [ # Split along function definitions "\nfunc ", "\nvar ", "\nconst ", "\ntype ", # Split along control flow statements "\nif ", "\nfor ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.JAVA: return [ # Split along class definitions "\nclass ", # Split along method definitions "\npublic ", "\nprotected ", "\nprivate ", "\nstatic ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.JS: return [ # Split along function definitions "\nfunction ", "\nconst ", "\nlet ",
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-15
"\nfunction ", "\nconst ", "\nlet ", "\nvar ", "\nclass ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nswitch ", "\ncase ", "\ndefault ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PHP: return [ # Split along function definitions "\nfunction ", # Split along class definitions "\nclass ", # Split along control flow statements "\nif ", "\nforeach ", "\nwhile ", "\ndo ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PROTO: return [ # Split along message definitions "\nmessage ", # Split along service definitions "\nservice ", # Split along enum definitions "\nenum ", # Split along option definitions "\noption ", # Split along import statements "\nimport ", # Split along syntax declarations "\nsyntax ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.PYTHON: return [ # First, try to split along class definitions "\nclass ", "\ndef ", "\n\tdef ", # Now split by the normal type of lines "\n\n",
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-16
# Now split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RST: return [ # Split along section titles "\n=+\n", "\n-+\n", "\n\*+\n", # Split along directive markers "\n\n.. *\n\n", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RUBY: return [ # Split along method definitions "\ndef ", "\nclass ", # Split along control flow statements "\nif ", "\nunless ", "\nwhile ", "\nfor ", "\ndo ", "\nbegin ", "\nrescue ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.RUST: return [ # Split along function definitions "\nfn ", "\nconst ", "\nlet ", # Split along control flow statements "\nif ", "\nwhile ", "\nfor ", "\nloop ", "\nmatch ", "\nconst ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.SCALA: return [ # Split along class definitions "\nclass ", "\nobject ", # Split along method definitions "\ndef ",
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html
94e2996c717b-17
"\nobject ", # Split along method definitions "\ndef ", "\nval ", "\nvar ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\nmatch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.SWIFT: return [ # Split along function definitions "\nfunc ", # Split along class definitions "\nclass ", "\nstruct ", "\nenum ", # Split along control flow statements "\nif ", "\nfor ", "\nwhile ", "\ndo ", "\nswitch ", "\ncase ", # Split by the normal type of lines "\n\n", "\n", " ", "", ] elif language == Language.MARKDOWN: return [ # First, try to split along Markdown headings (starting with level 2) "\n#{1,6} ", # Note the alternative syntax for headings (below) is not handled here # Heading level 2 # --------------- # End of code block "```\n", # Horizontal lines "\n\*\*\*+\n", "\n---+\n", "\n___+\n", # Note that this splitter doesn't handle horizontal lines defined # by *three or more* of ***, ---, or ___, but this is not handled "\n\n", "\n", " ", "", ]
https://api.python.langchain.com/en/latest/_modules/langchain/text_splitter.html