Columns:
id: string, lengths 14–15
text: string, lengths 35–2.07k
embedding: sequence
source: string, lengths 61–154
5766f2f470dc-0
langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector¶ class langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector(*, vectorstore: VectorStore, k: int = 4, example_keys: Optional[List[str]] = None, input_keys: Optional[List[str]] = None, fetch_k: int = 20)[source]¶ Bases: SemanticSimilarityExampleSelector ExampleSelector that selects examples based on Max Marginal Relevance. This was shown to improve performance in this paper: https://arxiv.org/pdf/2211.13892.pdf Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_keys: Optional[List[str]] = None¶ Optional keys to filter examples to. param fetch_k: int = 20¶ Number of examples to fetch to rerank. param input_keys: Optional[List[str]] = None¶ Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. param k: int = 4¶ Number of examples to select. param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶ VectorStore that contains information about examples. add_example(example: Dict[str, str]) → str¶ Add new example to vectorstore. classmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, fetch_k: int = 20, **vectorstore_cls_kwargs: Any) → MaxMarginalRelevanceExampleSelector[source]¶ Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters
[ 5317, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 14877, 45337, 992, 697, 33194, 13617, 6001, 55609, 198, 1058, 8859, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 14877, 45337, 992, 697, 33194, 13617, 6001, 4163, 11, 4724, 4412, 25, 4290, 6221, 11, 597, 25, 528, 284, 220, 19, 11, 3187, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 7963, 4803, 25, 528, 284, 220, 508, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 75433, 35502, 488, 13617, 6001, 198, 13617, 6001, 430, 50243, 10507, 3196, 389, 7639, 24079, 992, 1050, 33194, 627, 2028, 574, 6982, 311, 7417, 5178, 304, 420, 5684, 512, 2485, 1129, 277, 89833, 2726, 48851, 14, 12425, 16, 13, 10350, 6083, 16378, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 3187, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 7039, 311, 4141, 10507, 311, 627, 913, 7963, 4803, 25, 528, 284, 220, 508, 55609, 198, 2903, 315, 10507, 311, 7963, 311, 49647, 1201, 627, 913, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 7039, 311, 4141, 1988, 311, 13, 1442, 3984, 11, 279, 2778, 374, 3196, 389, 198, 1820, 1988, 7482, 4619, 315, 682, 7482, 627, 913, 597, 25, 528, 284, 220, 19, 55609, 198, 2903, 315, 10507, 311, 3373, 627, 913, 4724, 4412, 25, 8859, 8995, 48203, 44569, 9105, 14621, 6221, 510, 8327, 60, 55609, 198, 3866, 6221, 1109, 5727, 2038, 922, 10507, 627, 723, 40404, 67303, 25, 30226, 17752, 11, 610, 2526, 11651, 610, 55609, 198, 2261, 502, 3187, 311, 4724, 4412, 627, 27853, 505, 46379, 5580, 4112, 25, 1796, 58, 8644, 1145, 71647, 25, 38168, 25624, 11, 4724, 4412, 39756, 25, 4078, 58, 3866, 6221, 1145, 597, 25, 528, 284, 220, 19, 11, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 7963, 4803, 25, 528, 284, 220, 508, 11, 3146, 3295, 4412, 39756, 37335, 25, 5884, 8, 11651, 7639, 45337, 992, 697, 33194, 13617, 6001, 76747, 60, 55609, 198, 4110, 597, 64630, 3187, 9551, 1701, 3187, 1160, 323, 71647, 627, 1079, 71, 78553, 10507, 43111, 3196, 389, 3319, 38723, 627, 9905 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
5766f2f470dc-1
Reshuffles examples dynamically based on query similarity. Parameters examples – List of examples to use in the prompt. embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls – A vector store DB interface class, e.g. FAISS. k – Number of examples to select. input_keys – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs – optional kwargs containing url for vector store. Returns The ExampleSelector instantiated, backed by a vector store. select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on semantic similarity. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
[ 1079, 71, 78553, 10507, 43111, 3196, 389, 3319, 38723, 627, 9905, 198, 52768, 1389, 1796, 315, 10507, 311, 1005, 304, 279, 10137, 627, 12529, 25624, 1389, 1556, 17225, 78074, 40188, 5446, 3834, 11, 384, 1326, 13, 5377, 15836, 26566, 25624, 26914, 3295, 4412, 39756, 1389, 362, 4724, 3637, 6078, 3834, 538, 11, 384, 1326, 13, 15358, 31949, 627, 74, 1389, 5742, 315, 10507, 311, 3373, 198, 1379, 12919, 1389, 1442, 3984, 11, 279, 2778, 374, 3196, 389, 279, 1988, 7482, 198, 65937, 315, 682, 7482, 627, 3295, 4412, 39756, 37335, 1389, 10309, 16901, 8649, 2576, 369, 4724, 3637, 198, 16851, 198, 791, 13688, 6001, 55686, 11, 22126, 555, 264, 4724, 3637, 627, 1779, 46379, 5498, 29282, 25, 30226, 17752, 11, 610, 2526, 11651, 1796, 58, 8644, 1483, 2484, 60, 55609, 198, 3461, 902, 10507, 311, 1005, 3196, 389, 42833, 38723, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609, 198, 15824, 284, 364, 2000, 21301, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.MaxMarginalRelevanceExampleSelector.html
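The record above documents MaxMarginalRelevanceExampleSelector.from_examples; below is a minimal sketch of how it might be wired up. The example data, the FAISS backend, and the OpenAIEmbeddings instance are illustrative assumptions (they require the faiss and openai packages plus an API key), not part of the documentation itself.

.. code-block:: python

    # Hypothetical usage sketch for MaxMarginalRelevanceExampleSelector.from_examples.
    # Assumes `faiss-cpu` and `openai` are installed and OPENAI_API_KEY is set.
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.prompts.example_selector.semantic_similarity import (
        MaxMarginalRelevanceExampleSelector,
    )

    examples = [  # illustrative antonym pairs
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
        {"input": "sunny", "output": "gloomy"},
    ]

    selector = MaxMarginalRelevanceExampleSelector.from_examples(
        examples,            # examples: list of dicts, per the signature above
        OpenAIEmbeddings(),  # embeddings: an initialized embedding interface
        FAISS,               # vectorstore_cls: vector store class to back the selector
        k=2,                 # number of examples returned
        fetch_k=3,           # candidates fetched before MMR reranking
    )
    print(selector.select_examples({"input": "cheerful"}))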
e918766f14fa-0
langchain.prompts.base.BasePromptTemplate¶ class langchain.prompts.base.BasePromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]¶ Bases: Serializable, ABC Base class for all prompt templates, returning a prompt. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[langchain.schema.BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ dict(**kwargs: Any) → Dict[source]¶ Return dictionary representation of prompt. abstract format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") abstract format_prompt(**kwargs: Any) → PromptValue[source]¶ Create Chat Messages. partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate[source]¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None[source]¶ Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path="path/prompt.yaml") to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
[ 5317, 8995, 61848, 13044, 9105, 13316, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 9105, 13316, 55715, 7423, 4163, 11, 1988, 29282, 25, 1796, 17752, 1145, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 11, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 25901, 11, 19921, 198, 4066, 538, 369, 682, 10137, 20506, 11, 13758, 264, 10137, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 1988, 29282, 25, 1796, 17752, 60, 510, 8327, 60, 55609, 198, 32, 1160, 315, 279, 5144, 315, 279, 7482, 279, 10137, 3896, 25283, 627, 913, 2612, 19024, 25, 12536, 58, 5317, 8995, 31992, 13316, 5207, 6707, 60, 284, 2290, 55609, 198, 4438, 311, 4820, 279, 2612, 315, 8260, 459, 445, 11237, 389, 420, 24001, 10137, 627, 913, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 510, 15669, 60, 55609, 198, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 76747, 60, 55609, 198, 5715, 11240, 13340, 315, 10137, 627, 16647, 3645, 22551, 9872, 25, 5884, 8, 11651, 610, 76747, 60, 55609, 198, 4152, 279, 10137, 449, 279, 11374, 627, 9905, 198, 9872, 1389, 5884, 6105, 311, 387, 5946, 311, 279, 10137, 3896, 627, 16851, 198, 32, 24001, 925, 627, 13617, 512, 41681, 8180, 46129, 16, 429, 8134, 1158, 16647, 3645, 62521, 22551, 9872, 25, 5884, 8, 11651, 60601, 1150, 76747, 60, 55609, 198, 4110, 13149, 27827, 627, 38520, 22551, 9872, 25, 9323, 17752, 11, 54223, 58, 13292, 610, 30716, 11651, 5464, 55715, 7423, 76747, 60, 55609, 198, 5715, 264, 7276, 315, 279, 10137, 3896, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 76747, 60, 55609, 198, 8960, 279, 10137, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 6352, 311, 3665, 10137, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 41681, 5799, 4971, 2703, 45221, 2398, 4420, 15091, 34506, 863, 340, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.base.BasePromptTemplate.html
e918766f14fa-1
to_json_not_implemented() → SerializedNotImplemented¶ validator validate_variable_names  »  all fields[source]¶ Validate variable names do not include restricted names. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
[ 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 16503, 9788, 14977, 9366, 4194, 8345, 4194, 682, 5151, 76747, 60, 55609, 198, 18409, 3977, 5144, 656, 539, 2997, 22486, 5144, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 76747, 60, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.base.BasePromptTemplate.html
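As a rough illustration of the BasePromptTemplate interface documented above (format, partial, save), here is a minimal sketch using the concrete PromptTemplate subclass covered later in this dataset; the template text and file name are assumptions.

.. code-block:: python

    # Minimal sketch of the BasePromptTemplate interface via PromptTemplate.
    from langchain import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["adjective", "content"],
        template="Tell me a {adjective} joke about {content}.",
    )

    # partial() pins some variables and returns another BasePromptTemplate.
    funny = prompt.partial(adjective="funny")
    print(funny.format(content="chickens"))

    # save() serializes the template to disk; the file name is illustrative.
    prompt.save("joke_prompt.yaml")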
54dd344fac0a-0
langchain.prompts.base.StringPromptTemplate¶ class langchain.prompts.base.StringPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None)[source]¶ Bases: BasePromptTemplate, ABC String prompt should expose the format method, returning a prompt. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[langchain.schema.BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. abstract format(**kwargs: Any) → str¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue[source]¶ Create Chat Messages. partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path="path/prompt.yaml") to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶
[ 5317, 8995, 61848, 13044, 9105, 6567, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 9105, 6567, 55715, 7423, 4163, 11, 1988, 29282, 25, 1796, 17752, 1145, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 11, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 55715, 7423, 11, 19921, 198, 707, 10137, 1288, 29241, 279, 3645, 1749, 11, 13758, 264, 10137, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 1988, 29282, 25, 1796, 17752, 60, 510, 8327, 60, 55609, 198, 32, 1160, 315, 279, 5144, 315, 279, 7482, 279, 10137, 3896, 25283, 627, 913, 2612, 19024, 25, 12536, 58, 5317, 8995, 31992, 13316, 5207, 6707, 60, 284, 2290, 55609, 198, 4438, 311, 4820, 279, 2612, 315, 8260, 459, 445, 11237, 389, 420, 24001, 10137, 627, 913, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 510, 15669, 60, 55609, 198, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 10137, 627, 16647, 3645, 22551, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 279, 10137, 449, 279, 11374, 627, 9905, 198, 9872, 1389, 5884, 6105, 311, 387, 5946, 311, 279, 10137, 3896, 627, 16851, 198, 32, 24001, 925, 627, 13617, 512, 41681, 8180, 46129, 16, 429, 8134, 1158, 2293, 62521, 22551, 9872, 25, 5884, 8, 11651, 60601, 1150, 76747, 60, 55609, 198, 4110, 13149, 27827, 627, 38520, 22551, 9872, 25, 9323, 17752, 11, 54223, 58, 13292, 610, 30716, 11651, 5464, 55715, 7423, 55609, 198, 5715, 264, 7276, 315, 279, 10137, 3896, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 10137, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 6352, 311, 3665, 10137, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 41681, 5799, 4971, 2703, 45221, 2398, 4420, 15091, 34506, 863, 340, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
54dd344fac0a-1
to_json_not_implemented() → SerializedNotImplemented¶ validator validate_variable_names  »  all fields¶ Validate variable names do not include restricted names. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
[ 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 16503, 9788, 14977, 9366, 4194, 8345, 4194, 682, 5151, 55609, 198, 18409, 3977, 5144, 656, 539, 2997, 22486, 5144, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.base.StringPromptTemplate.html
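Since StringPromptTemplate is an ABC that only asks subclasses to implement format(), a custom template can be sketched as below; the class name and the formatting logic are hypothetical, not taken from the documentation.

.. code-block:: python

    # Hypothetical StringPromptTemplate subclass that prompts about a function's source.
    import inspect

    from langchain.prompts.base import StringPromptTemplate


    class FunctionDocPromptTemplate(StringPromptTemplate):
        def format(self, **kwargs) -> str:
            source = inspect.getsource(kwargs["function"])
            return f"Explain what this Python function does:\n\n{source}"


    def add(a, b):
        return a + b


    template = FunctionDocPromptTemplate(input_variables=["function"])
    print(template.format(function=add))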
dfeb42cd9470-0
langchain.prompts.chat.ChatMessagePromptTemplate¶ class langchain.prompts.chat.ChatMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None, role: str)[source]¶ Bases: BaseStringMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ param role: str [Required]¶ format(**kwargs: Any) → BaseMessage[source]¶ To a BaseMessage. format_messages(**kwargs: Any) → List[BaseMessage]¶ To messages. classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶ classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_variables: List[str]¶ Input variables for this prompt template. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶
[ 5317, 8995, 61848, 13044, 27215, 59944, 2097, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 27215, 59944, 2097, 55715, 7423, 4163, 11, 10137, 25, 935, 55715, 7423, 11, 5217, 37335, 25, 6587, 284, 2290, 11, 3560, 25, 610, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 707, 2097, 55715, 7423, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 5217, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 10137, 25, 8859, 8995, 61848, 13044, 9105, 6567, 55715, 7423, 510, 8327, 60, 55609, 198, 913, 3560, 25, 610, 510, 8327, 60, 55609, 198, 2293, 22551, 9872, 25, 5884, 8, 11651, 5464, 2097, 76747, 60, 55609, 198, 1271, 264, 5464, 2097, 627, 2293, 24321, 22551, 9872, 25, 5884, 8, 11651, 1796, 58, 4066, 2097, 60, 55609, 198, 1271, 6743, 627, 27853, 505, 8864, 31063, 25, 610, 11, 3896, 9132, 25, 610, 284, 364, 69, 31981, 518, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 27853, 505, 8864, 2517, 31063, 2517, 25, 9323, 17752, 11, 8092, 1145, 1988, 29282, 25, 1796, 17752, 1145, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 1988, 29282, 25, 1796, 17752, 60, 55609, 198, 2566, 7482, 369, 420, 10137, 3896, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
dfeb42cd9470-1
Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.ChatMessagePromptTemplate.html
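A short sketch of ChatMessagePromptTemplate.from_template as documented above; the role value and template string are illustrative. Because role is a required field, it is passed through from_template's **kwargs.

.. code-block:: python

    # Sketch: a chat message template with a custom role.
    from langchain.prompts.chat import ChatMessagePromptTemplate

    jedi_prompt = ChatMessagePromptTemplate.from_template(
        "May the {subject} be with you", role="Jedi"
    )
    # format() returns a single BaseMessage (here a ChatMessage with role="Jedi").
    print(jedi_prompt.format(subject="force"))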
1d0b198c5eff-0
langchain.prompts.chat.MessagesPlaceholder¶ class langchain.prompts.chat.MessagesPlaceholder(*, variable_name: str)[source]¶ Bases: BaseMessagePromptTemplate Prompt template that assumes variable is already list of messages. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param variable_name: str [Required]¶ format_messages(**kwargs: Any) → List[BaseMessage][source]¶ To a BaseMessage. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_variables: List[str]¶ Input variables for this prompt template. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 5317, 8995, 61848, 13044, 27215, 55545, 49405, 55609, 198, 1058, 8859, 8995, 61848, 13044, 27215, 55545, 49405, 4163, 11, 3977, 1292, 25, 610, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 2097, 55715, 7423, 198, 55715, 3896, 430, 22204, 3977, 374, 2736, 1160, 315, 6743, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 3977, 1292, 25, 610, 510, 8327, 60, 55609, 198, 2293, 24321, 22551, 9872, 25, 5884, 8, 11651, 1796, 58, 4066, 2097, 1483, 2484, 60, 55609, 198, 1271, 264, 5464, 2097, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 1988, 29282, 25, 1796, 17752, 60, 55609, 198, 2566, 7482, 369, 420, 10137, 3896, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.MessagesPlaceholder.html
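MessagesPlaceholder slots an existing list of messages into a chat prompt. The sketch below combines it with ChatPromptTemplate and HumanMessagePromptTemplate; the variable name "history" and the sample messages are assumptions.

.. code-block:: python

    # Sketch: injecting prior conversation history via MessagesPlaceholder.
    from langchain.prompts.chat import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        MessagesPlaceholder,
    )
    from langchain.schema import AIMessage, HumanMessage

    chat_prompt = ChatPromptTemplate.from_messages([
        MessagesPlaceholder(variable_name="history"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ])

    messages = chat_prompt.format_messages(
        history=[
            HumanMessage(content="Hi there!"),
            AIMessage(content="Hello, how can I help?"),
        ],
        question="What does MessagesPlaceholder do?",
    )
    print(messages)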
455de99bcc95-0
langchain.prompts.chat.AIMessagePromptTemplate¶ class langchain.prompts.chat.AIMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]¶ Bases: BaseStringMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ format(**kwargs: Any) → BaseMessage[source]¶ To a BaseMessage. format_messages(**kwargs: Any) → List[BaseMessage]¶ To messages. classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶ classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_variables: List[str]¶ Input variables for this prompt template. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 5317, 8995, 61848, 13044, 27215, 885, 1829, 808, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 27215, 885, 1829, 808, 55715, 7423, 4163, 11, 10137, 25, 935, 55715, 7423, 11, 5217, 37335, 25, 6587, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 707, 2097, 55715, 7423, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 5217, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 10137, 25, 8859, 8995, 61848, 13044, 9105, 6567, 55715, 7423, 510, 8327, 60, 55609, 198, 2293, 22551, 9872, 25, 5884, 8, 11651, 5464, 2097, 76747, 60, 55609, 198, 1271, 264, 5464, 2097, 627, 2293, 24321, 22551, 9872, 25, 5884, 8, 11651, 1796, 58, 4066, 2097, 60, 55609, 198, 1271, 6743, 627, 27853, 505, 8864, 31063, 25, 610, 11, 3896, 9132, 25, 610, 284, 364, 69, 31981, 518, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 27853, 505, 8864, 2517, 31063, 2517, 25, 9323, 17752, 11, 8092, 1145, 1988, 29282, 25, 1796, 17752, 1145, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 1988, 29282, 25, 1796, 17752, 60, 55609, 198, 2566, 7482, 369, 420, 10137, 3896, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.AIMessagePromptTemplate.html
368edaf94c27-0
langchain.prompts.example_selector.base.BaseExampleSelector¶ class langchain.prompts.example_selector.base.BaseExampleSelector[source]¶ Bases: ABC Interface for selecting examples to include in prompts. Methods __init__() add_example(example) Add new example to store for a key. select_examples(input_variables) Select which examples to use based on the inputs. abstract add_example(example: Dict[str, str]) → Any[source]¶ Add new example to store for a key. abstract select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on the inputs.
[ 5317, 8995, 61848, 13044, 7880, 29990, 9105, 13316, 13617, 6001, 55609, 198, 1058, 8859, 8995, 61848, 13044, 7880, 29990, 9105, 13316, 13617, 6001, 76747, 60, 55609, 198, 33, 2315, 25, 19921, 198, 5160, 369, 27397, 10507, 311, 2997, 304, 52032, 627, 18337, 198, 565, 2381, 33716, 723, 40404, 67303, 340, 2261, 502, 3187, 311, 3637, 369, 264, 1401, 627, 1779, 46379, 5498, 29282, 340, 3461, 902, 10507, 311, 1005, 3196, 389, 279, 11374, 627, 16647, 923, 40404, 67303, 25, 30226, 17752, 11, 610, 2526, 11651, 5884, 76747, 60, 55609, 198, 2261, 502, 3187, 311, 3637, 369, 264, 1401, 627, 16647, 3373, 46379, 5498, 29282, 25, 30226, 17752, 11, 610, 2526, 11651, 1796, 58, 8644, 1483, 2484, 60, 55609, 198, 3461, 902, 10507, 311, 1005, 3196, 389, 279, 11374, 13 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.base.BaseExampleSelector.html
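Because BaseExampleSelector only requires add_example and select_examples, a toy in-memory selector can be sketched as follows; the random-choice strategy and the class name are hypothetical.

.. code-block:: python

    # Hypothetical BaseExampleSelector implementation: random selection from memory.
    import random
    from typing import Any, Dict, List

    from langchain.prompts.example_selector.base import BaseExampleSelector


    class RandomExampleSelector(BaseExampleSelector):
        def __init__(self, examples: List[Dict[str, str]]):
            self.examples = examples

        def add_example(self, example: Dict[str, str]) -> Any:
            self.examples.append(example)

        def select_examples(self, input_variables: Dict[str, str]) -> List[dict]:
            # Ignores the inputs and just samples up to two stored examples.
            return random.sample(self.examples, k=min(2, len(self.examples)))


    selector = RandomExampleSelector([{"input": "happy", "output": "sad"}])
    selector.add_example({"input": "tall", "output": "short"})
    print(selector.select_examples({"input": "anything"}))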
15f65866ffbc-0
langchain.prompts.base.validate_jinja2¶ langchain.prompts.base.validate_jinja2(template: str, input_variables: List[str]) → None[source]¶ Validate that the input variables are valid for the template. Raise an exception if missing or extra variables are found. Parameters template – The template string. input_variables – The input variables.
[ 5317, 8995, 61848, 13044, 9105, 20090, 5487, 42520, 17, 55609, 198, 5317, 8995, 61848, 13044, 9105, 20090, 5487, 42520, 17, 31063, 25, 610, 11, 1988, 29282, 25, 1796, 17752, 2526, 11651, 2290, 76747, 60, 55609, 198, 18409, 430, 279, 1988, 7482, 527, 2764, 369, 279, 3896, 627, 94201, 459, 4788, 422, 7554, 477, 5066, 7482, 527, 1766, 627, 9905, 198, 4308, 1389, 578, 3896, 925, 627, 1379, 29282, 1389, 578, 1988, 7482, 13 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.base.validate_jinja2.html
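A small sketch of calling validate_jinja2 directly; the template strings are made up and the jinja2 package must be installed.

.. code-block:: python

    # Sketch: checking jinja2 templates against declared input variables.
    from langchain.prompts.base import validate_jinja2

    # Variables match the template: passes quietly.
    validate_jinja2("Tell me a {{ adjective }} joke.", ["adjective"])

    # "noun" is never used in the template; per the docstring above,
    # this kind of missing/extra variable mismatch is reported.
    validate_jinja2("Tell me a {{ adjective }} joke.", ["adjective", "noun"])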
ec520651f1a1-0
langchain.prompts.chat.ChatPromptTemplate¶ class langchain.prompts.chat.ChatPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, messages: List[Union[BaseMessagePromptTemplate, BaseMessage]])[source]¶ Bases: BaseChatPromptTemplate, ABC Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param messages: List[Union[BaseMessagePromptTemplate, BaseMessage]] [Required]¶ param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_messages(**kwargs: Any) → List[BaseMessage][source]¶ Format kwargs into a list of messages. format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_messages(messages: Sequence[Union[BaseMessagePromptTemplate, BaseMessage]]) → ChatPromptTemplate[source]¶ classmethod from_role_strings(string_messages: List[Tuple[str, str]]) → ChatPromptTemplate[source]¶
[ 5317, 8995, 61848, 13044, 27215, 59944, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 27215, 59944, 55715, 7423, 4163, 11, 1988, 29282, 25, 1796, 17752, 1145, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 11, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 284, 2290, 11, 6743, 25, 1796, 58, 33758, 58, 4066, 2097, 55715, 7423, 11, 5464, 2097, 30716, 58, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 16047, 55715, 7423, 11, 19921, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 1988, 29282, 25, 1796, 17752, 60, 510, 8327, 60, 55609, 198, 32, 1160, 315, 279, 5144, 315, 279, 7482, 279, 10137, 3896, 25283, 627, 913, 6743, 25, 1796, 58, 33758, 58, 4066, 2097, 55715, 7423, 11, 5464, 2097, 5163, 510, 8327, 60, 55609, 198, 913, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 55609, 198, 4438, 311, 4820, 279, 2612, 315, 8260, 459, 445, 11237, 389, 420, 24001, 10137, 627, 913, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 510, 15669, 60, 55609, 198, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 10137, 627, 2293, 22551, 9872, 25, 5884, 8, 11651, 610, 76747, 60, 55609, 198, 4152, 279, 10137, 449, 279, 11374, 627, 9905, 198, 9872, 1389, 5884, 6105, 311, 387, 5946, 311, 279, 10137, 3896, 627, 16851, 198, 32, 24001, 925, 627, 13617, 512, 41681, 8180, 46129, 16, 429, 8134, 1158, 2293, 24321, 22551, 9872, 25, 5884, 8, 11651, 1796, 58, 4066, 2097, 1483, 2484, 60, 55609, 198, 4152, 16901, 1139, 264, 1160, 315, 6743, 627, 2293, 62521, 22551, 9872, 25, 5884, 8, 11651, 60601, 1150, 55609, 198, 4110, 13149, 27827, 627, 27853, 505, 24321, 56805, 25, 29971, 58, 33758, 58, 4066, 2097, 55715, 7423, 11, 5464, 2097, 30716, 11651, 13149, 55715, 7423, 76747, 60, 55609, 198, 27853, 505, 20378, 34600, 3693, 24321, 25, 1796, 20961, 6189, 17752, 11, 610, 30716, 11651, 13149, 55715, 7423, 76747, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
ec520651f1a1-1
classmethod from_strings(string_messages: List[Tuple[Type[BaseMessagePromptTemplate], str]]) → ChatPromptTemplate[source]¶ classmethod from_template(template: str, **kwargs: Any) → ChatPromptTemplate[source]¶ partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate[source]¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None[source]¶ Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path="path/prompt.yaml") to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_input_variables  »  all fields[source]¶ validator validate_variable_names  »  all fields¶ Validate variable names do not include restricted names. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
[ 27853, 505, 34600, 3693, 24321, 25, 1796, 20961, 6189, 58, 941, 58, 4066, 2097, 55715, 7423, 1145, 610, 30716, 11651, 13149, 55715, 7423, 76747, 60, 55609, 198, 27853, 505, 8864, 31063, 25, 610, 11, 3146, 9872, 25, 5884, 8, 11651, 13149, 55715, 7423, 76747, 60, 55609, 198, 38520, 22551, 9872, 25, 9323, 17752, 11, 54223, 58, 13292, 610, 30716, 11651, 5464, 55715, 7423, 76747, 60, 55609, 198, 5715, 264, 7276, 315, 279, 10137, 3896, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 76747, 60, 55609, 198, 8960, 279, 10137, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 6352, 311, 3665, 10137, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 41681, 5799, 4971, 2703, 45221, 2398, 4420, 15091, 34506, 863, 340, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 16503, 9788, 6022, 29282, 4194, 8345, 4194, 682, 5151, 76747, 60, 55609, 198, 16503, 9788, 14977, 9366, 4194, 8345, 4194, 682, 5151, 55609, 198, 18409, 3977, 5144, 656, 539, 2997, 22486, 5144, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.ChatPromptTemplate.html
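Pulling the pieces above together, the sketch below builds a ChatPromptTemplate with from_messages and renders it with format_messages and format_prompt; the message contents are illustrative.

.. code-block:: python

    # Sketch: composing a chat prompt from message prompt templates.
    from langchain.prompts.chat import (
        AIMessagePromptTemplate,
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
    )

    chat_prompt = ChatPromptTemplate.from_messages([
        AIMessagePromptTemplate.from_template("Earlier I answered: {last_answer}"),
        HumanMessagePromptTemplate.from_template("{question}"),
    ])

    # format_messages returns a list of BaseMessage objects ...
    messages = chat_prompt.format_messages(last_answer="42", question="Why 42?")
    # ... while format_prompt returns a PromptValue wrapper.
    prompt_value = chat_prompt.format_prompt(last_answer="42", question="Why 42?")
    print(messages)
    print(prompt_value)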
c1deffb2ca29-0
langchain.prompts.example_selector.semantic_similarity.sorted_values¶ langchain.prompts.example_selector.semantic_similarity.sorted_values(values: Dict[str, str]) → List[Any][source]¶ Return a list of values in dict sorted by key.
[ 5317, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 70197, 9324, 55609, 198, 5317, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 70197, 9324, 20706, 25, 30226, 17752, 11, 610, 2526, 11651, 1796, 71401, 1483, 2484, 60, 55609, 198, 5715, 264, 1160, 315, 2819, 304, 6587, 10839, 555, 1401, 13 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.sorted_values.html
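For completeness, a one-line illustration of sorted_values (the dictionary contents are arbitrary):

.. code-block:: python

    from langchain.prompts.example_selector.semantic_similarity import sorted_values

    # Values come back ordered by their keys: ["first", "second"]
    print(sorted_values({"b": "second", "a": "first"}))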
e96090323bc2-0
langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector¶ class langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector(*, vectorstore: VectorStore, k: int = 4, example_keys: Optional[List[str]] = None, input_keys: Optional[List[str]] = None)[source]¶ Bases: BaseExampleSelector, BaseModel Example selector that selects examples based on SemanticSimilarity. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param example_keys: Optional[List[str]] = None¶ Optional keys to filter examples to. param input_keys: Optional[List[str]] = None¶ Optional keys to filter input to. If provided, the search is based on the input variables instead of all variables. param k: int = 4¶ Number of examples to select. param vectorstore: langchain.vectorstores.base.VectorStore [Required]¶ VectorStore that contains information about examples. add_example(example: Dict[str, str]) → str[source]¶ Add new example to vectorstore. classmethod from_examples(examples: List[dict], embeddings: Embeddings, vectorstore_cls: Type[VectorStore], k: int = 4, input_keys: Optional[List[str]] = None, **vectorstore_cls_kwargs: Any) → SemanticSimilarityExampleSelector[source]¶ Create k-shot example selector using example list and embeddings. Reshuffles examples dynamically based on query similarity. Parameters examples – List of examples to use in the prompt. embeddings – An initialized embedding API interface, e.g. OpenAIEmbeddings(). vectorstore_cls – A vector store DB interface class, e.g. FAISS. k – Number of examples to select
[ 5317, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 815, 336, 8322, 35502, 488, 13617, 6001, 55609, 198, 1058, 8859, 8995, 61848, 13044, 7880, 29990, 95139, 77336, 815, 336, 8322, 35502, 488, 13617, 6001, 4163, 11, 4724, 4412, 25, 4290, 6221, 11, 597, 25, 528, 284, 220, 19, 11, 3187, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 13617, 6001, 11, 65705, 198, 13617, 9551, 430, 50243, 10507, 3196, 389, 75433, 35502, 488, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 3187, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 7039, 311, 4141, 10507, 311, 627, 913, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 7039, 311, 4141, 1988, 311, 13, 1442, 3984, 11, 279, 2778, 374, 3196, 389, 198, 1820, 1988, 7482, 4619, 315, 682, 7482, 627, 913, 597, 25, 528, 284, 220, 19, 55609, 198, 2903, 315, 10507, 311, 3373, 627, 913, 4724, 4412, 25, 8859, 8995, 48203, 44569, 9105, 14621, 6221, 510, 8327, 60, 55609, 198, 3866, 6221, 1109, 5727, 2038, 922, 10507, 627, 723, 40404, 67303, 25, 30226, 17752, 11, 610, 2526, 11651, 610, 76747, 60, 55609, 198, 2261, 502, 3187, 311, 4724, 4412, 627, 27853, 505, 46379, 5580, 4112, 25, 1796, 58, 8644, 1145, 71647, 25, 38168, 25624, 11, 4724, 4412, 39756, 25, 4078, 58, 3866, 6221, 1145, 597, 25, 528, 284, 220, 19, 11, 1988, 12919, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 3295, 4412, 39756, 37335, 25, 5884, 8, 11651, 75433, 35502, 488, 13617, 6001, 76747, 60, 55609, 198, 4110, 597, 64630, 3187, 9551, 1701, 3187, 1160, 323, 71647, 627, 1079, 71, 78553, 10507, 43111, 3196, 389, 3319, 38723, 627, 9905, 198, 52768, 1389, 1796, 315, 10507, 311, 1005, 304, 279, 10137, 627, 12529, 25624, 1389, 1556, 17719, 40188, 5446, 3834, 11, 384, 1326, 13, 5377, 15836, 26566, 25624, 26914, 3295, 4412, 39756, 1389, 362, 4724, 3637, 6078, 3834, 538, 11, 384, 1326, 13, 15358, 31949, 627, 74, 1389, 5742, 315, 10507, 311, 3373 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
e96090323bc2-1
k – Number of examples to select. input_keys – If provided, the search is based on the input variables instead of all variables. vectorstore_cls_kwargs – optional kwargs containing url for vector store. Returns The ExampleSelector instantiated, backed by a vector store. select_examples(input_variables: Dict[str, str]) → List[dict][source]¶ Select which examples to use based on semantic similarity. model Config[source]¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
[ 74, 1389, 5742, 315, 10507, 311, 3373, 198, 1379, 12919, 1389, 1442, 3984, 11, 279, 2778, 374, 3196, 389, 279, 1988, 7482, 198, 65937, 315, 682, 7482, 627, 3295, 4412, 39756, 37335, 1389, 10309, 16901, 8649, 2576, 369, 4724, 3637, 198, 16851, 198, 791, 13688, 6001, 55686, 11, 22126, 555, 264, 4724, 3637, 627, 1779, 46379, 5498, 29282, 25, 30226, 17752, 11, 610, 2526, 11651, 1796, 58, 8644, 1483, 2484, 60, 55609, 198, 3461, 902, 10507, 311, 1005, 3196, 389, 42833, 38723, 627, 2590, 5649, 76747, 60, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609, 198, 15824, 284, 364, 2000, 21301, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.example_selector.semantic_similarity.SemanticSimilarityExampleSelector.html
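A brief sketch mirroring the from_examples signature above for the plain semantic-similarity selector; as with the MMR sketch earlier, FAISS, OpenAIEmbeddings, and the example data are assumptions that require the faiss and openai packages.

.. code-block:: python

    # Sketch: nearest-neighbour example selection, plus adding an example later.
    from langchain.embeddings import OpenAIEmbeddings
    from langchain.vectorstores import FAISS
    from langchain.prompts.example_selector.semantic_similarity import (
        SemanticSimilarityExampleSelector,
    )

    examples = [
        {"input": "happy", "output": "sad"},
        {"input": "tall", "output": "short"},
    ]

    selector = SemanticSimilarityExampleSelector.from_examples(
        examples, OpenAIEmbeddings(), FAISS, k=1
    )
    selector.add_example({"input": "energetic", "output": "lethargic"})
    print(selector.select_examples({"input": "joyful"}))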
59d5b9d9fc95-0
langchain.prompts.prompt.PromptTemplate¶ class langchain.prompts.prompt.PromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, template: str, template_format: str = 'f-string', validate_template: bool = True)[source]¶ Bases: StringPromptTemplate Schema to represent a prompt for an LLM. Example from langchain import PromptTemplate prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}") Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param input_variables: List[str] [Required]¶ A list of the names of the variables the prompt template expects. param output_parser: Optional[BaseOutputParser] = None¶ How to parse the output of calling an LLM on this formatted prompt. param partial_variables: Mapping[str, Union[str, Callable[[], str]]] [Optional]¶ param template: str [Required]¶ The prompt template. param template_format: str = 'f-string'¶ The format of the prompt template. Options are: ‘f-string’, ‘jinja2’. param validate_template: bool = True¶ Whether or not to try validating the template. dict(**kwargs: Any) → Dict¶ Return dictionary representation of prompt. format(**kwargs: Any) → str[source]¶ Format the prompt with the inputs. Parameters kwargs – Any arguments to be passed to the prompt template. Returns A formatted string. Example: prompt.format(variable1="foo") format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages.
[ 5317, 8995, 61848, 13044, 66499, 1087, 15091, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 66499, 1087, 15091, 7423, 4163, 11, 1988, 29282, 25, 1796, 17752, 1145, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 11, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 284, 2290, 11, 3896, 25, 610, 11, 3896, 9132, 25, 610, 284, 364, 69, 31981, 518, 9788, 8864, 25, 1845, 284, 3082, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 935, 55715, 7423, 198, 8802, 311, 4097, 264, 10137, 369, 459, 445, 11237, 627, 13617, 198, 1527, 8859, 8995, 1179, 60601, 7423, 198, 41681, 284, 60601, 7423, 5498, 29282, 29065, 8134, 8073, 3896, 429, 46864, 314, 8134, 14790, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 1988, 29282, 25, 1796, 17752, 60, 510, 8327, 60, 55609, 198, 32, 1160, 315, 279, 5144, 315, 279, 7482, 279, 10137, 3896, 25283, 627, 913, 2612, 19024, 25, 12536, 58, 4066, 5207, 6707, 60, 284, 2290, 55609, 198, 4438, 311, 4820, 279, 2612, 315, 8260, 459, 445, 11237, 389, 420, 24001, 10137, 627, 913, 7276, 29282, 25, 39546, 17752, 11, 9323, 17752, 11, 54223, 58, 13292, 610, 5163, 60, 510, 15669, 60, 55609, 198, 913, 3896, 25, 610, 510, 8327, 60, 55609, 198, 791, 10137, 3896, 627, 913, 3896, 9132, 25, 610, 284, 364, 69, 31981, 6, 55609, 198, 791, 3645, 315, 279, 10137, 3896, 13, 14908, 527, 25, 3451, 69, 31981, 20182, 3451, 73, 42520, 17, 529, 627, 913, 9788, 8864, 25, 1845, 284, 3082, 55609, 198, 25729, 477, 539, 311, 1456, 69772, 279, 3896, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 10137, 627, 2293, 22551, 9872, 25, 5884, 8, 11651, 610, 76747, 60, 55609, 198, 4152, 279, 10137, 449, 279, 11374, 627, 9905, 198, 9872, 1389, 5884, 6105, 311, 387, 5946, 311, 279, 10137, 3896, 627, 16851, 198, 32, 24001, 925, 627, 13617, 512, 41681, 8180, 46129, 16, 429, 8134, 1158, 2293, 62521, 22551, 9872, 25, 5884, 8, 11651, 60601, 1150, 55609, 198, 4110, 13149, 27827, 13 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
59d5b9d9fc95-1
format_prompt(**kwargs: Any) → PromptValue¶ Create Chat Messages. classmethod from_examples(examples: List[str], suffix: str, input_variables: List[str], example_separator: str = '\n\n', prefix: str = '', **kwargs: Any) → PromptTemplate[source]¶ Take examples in list format with prefix and suffix to create a prompt. Intended to be used as a way to dynamically create a prompt from examples. Parameters examples – List of examples to use in the prompt. suffix – String to go after the list of examples. Should generally set up the user’s input. input_variables – A list of variable names the final prompt template will expect. example_separator – The separator to use in between examples. Defaults to two new line characters. prefix – String that should go before any examples. Generally includes examples. Defaults to an empty string. Returns The final prompt generated. classmethod from_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → PromptTemplate[source]¶ Load a prompt from a file. Parameters template_file – The path to the file containing the prompt template. input_variables – A list of variable names the final prompt template will expect. Returns The prompt loaded from the file. classmethod from_template(template: str, **kwargs: Any) → PromptTemplate[source]¶ Load a prompt template from a template. partial(**kwargs: Union[str, Callable[[], str]]) → BasePromptTemplate¶ Return a partial of the prompt template. save(file_path: Union[Path, str]) → None¶ Save the prompt. Parameters file_path – Path to directory to save prompt to. Example: .. code-block:: python prompt.save(file_path="path/prompt.yaml") validator template_is_valid  »  all fields[source]¶
[ 2293, 62521, 22551, 9872, 25, 5884, 8, 11651, 60601, 1150, 55609, 198, 4110, 13149, 27827, 627, 27853, 505, 46379, 5580, 4112, 25, 1796, 17752, 1145, 21166, 25, 610, 11, 1988, 29282, 25, 1796, 17752, 1145, 3187, 59304, 25, 610, 284, 5307, 77, 1734, 518, 9436, 25, 610, 284, 9158, 3146, 9872, 25, 5884, 8, 11651, 60601, 7423, 76747, 60, 55609, 198, 18293, 10507, 304, 1160, 3645, 449, 9436, 323, 21166, 311, 1893, 264, 10137, 627, 1090, 2954, 311, 387, 1511, 439, 264, 1648, 311, 43111, 1893, 264, 10137, 505, 10507, 627, 9905, 198, 52768, 1389, 1796, 315, 10507, 311, 1005, 304, 279, 10137, 627, 27884, 1389, 935, 311, 733, 1306, 279, 1160, 315, 10507, 13, 12540, 8965, 198, 751, 709, 279, 1217, 753, 1988, 627, 1379, 29282, 1389, 362, 1160, 315, 3977, 5144, 279, 1620, 10137, 3896, 198, 14724, 1755, 627, 8858, 59304, 1389, 578, 25829, 311, 1005, 304, 1990, 10507, 13, 37090, 198, 998, 1403, 502, 1584, 5885, 627, 12113, 1389, 935, 430, 1288, 733, 1603, 904, 10507, 13, 44493, 5764, 198, 52768, 13, 8058, 311, 459, 4384, 925, 627, 16851, 198, 791, 1620, 10137, 8066, 627, 27853, 505, 2517, 31063, 2517, 25, 9323, 17752, 11, 8092, 1145, 1988, 29282, 25, 1796, 17752, 1145, 3146, 9872, 25, 5884, 8, 11651, 60601, 7423, 76747, 60, 55609, 198, 6003, 264, 10137, 505, 264, 1052, 627, 9905, 198, 4308, 2517, 1389, 578, 1853, 311, 279, 1052, 8649, 279, 10137, 3896, 627, 1379, 29282, 1389, 362, 1160, 315, 3977, 5144, 279, 1620, 10137, 3896, 198, 14724, 1755, 627, 16851, 198, 791, 10137, 6799, 505, 279, 1052, 627, 27853, 505, 8864, 31063, 25, 610, 11, 3146, 9872, 25, 5884, 8, 11651, 60601, 7423, 76747, 60, 55609, 198, 6003, 264, 10137, 3896, 505, 264, 3896, 627, 38520, 22551, 9872, 25, 9323, 17752, 11, 54223, 58, 13292, 610, 30716, 11651, 5464, 55715, 7423, 55609, 198, 5715, 264, 7276, 315, 279, 10137, 3896, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 10137, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 6352, 311, 3665, 10137, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 41681, 5799, 4971, 2703, 45221, 2398, 4420, 15091, 34506, 863, 340, 16503, 3896, 7037, 8501, 4194, 8345, 4194, 682, 5151, 76747, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
59d5b9d9fc95-2
validator template_is_valid  »  all fields[source]¶ Check that template and input variables are consistent. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ validator validate_variable_names  »  all fields¶ Validate variable names do not include restricted names. property lc_attributes: Dict[str, Any]¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
[ 16503, 3896, 7037, 8501, 4194, 8345, 4194, 682, 5151, 76747, 60, 55609, 198, 4061, 430, 3896, 323, 1988, 7482, 527, 13263, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 16503, 9788, 14977, 9366, 4194, 8345, 4194, 682, 5151, 55609, 198, 18409, 3977, 5144, 656, 539, 2997, 22486, 5144, 627, 3784, 37313, 18741, 25, 30226, 17752, 11, 5884, 60, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.prompt.PromptTemplate.html
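The PromptTemplate records above already contain the constructor example; the sketch below extends it to from_template and from_examples. The antonym wording is illustrative.

.. code-block:: python

    # Sketch: the main PromptTemplate constructors documented above.
    from langchain import PromptTemplate

    # Constructor form, as in the docstring's own example.
    prompt = PromptTemplate(input_variables=["foo"], template="Say {foo}")
    print(prompt.format(foo="hello"))

    # from_template infers the input variables from the template string.
    joke = PromptTemplate.from_template("Tell me a {adjective} joke about {content}.")
    print(joke.format(adjective="funny", content="chickens"))

    # from_examples assembles prefix + examples + suffix into one template.
    antonyms = PromptTemplate.from_examples(
        examples=["happy -> sad", "tall -> short"],
        suffix="Now give the antonym of: {word}",
        input_variables=["word"],
        prefix="Give the antonym of every input.",
    )
    print(antonyms.format(word="sunny"))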
7122e9176f8f-0
langchain.prompts.chat.HumanMessagePromptTemplate¶ class langchain.prompts.chat.HumanMessagePromptTemplate(*, prompt: StringPromptTemplate, additional_kwargs: dict = None)[source]¶ Bases: BaseStringMessagePromptTemplate Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param additional_kwargs: dict [Optional]¶ param prompt: langchain.prompts.base.StringPromptTemplate [Required]¶ format(**kwargs: Any) → BaseMessage[source]¶ To a BaseMessage. format_messages(**kwargs: Any) → List[BaseMessage]¶ To messages. classmethod from_template(template: str, template_format: str = 'f-string', **kwargs: Any) → MessagePromptTemplateT¶ classmethod from_template_file(template_file: Union[str, Path], input_variables: List[str], **kwargs: Any) → MessagePromptTemplateT¶ to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_variables: List[str]¶ Input variables for this prompt template. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 5317, 8995, 61848, 13044, 27215, 3924, 7282, 2097, 55715, 7423, 55609, 198, 1058, 8859, 8995, 61848, 13044, 27215, 3924, 7282, 2097, 55715, 7423, 4163, 11, 10137, 25, 935, 55715, 7423, 11, 5217, 37335, 25, 6587, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 5464, 707, 2097, 55715, 7423, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 5217, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 10137, 25, 8859, 8995, 61848, 13044, 9105, 6567, 55715, 7423, 510, 8327, 60, 55609, 198, 2293, 22551, 9872, 25, 5884, 8, 11651, 5464, 2097, 76747, 60, 55609, 198, 1271, 264, 5464, 2097, 627, 2293, 24321, 22551, 9872, 25, 5884, 8, 11651, 1796, 58, 4066, 2097, 60, 55609, 198, 1271, 6743, 627, 27853, 505, 8864, 31063, 25, 610, 11, 3896, 9132, 25, 610, 284, 364, 69, 31981, 518, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 27853, 505, 8864, 2517, 31063, 2517, 25, 9323, 17752, 11, 8092, 1145, 1988, 29282, 25, 1796, 17752, 1145, 3146, 9872, 25, 5884, 8, 11651, 4961, 55715, 7423, 51, 55609, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 1988, 29282, 25, 1796, 17752, 60, 55609, 198, 2566, 7482, 369, 420, 10137, 3896, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/prompts/langchain.prompts.chat.HumanMessagePromptTemplate.html
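The entry above lists HumanMessagePromptTemplate's constructors and formatting methods but stops short of an end-to-end call, so here is a minimal sketch. It assumes the langchain version documented on this page; the template text and the topic variable are illustrative only, not part of the API.

from langchain.prompts.chat import HumanMessagePromptTemplate

# Build the template from a plain template string (the documented from_template
# classmethod, with its default template_format='f-string').
human_prompt = HumanMessagePromptTemplate.from_template(
    "Summarize the following topic in one sentence: {topic}"
)

# format() fills in the template variables and returns a single chat message
# (a BaseMessage subclass); format_messages() returns the same content as a list.
message = human_prompt.format(topic="vector databases")
print(message.content)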
170f32d63e43-0
langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser¶ class langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str)[source]¶ Bases: RunEvaluatorOutputParser Parse criteria results into an evaluation result. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param eval_chain_output_key: str = 'text'¶ param evaluation_name: str [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(parsed_output: Union[str, dict]) → EvaluationResult[source]¶ Parse the last line of the text and return an evaluation result. parse_chain_output(output: Dict[str, Any]) → EvaluationResult¶ Parse the output of a run. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 93689, 5207, 6707, 55609, 198, 1058, 8859, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 93689, 5207, 6707, 4163, 11, 5720, 31683, 7800, 3173, 25, 610, 284, 364, 1342, 518, 16865, 1292, 25, 610, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 6588, 90142, 5207, 6707, 198, 14802, 264, 13186, 3135, 1139, 459, 16865, 1121, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 5720, 31683, 7800, 3173, 25, 610, 284, 364, 1342, 6, 55609, 198, 913, 16865, 1292, 25, 610, 510, 8327, 60, 55609, 198, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 2612, 6871, 627, 456, 9132, 83527, 368, 11651, 610, 55609, 198, 56391, 389, 1268, 279, 445, 11237, 2612, 1288, 387, 24001, 627, 6534, 49562, 7800, 25, 9323, 17752, 11, 6587, 2526, 11651, 40388, 2122, 76747, 60, 55609, 198, 14802, 279, 1566, 1584, 315, 279, 1495, 323, 471, 459, 16865, 1121, 627, 6534, 31683, 7800, 11304, 25, 30226, 17752, 11, 5884, 2526, 11651, 40388, 2122, 55609, 198, 14802, 279, 2612, 315, 264, 1629, 627, 6534, 5400, 4556, 25, 1796, 58, 38238, 2526, 11651, 350, 55609, 198, 14802, 445, 11237, 5832, 627, 6534, 6753, 62521, 91868, 25, 610, 11, 10137, 25, 60601, 1150, 8, 11651, 5884, 55609, 198, 15669, 1749, 311, 4820, 279, 2612, 315, 459, 445, 11237, 1650, 449, 264, 10137, 627, 791, 10137, 374, 14090, 3984, 304, 279, 1567, 279, 9442, 6707, 6944, 198, 998, 23515, 477, 5155, 279, 2612, 304, 1063, 1648, 11, 323, 3966, 2038, 505, 198, 1820, 10137, 311, 656, 779, 627, 9905, 198, 44412, 1389, 2612, 315, 4221, 1646, 198, 41681, 1389, 10137, 907, 198, 16851, 198, 52243, 2612, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html
170f32d63e43-1
constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.CriteriaOutputParser.html
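For orientation, a small sketch of the parser used on its own follows; in practice a RunEvaluatorChain (see get_criteria_evaluator below) wires it up for you. The grader text, the evaluation name, and the exact fields of the returned EvaluationResult are assumptions here; the reference above only guarantees that the last line of the text is parsed.

from langchain.evaluation.run_evaluators.implementations import CriteriaOutputParser

# evaluation_name is required; eval_chain_output_key keeps its default of 'text'.
parser = CriteriaOutputParser(evaluation_name="helpfulness")

# Per the docs, parse() reads the last line of the grader's text. A criteria
# grader typically ends with a bare "Y" or "N" verdict line (assumed here).
result = parser.parse("The submission answers the question directly.\nY")
print(result)  # an EvaluationResult for the "helpfulness" evaluation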
677ed501b2b4-0
langchain.evaluation.run_evaluators.base.RunEvaluatorChain¶ class langchain.evaluation.run_evaluators.base.RunEvaluatorChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, input_mapper: RunEvaluatorInputMapper, eval_chain: Chain, output_parser: RunEvaluatorOutputParser)[source]¶ Bases: Chain, RunEvaluator Evaluate Run and optional examples. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param eval_chain: Chain [Required]¶ The evaluation chain. param input_mapper: RunEvaluatorInputMapper [Required]¶ Maps the Run and Optional example to a dictionary for the eval chain. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param output_parser: RunEvaluatorOutputParser [Required]¶
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 9105, 17123, 90142, 19368, 55609, 198, 1058, 8859, 8995, 1770, 24756, 7789, 22001, 84, 3046, 9105, 17123, 90142, 19368, 4163, 11, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 11, 14008, 25, 1845, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 1988, 77764, 25, 6588, 90142, 2566, 11232, 11, 5720, 31683, 25, 29625, 11, 2612, 19024, 25, 6588, 90142, 5207, 6707, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 29625, 11, 6588, 90142, 198, 83445, 6588, 323, 10309, 10507, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 55609, 198, 52444, 11, 1005, 27777, 4619, 627, 913, 27777, 25, 23499, 82, 284, 2290, 55609, 198, 15669, 1160, 315, 4927, 25050, 320, 269, 4927, 6783, 570, 37090, 311, 2290, 627, 7646, 25050, 527, 2663, 6957, 279, 48608, 315, 264, 1650, 311, 264, 8957, 345, 40389, 449, 389, 31683, 5011, 11, 13696, 449, 389, 31683, 6345, 477, 389, 31683, 4188, 627, 4959, 2587, 8957, 649, 46624, 1650, 5217, 4927, 5528, 11, 1518, 23499, 27437, 198, 2000, 2539, 3649, 627, 913, 5720, 31683, 25, 29625, 510, 8327, 60, 55609, 198, 791, 16865, 8957, 627, 913, 1988, 77764, 25, 6588, 90142, 2566, 11232, 510, 8327, 60, 55609, 198, 37662, 279, 6588, 323, 12536, 3187, 311, 264, 11240, 369, 279, 5720, 8957, 627, 913, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 55609, 198, 15669, 5044, 1665, 13, 37090, 311, 2290, 627, 10869, 374, 264, 538, 430, 5334, 2663, 520, 279, 1212, 198, 438, 520, 279, 842, 315, 1475, 8957, 13, 2468, 279, 1212, 11, 5044, 21577, 7482, 323, 16609, 198, 49818, 3235, 304, 279, 8957, 13, 2468, 279, 842, 11, 433, 27024, 904, 6052, 7482, 627, 3947, 527, 1690, 2204, 4595, 315, 5044, 482, 4587, 1518, 5044, 27437, 198, 2000, 279, 2539, 16808, 627, 913, 2612, 19024, 25, 6588, 90142, 5207, 6707, 510, 8327, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html
677ed501b2b4-1
for the full catalog. param output_parser: RunEvaluatorOutputParser [Required]¶ Parse the output of the eval chain into feedback. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False.
[ 2000, 279, 2539, 16808, 627, 913, 2612, 19024, 25, 6588, 90142, 5207, 6707, 510, 8327, 60, 55609, 198, 14802, 279, 2612, 315, 279, 5720, 8957, 1139, 11302, 627, 913, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 1160, 315, 9681, 5938, 449, 279, 8957, 13, 37090, 311, 2290, 198, 9673, 9681, 690, 387, 5938, 449, 1855, 1650, 311, 420, 8957, 345, 438, 5946, 439, 6105, 311, 279, 25050, 4613, 304, 27777, 627, 2675, 649, 1005, 1521, 311, 8866, 10765, 264, 3230, 2937, 315, 264, 8957, 449, 1202, 1005, 1162, 627, 913, 14008, 25, 1845, 510, 15669, 60, 55609, 198, 25729, 477, 539, 1629, 304, 14008, 3941, 13, 763, 14008, 3941, 11, 1063, 29539, 18929, 198, 14724, 387, 17124, 311, 279, 2393, 13, 37090, 311, 8859, 8995, 45749, 907, 627, 565, 6797, 3889, 25986, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html
677ed501b2b4-2
include_run_info – Whether to include run info in the response. Defaults to False. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aevaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶ Evaluate an example. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Call the chain on all inputs in the list. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. evaluate_run(run: Run, example: Optional[Example] = None) → EvaluationResult[source]¶
[ 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 1645, 543, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 48391, 14334, 39872, 25, 6588, 11, 3187, 25, 12536, 58, 13617, 60, 284, 2290, 8, 11651, 40388, 2122, 76747, 60, 55609, 198, 83445, 459, 3187, 627, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 7368, 279, 8957, 389, 682, 11374, 304, 279, 1160, 627, 7847, 802, 359, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 8957, 627, 48391, 14334, 39872, 25, 6588, 11, 3187, 25, 12536, 58, 13617, 60, 284, 2290, 8, 11651, 40388, 2122, 76747, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html
677ed501b2b4-3
Evaluate an example. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prep inputs. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prep outputs. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. save(file_path: Union[Path, str]) → None¶ Save the chain. Parameters file_path – Path to file to save the chain to. Example: chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_keys: List[str]¶ Input keys this chain expects. property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”}
[ 83445, 459, 3187, 627, 72874, 29657, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 2526, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 11374, 627, 72874, 36289, 35099, 25, 30226, 17752, 11, 610, 1145, 16674, 25, 30226, 17752, 11, 610, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 16674, 627, 16503, 4933, 2310, 70693, 4194, 8345, 4194, 682, 5151, 55609, 198, 94201, 409, 70693, 10163, 422, 4927, 12418, 374, 1511, 627, 6236, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 8957, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 1052, 311, 3665, 279, 8957, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 8995, 5799, 4971, 2703, 45221, 2398, 14, 8995, 34506, 863, 340, 16503, 743, 69021, 4194, 8345, 4194, 14008, 55609, 198, 2746, 14008, 374, 2290, 11, 743, 433, 627, 2028, 6276, 3932, 311, 1522, 304, 2290, 439, 14008, 311, 2680, 279, 3728, 6376, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 1988, 12919, 25, 1796, 17752, 60, 55609, 198, 2566, 7039, 420, 8957, 25283, 627, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 92 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html
677ed501b2b4-4
eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property output_keys: List[str]¶ Output keys this chain expects. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
[ 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 3784, 2612, 12919, 25, 1796, 17752, 60, 55609, 198, 5207, 7039, 420, 8957, 25283, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorChain.html
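RunEvaluatorChain is rarely constructed by hand; the helpers documented below (get_qa_evaluator, get_criteria_evaluator) return a fully wired instance. The sketch below only shows the calling convention for the documented evaluate_run / aevaluate_run methods; the run and example arguments are placeholders for a traced Run and an optional dataset Example obtained from your tracing client.

def grade_run(evaluator_chain, run, example=None):
    # evaluator_chain: a RunEvaluatorChain, e.g. returned by get_qa_evaluator below.
    # Returns an EvaluationResult (feedback) for the traced run.
    return evaluator_chain.evaluate_run(run, example)

async def agrade_run(evaluator_chain, run, example=None):
    # Async counterpart using the documented aevaluate_run coroutine.
    return await evaluator_chain.aevaluate_run(run, example)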
77fe0133d904-0
langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator¶ langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator(llm: BaseLanguageModel, criteria: Union[Mapping[str, str], Sequence[str], str], *, input_key: str = 'input', prediction_key: str = 'output', prompt: BasePromptTemplate = PromptTemplate(input_variables=['input', 'output', 'criteria'], output_parser=None, partial_variables={}, template='You are assessing a submitted answer on a given task or input based on a set of criteria. Here is the data:\n[BEGIN DATA]\n***\n[Task]: {input}\n***\n[Submission]: {output}\n***\n[Criteria]: {criteria}\n***\n[END DATA]\nDoes the submission meet all the Criteria? First, write out in a step by step manner your reasoning about each criterion to be sure that your conclusion is correct. Avoid simply stating the correct answers at the outset. Then print only the single character "Y" or "N" (without quotes or punctuation) on its own line corresponding to the correct answer of whether the submission meets all criteria. At the end, repeat just the letter again by itself on a new line.', template_format='f-string', validate_template=True), evaluation_name: Optional[str] = None, **kwargs: Any) → RunEvaluatorChain[source]¶ Get an eval chain for grading a model’s response against a map of criteria.
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 82090, 22001, 46262, 55609, 198, 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 82090, 22001, 46262, 36621, 76, 25, 5464, 14126, 1747, 11, 13186, 25, 9323, 58, 6950, 17752, 11, 610, 1145, 29971, 17752, 1145, 610, 1145, 12039, 1988, 3173, 25, 610, 284, 364, 1379, 518, 20212, 3173, 25, 610, 284, 364, 3081, 518, 10137, 25, 5464, 55715, 7423, 284, 60601, 7423, 5498, 29282, 14314, 1379, 518, 364, 3081, 518, 364, 21159, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 3896, 1151, 2675, 527, 47614, 264, 14976, 4320, 389, 264, 2728, 3465, 477, 1988, 3196, 389, 264, 743, 315, 13186, 13, 5810, 374, 279, 828, 7338, 77, 33722, 16841, 14444, 18444, 77, 12488, 59, 77, 58, 6396, 5787, 314, 1379, 11281, 77, 12488, 59, 77, 58, 87721, 5787, 314, 3081, 11281, 77, 12488, 59, 77, 58, 13117, 5787, 314, 21159, 11281, 77, 12488, 59, 77, 58, 4794, 14444, 18444, 77, 22186, 279, 21142, 3449, 682, 279, 14577, 30, 5629, 11, 3350, 704, 304, 264, 3094, 555, 3094, 11827, 701, 33811, 922, 1855, 37057, 311, 387, 2771, 430, 701, 17102, 374, 4495, 13, 35106, 5042, 28898, 279, 4495, 11503, 520, 279, 68864, 13, 5112, 1194, 1193, 279, 3254, 3752, 330, 56, 1, 477, 330, 45, 1, 320, 30096, 17637, 477, 62603, 8, 389, 1202, 1866, 1584, 12435, 311, 279, 4495, 4320, 315, 3508, 279, 21142, 20628, 682, 13186, 13, 2468, 279, 842, 11, 13454, 1120, 279, 6661, 1578, 555, 5196, 389, 264, 502, 1584, 16045, 3896, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 16865, 1292, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6588, 90142, 19368, 76747, 60, 55609, 198, 1991, 459, 5720, 8957, 369, 66288, 264, 1646, 753, 2077, 2403, 264, 2472, 315, 13186, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_criteria_evaluator.html
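A hedged construction sketch follows. The criterion mapping, the key names, and the model choice are illustrative; input_key and prediction_key name where the task and the model's answer are read from in the traced run, matching the defaults shown in the signature above.

from langchain.llms import OpenAI
from langchain.evaluation.run_evaluators.implementations import get_criteria_evaluator

criteria_evaluator = get_criteria_evaluator(
    OpenAI(temperature=0),
    criteria={"conciseness": "Is the submission concise and to the point?"},
    input_key="input",        # task / question key in the run's inputs
    prediction_key="output",  # model answer key in the run's outputs
    evaluation_name="conciseness",
)
# criteria_evaluator is a RunEvaluatorChain; use evaluate_run as sketched above.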
8c3fae3455cd-0
langchain.evaluation.run_evaluators.implementations.get_qa_evaluator¶ langchain.evaluation.run_evaluators.implementations.get_qa_evaluator(llm: BaseLanguageModel, *, prompt: Union[PromptTemplate, str] = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.\n\nExample Format:\nQUESTION: question here\nSTUDENT ANSWER: student's answer here\nTRUE ANSWER: true answer here\nGRADE: CORRECT or INCORRECT here\n\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n\nQUESTION: {query}\nSTUDENT ANSWER: {result}\nTRUE ANSWER: {answer}\nGRADE:", template_format='f-string', validate_template=True), input_key: str = 'input', prediction_key: str = 'output', answer_key: str = 'output', evaluation_name: Optional[str] = None, **kwargs: Any) → RunEvaluatorChain[source]¶ Get an eval chain that compares response against ground truth.
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 9153, 64, 22001, 46262, 55609, 198, 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 9153, 64, 22001, 46262, 36621, 76, 25, 5464, 14126, 1747, 11, 12039, 10137, 25, 9323, 43447, 15091, 7423, 11, 610, 60, 284, 60601, 7423, 5498, 29282, 14314, 1663, 518, 364, 1407, 518, 364, 9399, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 3896, 429, 2675, 527, 264, 11326, 66288, 264, 28223, 7255, 77, 2675, 527, 2728, 264, 3488, 11, 279, 5575, 596, 4320, 11, 323, 279, 837, 4320, 11, 323, 527, 4691, 311, 5573, 279, 5575, 4320, 439, 3060, 27563, 45940, 477, 18610, 878, 45940, 7255, 77, 1734, 13617, 15392, 7338, 77, 53528, 25, 3488, 1618, 1734, 790, 82050, 97804, 643, 25, 5575, 596, 4320, 1618, 1734, 21260, 97804, 643, 25, 837, 4320, 1618, 1734, 80391, 25, 27563, 45940, 477, 18610, 878, 45940, 1618, 1734, 1734, 36457, 279, 5575, 11503, 3196, 27785, 389, 872, 61001, 13708, 13, 40071, 12062, 304, 62603, 323, 1343, 97578, 1990, 279, 5575, 4320, 323, 837, 4320, 13, 1102, 374, 10619, 422, 279, 5575, 4320, 5727, 810, 2038, 1109, 279, 837, 4320, 11, 439, 1317, 439, 433, 1587, 539, 6782, 904, 52133, 12518, 13, 19110, 0, 1144, 77, 1734, 53528, 25, 314, 1663, 11281, 77, 790, 82050, 97804, 643, 25, 314, 1407, 11281, 77, 21260, 97804, 643, 25, 314, 9399, 11281, 77, 80391, 12421, 3896, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 1988, 3173, 25, 610, 284, 364, 1379, 518, 20212, 3173, 25, 610, 284, 364, 3081, 518, 4320, 3173, 25, 610, 284, 364, 3081, 518, 16865, 1292, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6588, 90142, 19368, 76747, 60, 55609, 198, 1991, 459, 5720, 8957, 430, 43565, 2077, 2403, 5015, 8206, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_qa_evaluator.html
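A similar sketch for the ground-truth evaluator. The key names are simply the documented defaults written out to show where each value comes from (run inputs, run outputs, dataset example outputs); the evaluation_name is illustrative.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.run_evaluators.implementations import get_qa_evaluator

qa_evaluator = get_qa_evaluator(
    ChatOpenAI(temperature=0),
    input_key="input",        # question, read from the run's inputs
    prediction_key="output",  # student answer, read from the run's outputs
    answer_key="output",      # true answer, read from the dataset example's outputs
    evaluation_name="Correctness",
)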
8c07c281db38-0
langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper¶ class langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper(*, prediction_map: Dict[str, str], input_map: Dict[str, str], answer_map: Optional[Dict[str, str]] = None)[source]¶ Bases: RunEvaluatorInputMapper, BaseModel Maps the Run and Optional[Example] to a dictionary. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param answer_map: Optional[Dict[str, str]] = None¶ Map from example outputs to the evaluation inputs. param input_map: Dict[str, str] [Required]¶ Map from run inputs to the evaluation inputs. param prediction_map: Dict[str, str] [Required]¶ Map from run outputs to the evaluation inputs. map(run: Run, example: Optional[Example] = None) → Dict[str, Any][source]¶ Maps the Run and Optional[Example] to a dictionary
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 6567, 6869, 90142, 2566, 11232, 55609, 198, 1058, 8859, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 6567, 6869, 90142, 2566, 11232, 4163, 11, 20212, 5489, 25, 30226, 17752, 11, 610, 1145, 1988, 5489, 25, 30226, 17752, 11, 610, 1145, 4320, 5489, 25, 12536, 58, 13755, 17752, 11, 610, 5163, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 6588, 90142, 2566, 11232, 11, 65705, 198, 37662, 279, 6588, 323, 12536, 58, 13617, 60, 311, 264, 11240, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 4320, 5489, 25, 12536, 58, 13755, 17752, 11, 610, 5163, 284, 2290, 55609, 198, 2276, 505, 3187, 16674, 311, 279, 16865, 11374, 627, 913, 1988, 5489, 25, 30226, 17752, 11, 610, 60, 510, 8327, 60, 55609, 198, 2276, 505, 1629, 11374, 311, 279, 16865, 11374, 627, 913, 20212, 5489, 25, 30226, 17752, 11, 610, 60, 510, 8327, 60, 55609, 198, 2276, 505, 1629, 16674, 311, 279, 16865, 11374, 627, 2235, 39872, 25, 6588, 11, 3187, 25, 12536, 58, 13617, 60, 284, 2290, 8, 11651, 30226, 17752, 11, 5884, 1483, 2484, 60, 55609, 198, 37662, 279, 6588, 323, 12536, 58, 13617, 60, 311, 264, 11240 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.StringRunEvaluatorInputMapper.html
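A sketch of the mapper in isolation follows; normally the factory helpers above build one for you. The concrete key names are made up, and the direction of each map (run/example key on the left, eval-chain input key on the right) is an assumption consistent with the field descriptions above ("map from run inputs to the evaluation inputs").

from langchain.evaluation.run_evaluators.implementations import StringRunEvaluatorInputMapper

mapper = StringRunEvaluatorInputMapper(
    input_map={"question": "query"},    # run inputs -> eval chain inputs
    prediction_map={"text": "result"},  # run outputs -> eval chain inputs
    answer_map={"answer": "answer"},    # example outputs -> eval chain inputs
)
# eval_inputs = mapper.map(run, example)  # plain dict fed to the eval chain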
73f7cfc84258-0
langchain.evaluation.criteria.eval_chain.CriteriaEvalChain¶ class langchain.evaluation.criteria.eval_chain.CriteriaEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None, requires_reference: bool = False)[source]¶ Bases: LLMChain LLM Chain for evaluating runs against criteria. Parameters llm (BaseLanguageModel) – The language model to use for evaluation. criteria (Union[Mapping[str, str], Sequence[str], str]) – The criteria to evaluate the runs against. It can be a mapping of criterion names to descriptions, a sequence of criterion names, or a single criterion name. prompt (Optional[BasePromptTemplate], default=None) – The prompt template to use for generating prompts. If not provided, a default prompt template will be used based on the value of requires_reference. requires_reference (bool, default=False) – Whether the evaluation requires a reference text. If True, the PROMPT_WITH_REFERENCES template will be used, which includes the reference labels in the prompt. Otherwise, the PROMPT template will be used, which is a reference-free prompt. **kwargs (Any) – Additional keyword arguments to pass to the LLMChain constructor. Returns An instance of the CriteriaEvalChain class. Return type CriteriaEvalChain Examples >>> from langchain.chat_models import ChatAnthropic >>> from langchain.evaluation.criteria import CriteriaEvalChain
[ 5317, 8995, 1770, 24756, 90307, 32810, 31683, 93689, 55569, 19368, 55609, 198, 1058, 8859, 8995, 1770, 24756, 90307, 32810, 31683, 93689, 55569, 19368, 4163, 11, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 11, 14008, 25, 1845, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 10137, 25, 5464, 55715, 7423, 11, 9507, 76, 25, 5464, 14126, 1747, 11, 2612, 3173, 25, 610, 284, 364, 1342, 518, 2612, 19024, 25, 5464, 5207, 6707, 284, 2290, 11, 471, 21333, 18917, 25, 1845, 284, 3082, 11, 9507, 76, 37335, 25, 6587, 284, 2290, 11, 7612, 26508, 25, 1845, 284, 3641, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 445, 11237, 19368, 198, 4178, 44, 29625, 369, 38663, 8640, 2403, 13186, 627, 9905, 198, 657, 76, 320, 4066, 14126, 1747, 8, 1389, 578, 4221, 1646, 311, 1005, 369, 16865, 627, 21159, 320, 33758, 58, 6950, 17752, 11, 610, 1145, 29971, 17752, 1145, 610, 2526, 1389, 578, 13186, 311, 15806, 279, 8640, 2403, 13, 1102, 649, 387, 264, 13021, 315, 198, 66, 12614, 5144, 311, 28887, 11, 264, 8668, 315, 37057, 5144, 11, 477, 264, 198, 15698, 37057, 836, 627, 41681, 320, 15669, 58, 4066, 55715, 7423, 1145, 1670, 5980, 8, 1389, 578, 10137, 3896, 311, 1005, 369, 24038, 52032, 13, 1442, 539, 3984, 11, 264, 198, 2309, 10137, 3896, 690, 387, 1511, 3196, 389, 279, 907, 315, 198, 42475, 26508, 627, 42475, 26508, 320, 2707, 11, 1670, 5725, 8, 1389, 13440, 279, 16865, 7612, 264, 5905, 1495, 13, 1442, 3082, 11, 279, 198, 47, 3442, 2898, 24880, 72010, 43753, 3896, 690, 387, 1511, 11, 902, 5764, 279, 198, 16690, 9382, 304, 279, 10137, 13, 18715, 11, 279, 68788, 2898, 3896, 690, 387, 198, 2656, 11, 902, 374, 264, 5905, 12862, 10137, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 311, 1522, 311, 279, 445, 11237, 19368, 4797, 627, 16851, 198, 2127, 2937, 315, 279, 14577, 55569, 19368, 538, 627, 5715, 955, 198, 13117, 55569, 19368, 198, 41481, 198, 20761, 505, 8859, 8995, 27215, 31892, 1179, 13149, 62804, 45036, 198, 20761, 505, 8859, 8995, 1770, 24756, 90307, 1179, 14577, 55569, 19368 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-1
>>> from langchain.evaluation.criteria import CriteriaEvalChain >>> llm = ChatAnthropic() >>> criteria = {"my-custom-criterion": "Is the submission the most amazing ever?"} >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria) Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param output_parser: BaseOutputParser [Optional]¶ The parser to use to map the output to a structured result. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param requires_reference: bool = False¶ Whether the evaluation template expects a reference text. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True.
[ 20761, 505, 8859, 8995, 1770, 24756, 90307, 1179, 14577, 55569, 19368, 198, 20761, 9507, 76, 284, 13149, 62804, 45036, 746, 20761, 13186, 284, 5324, 2465, 37998, 1824, 12614, 794, 330, 3957, 279, 21142, 279, 1455, 8056, 3596, 7673, 534, 20761, 8957, 284, 14577, 55569, 19368, 6521, 44095, 76, 36621, 76, 28, 657, 76, 11, 13186, 20105, 5204, 340, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 55609, 198, 52444, 11, 1005, 27777, 4619, 627, 913, 27777, 25, 23499, 82, 284, 2290, 55609, 198, 15669, 1160, 315, 4927, 25050, 320, 269, 4927, 6783, 570, 37090, 311, 2290, 627, 7646, 25050, 527, 2663, 6957, 279, 48608, 315, 264, 1650, 311, 264, 8957, 345, 40389, 449, 389, 31683, 5011, 11, 13696, 449, 389, 31683, 6345, 477, 389, 31683, 4188, 627, 4959, 2587, 8957, 649, 46624, 1650, 5217, 4927, 5528, 11, 1518, 23499, 27437, 198, 2000, 2539, 3649, 627, 913, 9507, 76, 25, 5464, 14126, 1747, 510, 8327, 60, 55609, 198, 14126, 1646, 311, 1650, 627, 913, 9507, 76, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 55609, 198, 15669, 5044, 1665, 13, 37090, 311, 2290, 627, 10869, 374, 264, 538, 430, 5334, 2663, 520, 279, 1212, 198, 438, 520, 279, 842, 315, 1475, 8957, 13, 2468, 279, 1212, 11, 5044, 21577, 7482, 323, 16609, 198, 49818, 3235, 304, 279, 8957, 13, 2468, 279, 842, 11, 433, 27024, 904, 6052, 7482, 627, 3947, 527, 1690, 2204, 4595, 315, 5044, 482, 4587, 1518, 5044, 27437, 198, 2000, 279, 2539, 16808, 627, 913, 2612, 19024, 25, 5464, 5207, 6707, 510, 15669, 60, 55609, 198, 791, 6871, 311, 1005, 311, 2472, 279, 2612, 311, 264, 34030, 1121, 627, 913, 10137, 25, 5464, 55715, 7423, 510, 8327, 60, 55609, 198, 55715, 1665, 311, 1005, 627, 913, 7612, 26508, 25, 1845, 284, 3641, 55609, 198, 25729, 279, 16865, 3896, 25283, 264, 5905, 1495, 627, 913, 471, 21333, 18917, 25, 1845, 284, 3082, 55609, 198, 25729, 311, 471, 1193, 279, 1620, 16051, 1121, 13, 37090, 311, 3082, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-2
Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains.
[ 25729, 311, 471, 1193, 279, 1620, 16051, 1121, 13, 37090, 311, 3082, 627, 2746, 905, 11, 690, 471, 264, 15860, 315, 5066, 2038, 922, 279, 9659, 627, 913, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 1160, 315, 9681, 5938, 449, 279, 8957, 13, 37090, 311, 2290, 198, 9673, 9681, 690, 387, 5938, 449, 1855, 1650, 311, 420, 8957, 345, 438, 5946, 439, 6105, 311, 279, 25050, 4613, 304, 27777, 627, 2675, 649, 1005, 1521, 311, 8866, 10765, 264, 3230, 2937, 315, 264, 8957, 449, 1202, 1005, 1162, 627, 913, 14008, 25, 1845, 510, 15669, 60, 55609, 198, 25729, 477, 539, 1629, 304, 14008, 3941, 13, 763, 14008, 3941, 11, 1063, 29539, 18929, 198, 14724, 387, 17124, 311, 279, 2393, 13, 37090, 311, 8859, 8995, 45749, 907, 627, 565, 6797, 3889, 25986, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-3
Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶ Asynchronously evaluate a prediction against the criteria. Parameters prediction (str) – The predicted text to evaluate. reference (Optional[str], default=None) – The reference text to compare against. This is required if requires_reference is True. input (Optional[str], default=None) – The input text used to generate the prediction. **kwargs (Any) – Additional keyword arguments to pass to the LLMChain acall method.
[ 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 7847, 264, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1645, 543, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 2170, 55294, 15806, 264, 20212, 2403, 279, 13186, 627, 9905, 198, 70031, 320, 496, 8, 1389, 578, 19698, 1495, 311, 15806, 627, 16690, 320, 15669, 17752, 1145, 1670, 5980, 8, 1389, 578, 5905, 1495, 311, 9616, 2403, 13, 1115, 374, 2631, 422, 198, 42475, 26508, 374, 3082, 627, 1379, 320, 15669, 17752, 1145, 1670, 5980, 8, 1389, 578, 1988, 1495, 1511, 311, 7068, 279, 20212, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 311, 1522, 311, 279, 445, 11237, 19368, 1645, 543, 198, 4492, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-4
method. Returns The evaluation results. Return type dict Examples >>> from langchain.llms import OpenAI >>> from langchain.evaluation.criteria import CriteriaEvalChain >>> llm = OpenAI() >>> criteria = "conciseness" >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria) >>> await chain.aevaluate_strings( prediction="The answer is 42.", reference="42", input="What is the answer to life, the universe, and everything?", ) async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny")
[ 4492, 627, 16851, 198, 791, 16865, 3135, 627, 5715, 955, 198, 8644, 198, 41481, 198, 20761, 505, 8859, 8995, 60098, 1026, 1179, 5377, 15836, 198, 20761, 505, 8859, 8995, 1770, 24756, 90307, 1179, 14577, 55569, 19368, 198, 20761, 9507, 76, 284, 5377, 15836, 746, 20761, 13186, 284, 330, 41546, 24004, 434, 702, 20761, 8957, 284, 14577, 55569, 19368, 6521, 44095, 76, 36621, 76, 28, 657, 76, 11, 13186, 20105, 5204, 340, 20761, 2597, 8957, 5973, 48391, 34600, 1021, 286, 20212, 429, 791, 4320, 374, 220, 2983, 10560, 286, 5905, 429, 2983, 761, 286, 1988, 429, 3923, 374, 279, 4320, 311, 2324, 11, 279, 15861, 11, 323, 4395, 36818, 262, 1763, 7847, 945, 13523, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 9037, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 909 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-5
Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶ Evaluate a prediction against the criteria. Parameters prediction (str) – The predicted text to evaluate. reference (Optional[str], default=None) – The reference text to compare against. This is required if requires_reference is True. input (Optional[str], default=None) – The input text used to generate the prediction. **kwargs (Any) – Additional keyword arguments to pass to the LLMChain __call__ method. Returns The evaluation results. Return type dict Examples >>> from langchain.llms import OpenAI >>> from langchain.evaluation.criteria import CriteriaEvalChain >>> llm = OpenAI()
[ 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 7847, 1469, 9037, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 55609, 198, 7368, 1469, 9037, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 10200, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 627, 7847, 802, 359, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 3261, 36289, 36621, 76, 5400, 25, 445, 11237, 2122, 8, 11651, 1796, 58, 13755, 17752, 11, 5884, 5163, 55609, 198, 4110, 16674, 505, 2077, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 8957, 627, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 83445, 264, 20212, 2403, 279, 13186, 627, 9905, 198, 70031, 320, 496, 8, 1389, 578, 19698, 1495, 311, 15806, 627, 16690, 320, 15669, 17752, 1145, 1670, 5980, 8, 1389, 578, 5905, 1495, 311, 9616, 2403, 13, 1115, 374, 2631, 422, 198, 42475, 26508, 374, 3082, 627, 1379, 320, 15669, 17752, 1145, 1670, 5980, 8, 1389, 578, 1988, 1495, 1511, 311, 7068, 279, 20212, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 311, 1522, 311, 279, 445, 11237, 19368, 1328, 6797, 12423, 4492, 627, 16851, 198, 791, 16865, 3135, 627, 5715, 955, 198, 8644, 198, 41481, 198, 20761, 505, 8859, 8995, 60098, 1026, 1179, 5377, 15836, 198, 20761, 505, 8859, 8995, 1770, 24756, 90307, 1179, 14577, 55569, 19368, 198, 20761, 9507, 76, 284, 5377, 15836, 368 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
73f7cfc84258-6
>>> from langchain.evaluation.criteria import CriteriaEvalChain >>> llm = OpenAI() >>> criteria = "conciseness" >>> chain = CriteriaEvalChain.from_llm(llm=llm, criteria=criteria) >>> chain.evaluate_strings( prediction="The answer is 42.", reference="42", input="What is the answer to life, the universe, and everything?", ) classmethod from_llm(llm: BaseLanguageModel, criteria: Union[Mapping[str, str], Sequence[str], Sequence[ConstitutionalPrinciple], str, ConstitutionalPrinciple], *, prompt: Optional[BasePromptTemplate] = None, requires_reference: bool = False, **kwargs: Any) → CriteriaEvalChain[source]¶ Create a CriteriaEvalChain instance from an llm and criteria. Parameters llm (BaseLanguageModel) – The language model to use for evaluation. criteria (CRITERIA_TYPE) – The criteria to evaluate the runs against. It can be: a mapping of criterion names to descriptions; a sequence of criterion names; a single criterion name present in one of the default criteria; a sequence of ConstitutionalPrinciple instances; or a single ConstitutionalPrinciple instance. prompt (Optional[BasePromptTemplate], default=None) – The prompt template to use for generating prompts. If not provided, a default prompt template will be used based on the value of requires_reference. requires_reference (bool, default=False) – Whether the evaluation requires a reference text. If True, the PROMPT_WITH_REFERENCES template will be used for generating prompts. If False, the PROMPT template will be used. **kwargs (Any) – Additional keyword arguments to pass to the LLMChain constructor. Returns An instance of the CriteriaEvalChain class. Return type CriteriaEvalChain Examples
[ 20761, 505, 8859, 8995, 1770, 24756, 90307, 1179, 14577, 55569, 19368, 198, 20761, 9507, 76, 284, 5377, 15836, 746, 20761, 13186, 284, 330, 41546, 24004, 434, 702, 20761, 8957, 284, 14577, 55569, 19368, 6521, 44095, 76, 36621, 76, 28, 657, 76, 11, 13186, 20105, 5204, 340, 20761, 8957, 37136, 34600, 1021, 286, 20212, 429, 791, 4320, 374, 220, 2983, 10560, 286, 5905, 429, 2983, 761, 286, 1988, 429, 3923, 374, 279, 4320, 311, 2324, 11, 279, 15861, 11, 323, 4395, 36818, 262, 1763, 27853, 505, 44095, 76, 36621, 76, 25, 5464, 14126, 1747, 11, 13186, 25, 9323, 58, 6950, 17752, 11, 610, 1145, 29971, 17752, 1145, 29971, 58, 1128, 10663, 278, 3617, 16379, 1145, 610, 11, 63285, 3617, 16379, 1145, 12039, 10137, 25, 12536, 58, 4066, 55715, 7423, 60, 284, 2290, 11, 7612, 26508, 25, 1845, 284, 3641, 11, 3146, 9872, 25, 5884, 8, 11651, 14577, 55569, 19368, 76747, 60, 55609, 198, 4110, 264, 14577, 55569, 19368, 2937, 505, 459, 9507, 76, 323, 13186, 627, 9905, 198, 657, 76, 320, 4066, 14126, 1747, 8, 1389, 578, 4221, 1646, 311, 1005, 369, 16865, 627, 21159, 320, 9150, 25764, 5987, 4283, 8, 1389, 720, 791, 13186, 311, 15806, 279, 8640, 2403, 13, 1102, 649, 387, 512, 64, 13021, 315, 37057, 5144, 311, 28887, 198, 64, 8668, 315, 37057, 5144, 198, 64, 3254, 37057, 836, 3118, 304, 832, 315, 279, 1670, 13186, 198, 64, 8668, 315, 63285, 3617, 16379, 13422, 198, 64, 3254, 63285, 3617, 16379, 2937, 198, 41681, 320, 15669, 58, 4066, 55715, 7423, 1145, 1670, 5980, 8, 1389, 578, 10137, 3896, 311, 1005, 369, 24038, 52032, 13, 1442, 539, 3984, 345, 64, 1670, 10137, 3896, 690, 387, 1511, 3196, 389, 279, 907, 315, 198, 42475, 26508, 627, 42475, 26508, 320, 2707, 11, 1670, 5725, 8, 1389, 13440, 279, 16865, 7612, 264, 5905, 1495, 13, 1442, 3082, 11, 279, 198, 47, 3442, 2898, 24880, 72010, 43753, 3896, 690, 387, 1511, 369, 24038, 198, 25475, 13044, 13, 1442, 3641, 11, 279, 68788, 2898, 3896, 690, 387, 1511, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 311, 1522, 311, 279, 445, 11237, 19368, 198, 22602, 627, 16851, 198, 2127, 2937, 315, 279, 14577, 55569, 19368, 538, 627, 5715, 955, 198, 13117, 55569, 19368, 198, 41481 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
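A minimal usage sketch for the from_llm / evaluate_strings interface documented above, assuming an OpenAI key is configured; the criteria are passed as a sequence of default criterion names, and the exact keys of the returned dict depend on the installed langchain release.

from langchain.llms import OpenAI
from langchain.evaluation.criteria import CriteriaEvalChain

# Build an evaluator over a sequence of default criterion names (one of the
# accepted CRITERIA_TYPE forms listed above).
llm = OpenAI(temperature=0)
chain = CriteriaEvalChain.from_llm(llm=llm, criteria=["conciseness", "relevance"])

# Score a single prediction; the reference argument is optional here because
# requires_reference was left at its default of False.
result = chain.evaluate_strings(
    prediction="The answer is 42.",
    input="What is the answer to life, the universe, and everything?",
)
print(result)  # parsed verdict and reasoning; key names vary by version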
73f7cfc84258-7
An instance of the CriteriaEvalChain class.
Return type
CriteriaEvalChain
Examples
>>> from langchain.llms import OpenAI
>>> from langchain.evaluation.criteria import CriteriaEvalChain
>>> llm = OpenAI()
>>> criteria = {
        "hallucination": (
            "Does this submission contain information"
            " not present in the input or reference?"
        ),
    }
>>> chain = CriteriaEvalChain.from_llm(
        llm=llm,
        criteria=criteria,
        requires_reference=True,
    )
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
static get_supported_default_criteria() → List[str][source]¶
Get the list of supported default criteria.
Returns
The list of supported default criteria.
Return type
List[str]
Examples
>>> CriteriaEvalChain.get_supported_default_criteria()
['conciseness', 'relevance', 'coherence', 'harmfulness', 'maliciousness',
    'helpfulness', 'controversiality', 'mysogyny', 'criminality', 'insensitive']
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
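A brief sketch combining the static criteria listing with a single-criterion evaluator; the prediction and input strings are placeholders.

from langchain.llms import OpenAI
from langchain.evaluation.criteria import CriteriaEvalChain

# Enumerate the built-in criteria before picking one by name.
print(CriteriaEvalChain.get_supported_default_criteria())

# A single default criterion name is also a valid `criteria` value.
chain = CriteriaEvalChain.from_llm(llm=OpenAI(temperature=0), criteria="coherence")
result = chain.evaluate_strings(
    prediction="Water boils at 100 degrees Celsius at sea level.",
    input="At what temperature does water boil?",
)
print(result)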
73f7cfc84258-8
Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prep inputs. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prep outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. classmethod resolve_criteria(criteria: Union[Mapping[str, str], Sequence[str], Sequence[ConstitutionalPrinciple], str, ConstitutionalPrinciple]) → Dict[str, str][source]¶ Resolve the criteria to evaluate. Parameters criteria (CRITERIA_TYPE) – The criteria to evaluate the runs against. It can be: a mapping of criterion names to descriptions a sequence of criterion names a single criterion name present in one of the default criteria a sequence of ConstitutionalPrinciple instances a single ConstitutionalPrinciple instance Returns A dictionary mapping criterion names to descriptions. Return type Dict[str, str] Examples >>> criteria = ["relevance", "coherence"] >>> CriteriaEvalChain.resolve_criteria(criteria) {'relevance': 'Is the submission referring to a real quote from the text?', 'coherence': 'Is the submission coherent, well-structured, and organized?'}
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
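A sketch of resolve_criteria with the two most common criteria forms; the custom "jargon" criterion is invented for illustration.

from langchain.evaluation.criteria import CriteriaEvalChain

# A single default criterion name resolves to a one-entry mapping.
print(CriteriaEvalChain.resolve_criteria("conciseness"))

# A custom {name: description} mapping is accepted as-is (hypothetical criterion).
custom = {"jargon": "Does the submission avoid unexplained jargon?"}
print(CriteriaEvalChain.resolve_criteria(custom))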
73f7cfc84258-9
'coherence': 'Is the submission coherent, well-structured, and organized?'}
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose  »  verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaEvalChain.html
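A small sketch of the generic serialization helpers listed above, reusing the chain built in the earlier sketches; the file name is illustrative, and save may raise if any component of the chain is not serializable in the installed release.

# Inspect the serialization metadata documented above.
print(chain.lc_serializable)   # whether the class supports serialization
print(chain.lc_namespace)      # e.g. ["langchain", ...]

# Persist the chain configuration to YAML (illustrative path).
chain.save(file_path="criteria_eval_chain.yaml")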
3cc1d022259a-0
langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser¶ class langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser(*, eval_chain_output_key: str = 'text')[source]¶ Bases: BaseOutputParser[EvaluationResult] Parse the output of a run. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param eval_chain_output_key: str = 'text'¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. abstract parse(text: str) → T¶ Parse the output of an LLM call. A method which takes in a string (assumed output of a language model ) and parses it into some structure. Parameters text – output of language model Returns structured output parse_chain_output(output: Dict[str, Any]) → EvaluationResult[source]¶ Parse the output of a run. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html
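The abstract parse method above is the only piece a concrete parser must supply. A sketch of a minimal subclass follows; the import path for EvaluationResult and its key/score fields vary across releases, so treat both as assumptions.

from langchain.evaluation.run_evaluators.base import RunEvaluatorOutputParser
from langsmith.evaluation import EvaluationResult  # assumed import path


class YesNoOutputParser(RunEvaluatorOutputParser):
    """Map a yes/no completion to a binary evaluation score (illustrative)."""

    def parse(self, text: str) -> EvaluationResult:
        verdict = text.strip().lower()
        return EvaluationResult(key="yes_no", score=1 if verdict.startswith("yes") else 0)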
3cc1d022259a-1
serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.base.RunEvaluatorOutputParser.html
27926ab2ad7a-0
langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain¶ class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, agent_tools: Optional[List[BaseTool]] = None, eval_chain: LLMChain, output_parser: TrajectoryOutputParser = None, return_reasoning: bool = False)[source]¶ Bases: Chain A chain for evaluating ReAct style agents. This chain is used to evaluate ReAct style agents by reasoning about the sequence of actions taken and their outcomes. Example Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param agent_tools: Optional[List[langchain.tools.base.BaseTool]] = None¶ A list of tools available to the agent. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param eval_chain: langchain.chains.llm.LLMChain [Required]¶ The language model chain used for evaluation. param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
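A hedged sketch of building the evaluator with from_llm and scoring one run; the agent_trajectory is supplied as a pre-formatted string, which the signature allows, and the tool names are invented.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain

eval_chain = TrajectoryEvalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    return_reasoning=True,
)

result = eval_chain.evaluate_agent_trajectory(
    input="What is 3 to the power of 4?",
    agent_trajectory="Step 1:\nTool used: calculator\nTool input: 3**4\nTool output: 81",
    prediction="3 raised to the power of 4 is 81.",
)
print(result)  # includes the reasoning because return_reasoning=True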
27926ab2ad7a-1
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param output_parser: langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser [Optional]¶ The output parser used to parse the output. param return_reasoning: bool = False¶ Whether to return the reasoning along with the score. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any][source]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
27926ab2ad7a-2
chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aevaluate_agent_trajectory(*, prediction: str, input: str, agent_trajectory: Union[str, List[Tuple[AgentAction, str]]], reference: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶ Asynchronously evaluate a trajectory. Parameters input (str) – The input question. agent_trajectory (Union[str, List[Tuple[AgentAction, str]]]) – The intermediate steps forming the agent trajectory. prediction (str) – The expected prediction. reference (Optional[str]) – The reference answer. Returns The evaluation result. Return type dict
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
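The async variant mirrors the sync signature; a sketch reusing the eval_chain from the earlier TrajectoryEvalChain example.

import asyncio

async def score_run() -> dict:
    return await eval_chain.aevaluate_agent_trajectory(
        input="What is 3 to the power of 4?",
        agent_trajectory="Step 1:\nTool used: calculator\nTool input: 3**4\nTool output: 81",
        prediction="3 raised to the power of 4 is 81.",
    )

print(asyncio.run(score_run()))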
27926ab2ad7a-3
Returns
The evaluation result.
Return type
dict
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Call the chain on all inputs in the list.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
evaluate_agent_trajectory(*, prediction: str, input: str, agent_trajectory: Union[str, List[Tuple[AgentAction, str]]], reference: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶
Evaluate a trajectory.
Parameters
input (str) – The input question.
agent_trajectory (Union[str, List[Tuple[AgentAction, str]]]) – The intermediate steps forming the agent trajectory.
prediction (str) – The expected prediction.
reference (Optional[str]) – The reference answer.
Returns
The evaluation result.
Return type
dict
classmethod from_llm(llm: BaseChatModel, agent_tools: Optional[Sequence[BaseTool]] = None, output_parser: Optional[TrajectoryOutputParser] = None, return_reasoning: bool = False) → TrajectoryEvalChain[source]¶
Create a TrajectoryEvalChain object from a language model chain.
Parameters
llm (BaseChatModel) – The language model chain.
agent_tools (Optional[Sequence[BaseTool]]) – A list of tools available to the agent.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
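A sketch of the static get_agent_trajectory helper with (AgentAction, observation) pairs; the tool and observation values are invented.

from langchain.schema import AgentAction
from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEvalChain

steps = [
    (AgentAction(tool="calculator", tool_input="3**4", log="Using the calculator."), "81"),
]
# Flattens the intermediate steps into the formatted string the evaluator prompts with.
print(TrajectoryEvalChain.get_agent_trajectory(steps))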
27926ab2ad7a-4
available to the agent.
output_parser (Optional[TrajectoryOutputParser]) – The output parser used to parse the chain output into a score.
return_reasoning (bool) – Whether to return the reasoning along with the score.
Returns
The TrajectoryEvalChain object.
Return type
TrajectoryEvalChain
static get_agent_trajectory(steps: Union[str, List[Tuple[AgentAction, str]]]) → str[source]¶
Get the agent trajectory as a formatted string.
Parameters
steps (Union[str, List[Tuple[AgentAction, str]]]) – The agent trajectory.
Returns
The formatted agent trajectory.
Return type
str
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prep inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prep outputs.
validator raise_deprecation  »  all fields¶
Raise deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python
chain.save(file_path="path/chain.yaml")
validator set_verbose  »  verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
27926ab2ad7a-5
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property input_keys: List[str]¶ Get the input keys for the chain. Returns The input keys. Return type List[str] property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. property output_keys: List[str]¶ Get the output keys for the chain. Returns The output keys. Return type List[str] model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEvalChain.html
b0d0b0f3d377-0
langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain¶
class langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶
Bases: LLMChain
A chain for comparing the output of two models.
Example:
>>> from langchain.chat_models import ChatOpenAI
>>> from langchain.evaluation.comparison import PairwiseStringEvalChain
>>> llm = ChatOpenAI(temperature=0)
>>> chain = PairwiseStringEvalChain.from_llm(llm=llm)
>>> result = chain.evaluate_string_pairs(
...     input = "What is the chemical formula for water?",
...     prediction = "H2O",
...     prediction_b = (
...         "The chemical formula for water is H2O, which means"
...         " there are two hydrogen atoms and one oxygen atom."
...     ),
...     reference = "The chemical formula for water is H2O.",
... )
>>> print(result["text"])
# {
#    "value": "B",
#    "comment": "Both responses accurately state"
#       " that the chemical formula for water is H2O."
#       " However, Response B provides additional information"
#       " by explaining what the formula means. [[B]]"
# }
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
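A sketch reading the parsed comparison result from the chain built in the example above; per the evaluate_string_pairs documentation further below, the returned dict carries reasoning, value ('A', 'B', or None), and score (1, 0, or 0.5), while some releases expose a raw 'text' key as shown in the class example, so check your installed version.

result = chain.evaluate_string_pairs(
    input="What is the chemical formula for water?",
    prediction="H2O",
    prediction_b="Water is H2O, i.e. two hydrogen atoms bonded to one oxygen atom.",
)
if result["value"] == "A":
    print("Preferred the first response:", result["reasoning"])
elif result["value"] == "B":
    print("Preferred the second response:", result["reasoning"])
else:
    print("No preference; score =", result["score"])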
b0d0b0f3d377-1
#       " by explaining what the formula means. [[B]]"
# }
Create a new model by parsing and validating input data from keyword arguments.
Raises ValidationError if the input data cannot be parsed to form a valid model.
param callback_manager: Optional[BaseCallbackManager] = None¶
Deprecated, use callbacks instead.
param callbacks: Callbacks = None¶
Optional list of callback handlers (or callback manager). Defaults to None.
Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error.
Each custom chain can optionally call additional callback methods, see Callback docs for full details.
param llm: BaseLanguageModel [Required]¶
Language model to call.
param llm_kwargs: dict [Optional]¶
param memory: Optional[BaseMemory] = None¶
Optional memory object. Defaults to None.
Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
There are many different types of memory - please see memory docs for the full catalog.
param output_parser: BaseOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If false, will return a bunch of extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
b0d0b0f3d377-2
These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
b0d0b0f3d377-3
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if chain expects only one param.
return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults to False.
async aevaluate_string_pairs(*, prediction: str, prediction_b: str, input: str, reference: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶
Asynchronously evaluate whether output A is preferred to output B.
Parameters
prediction (str) – The output string from the first model.
prediction_b (str) – The output string from the second model.
input (str) – The input or task string.
callbacks (Callbacks, optional) – The callbacks to use.
reference (str, optional) – The reference string, if any.
**kwargs (Any) – Additional keyword arguments.
Returns
A dictionary containing:
reasoning: The reasoning for the preference.
value: The preference value, which is either ‘A’, ‘B’, or None for no preference.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
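An async sketch of the same comparison, reusing the chain from the class example; aevaluate_string_pairs takes the identical keyword arguments.

import asyncio

async def compare() -> dict:
    return await chain.aevaluate_string_pairs(
        input="What is the chemical formula for water?",
        prediction="H2O",
        prediction_b="HO2",
    )

print(asyncio.run(compare()))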
b0d0b0f3d377-4
score: The preference score, which is 1 for ‘A’, 0 for ‘B’, and 0.5 for None.
Return type
dict
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
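A minimal sketch of the asynchronous prediction helper documented above. The ChatOpenAI model and the joke prompt are illustrative assumptions, not taken from the source docs:

.. code-block:: python

    import asyncio

    from langchain.chains import LLMChain
    from langchain.chat_models import ChatOpenAI
    from langchain.prompts import PromptTemplate

    prompt = PromptTemplate(
        input_variables=["adjective"], template="Tell me a {adjective} joke."
    )
    chain = LLMChain(llm=ChatOpenAI(temperature=0), prompt=prompt)

    async def main() -> None:
        # apredict mirrors predict() but awaits the underlying LLM call.
        completion = await chain.apredict(adjective="funny")
        print(completion)

    asyncio.run(main())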
b0d0b0f3d377-5
Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. evaluate_string_pairs(*, prediction: str, prediction_b: str, input: str, reference: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶ Evaluate whether output A is preferred to output B. Parameters prediction (str) – The output string from the first model. prediction_b (str) – The output string from the second model. input (str) – The input or task string. callbacks (Callbacks, optional) – The callbacks to use. reference (str, optional) – The reference string, if any. **kwargs (Any) – Additional keyword arguments. Returns A dictionary containing: reasoning: The reasoning for the preference. value: The preference value, which is either ‘A’, ‘B’, or None for no preference. score: The preference score, which is 1 for ‘A’, 0 for ‘B’, and 0.5 for None. Return type dict classmethod from_llm(*, llm: BaseLanguageModel, prompt: Optional[PromptTemplate] = None, require_reference: bool = False, **kwargs: Any) → PairwiseStringEvalChain[source]¶ Initialize the PairwiseStringEvalChain from an LLM. Parameters
[ 51690, 52032, 505, 11374, 627, 7847, 802, 359, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 3261, 36289, 36621, 76, 5400, 25, 445, 11237, 2122, 8, 11651, 1796, 58, 13755, 17752, 11, 5884, 5163, 55609, 198, 4110, 16674, 505, 2077, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 8957, 627, 48391, 3991, 37530, 4163, 11, 20212, 25, 610, 11, 20212, 890, 25, 610, 11, 1988, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 83445, 3508, 2612, 362, 374, 15236, 311, 2612, 426, 627, 9905, 198, 70031, 320, 496, 8, 1389, 578, 2612, 925, 505, 279, 1176, 1646, 627, 70031, 890, 320, 496, 8, 1389, 578, 2612, 925, 505, 279, 2132, 1646, 627, 1379, 320, 496, 8, 1389, 578, 1988, 477, 3465, 925, 627, 69411, 320, 45561, 11, 10309, 8, 1389, 578, 27777, 311, 1005, 627, 16690, 320, 496, 11, 10309, 8, 1389, 578, 5905, 925, 11, 422, 904, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 627, 16851, 198, 32, 11240, 8649, 512, 20489, 287, 25, 578, 33811, 369, 279, 22698, 627, 970, 25, 578, 22698, 907, 11, 902, 374, 3060, 3451, 32, 20182, 3451, 33, 20182, 477, 2290, 2000, 912, 22698, 627, 12618, 25, 578, 22698, 5573, 11, 902, 374, 220, 16, 369, 3451, 32, 20182, 220, 15, 369, 3451, 33, 20182, 438, 220, 15, 13, 20, 369, 2290, 627, 5715, 955, 198, 8644, 198, 27853, 505, 44095, 76, 4163, 11, 9507, 76, 25, 5464, 14126, 1747, 11, 10137, 25, 12536, 43447, 15091, 7423, 60, 284, 2290, 11, 1397, 26508, 25, 1845, 284, 3641, 11, 3146, 9872, 25, 5884, 8, 11651, 27086, 4583, 707, 55569, 19368, 76747, 60, 55609, 198, 10130, 279, 27086, 4583, 707, 55569, 19368, 505, 459, 445, 11237, 627, 9905 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
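A hedged usage sketch of evaluate_string_pairs as documented above. The grading model (ChatOpenAI) and the example strings are illustrative assumptions:

.. code-block:: python

    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.comparison.eval_chain import PairwiseStringEvalChain

    eval_chain = PairwiseStringEvalChain.from_llm(llm=ChatOpenAI(temperature=0))
    result = eval_chain.evaluate_string_pairs(
        prediction="Paris is the capital of France.",   # output A
        prediction_b="I believe it is Lyon.",            # output B
        input="What is the capital of France?",
    )
    # Per the docs, result holds 'reasoning', 'value' ('A'/'B'/None) and 'score' (1/0/0.5).
    print(result["value"], result["score"])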
b0d0b0f3d377-6
Initialize the PairwiseStringEvalChain from an LLM. Parameters llm (BaseLanguageModel) – The LLM to use. prompt (PromptTemplate, optional) – The prompt to use. require_reference (bool, optional) – Whether to require a reference string. Defaults to False. **kwargs (Any) – Additional keyword arguments. Returns The initialized PairwiseStringEvalChain. Return type PairwiseStringEvalChain classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prep inputs. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prep outputs.
[ 10130, 279, 27086, 4583, 707, 55569, 19368, 505, 459, 445, 11237, 627, 9905, 198, 657, 76, 320, 4066, 14126, 1747, 8, 1389, 578, 445, 11237, 311, 1005, 627, 41681, 320, 55715, 7423, 11, 10309, 8, 1389, 578, 10137, 311, 1005, 627, 4408, 26508, 320, 2707, 11, 10309, 8, 1389, 13440, 311, 1397, 264, 5905, 198, 928, 13, 37090, 311, 3641, 627, 334, 9872, 320, 8780, 8, 1389, 24086, 16570, 6105, 627, 16851, 198, 791, 17719, 27086, 4583, 707, 55569, 19368, 627, 5715, 955, 198, 12725, 4583, 707, 55569, 19368, 198, 27853, 505, 3991, 36621, 76, 25, 5464, 14126, 1747, 11, 3896, 25, 610, 8, 11651, 445, 11237, 19368, 55609, 198, 4110, 445, 11237, 19368, 505, 445, 11237, 323, 3896, 627, 19927, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 35798, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 35798, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 5884, 5163, 55609, 198, 7368, 7168, 323, 1243, 4820, 279, 3135, 627, 72874, 29657, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 2526, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 11374, 627, 72874, 36289, 35099, 25, 30226, 17752, 11, 610, 1145, 16674, 25, 30226, 17752, 11, 610, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 16674, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
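The from_llm constructor documented above also accepts require_reference; a sketch of reference-based comparison, again with an assumed model and example strings:

.. code-block:: python

    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.comparison.eval_chain import PairwiseStringEvalChain

    # require_reference=True makes the chain expect a gold `reference` at call time.
    eval_chain = PairwiseStringEvalChain.from_llm(
        llm=ChatOpenAI(temperature=0), require_reference=True
    )
    graded = eval_chain.evaluate_string_pairs(
        prediction="2 + 2 = 4",
        prediction_b="2 + 2 = 5",
        input="What is 2 + 2?",
        reference="4",
    )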
b0d0b0f3d377-7
Validate and prep outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. save(file_path: Union[Path, str]) → None¶ Save the chain. Parameters file_path – Path to file to save the chain to. Example: .. code-block:: python chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object.
[ 18409, 323, 22033, 16674, 627, 72874, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 627, 16503, 4933, 2310, 70693, 4194, 8345, 4194, 682, 5151, 55609, 198, 94201, 409, 70693, 10163, 422, 4927, 12418, 374, 1511, 627, 6236, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 8957, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 1052, 311, 3665, 279, 8957, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 8995, 5799, 4971, 2703, 45221, 2398, 14, 8995, 34506, 863, 340, 16503, 743, 69021, 4194, 8345, 4194, 14008, 55609, 198, 2746, 14008, 374, 2290, 11, 743, 433, 627, 2028, 6276, 3932, 311, 1522, 304, 2290, 439, 14008, 311, 2680, 279, 3728, 6376, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
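A short sketch of the generic Chain helpers listed above, run() and save(). The OpenAI completion model and the summarization prompt are illustrative assumptions:

.. code-block:: python

    from langchain.chains import LLMChain
    from langchain.llms import OpenAI
    from langchain.prompts import PromptTemplate

    chain = LLMChain(
        llm=OpenAI(temperature=0),
        prompt=PromptTemplate(
            input_variables=["topic"], template="Summarize {topic} in one sentence."
        ),
    )
    print(chain.run(topic="vector databases"))  # single string in, single string out
    chain.save(file_path="chain.yaml")          # persists the chain configuration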
b0d0b0f3d377-8
model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
[ 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609, 198, 15824, 284, 364, 2000, 21301, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringEvalChain.html
63d1893f1122-0
langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser¶ class langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser[source]¶ Bases: BaseOutputParser[dict] A parser for the output of the PairwiseStringEvalChain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Any[source]¶ Parse the output text. Parameters text (str) – The output text to parse. Returns The parsed output. Return type Any parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. ["langchain", "llms", "openai"] property lc_secrets: Dict[str, str]¶
[ 5317, 8995, 1770, 24756, 916, 36642, 32810, 31683, 1087, 1334, 4583, 707, 2122, 5207, 6707, 55609, 198, 1058, 8859, 8995, 1770, 24756, 916, 36642, 32810, 31683, 1087, 1334, 4583, 707, 2122, 5207, 6707, 76747, 60, 55609, 198, 33, 2315, 25, 5464, 5207, 6707, 58, 8644, 933, 32, 6871, 369, 279, 2612, 315, 279, 27086, 4583, 707, 55569, 19368, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 2612, 6871, 627, 456, 9132, 83527, 368, 11651, 610, 55609, 198, 56391, 389, 1268, 279, 445, 11237, 2612, 1288, 387, 24001, 627, 6534, 7383, 25, 610, 8, 11651, 5884, 76747, 60, 55609, 198, 14802, 279, 2612, 1495, 627, 9905, 198, 1342, 320, 496, 8, 1389, 578, 2612, 1495, 311, 4820, 627, 16851, 198, 791, 16051, 2612, 627, 5715, 955, 198, 8780, 198, 6534, 5400, 4556, 25, 1796, 58, 38238, 2526, 11651, 350, 55609, 198, 14802, 445, 11237, 5832, 627, 6534, 6753, 62521, 91868, 25, 610, 11, 10137, 25, 60601, 1150, 8, 11651, 5884, 55609, 198, 15669, 1749, 311, 4820, 279, 2612, 315, 459, 445, 11237, 1650, 449, 264, 10137, 627, 791, 10137, 374, 14090, 3984, 304, 279, 1567, 279, 9442, 6707, 6944, 198, 998, 23515, 477, 5155, 279, 2612, 304, 1063, 1648, 11, 323, 3966, 2038, 505, 198, 1820, 10137, 311, 656, 779, 627, 9905, 198, 44412, 1389, 2612, 315, 4221, 1646, 198, 41681, 1389, 10137, 907, 198, 16851, 198, 52243, 2612, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser.html
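A heavily hedged sketch of the parser above. The raw text passed to parse() is an assumption for illustration only; in practice the parser receives whatever the PairwiseStringEvalChain's prompt instructs the grading LLM to emit:

.. code-block:: python

    from langchain.evaluation.comparison.eval_chain import (
        PairwiseStringResultOutputParser,
    )

    parser = PairwiseStringResultOutputParser()
    # Assumed format: free-form reasoning followed by a final line naming the
    # preferred output. This is not guaranteed to match the library's expected input.
    result = parser.parse("Answer A is more accurate and complete.\nA")
    # Per the chain docs, the parsed dict is expected to expose
    # 'reasoning', 'value' ('A'/'B'/None) and 'score' (1/0/0.5).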
63d1893f1122-1
property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {"openai_api_key": "OPENAI_API_KEY"} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.comparison.eval_chain.PairwiseStringResultOutputParser.html
a9f162cf44a5-0
langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval¶ class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval(score, reasoning)[source]¶ Bases: NamedTuple Create new instance of TrajectoryEval(score, reasoning) Methods __init__() count(value, /) Return number of occurrences of value. index(value[, start, stop]) Return first index of value. Attributes reasoning Alias for field number 1 score Alias for field number 0 count(value, /)¶ Return number of occurrences of value. index(value, start=0, stop=9223372036854775807, /)¶ Return first index of value. Raises ValueError if the value is not present. reasoning: str¶ Alias for field number 1 score: int¶ Alias for field number 0
[ 5317, 8995, 1770, 24756, 29192, 812, 13, 84887, 22001, 31683, 844, 969, 24251, 55569, 55609, 198, 1058, 8859, 8995, 1770, 24756, 29192, 812, 13, 84887, 22001, 31683, 844, 969, 24251, 55569, 35580, 11, 33811, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 41559, 29781, 198, 4110, 502, 2937, 315, 17747, 24251, 55569, 35580, 11, 33811, 340, 18337, 198, 565, 2381, 33716, 1868, 3764, 11, 4194, 54660, 5715, 1396, 315, 57115, 315, 907, 627, 1275, 3764, 38372, 4194, 2527, 11, 4194, 9684, 2608, 5715, 1176, 1963, 315, 907, 627, 10738, 198, 20489, 287, 198, 23555, 369, 2115, 1396, 220, 16, 198, 12618, 198, 23555, 369, 2115, 1396, 220, 15, 198, 1868, 3764, 11, 611, 8, 55609, 198, 5715, 1396, 315, 57115, 315, 907, 627, 1275, 3764, 11, 1212, 28, 15, 11, 3009, 28, 20275, 17609, 9639, 23717, 21144, 18216, 22, 11, 611, 8, 55609, 198, 5715, 1176, 1963, 315, 907, 627, 36120, 15764, 422, 279, 907, 374, 539, 3118, 627, 20489, 287, 25, 610, 55609, 198, 23555, 369, 2115, 1396, 220, 16, 198, 12618, 25, 528, 55609, 198, 23555, 369, 2115, 1396, 220, 15 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryEval.html
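TrajectoryEval is a plain NamedTuple; a minimal sketch (the example values are illustrative):

.. code-block:: python

    from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryEval

    grade = TrajectoryEval(score=4, reasoning="Used the calculator for the arithmetic step.")
    print(grade.score)      # field number 0
    print(grade.reasoning)  # field number 1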
cc1cd730d615-0
langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator¶
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 87131, 22001, 46262, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
cc1cd730d615-1
langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator(llm: BaseChatModel, agent_tools: Union[Sequence[str], Sequence[BaseTool]], *, input_key: str = 'input', prediction_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output', prompt: BasePromptTemplate = ChatPromptTemplate(input_variables=['reference', 'tool_descriptions', 'answer', 'agent_trajectory', 'question'], output_parser=None, partial_variables={}, messages=[SystemMessage(content='You are a helpful assistant that evaluates language models.', additional_kwargs={}), HumanMessage(content="An AI language model has been given acces to the following set of tools to help answer a user's question.\n\nThe tools given to the AI model are:\n[TOOL_DESCRIPTIONS]\nTool 1:\nName: Search\nDescription: useful for when you need to ask with search\n\nTool 2:\nName: Lookup\nDescription: useful for when you need to ask with lookup\n\nTool 3:\nName: Calculator\nDescription: useful for doing calculations\n\nTool 4:\nName: Search the Web (SerpAPI)\nDescription: useful for when you need to answer questions about current events\n[END_TOOL_DESCRIPTIONS]\n\nThe question the human asked the AI model was: If laid the Statue of Liberty end to end, how many times would it stretch across the United States?\n\nThe AI language model decided to use the following set of tools to answer the question:\n[AGENT_TRAJECTORY]\nStep 1:\nTool used: Search the Web (SerpAPI)\nTool input: If laid the Statue of Liberty end to end, how many times would it stretch across the United States?\nTool output: The Statue of Liberty was given to the United States by France, as a symbol of the two countries' friendship. It was erected
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 673, 87131, 22001, 46262, 36621, 76, 25, 5464, 16047, 1747, 11, 8479, 40823, 25, 9323, 58, 14405, 17752, 1145, 29971, 58, 4066, 7896, 21128, 12039, 1988, 3173, 25, 610, 284, 364, 1379, 518, 20212, 3173, 25, 610, 284, 364, 3081, 518, 5507, 6022, 3173, 25, 610, 284, 364, 1379, 518, 5507, 7800, 3173, 25, 610, 284, 364, 3081, 518, 10137, 25, 5464, 55715, 7423, 284, 13149, 55715, 7423, 5498, 29282, 14314, 16690, 518, 364, 14506, 16145, 25712, 518, 364, 9399, 518, 364, 8252, 87131, 518, 364, 7998, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 6743, 5941, 2374, 2097, 15413, 1151, 2675, 527, 264, 11190, 18328, 430, 67349, 4221, 4211, 16045, 5217, 37335, 1185, 39942, 11344, 2097, 15413, 429, 2127, 15592, 4221, 1646, 706, 1027, 2728, 85195, 311, 279, 2768, 743, 315, 7526, 311, 1520, 4320, 264, 1217, 596, 3488, 7255, 77, 1734, 791, 7526, 2728, 311, 279, 15592, 1646, 527, 7338, 77, 58, 5319, 1971, 39268, 50, 18444, 77, 7896, 220, 16, 7338, 77, 678, 25, 7694, 1734, 5116, 25, 5505, 369, 994, 499, 1205, 311, 2610, 449, 2778, 1734, 1734, 7896, 220, 17, 7338, 77, 678, 25, 51411, 1734, 5116, 25, 5505, 369, 994, 499, 1205, 311, 2610, 449, 19128, 1734, 1734, 7896, 220, 18, 7338, 77, 678, 25, 37128, 1734, 5116, 25, 5505, 369, 3815, 29217, 1734, 1734, 7896, 220, 19, 7338, 77, 678, 25, 7694, 279, 5000, 320, 50, 23457, 7227, 10929, 77, 5116, 25, 5505, 369, 994, 499, 1205, 311, 4320, 4860, 922, 1510, 4455, 1734, 58, 4794, 58288, 39268, 50, 18444, 77, 1734, 791, 3488, 279, 3823, 4691, 279, 15592, 1646, 574, 25, 1442, 17551, 279, 89675, 315, 32492, 842, 311, 842, 11, 1268, 1690, 3115, 1053, 433, 14841, 4028, 279, 3723, 4273, 33720, 77, 1734, 791, 15592, 4221, 1646, 6773, 311, 1005, 279, 2768, 743, 315, 7526, 311, 4320, 279, 3488, 7338, 77, 58, 91415, 40196, 7582, 7668, 18444, 77, 8468, 220, 16, 7338, 77, 7896, 1511, 25, 7694, 279, 5000, 320, 50, 23457, 7227, 10929, 77, 7896, 1988, 25, 1442, 17551, 279, 89675, 315, 32492, 842, 311, 842, 11, 1268, 1690, 3115, 1053, 433, 14841, 4028, 279, 3723, 4273, 33720, 77, 7896, 2612, 25, 578, 89675, 315, 32492, 574, 2728, 311, 279, 3723, 4273, 555, 9822, 11, 439, 264, 7891, 315, 279, 1403, 5961, 6, 27607, 13, 1102, 574, 66906 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
cc1cd730d615-2
to the United States by France, as a symbol of the two countries' friendship. It was erected atop an American-designed ...\n[END_AGENT_TRAJECTORY]\n\n[RESPONSE]\nThe AI language model's final answer to the question was: There are different ways to measure the length of the United States, but if we use the distance between the Statue of Liberty and the westernmost point of the contiguous United States (Cape Alava, Washington), which is approximately 2,857 miles (4,596 km), and assume that the Statue of Liberty is 305 feet (93 meters) tall, then the statue would stretch across the United States approximately 17.5 times if laid end to end.\n[END_RESPONSE]\n\nLet's to do a detailed evaluation of the AI language model's answer step by step.\n\nWe consider the following criteria before giving a score from 1 to 5:\n\ni. Is the final answer helpful?\nii. Does the AI language use a logical sequence of tools to answer the question?\niii. Does the AI language model use the tools in a helpful way?\niv. Does the AI language model use too many steps to answer the question?\nv. Are the appropriate tools used to answer the question?", additional_kwargs={}, example=False), AIMessage(content="First, let's evaluate the final answer. The final uses good reasoning but is wrong. 2,857 divided by 305 is not 17.5.The model should have used the calculator to figure this out. Second does the model use a logical sequence of tools to answer the question?The way model uses the search is not helpful. The model should have used the search tool to figure the width of the US or the height of the statue.The model didn't use the calculator tool and gave an incorrect answer. The search API should be used for current events or specific questions.The tools were not used in a helpful way. The model did
[ 998, 279, 3723, 4273, 555, 9822, 11, 439, 264, 7891, 315, 279, 1403, 5961, 6, 27607, 13, 1102, 574, 66906, 47088, 459, 3778, 69956, 99860, 77, 58, 4794, 53599, 40196, 7582, 7668, 18444, 77, 1734, 58, 68883, 26289, 18444, 89330, 15592, 4221, 1646, 596, 1620, 4320, 311, 279, 3488, 574, 25, 2684, 527, 2204, 5627, 311, 6767, 279, 3160, 315, 279, 3723, 4273, 11, 719, 422, 584, 1005, 279, 6138, 1990, 279, 89675, 315, 32492, 323, 279, 19001, 3646, 1486, 315, 279, 67603, 3723, 4273, 320, 34, 2070, 1708, 2979, 11, 6652, 705, 902, 374, 13489, 220, 17, 11, 20907, 8931, 320, 19, 11, 24515, 13437, 705, 323, 9855, 430, 279, 89675, 315, 32492, 374, 220, 13364, 7693, 320, 6365, 20645, 8, 16615, 11, 1243, 279, 35372, 1053, 14841, 4028, 279, 3723, 4273, 13489, 220, 1114, 13, 20, 3115, 422, 17551, 842, 311, 842, 7255, 77, 58, 4794, 37001, 18444, 77, 1734, 10267, 596, 311, 656, 264, 11944, 16865, 315, 279, 15592, 4221, 1646, 596, 4320, 3094, 555, 3094, 7255, 77, 1734, 1687, 2980, 279, 2768, 13186, 1603, 7231, 264, 5573, 505, 220, 16, 311, 220, 20, 7338, 77, 1734, 72, 13, 2209, 279, 1620, 4320, 11190, 33720, 77, 3893, 13, 12838, 279, 15592, 4221, 1005, 264, 20406, 8668, 315, 7526, 311, 4320, 279, 3488, 33720, 77, 35694, 13, 12838, 279, 15592, 4221, 1646, 1005, 279, 7526, 304, 264, 11190, 1648, 33720, 77, 344, 13, 12838, 279, 15592, 4221, 1646, 1005, 2288, 1690, 7504, 311, 4320, 279, 3488, 33720, 38041, 13, 8886, 279, 8475, 7526, 1511, 311, 4320, 279, 3488, 32111, 5217, 37335, 68525, 3187, 5725, 705, 87868, 808, 15413, 429, 5451, 11, 1095, 596, 15806, 279, 1620, 4320, 13, 578, 1620, 5829, 1695, 33811, 719, 374, 5076, 13, 220, 17, 11, 20907, 18255, 555, 220, 13364, 374, 539, 220, 1114, 13, 20, 11829, 1646, 1288, 617, 1511, 279, 31052, 311, 7216, 420, 704, 13, 10657, 1587, 279, 1646, 1005, 264, 20406, 8668, 315, 7526, 311, 4320, 279, 3488, 30, 791, 1648, 1646, 5829, 279, 2778, 374, 539, 11190, 13, 578, 1646, 1288, 617, 1511, 279, 2778, 5507, 311, 7216, 279, 2430, 315, 279, 2326, 477, 279, 2673, 315, 279, 35372, 11829, 1646, 3287, 956, 1005, 279, 31052, 5507, 323, 6688, 459, 15465, 4320, 13, 578, 2778, 5446, 1288, 387, 1511, 369, 1510, 4455, 477, 3230, 4860, 11829, 7526, 1051, 539, 1511, 304, 264, 11190, 1648, 13, 578, 1646, 1550 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
cc1cd730d615-3
used for current events or specific questions.The tools were not used in a helpful way. The model did not use too many steps to answer the question.The model did not use the appropriate tools to answer the question.    \nJudgment: Given the good reasoning in the final answer but otherwise poor performance, we give the model a score of 2.\n\nScore: 2", additional_kwargs={}, example=False), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['agent_trajectory', 'answer', 'question', 'reference', 'tool_descriptions'], output_parser=None, partial_variables={}, template="An AI language model has been given access to the following set of tools to help answer a user's question.\n\nThe tools given to the AI model are:\n[TOOL_DESCRIPTIONS]\n{tool_descriptions}\n[END_TOOL_DESCRIPTIONS]\n\nThe question the human asked the AI model was:\n[QUESTION]\n{question}\n[END_QUESTION]{reference}\n\nThe AI language model decided to use the following set of tools to answer the question:\n[AGENT_TRAJECTORY]\n{agent_trajectory}\n[END_AGENT_TRAJECTORY]\n\nThe AI language model's final answer to the question was:\n[RESPONSE]\n{answer}\n[END_RESPONSE]\n\nLet's to do a detailed evaluation of the AI language model's answer step by step.\n\nWe consider the following criteria before giving a score from 1 to 5:\n\ni. Is the final answer helpful?\nii. Does the AI language use a logical sequence of tools to answer the question?\niii. Does the AI language model use the tools in a helpful way?\niv. Does the AI language model use too many steps to answer the question?\nv. Are the appropriate tools used to answer the question?", template_format='f-string', validate_template=True), additional_kwargs={})]), evaluation_name: str =
[ 2656, 369, 1510, 4455, 477, 3230, 4860, 11829, 7526, 1051, 539, 1511, 304, 264, 11190, 1648, 13, 578, 1646, 1550, 539, 1005, 2288, 1690, 7504, 311, 4320, 279, 3488, 11829, 1646, 1550, 539, 1005, 279, 8475, 7526, 311, 4320, 279, 3488, 13, 46493, 1144, 77, 62813, 9247, 25, 16644, 279, 1695, 33811, 304, 279, 1620, 4320, 719, 6062, 8009, 5178, 11, 584, 3041, 279, 1646, 264, 5573, 315, 220, 17, 7255, 77, 1734, 10793, 25, 220, 17, 498, 5217, 37335, 68525, 3187, 5725, 705, 11344, 2097, 55715, 7423, 73353, 28, 55715, 7423, 5498, 29282, 14314, 8252, 87131, 518, 364, 9399, 518, 364, 7998, 518, 364, 16690, 518, 364, 14506, 16145, 25712, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 3896, 429, 2127, 15592, 4221, 1646, 706, 1027, 2728, 2680, 311, 279, 2768, 743, 315, 7526, 311, 1520, 4320, 264, 1217, 596, 3488, 7255, 77, 1734, 791, 7526, 2728, 311, 279, 15592, 1646, 527, 7338, 77, 58, 5319, 1971, 39268, 50, 18444, 77, 90, 14506, 16145, 25712, 11281, 77, 58, 4794, 58288, 39268, 50, 18444, 77, 1734, 791, 3488, 279, 3823, 4691, 279, 15592, 1646, 574, 7338, 77, 58, 53528, 18444, 77, 90, 7998, 11281, 77, 58, 4794, 94530, 15731, 16690, 11281, 77, 1734, 791, 15592, 4221, 1646, 6773, 311, 1005, 279, 2768, 743, 315, 7526, 311, 4320, 279, 3488, 7338, 77, 58, 91415, 40196, 7582, 7668, 18444, 77, 90, 8252, 87131, 11281, 77, 58, 4794, 53599, 40196, 7582, 7668, 18444, 77, 1734, 791, 15592, 4221, 1646, 596, 1620, 4320, 311, 279, 3488, 574, 7338, 77, 58, 68883, 26289, 18444, 77, 90, 9399, 11281, 77, 58, 4794, 37001, 18444, 77, 1734, 10267, 596, 311, 656, 264, 11944, 16865, 315, 279, 15592, 4221, 1646, 596, 4320, 3094, 555, 3094, 7255, 77, 1734, 1687, 2980, 279, 2768, 13186, 1603, 7231, 264, 5573, 505, 220, 16, 311, 220, 20, 7338, 77, 1734, 72, 13, 2209, 279, 1620, 4320, 11190, 33720, 77, 3893, 13, 12838, 279, 15592, 4221, 1005, 264, 20406, 8668, 315, 7526, 311, 4320, 279, 3488, 33720, 77, 35694, 13, 12838, 279, 15592, 4221, 1646, 1005, 279, 7526, 304, 264, 11190, 1648, 33720, 77, 344, 13, 12838, 279, 15592, 4221, 1646, 1005, 2288, 1690, 7504, 311, 4320, 279, 3488, 33720, 38041, 13, 8886, 279, 8475, 7526, 1511, 311, 4320, 279, 3488, 32111, 3896, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 5217, 37335, 1185, 5525, 9918, 16865, 1292, 25, 610, 284 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
cc1cd730d615-4
template_format='f-string', validate_template=True), additional_kwargs={})]), evaluation_name: str = 'Agent Trajectory', **kwargs: Any) → RunEvaluatorChain[source]¶
[ 4308, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 5217, 37335, 1185, 5525, 9918, 16865, 1292, 25, 610, 284, 364, 17230, 17747, 24251, 518, 3146, 9872, 25, 5884, 8, 11651, 6588, 90142, 19368, 76747, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
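A sketch of building the trajectory evaluator from the signature above. The chat model and the llm-math tool are illustrative assumptions; the returned RunEvaluatorChain is normally handed to a run-evaluation harness rather than called directly:

.. code-block:: python

    from langchain.agents import load_tools
    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.run_evaluators.implementations import (
        get_trajectory_evaluator,
    )

    llm = ChatOpenAI(temperature=0)            # must be a BaseChatModel
    tools = load_tools(["llm-math"], llm=llm)  # the agent's tools, described in the prompt
    evaluator = get_trajectory_evaluator(llm, agent_tools=tools)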
cc1cd730d615-5
Get an eval chain for grading a model’s response against a map of criteria.
[ 1991, 459, 5720, 8957, 369, 66288, 264, 1646, 753, 2077, 2403, 264, 2472, 315, 13186, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.get_trajectory_evaluator.html
6b7d41419d1f-0
langchain.evaluation.schema.StringEvaluator¶ class langchain.evaluation.schema.StringEvaluator(*args, **kwargs)[source]¶ Bases: Protocol Protocol for evaluating strings. Methods __init__(*args, **kwargs) aevaluate_strings(*, prediction[, ...]) Asynchronously evaluate Chain or LLM output, based on optional input and label. evaluate_strings(*, prediction[, reference, ...]) Evaluate Chain or LLM output, based on optional input and label. async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶ Asynchronously evaluate Chain or LLM output, based on optional input and label. Parameters prediction (str) – the LLM or chain prediction to evaluate. reference (Optional[str], optional) – the reference label to evaluate against. input (Optional[str], optional) – the input to consider during evaluation **kwargs – additional keyword arguments, including callbacks, tags, etc. Returns The evaluation results containing the score or value. Return type dict abstract evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶ Evaluate Chain or LLM output, based on optional input and label. Parameters prediction (str) – the LLM or chain prediction to evaluate. reference (Optional[str], optional) – the reference label to evaluate against. input (Optional[str], optional) – the input to consider during evaluation **kwargs – additional keyword arguments, including callbacks, tags, etc. Returns The evaluation results containing the score or value. Return type dict
[ 5317, 8995, 1770, 24756, 31992, 6567, 90142, 55609, 198, 1058, 8859, 8995, 1770, 24756, 31992, 6567, 90142, 4163, 2164, 11, 3146, 9872, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 25590, 198, 21346, 369, 38663, 9246, 627, 18337, 198, 565, 2381, 69106, 2164, 11, 4194, 334, 9872, 340, 6043, 20216, 34600, 4163, 11, 4194, 70031, 38372, 4194, 1131, 2608, 2170, 55294, 15806, 29625, 477, 445, 11237, 2612, 11, 3196, 389, 10309, 198, 48391, 34600, 4163, 11, 4194, 70031, 38372, 4194, 16690, 11, 4194, 1131, 2608, 83445, 29625, 477, 445, 11237, 2612, 11, 3196, 389, 10309, 1988, 323, 2440, 627, 7847, 264, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 2170, 55294, 15806, 29625, 477, 445, 11237, 2612, 11, 3196, 389, 10309, 1379, 323, 2440, 627, 9905, 198, 70031, 320, 496, 8, 1389, 279, 445, 11237, 477, 8957, 20212, 311, 15806, 627, 16690, 320, 15669, 17752, 1145, 10309, 8, 1389, 279, 5905, 2440, 198, 998, 15806, 2403, 627, 1379, 320, 15669, 17752, 1145, 10309, 8, 1389, 279, 1988, 311, 2980, 2391, 16865, 198, 334, 9872, 1389, 5217, 16570, 6105, 11, 2737, 27777, 11, 9681, 11, 5099, 627, 16851, 198, 791, 16865, 3135, 8649, 279, 5573, 477, 907, 627, 5715, 955, 198, 8644, 198, 16647, 15806, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 83445, 29625, 477, 445, 11237, 2612, 11, 3196, 389, 10309, 1988, 323, 2440, 627, 9905, 198, 70031, 320, 496, 8, 1389, 279, 445, 11237, 477, 8957, 20212, 311, 15806, 627, 16690, 320, 15669, 17752, 1145, 10309, 8, 1389, 279, 5905, 2440, 198, 998, 15806, 2403, 627, 1379, 320, 15669, 17752, 1145, 10309, 8, 1389, 279, 1988, 311, 2980, 2391, 16865, 198, 334, 9872, 1389, 5217, 16570, 6105, 11, 2737, 27777, 11, 9681, 11, 5099, 627, 16851, 198, 791, 16865, 3135, 8649, 279, 5573, 477, 907, 627, 5715, 955, 198, 8644 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.schema.StringEvaluator.html
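Because StringEvaluator is a Protocol, any object with matching evaluate_strings/aevaluate_strings methods conforms; explicit subclassing is optional. A toy implementation (not part of the library) might look like:

.. code-block:: python

    from typing import Any, Optional

    from langchain.evaluation.schema import StringEvaluator

    class ExactMatchEvaluator(StringEvaluator):
        """Toy evaluator: score 1 when the prediction equals the reference."""

        def evaluate_strings(
            self,
            *,
            prediction: str,
            reference: Optional[str] = None,
            input: Optional[str] = None,
            **kwargs: Any,
        ) -> dict:
            return {"score": int(prediction.strip() == (reference or "").strip())}

        async def aevaluate_strings(
            self,
            *,
            prediction: str,
            reference: Optional[str] = None,
            input: Optional[str] = None,
            **kwargs: Any,
        ) -> dict:
            # Delegate to the synchronous implementation.
            return self.evaluate_strings(
                prediction=prediction, reference=reference, input=input, **kwargs
            )

    print(ExactMatchEvaluator().evaluate_strings(prediction="4", reference="4"))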
141955e87608-0
langchain.evaluation.qa.eval_chain.ContextQAEvalChain¶ class langchain.evaluation.qa.eval_chain.ContextQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶ Bases: LLMChain LLM Chain specifically for evaluating QA w/o GT based on context Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs
[ 5317, 8995, 1770, 24756, 11608, 64, 32810, 31683, 9512, 48, 13983, 838, 19368, 55609, 198, 1058, 8859, 8995, 1770, 24756, 11608, 64, 32810, 31683, 9512, 48, 13983, 838, 19368, 4163, 11, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 11, 14008, 25, 1845, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 10137, 25, 5464, 55715, 7423, 11, 9507, 76, 25, 5464, 14126, 1747, 11, 2612, 3173, 25, 610, 284, 364, 1342, 518, 2612, 19024, 25, 5464, 4178, 44, 5207, 6707, 284, 2290, 11, 471, 21333, 18917, 25, 1845, 284, 3082, 11, 9507, 76, 37335, 25, 6587, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 445, 11237, 19368, 198, 4178, 44, 29625, 11951, 369, 38663, 67008, 289, 20886, 12177, 3196, 389, 2317, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 55609, 198, 52444, 11, 1005, 27777, 4619, 627, 913, 27777, 25, 23499, 82, 284, 2290, 55609, 198, 15669, 1160, 315, 4927, 25050, 320, 269, 4927, 6783, 570, 37090, 311, 2290, 627, 7646, 25050, 527, 2663, 6957, 279, 48608, 315, 264, 1650, 311, 264, 8957, 345, 40389, 449, 389, 31683, 5011, 11, 13696, 449, 389, 31683, 6345, 477, 389, 31683, 4188, 627, 4959, 2587, 8957, 649, 46624, 1650, 5217, 4927, 5528, 11, 1518, 23499, 27437, 198, 2000, 2539, 3649, 627, 913, 9507, 76, 25, 5464, 14126, 1747, 510, 8327, 60, 55609, 198, 14126, 1646, 311, 1650, 627, 913, 9507, 76, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 55609, 198, 15669, 5044, 1665, 13, 37090, 311, 2290, 627, 10869, 374, 264, 538, 430, 5334, 2663, 520, 279, 1212, 198, 438, 520, 279, 842, 315, 1475, 8957, 13, 2468, 279, 1212, 11, 5044, 21577, 7482, 323, 16609, 198, 49818, 3235, 304, 279, 8957, 13, 2468, 279, 842, 11, 433, 27024, 904, 6052, 7482, 627, 3947, 527, 1690, 2204, 4595, 315, 5044, 482, 4587, 1518, 5044, 27437 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
141955e87608-1
There are many different types of memory - please see memory docs for the full catalog. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False.
[ 3947, 527, 1690, 2204, 4595, 315, 5044, 482, 4587, 1518, 5044, 27437, 198, 2000, 279, 2539, 16808, 627, 913, 2612, 3173, 25, 610, 284, 364, 1342, 6, 55609, 198, 913, 2612, 19024, 25, 5464, 4178, 44, 5207, 6707, 510, 15669, 60, 55609, 198, 5207, 6871, 311, 1005, 627, 16672, 311, 832, 430, 5097, 279, 1455, 4461, 925, 719, 1587, 539, 2349, 433, 198, 61036, 627, 913, 10137, 25, 5464, 55715, 7423, 510, 8327, 60, 55609, 198, 55715, 1665, 311, 1005, 627, 913, 471, 21333, 18917, 25, 1845, 284, 3082, 55609, 198, 25729, 311, 471, 1193, 279, 1620, 16051, 1121, 13, 37090, 311, 3082, 627, 2746, 905, 11, 690, 471, 264, 15860, 315, 5066, 2038, 922, 279, 9659, 627, 913, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 1160, 315, 9681, 5938, 449, 279, 8957, 13, 37090, 311, 2290, 198, 9673, 9681, 690, 387, 5938, 449, 1855, 1650, 311, 420, 8957, 345, 438, 5946, 439, 6105, 311, 279, 25050, 4613, 304, 27777, 627, 2675, 649, 1005, 1521, 311, 8866, 10765, 264, 3230, 2937, 315, 264, 8957, 449, 1202, 1005, 1162, 627, 913, 14008, 25, 1845, 510, 15669, 60, 55609, 198, 25729, 477, 539, 1629, 304, 14008, 3941, 13, 763, 14008, 3941, 11, 1063, 29539, 18929, 198, 14724, 387, 17124, 311, 279, 2393, 13, 37090, 311, 8859, 8995, 45749, 907, 627, 565, 6797, 3889, 25986, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
141955e87608-2
chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False.
[ 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 7847, 264, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1645, 543, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
141955e87608-3
include_run_info – Whether to include run info in the response. Defaults to False. async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶ async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs.
[ 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609, 198, 7847, 945, 13523, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 9037, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 7847, 1469, 9037, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 55609, 198, 7368, 1469, 9037, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 10200, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
141955e87608-4
Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. evaluate(examples: List[dict], predictions: List[dict], question_key: str = 'query', context_key: str = 'context', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[dict][source]¶ Evaluate question answering examples and predictions. evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict[source]¶
[ 51690, 52032, 505, 11374, 627, 7847, 802, 359, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 3261, 36289, 36621, 76, 5400, 25, 445, 11237, 2122, 8, 11651, 1796, 58, 13755, 17752, 11, 5884, 5163, 55609, 198, 4110, 16674, 505, 2077, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 8957, 627, 48391, 5580, 4112, 25, 1796, 58, 8644, 1145, 20492, 25, 1796, 58, 8644, 1145, 3488, 3173, 25, 610, 284, 364, 1663, 518, 2317, 3173, 25, 610, 284, 364, 2196, 518, 20212, 3173, 25, 610, 284, 364, 1407, 518, 12039, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 8644, 1483, 2484, 60, 55609, 198, 83445, 3488, 36864, 10507, 323, 20492, 627, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 76747, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
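A sketch of the batch evaluate() method documented above. The grading model, questions, contexts and answers are illustrative; the key names follow the documented defaults:

.. code-block:: python

    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.qa.eval_chain import ContextQAEvalChain

    eval_chain = ContextQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))
    examples = [
        {"query": "Who wrote the report?", "context": "The report was written by Alice."}
    ]
    predictions = [{"result": "Alice wrote it."}]
    graded = eval_chain.evaluate(
        examples,
        predictions,
        question_key="query",
        context_key="context",
        prediction_key="result",
    )
    print(graded[0])  # grading text stored under the chain's output key ('text' by default)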
141955e87608-5
classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\n\nExample Format:\nQUESTION: question here\nCONTEXT: context the question is about here\nSTUDENT ANSWER: student's answer here\nGRADE: CORRECT or INCORRECT here\n\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n\nQUESTION: {query}\nCONTEXT: {context}\nSTUDENT ANSWER: {result}\nGRADE:", template_format='f-string', validate_template=True), **kwargs: Any) → ContextQAEvalChain[source]¶ Load QA Eval Chain from LLM. Parameters llm (BaseLanguageModel) – the base language model to use. prompt (PromptTemplate, optional) – A prompt template containing the input_variables 'query', 'context' and 'result' that will be used as the prompt for evaluation. Defaults to PROMPT. **kwargs – additional keyword arguments. Returns the loaded QA eval chain. Return type ContextQAEvalChain classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template.
[ 27853, 505, 44095, 76, 36621, 76, 25, 5464, 14126, 1747, 11, 10137, 25, 60601, 7423, 284, 60601, 7423, 5498, 29282, 14314, 1663, 518, 364, 2196, 518, 364, 1407, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 3896, 429, 2675, 527, 264, 11326, 66288, 264, 28223, 7255, 77, 2675, 527, 2728, 264, 3488, 11, 279, 2317, 279, 3488, 374, 922, 11, 323, 279, 5575, 596, 4320, 13, 1472, 527, 4691, 311, 5573, 279, 5575, 596, 4320, 439, 3060, 27563, 45940, 477, 18610, 878, 45940, 11, 3196, 389, 279, 2317, 7255, 77, 1734, 13617, 15392, 7338, 77, 53528, 25, 3488, 1618, 1734, 99465, 25, 2317, 279, 3488, 374, 922, 1618, 1734, 790, 82050, 97804, 643, 25, 5575, 596, 4320, 1618, 1734, 80391, 25, 27563, 45940, 477, 18610, 878, 45940, 1618, 1734, 1734, 36457, 279, 5575, 11503, 3196, 27785, 389, 872, 61001, 13708, 13, 40071, 12062, 304, 62603, 323, 1343, 97578, 1990, 279, 5575, 4320, 323, 837, 4320, 13, 1102, 374, 10619, 422, 279, 5575, 4320, 5727, 810, 2038, 1109, 279, 837, 4320, 11, 439, 1317, 439, 433, 1587, 539, 6782, 904, 52133, 12518, 13, 19110, 0, 1144, 77, 1734, 53528, 25, 314, 1663, 11281, 77, 99465, 25, 314, 2196, 11281, 77, 790, 82050, 97804, 643, 25, 314, 1407, 11281, 77, 80391, 12421, 3896, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 3146, 9872, 25, 5884, 8, 11651, 9805, 48, 13983, 838, 19368, 76747, 60, 55609, 198, 6003, 67008, 59339, 29625, 505, 445, 11237, 627, 9905, 198, 657, 76, 320, 4066, 14126, 1747, 8, 1389, 279, 2385, 4221, 1646, 311, 1005, 627, 41681, 4417, 2196, 6, 323, 364, 1407, 6, 430, 690, 387, 1511, 439, 279, 8, 1389, 362, 10137, 3896, 8649, 279, 1988, 29282, 512, 6, 1663, 6, 1389, 720, 41681, 1389, 720, 75114, 13, 320, 2000, 8, 1389, 720, 47, 3442, 2898, 13, 320, 16672, 311, 8, 1389, 720, 334, 9872, 1389, 5217, 16570, 6105, 627, 16851, 198, 1820, 6799, 67008, 5720, 8957, 627, 5715, 955, 198, 2014, 48, 13983, 838, 19368, 198, 27853, 505, 3991, 36621, 76, 25, 5464, 14126, 1747, 11, 3896, 25, 610, 8, 11651, 445, 11237, 19368, 55609, 198, 4110, 445, 11237, 19368, 505, 445, 11237, 323, 3896, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
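A minimal sketch of how from_llm and the inherited evaluate method fit together; the ChatOpenAI model and the sample example/prediction dicts are illustrative assumptions, not taken from the documentation above.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import ContextQAEvalChain

# Build the grader from any BaseLanguageModel; the default PROMPT expects
# 'query', 'context' and 'result' input variables.
eval_chain = ContextQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

examples = [{"query": "Where is the Eiffel Tower?",
             "context": "The Eiffel Tower is in Paris."}]
predictions = [{"result": "It is in Paris."}]

# evaluate() grades each (example, prediction) pair and returns one dict per pair.
graded = eval_chain.evaluate(
    examples, predictions,
    question_key="query", context_key="context", prediction_key="result",
)
print(graded[0])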
141955e87608-6
Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prep inputs. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prep outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. save(file_path: Union[Path, str]) → None¶ Save the chain. Parameters
[ 4110, 445, 11237, 19368, 505, 445, 11237, 323, 3896, 627, 19927, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 35798, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 35798, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 5884, 5163, 55609, 198, 7368, 7168, 323, 1243, 4820, 279, 3135, 627, 72874, 29657, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 2526, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 11374, 627, 72874, 36289, 35099, 25, 30226, 17752, 11, 610, 1145, 16674, 25, 30226, 17752, 11, 610, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 16674, 627, 72874, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 627, 16503, 4933, 2310, 70693, 4194, 8345, 4194, 682, 5151, 55609, 198, 94201, 409, 70693, 10163, 422, 4927, 12418, 374, 1511, 627, 6236, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 8957, 627, 9905 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
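Because the eval chain is an LLMChain, predict() simply formats the grading prompt from keyword arguments and returns the raw completion; the model and inputs below are illustrative.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import ContextQAEvalChain

eval_chain = ContextQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

# The kwargs fill the prompt's input variables and the completion is returned as-is.
verdict = eval_chain.predict(
    query="Where is the Eiffel Tower?",
    context="The Eiffel Tower is in Paris.",
    result="It is in Paris.",
)
print(verdict)  # typically ends with "GRADE: CORRECT" or "GRADE: INCORRECT"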
141955e87608-7
Save the chain. Parameters file_path – Path to file to save the chain to. Example: chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
[ 8960, 279, 8957, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 1052, 311, 3665, 279, 8957, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 8995, 5799, 4971, 2703, 45221, 2398, 14, 8995, 34506, 863, 340, 16503, 743, 69021, 4194, 8345, 4194, 14008, 55609, 198, 2746, 14008, 374, 2290, 11, 743, 433, 627, 2028, 6276, 3932, 311, 1522, 304, 2290, 439, 14008, 311, 2680, 279, 3728, 6376, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609, 198, 15824, 284, 364, 2000, 21301, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.ContextQAEvalChain.html
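The serialization helpers above can be exercised directly on the chain; a sketch assuming the chain type serializes cleanly and an arbitrary file name.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import ContextQAEvalChain

eval_chain = ContextQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

# save() writes the chain configuration to .json or .yaml, chosen by file extension.
eval_chain.save(file_path="qa_eval_chain.yaml")

# to_json() returns the serialized (or not-implemented) representation in memory.
print(eval_chain.to_json())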
ac6a968d0a27-0
langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser¶ class langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser[source]¶ Bases: BaseOutputParser[dict] A parser for the output of the CriteriaEvalChain. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → Any[source]¶ Parse the output text. Parameters text (str) – The output text to parse. Returns The parsed output. Return type Any parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids.
[ 5317, 8995, 1770, 24756, 90307, 32810, 31683, 93689, 2122, 5207, 6707, 55609, 198, 1058, 8859, 8995, 1770, 24756, 90307, 32810, 31683, 93689, 2122, 5207, 6707, 76747, 60, 55609, 198, 33, 2315, 25, 5464, 5207, 6707, 58, 8644, 933, 32, 6871, 369, 279, 2612, 315, 279, 14577, 55569, 19368, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 2612, 6871, 627, 456, 9132, 83527, 368, 11651, 610, 55609, 198, 56391, 389, 1268, 279, 445, 11237, 2612, 1288, 387, 24001, 627, 6534, 7383, 25, 610, 8, 11651, 5884, 76747, 60, 55609, 198, 14802, 279, 2612, 1495, 627, 9905, 198, 1342, 320, 496, 8, 1389, 578, 2612, 1495, 311, 4820, 627, 16851, 198, 791, 16051, 2612, 627, 5715, 955, 198, 8780, 198, 6534, 5400, 4556, 25, 1796, 58, 38238, 2526, 11651, 350, 55609, 198, 14802, 445, 11237, 5832, 627, 6534, 6753, 62521, 91868, 25, 610, 11, 10137, 25, 60601, 1150, 8, 11651, 5884, 55609, 198, 15669, 1749, 311, 4820, 279, 2612, 315, 459, 445, 11237, 1650, 449, 264, 10137, 627, 791, 10137, 374, 14090, 3984, 304, 279, 1567, 279, 9442, 6707, 6944, 198, 998, 23515, 477, 5155, 279, 2612, 304, 1063, 1648, 11, 323, 3966, 2038, 505, 198, 1820, 10137, 311, 656, 779, 627, 9905, 198, 44412, 1389, 2612, 315, 4221, 1646, 198, 41681, 1389, 10137, 907, 198, 16851, 198, 52243, 2612, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser.html
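A small sketch of using the parser on its own; the sample text assumes the grader ends its completion with a standalone verdict line, and the structure of the returned dict is left to the implementation rather than asserted here.

from langchain.evaluation.criteria.eval_chain import CriteriaResultOutputParser

parser = CriteriaResultOutputParser()

# Illustrative CriteriaEvalChain output: free-form reasoning, then a final verdict line.
sample = "The submission answers the question directly and concisely.\nY"
print(parser.parse(sample))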
ac6a968d0a27-1
Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.criteria.eval_chain.CriteriaResultOutputParser.html
98784a090f9f-0
langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser¶ class langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser(*, eval_chain_output_key: str = 'text', evaluation_name: str, choices_map: Optional[Dict[str, int]] = None)[source]¶ Bases: RunEvaluatorOutputParser Parse a feedback run with optional choices. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param choices_map: Optional[Dict[str, int]] = None¶ param eval_chain_output_key: str = 'text'¶ param evaluation_name: str [Required]¶ dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → EvaluationResult[source]¶ Parse the last line of the text and return an evaluation result. parse_chain_output(output: Dict[str, Any]) → EvaluationResult¶ Parse the output of a run. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 90157, 1238, 5207, 6707, 55609, 198, 1058, 8859, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 90157, 1238, 5207, 6707, 4163, 11, 5720, 31683, 7800, 3173, 25, 610, 284, 364, 1342, 518, 16865, 1292, 25, 610, 11, 11709, 5489, 25, 12536, 58, 13755, 17752, 11, 528, 5163, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 6588, 90142, 5207, 6707, 198, 14802, 264, 11302, 1629, 449, 10309, 11709, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 11709, 5489, 25, 12536, 58, 13755, 17752, 11, 528, 5163, 284, 2290, 55609, 198, 913, 5720, 31683, 7800, 3173, 25, 610, 284, 364, 1342, 6, 55609, 198, 913, 16865, 1292, 25, 610, 510, 8327, 60, 55609, 198, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 2612, 6871, 627, 456, 9132, 83527, 368, 11651, 610, 55609, 198, 56391, 389, 1268, 279, 445, 11237, 2612, 1288, 387, 24001, 627, 6534, 7383, 25, 610, 8, 11651, 40388, 2122, 76747, 60, 55609, 198, 14802, 279, 1566, 1584, 315, 279, 1495, 323, 471, 459, 16865, 1121, 627, 6534, 31683, 7800, 11304, 25, 30226, 17752, 11, 5884, 2526, 11651, 40388, 2122, 55609, 198, 14802, 279, 2612, 315, 264, 1629, 627, 6534, 5400, 4556, 25, 1796, 58, 38238, 2526, 11651, 350, 55609, 198, 14802, 445, 11237, 5832, 627, 6534, 6753, 62521, 91868, 25, 610, 11, 10137, 25, 60601, 1150, 8, 11651, 5884, 55609, 198, 15669, 1749, 311, 4820, 279, 2612, 315, 459, 445, 11237, 1650, 449, 264, 10137, 627, 791, 10137, 374, 14090, 3984, 304, 279, 1567, 279, 9442, 6707, 6944, 198, 998, 23515, 477, 5155, 279, 2612, 304, 1063, 1648, 11, 323, 3966, 2038, 505, 198, 1820, 10137, 311, 656, 779, 627, 9905, 198, 44412, 1389, 2612, 315, 4221, 1646, 198, 41681, 1389, 10137, 907, 198, 16851, 198, 52243, 2612, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser.html
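A sketch of the parser in isolation; the evaluation name and choices map are illustrative, and the assumption is that the last line of the feedback text is looked up in choices_map to produce the score of the returned EvaluationResult.

from langchain.evaluation.run_evaluators.implementations import ChoicesOutputParser

parser = ChoicesOutputParser(
    evaluation_name="correctness",                 # illustrative feedback key
    choices_map={"CORRECT": 1, "INCORRECT": 0},    # verdict text -> numeric score
)

feedback = "The answer matches the reference context.\nCORRECT"
print(parser.parse(feedback))  # an EvaluationResult built from the last line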
98784a090f9f-1
property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.ChoicesOutputParser.html
0cd4c115b6d1-0
langchain.evaluation.loading.load_dataset¶ langchain.evaluation.loading.load_dataset(uri: str) → List[Dict][source]¶
[ 5317, 8995, 1770, 24756, 25908, 5214, 19536, 55609, 198, 5317, 8995, 1770, 24756, 25908, 5214, 19536, 26882, 25, 610, 8, 11651, 1796, 58, 13755, 1483, 2484, 60, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.loading.load_dataset.html
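The signature above is all the page gives; a sketch under the assumption that the uri names one of the public LangChainDatasets evaluation sets and that the optional `datasets` dependency is installed.

from langchain.evaluation.loading import load_dataset

# Returns the dataset as a list of plain dicts, one per example.
examples = load_dataset("question-answering-state-of-the-union")  # assumed dataset name
print(len(examples), list(examples[0].keys()))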
46675ab41cb2-0
langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser¶ class langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser[source]¶ Bases: BaseOutputParser Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. dict(**kwargs: Any) → Dict¶ Return dictionary representation of output parser. get_format_instructions() → str¶ Instructions on how the LLM output should be formatted. parse(text: str) → TrajectoryEval[source]¶ Parse the output text and extract the score and reasoning. Parameters text (str) – The output text to parse. Returns A named tuple containing the score and reasoning. Return type TrajectoryEval Raises OutputParserException – If the score is not found in the output text or if the score is not a digit in the range 1-5. parse_result(result: List[Generation]) → T¶ Parse LLM Result. parse_with_prompt(completion: str, prompt: PromptValue) → Any¶ Optional method to parse the output of an LLM call with a prompt. The prompt is largely provided in the event the OutputParser wants to retry or fix the output in some way, and needs information from the prompt to do so. Parameters completion – output of language model prompt – prompt value Returns structured output to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object.
[ 5317, 8995, 1770, 24756, 29192, 812, 13, 84887, 22001, 31683, 844, 969, 24251, 5207, 6707, 55609, 198, 1058, 8859, 8995, 1770, 24756, 29192, 812, 13, 84887, 22001, 31683, 844, 969, 24251, 5207, 6707, 76747, 60, 55609, 198, 33, 2315, 25, 5464, 5207, 6707, 198, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 2612, 6871, 627, 456, 9132, 83527, 368, 11651, 610, 55609, 198, 56391, 389, 1268, 279, 445, 11237, 2612, 1288, 387, 24001, 627, 6534, 7383, 25, 610, 8, 11651, 17747, 24251, 55569, 76747, 60, 55609, 198, 14802, 279, 2612, 1495, 323, 8819, 279, 5573, 323, 33811, 627, 9905, 198, 1342, 320, 496, 8, 1389, 578, 2612, 1495, 311, 4820, 627, 16851, 198, 32, 7086, 14743, 8649, 279, 5573, 323, 33811, 627, 5715, 955, 198, 49238, 24251, 55569, 198, 36120, 198, 5207, 6707, 1378, 1389, 1442, 279, 5573, 374, 539, 1766, 304, 279, 2612, 1495, 477, 198, 262, 422, 279, 5573, 374, 539, 264, 16099, 304, 279, 2134, 220, 16, 12, 20, 627, 6534, 5400, 4556, 25, 1796, 58, 38238, 2526, 11651, 350, 55609, 198, 14802, 445, 11237, 5832, 627, 6534, 6753, 62521, 91868, 25, 610, 11, 10137, 25, 60601, 1150, 8, 11651, 5884, 55609, 198, 15669, 1749, 311, 4820, 279, 2612, 315, 459, 445, 11237, 1650, 449, 264, 10137, 627, 791, 10137, 374, 14090, 3984, 304, 279, 1567, 279, 9442, 6707, 6944, 198, 998, 23515, 477, 5155, 279, 2612, 304, 1063, 1648, 11, 323, 3966, 2038, 505, 198, 1820, 10137, 311, 656, 779, 627, 9905, 198, 44412, 1389, 2612, 315, 4221, 1646, 198, 41681, 1389, 10137, 907, 198, 16851, 198, 52243, 2612, 198, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html
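A sketch of the parser on a hand-written grading text; it assumes the grader finishes with a "Score: N" line (N in 1-5), and the field names follow the named-tuple description above.

from langchain.evaluation.agents.trajectory_eval_chain import TrajectoryOutputParser

parser = TrajectoryOutputParser()

# Illustrative grader output; a missing or out-of-range score raises OutputParserException.
text = "The agent chose the search tool appropriately and answered correctly.\nScore: 4"
evaluation = parser.parse(text)
print(evaluation.score, evaluation.reasoning)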
46675ab41cb2-1
property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object extra = 'ignore'¶
[ 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 15824, 284, 364, 13431, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.agents.trajectory_eval_chain.TrajectoryOutputParser.html
0583a5f506bf-0
langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper¶ class langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper(*, tool_descriptions: List[str], agent_input_key: str = 'input', agent_output_key: str = 'output', tool_input_key: str = 'input', tool_output_key: str = 'output')[source]¶ Bases: RunEvaluatorInputMapper, BaseModel Maps the Run and Optional[Example] to a dictionary. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param agent_input_key: str = 'input'¶ The key to load from the agent executor’s run input dictionary. param agent_output_key: str = 'output'¶ The key to load from the agent executor’s run output dictionary. param tool_descriptions: List[str] [Required]¶ The descriptions for each of the tools available to the agent. param tool_input_key: str = 'input'¶ The key to load from the tool executor’s run input dictionary. param tool_output_key: str = 'output'¶ The key to load from the tool executor’s run output dictionary. map(run: Run, example: Optional[Example] = None) → Dict[str, str][source]¶ Maps the Run and Optional[Example] to a dictionary
[ 5317, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 844, 969, 24251, 2566, 11232, 55609, 198, 1058, 8859, 8995, 1770, 24756, 7789, 22001, 84, 3046, 12322, 2833, 811, 844, 969, 24251, 2566, 11232, 4163, 11, 5507, 16145, 25712, 25, 1796, 17752, 1145, 8479, 6022, 3173, 25, 610, 284, 364, 1379, 518, 8479, 7800, 3173, 25, 610, 284, 364, 3081, 518, 5507, 6022, 3173, 25, 610, 284, 364, 1379, 518, 5507, 7800, 3173, 25, 610, 284, 364, 3081, 13588, 2484, 60, 55609, 198, 33, 2315, 25, 6588, 90142, 2566, 11232, 11, 65705, 198, 37662, 279, 6588, 323, 12536, 58, 13617, 60, 311, 264, 11240, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 8479, 6022, 3173, 25, 610, 284, 364, 1379, 6, 55609, 198, 791, 1401, 311, 2865, 505, 279, 8479, 32658, 753, 1629, 1988, 11240, 627, 913, 8479, 7800, 3173, 25, 610, 284, 364, 3081, 6, 55609, 198, 791, 1401, 311, 2865, 505, 279, 8479, 32658, 753, 1629, 2612, 11240, 627, 913, 5507, 16145, 25712, 25, 1796, 17752, 60, 510, 8327, 60, 55609, 198, 791, 28887, 369, 1855, 315, 279, 7526, 2561, 311, 279, 8479, 627, 913, 5507, 6022, 3173, 25, 610, 284, 364, 1379, 6, 55609, 198, 791, 1401, 311, 2865, 505, 279, 5507, 32658, 753, 1629, 1988, 11240, 627, 913, 5507, 7800, 3173, 25, 610, 284, 364, 3081, 6, 55609, 198, 791, 1401, 311, 2865, 505, 279, 5507, 32658, 753, 1629, 2612, 11240, 627, 2235, 39872, 25, 6588, 11, 3187, 25, 12536, 58, 13617, 60, 284, 2290, 8, 11651, 30226, 17752, 11, 610, 1483, 2484, 60, 55609, 198, 37662, 279, 6588, 323, 12536, 58, 13617, 60, 311, 264, 11240 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.run_evaluators.implementations.TrajectoryInputMapper.html
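A construction-only sketch; the tool description is illustrative, and mapping requires a traced agent Run (and optionally a dataset Example) from the tracing backend, which is not reproduced here.

from langchain.evaluation.run_evaluators.implementations import TrajectoryInputMapper

mapper = TrajectoryInputMapper(
    tool_descriptions=[
        "search: useful for answering questions about current events",  # illustrative
    ],
    agent_input_key="input",    # documented defaults, spelled out for clarity
    agent_output_key="output",
)
# mapper.map(run, example) would return the dict fed to the trajectory eval chain.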
d87b7e23999a-0
langchain.evaluation.qa.eval_chain.CotQAEvalChain¶ class langchain.evaluation.qa.eval_chain.CotQAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶ Bases: ContextQAEvalChain LLM Chain specifically for evaluating QA using chain of thought reasoning. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables.
[ 5317, 8995, 1770, 24756, 11608, 64, 32810, 31683, 732, 354, 48, 13983, 838, 19368, 55609, 198, 1058, 8859, 8995, 1770, 24756, 11608, 64, 32810, 31683, 732, 354, 48, 13983, 838, 19368, 4163, 11, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 11, 14008, 25, 1845, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 10137, 25, 5464, 55715, 7423, 11, 9507, 76, 25, 5464, 14126, 1747, 11, 2612, 3173, 25, 610, 284, 364, 1342, 518, 2612, 19024, 25, 5464, 4178, 44, 5207, 6707, 284, 2290, 11, 471, 21333, 18917, 25, 1845, 284, 3082, 11, 9507, 76, 37335, 25, 6587, 284, 2290, 6758, 2484, 60, 55609, 198, 33, 2315, 25, 9805, 48, 13983, 838, 19368, 198, 4178, 44, 29625, 11951, 369, 38663, 67008, 1701, 8957, 315, 3463, 33811, 627, 4110, 264, 502, 1646, 555, 23115, 323, 69772, 1988, 828, 505, 16570, 6105, 627, 36120, 54129, 422, 279, 1988, 828, 4250, 387, 16051, 311, 1376, 264, 2764, 1646, 627, 913, 4927, 12418, 25, 12536, 58, 4066, 7646, 2087, 60, 284, 2290, 55609, 198, 52444, 11, 1005, 27777, 4619, 627, 913, 27777, 25, 23499, 82, 284, 2290, 55609, 198, 15669, 1160, 315, 4927, 25050, 320, 269, 4927, 6783, 570, 37090, 311, 2290, 627, 7646, 25050, 527, 2663, 6957, 279, 48608, 315, 264, 1650, 311, 264, 8957, 345, 40389, 449, 389, 31683, 5011, 11, 13696, 449, 389, 31683, 6345, 477, 389, 31683, 4188, 627, 4959, 2587, 8957, 649, 46624, 1650, 5217, 4927, 5528, 11, 1518, 23499, 27437, 198, 2000, 2539, 3649, 627, 913, 9507, 76, 25, 5464, 14126, 1747, 510, 8327, 60, 55609, 198, 14126, 1646, 311, 1650, 627, 913, 9507, 76, 37335, 25, 6587, 510, 15669, 60, 55609, 198, 913, 5044, 25, 12536, 58, 4066, 10869, 60, 284, 2290, 55609, 198, 15669, 5044, 1665, 13, 37090, 311, 2290, 627, 10869, 374, 264, 538, 430, 5334, 2663, 520, 279, 1212, 198, 438, 520, 279, 842, 315, 1475, 8957, 13, 2468, 279, 1212, 11, 5044, 21577, 7482, 323, 16609, 198, 49818, 3235, 304, 279, 8957, 13, 2468, 279, 842, 11, 433, 27024, 904, 6052, 7482, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
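The chain-of-thought grader is built the same way as its parent; the model and data below are the same illustrative assumptions used earlier.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain

cot_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

examples = [{"query": "Where is the Eiffel Tower?",
             "context": "The Eiffel Tower is in Paris."}]
predictions = [{"result": "It is in Paris."}]

# Same evaluate() signature; the prompt additionally asks for step-by-step reasoning.
graded = cot_chain.evaluate(examples, predictions)
print(graded[0])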
d87b7e23999a-1
them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog. param output_key: str = 'text'¶ param output_parser: BaseLLMOutputParser [Optional]¶ Output parser to use. Defaults to one that takes the most likely string but does not change it otherwise. param prompt: BasePromptTemplate [Required]¶ Prompt object to use. param return_final_only: bool = True¶ Whether to return only the final parsed result. Defaults to True. If false, will return a bunch of extra information about the generation. param tags: Optional[List[str]] = None¶ Optional list of tags associated with the chain. Defaults to None These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to eg identify a specific instance of a chain with its use case. param verbose: bool [Optional]¶ Whether or not run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to langchain.verbose value. __call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this
[ 49818, 3235, 304, 279, 8957, 13, 2468, 279, 842, 11, 433, 27024, 904, 6052, 7482, 627, 3947, 527, 1690, 2204, 4595, 315, 5044, 482, 4587, 1518, 5044, 27437, 198, 2000, 279, 2539, 16808, 627, 913, 2612, 3173, 25, 610, 284, 364, 1342, 6, 55609, 198, 913, 2612, 19024, 25, 5464, 4178, 44, 5207, 6707, 510, 15669, 60, 55609, 198, 5207, 6871, 311, 1005, 627, 16672, 311, 832, 430, 5097, 279, 1455, 4461, 925, 719, 1587, 539, 2349, 433, 198, 61036, 627, 913, 10137, 25, 5464, 55715, 7423, 510, 8327, 60, 55609, 198, 55715, 1665, 311, 1005, 627, 913, 471, 21333, 18917, 25, 1845, 284, 3082, 55609, 198, 25729, 311, 471, 1193, 279, 1620, 16051, 1121, 13, 37090, 311, 3082, 627, 2746, 905, 11, 690, 471, 264, 15860, 315, 5066, 2038, 922, 279, 9659, 627, 913, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 55609, 198, 15669, 1160, 315, 9681, 5938, 449, 279, 8957, 13, 37090, 311, 2290, 198, 9673, 9681, 690, 387, 5938, 449, 1855, 1650, 311, 420, 8957, 345, 438, 5946, 439, 6105, 311, 279, 25050, 4613, 304, 27777, 627, 2675, 649, 1005, 1521, 311, 8866, 10765, 264, 3230, 2937, 315, 264, 8957, 449, 1202, 1005, 1162, 627, 913, 14008, 25, 1845, 510, 15669, 60, 55609, 198, 25729, 477, 539, 1629, 304, 14008, 3941, 13, 763, 14008, 3941, 11, 1063, 29539, 18929, 198, 14724, 387, 17124, 311, 279, 2393, 13, 37090, 311, 8859, 8995, 45749, 907, 627, 565, 6797, 3889, 25986, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
d87b7e23999a-2
returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False. async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶ Run the logic of this chain and add to output if desired. Parameters inputs – Dictionary of inputs, or single input if chain expects only one param. return_only_outputs – boolean for whether to return only outputs in the response. If True, only new keys generated by this chain will be returned. If False, both input keys and new keys generated by this chain will be returned. Defaults to False. callbacks – Callbacks to use for this chain run. If not provided, will use the callbacks provided to the chain. include_run_info – Whether to include run info in the response. Defaults to False.
[ 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 7847, 264, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1645, 543, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 12039, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 2997, 14334, 3186, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 5884, 60, 55609, 198, 6869, 279, 12496, 315, 420, 8957, 323, 923, 311, 2612, 422, 12974, 627, 9905, 198, 25986, 1389, 10685, 315, 11374, 11, 477, 3254, 1988, 422, 8957, 25283, 198, 3323, 832, 1719, 627, 693, 18917, 36289, 1389, 2777, 369, 3508, 311, 471, 1193, 16674, 304, 279, 198, 2376, 13, 1442, 3082, 11, 1193, 502, 7039, 8066, 555, 420, 8957, 690, 387, 198, 78691, 13, 1442, 3641, 11, 2225, 1988, 7039, 323, 502, 7039, 8066, 555, 420, 198, 8995, 690, 387, 6052, 13, 37090, 311, 3641, 627, 69411, 1389, 23499, 82, 311, 1005, 369, 420, 8957, 1629, 13, 1442, 539, 3984, 11, 690, 198, 817, 279, 27777, 3984, 311, 279, 8957, 627, 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
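The async entry points mirror the sync ones; a minimal sketch using acall with the documented inputs dict, again with an illustrative model and inputs.

import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain

cot_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

async def grade_one() -> dict:
    # acall() takes the same inputs dict as __call__ and returns the output dict.
    return await cot_chain.acall(
        {"query": "Where is the Eiffel Tower?",
         "context": "The Eiffel Tower is in Paris.",
         "result": "It is in Paris."},
        return_only_outputs=True,
    )

print(asyncio.run(grade_one()))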
d87b7e23999a-3
include_run_info – Whether to include run info in the response. Defaults to False. async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶ async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶ Utilize the LLM generate method for speed gains. apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶ Call apply and then parse the results. async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶ Call apredict and then parse the results. async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs.
[ 1012, 14334, 3186, 1389, 13440, 311, 2997, 1629, 3630, 304, 279, 2077, 13, 37090, 198, 998, 3641, 627, 7847, 264, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 55609, 198, 7847, 945, 13523, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 10492, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 13755, 17752, 11, 610, 5163, 55609, 198, 2810, 553, 279, 445, 11237, 7068, 1749, 369, 4732, 20192, 627, 10492, 8543, 21715, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 29971, 58, 33758, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 60, 55609, 198, 7368, 3881, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 9037, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 7847, 1469, 9037, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 610, 5163, 55609, 198, 7368, 1469, 9037, 323, 1243, 4820, 279, 3135, 627, 7847, 1469, 10200, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 6662, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 13 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
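apredict is the async twin of predict; a short sketch with the same illustrative model and inputs.

import asyncio

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain

cot_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

# Keyword arguments fill the grading prompt, exactly as with predict().
verdict = asyncio.run(
    cot_chain.apredict(
        query="Where is the Eiffel Tower?",
        context="The Eiffel Tower is in Paris.",
        result="It is in Paris.",
    )
)
print(verdict)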
d87b7e23999a-4
Prepare prompts from inputs. async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶ Run the chain as text in, text out or multiple variables, text out. create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶ Create outputs from response. dict(**kwargs: Any) → Dict¶ Return dictionary representation of chain. evaluate(examples: List[dict], predictions: List[dict], question_key: str = 'query', context_key: str = 'context', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[dict]¶ Evaluate question answering examples and predictions. evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, **kwargs: Any) → dict¶
[ 51690, 52032, 505, 11374, 627, 7847, 802, 359, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 3261, 36289, 36621, 76, 5400, 25, 445, 11237, 2122, 8, 11651, 1796, 58, 13755, 17752, 11, 5884, 5163, 55609, 198, 4110, 16674, 505, 2077, 627, 8644, 22551, 9872, 25, 5884, 8, 11651, 30226, 55609, 198, 5715, 11240, 13340, 315, 8957, 627, 48391, 5580, 4112, 25, 1796, 58, 8644, 1145, 20492, 25, 1796, 58, 8644, 1145, 3488, 3173, 25, 610, 284, 364, 1663, 518, 2317, 3173, 25, 610, 284, 364, 2196, 518, 20212, 3173, 25, 610, 284, 364, 1407, 518, 12039, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 8, 11651, 1796, 58, 8644, 60, 55609, 198, 83445, 3488, 36864, 10507, 323, 20492, 627, 48391, 34600, 4163, 11, 20212, 25, 610, 11, 5905, 25, 12536, 17752, 60, 284, 2290, 11, 1988, 25, 12536, 17752, 60, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 6587, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
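evaluate_strings is the generic string-evaluator entry point; the page only gives its signature, so the mapping below (input as the question, reference as the context, prediction as the answer being graded) is an assumption spelled out in the comments.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain

cot_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0))

graded = cot_chain.evaluate_strings(
    prediction="It is in Paris.",               # answer being graded
    input="Where is the Eiffel Tower?",         # original question (assumed mapping)
    reference="The Eiffel Tower is in Paris.",  # treated as the grading context (assumed)
)
print(graded)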
d87b7e23999a-5
classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'context', 'result'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the context the question is about, and the student's answer. You are asked to score the student's answer as either CORRECT or INCORRECT, based on the context.\nWrite out in a step by step manner your reasoning to be sure that your conclusion is correct. Avoid simply stating the correct answer at the outset.\n\nExample Format:\nQUESTION: question here\nCONTEXT: context the question is about here\nSTUDENT ANSWER: student's answer here\nEXPLANATION: step by step reasoning here\nGRADE: CORRECT or INCORRECT here\n\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n\nQUESTION: {query}\nCONTEXT: {context}\nSTUDENT ANSWER: {result}\nEXPLANATION:", template_format='f-string', validate_template=True), **kwargs: Any) → CotQAEvalChain[source]¶ Load QA Eval Chain from LLM. Parameters llm (BaseLanguageModel) – the base language model to use. prompt (PromptTemplate) – A prompt template containing the input_variables 'query', 'context' and 'result' that will be used as the prompt for evaluation. Defaults to PROMPT. **kwargs – additional keyword arguments. Returns the loaded QA eval chain. Return type ContextQAEvalChain
[ 27853, 505, 44095, 76, 36621, 76, 25, 5464, 14126, 1747, 11, 10137, 25, 60601, 7423, 284, 60601, 7423, 5498, 29282, 14314, 1663, 518, 364, 2196, 518, 364, 1407, 4181, 2612, 19024, 5980, 11, 7276, 29282, 68525, 3896, 429, 2675, 527, 264, 11326, 66288, 264, 28223, 7255, 77, 2675, 527, 2728, 264, 3488, 11, 279, 2317, 279, 3488, 374, 922, 11, 323, 279, 5575, 596, 4320, 13, 1472, 527, 4691, 311, 5573, 279, 5575, 596, 4320, 439, 3060, 27563, 45940, 477, 18610, 878, 45940, 11, 3196, 389, 279, 2317, 7255, 77, 8144, 704, 304, 264, 3094, 555, 3094, 11827, 701, 33811, 311, 387, 2771, 430, 701, 17102, 374, 4495, 13, 35106, 5042, 28898, 279, 4495, 4320, 520, 279, 68864, 7255, 77, 1734, 13617, 15392, 7338, 77, 53528, 25, 3488, 1618, 1734, 99465, 25, 2317, 279, 3488, 374, 922, 1618, 1734, 790, 82050, 97804, 643, 25, 5575, 596, 4320, 1618, 1734, 3337, 95179, 3579, 25, 3094, 555, 3094, 33811, 1618, 1734, 80391, 25, 27563, 45940, 477, 18610, 878, 45940, 1618, 1734, 1734, 36457, 279, 5575, 11503, 3196, 27785, 389, 872, 61001, 13708, 13, 40071, 12062, 304, 62603, 323, 1343, 97578, 1990, 279, 5575, 4320, 323, 837, 4320, 13, 1102, 374, 10619, 422, 279, 5575, 4320, 5727, 810, 2038, 1109, 279, 837, 4320, 11, 439, 1317, 439, 433, 1587, 539, 6782, 904, 52133, 12518, 13, 19110, 0, 1144, 77, 1734, 53528, 25, 314, 1663, 11281, 77, 99465, 25, 314, 2196, 11281, 77, 790, 82050, 97804, 643, 25, 314, 1407, 11281, 77, 3337, 95179, 3579, 12421, 3896, 9132, 1151, 69, 31981, 518, 9788, 8864, 3702, 705, 3146, 9872, 25, 5884, 8, 11651, 68825, 48, 13983, 838, 19368, 76747, 60, 55609, 198, 6003, 67008, 59339, 29625, 505, 445, 11237, 627, 9905, 198, 657, 76, 320, 4066, 14126, 1747, 8, 1389, 279, 2385, 4221, 1646, 311, 1005, 627, 41681, 4417, 2196, 6, 323, 364, 1407, 6, 430, 690, 387, 1511, 439, 279, 8, 1389, 362, 10137, 3896, 8649, 279, 1988, 29282, 512, 6, 1663, 6, 1389, 720, 41681, 1389, 720, 75114, 13, 320, 2000, 8, 1389, 720, 47, 3442, 2898, 13, 320, 16672, 311, 8, 1389, 720, 334, 9872, 1389, 5217, 16570, 6105, 627, 16851, 198, 1820, 6799, 67008, 5720, 8957, 627, 5715, 955, 198, 2014, 48, 13983, 838, 19368 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
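from_llm also accepts a custom grading prompt, provided it exposes the 'query', 'context' and 'result' input variables noted above; the abbreviated template here is purely illustrative.

from langchain.chat_models import ChatOpenAI
from langchain.evaluation.qa.eval_chain import CotQAEvalChain
from langchain.prompts import PromptTemplate

custom_prompt = PromptTemplate(
    input_variables=["query", "context", "result"],
    template=(
        "Grade the answer strictly from the context.\n"
        "QUESTION: {query}\nCONTEXT: {context}\nANSWER: {result}\n"
        "Think step by step, then end with GRADE: CORRECT or INCORRECT."
    ),
)
strict_chain = CotQAEvalChain.from_llm(llm=ChatOpenAI(temperature=0), prompt=custom_prompt)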
d87b7e23999a-6
Returns the loaded QA eval chain. Return type ContextQAEvalChain classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶ Create LLMChain from LLM and template. generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶ Generate LLM result from inputs. predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶ Format prompt with kwargs and pass to LLM. Parameters callbacks – Callbacks to pass to LLMChain **kwargs – Keys to pass to prompt template. Returns Completion from LLM. Example completion = llm.predict(adjective="funny") predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶ Call predict and then parse the results. prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶ Validate and prep inputs. prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶ Validate and prep outputs. prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶ Prepare prompts from inputs. validator raise_deprecation  »  all fields¶ Raise deprecation warning if callback_manager is used. run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
[ 16851, 198, 1820, 6799, 67008, 5720, 8957, 627, 5715, 955, 198, 2014, 48, 13983, 838, 19368, 198, 27853, 505, 3991, 36621, 76, 25, 5464, 14126, 1747, 11, 3896, 25, 610, 8, 11651, 445, 11237, 19368, 55609, 198, 4110, 445, 11237, 19368, 505, 445, 11237, 323, 3896, 627, 19927, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 445, 11237, 2122, 55609, 198, 32215, 445, 11237, 1121, 505, 11374, 627, 35798, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609, 198, 4152, 10137, 449, 16901, 323, 1522, 311, 445, 11237, 627, 9905, 198, 69411, 1389, 23499, 82, 311, 1522, 311, 445, 11237, 19368, 198, 334, 9872, 1389, 25104, 311, 1522, 311, 10137, 3896, 627, 16851, 198, 34290, 505, 445, 11237, 627, 13617, 198, 44412, 284, 9507, 76, 24706, 44879, 51591, 429, 12158, 3919, 1158, 35798, 8543, 21715, 24885, 82, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 9323, 17752, 11, 1796, 17752, 1145, 30226, 17752, 11, 5884, 5163, 55609, 198, 7368, 7168, 323, 1243, 4820, 279, 3135, 627, 72874, 29657, 35099, 25, 9323, 58, 13755, 17752, 11, 5884, 1145, 5884, 2526, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 11374, 627, 72874, 36289, 35099, 25, 30226, 17752, 11, 610, 1145, 16674, 25, 30226, 17752, 11, 610, 1145, 471, 18917, 36289, 25, 1845, 284, 3641, 8, 11651, 30226, 17752, 11, 610, 60, 55609, 198, 18409, 323, 22033, 16674, 627, 72874, 48977, 13044, 5498, 2062, 25, 1796, 58, 13755, 17752, 11, 5884, 21128, 1629, 12418, 25, 12536, 58, 7646, 2087, 2520, 19368, 6869, 60, 284, 2290, 8, 11651, 25645, 53094, 43447, 15091, 1150, 1145, 12536, 53094, 17752, 5163, 60, 55609, 198, 51690, 52032, 505, 11374, 627, 16503, 4933, 2310, 70693, 4194, 8345, 4194, 682, 5151, 55609, 198, 94201, 409, 70693, 10163, 422, 4927, 12418, 374, 1511, 627, 6236, 4163, 2164, 25, 5884, 11, 27777, 25, 12536, 58, 33758, 53094, 58, 4066, 7646, 3126, 1145, 5464, 7646, 2087, 5163, 284, 2290, 11, 9681, 25, 12536, 53094, 17752, 5163, 284, 2290, 11, 3146, 9872, 25, 5884, 8, 11651, 610, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
d87b7e23999a-7
Run the chain as text in, text out or multiple variables, text out. save(file_path: Union[Path, str]) → None¶ Save the chain. Parameters file_path – Path to file to save the chain to. Example: chain.save(file_path="path/chain.yaml") validator set_verbose  »  verbose¶ If verbose is None, set it. This allows users to pass in None as verbose to access the global setting. to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶ to_json_not_implemented() → SerializedNotImplemented¶ property lc_attributes: Dict¶ Return a list of attribute names that should be included in the serialized kwargs. These attributes must be accepted by the constructor. property lc_namespace: List[str]¶ Return the namespace of the langchain object. eg. [“langchain”, “llms”, “openai”] property lc_secrets: Dict[str, str]¶ Return a map of constructor argument names to secret ids. eg. {“openai_api_key”: “OPENAI_API_KEY”} property lc_serializable: bool¶ Return whether or not the class is serializable. model Config¶ Bases: object Configuration for this pydantic object. arbitrary_types_allowed = True¶ extra = 'forbid'¶
[ 6869, 279, 8957, 439, 1495, 304, 11, 1495, 704, 477, 5361, 7482, 11, 1495, 704, 627, 6766, 4971, 2703, 25, 9323, 58, 1858, 11, 610, 2526, 11651, 2290, 55609, 198, 8960, 279, 8957, 627, 9905, 198, 1213, 2703, 1389, 8092, 311, 1052, 311, 3665, 279, 8957, 311, 627, 13617, 512, 497, 2082, 9612, 487, 10344, 198, 8995, 5799, 4971, 2703, 45221, 2398, 14, 8995, 34506, 863, 340, 16503, 743, 69021, 4194, 8345, 4194, 14008, 55609, 198, 2746, 14008, 374, 2290, 11, 743, 433, 627, 2028, 6276, 3932, 311, 1522, 304, 2290, 439, 14008, 311, 2680, 279, 3728, 6376, 627, 998, 9643, 368, 11651, 9323, 58, 78621, 13591, 11, 92572, 2688, 18804, 60, 55609, 198, 998, 9643, 8072, 18377, 14565, 368, 11651, 92572, 2688, 18804, 55609, 198, 3784, 37313, 18741, 25, 30226, 55609, 198, 5715, 264, 1160, 315, 7180, 5144, 430, 1288, 387, 5343, 304, 279, 198, 76377, 16901, 13, 4314, 8365, 2011, 387, 11928, 555, 279, 198, 22602, 627, 3784, 37313, 42671, 25, 1796, 17752, 60, 55609, 198, 5715, 279, 4573, 315, 279, 8859, 8995, 1665, 627, 797, 13, 510, 2118, 5317, 8995, 9520, 1054, 657, 1026, 9520, 1054, 2569, 2192, 863, 933, 3784, 37313, 3537, 53810, 25, 30226, 17752, 11, 610, 60, 55609, 198, 5715, 264, 2472, 315, 4797, 5811, 5144, 311, 6367, 14483, 627, 797, 13, 314, 2118, 2569, 2192, 11959, 3173, 57633, 1054, 32033, 15836, 11669, 6738, 863, 534, 3784, 37313, 26684, 8499, 25, 1845, 55609, 198, 5715, 3508, 477, 539, 279, 538, 374, 6275, 8499, 627, 2590, 5649, 55609, 198, 33, 2315, 25, 1665, 198, 7843, 369, 420, 4611, 67, 8322, 1665, 627, 277, 88951, 9962, 43255, 284, 3082, 55609, 198, 15824, 284, 364, 2000, 21301, 6, 55609 ]
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.CotQAEvalChain.html
2bd93aaba89a-0
langchain.evaluation.qa.eval_chain.QAEvalChain¶ class langchain.evaluation.qa.eval_chain.QAEvalChain(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, prompt: BasePromptTemplate, llm: BaseLanguageModel, output_key: str = 'text', output_parser: BaseLLMOutputParser = None, return_final_only: bool = True, llm_kwargs: dict = None)[source]¶ Bases: LLMChain LLM Chain specifically for evaluating question answering. Create a new model by parsing and validating input data from keyword arguments. Raises ValidationError if the input data cannot be parsed to form a valid model. param callback_manager: Optional[BaseCallbackManager] = None¶ Deprecated, use callbacks instead. param callbacks: Callbacks = None¶ Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details. param llm: BaseLanguageModel [Required]¶ Language model to call. param llm_kwargs: dict [Optional]¶ param memory: Optional[BaseMemory] = None¶ Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
param output_key: str = 'text'¶
param output_parser: BaseLLMOutputParser [Optional]¶
Output parser to use.
Defaults to one that takes the most likely string but does not change it
otherwise.
param prompt: BasePromptTemplate [Required]¶
Prompt object to use.
param return_final_only: bool = True¶
Whether to return only the final parsed result. Defaults to True.
If False, will return extra information about the generation.
param tags: Optional[List[str]] = None¶
Optional list of tags associated with the chain. Defaults to None.
These tags will be associated with each call to this chain,
and passed as arguments to the handlers defined in callbacks.
You can use these to, e.g., identify a specific instance of a chain with its use case.
param verbose: bool [Optional]¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs
will be printed to the console. Defaults to the langchain.verbose value.
__call__(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
async aapply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
async aapply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async acall(inputs: Union[Dict[str, Any], Any], return_only_outputs: bool = False, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, *, tags: Optional[List[str]] = None, include_run_info: bool = False) → Dict[str, Any]¶
Run the logic of this chain and add to output if desired.
Parameters
inputs – Dictionary of inputs, or single input if the chain expects
only one param.
return_only_outputs – boolean for whether to return only outputs in the
response. If True, only new keys generated by this chain will be
returned. If False, both input keys and new keys generated by this
chain will be returned. Defaults to False.
callbacks – Callbacks to use for this chain run. If not provided, will
use the callbacks provided to the chain.
include_run_info – Whether to include run info in the response. Defaults
to False.
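Because QAEvalChain is an LLMChain, it can be called directly with a dictionary keyed by the default grading prompt's input variables ('query', 'answer', 'result'), and apply() runs the same grading over a batch. A minimal sketch, assuming an OpenAI API key is available in the environment; the questions and answers below are made-up illustrations:
.. code-block:: python

    from langchain.evaluation.qa import QAEvalChain
    from langchain.llms import OpenAI

    llm = OpenAI(temperature=0)
    eval_chain = QAEvalChain.from_llm(llm)

    # __call__ returns the input keys plus the output key ("text" by default),
    # which holds the LLM's grade for this question/answer/prediction triple.
    graded = eval_chain({
        "query": "What is the capital of France?",
        "answer": "Paris",                            # reference answer
        "result": "The capital of France is Paris.",  # prediction being graded
    })
    print(graded["text"])

    # apply() grades a list of such input dicts in one call.
    batch = eval_chain.apply([
        {"query": "What is 2 + 2?", "answer": "4", "result": "4"},
        {"query": "Largest planet?", "answer": "Jupiter", "result": "Saturn"},
    ])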
async aevaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶
Asynchronously evaluate Chain or LLM output, based on optional input and label.
async agenerate(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
apply(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[Dict[str, str]]¶
Utilize the LLM generate method for speed gains.
apply_and_parse(input_list: List[Dict[str, Any]], callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → Sequence[Union[str, List[str], Dict[str, str]]]¶
Call apply and then parse the results.
async apredict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
async apredict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, str]]¶
Call apredict and then parse the results.
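The a-prefixed coroutines above (acall, aapply, apredict, aevaluate_strings) mirror their synchronous counterparts and let many gradings run concurrently. A hedged sketch, assuming an event loop is available and an LLM with async support; all sample data is invented:
.. code-block:: python

    import asyncio

    from langchain.evaluation.qa import QAEvalChain
    from langchain.llms import OpenAI


    async def grade_async() -> None:
        eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))

        # Async equivalent of __call__ for a single grading.
        single = await eval_chain.acall({
            "query": "Who wrote Hamlet?",
            "answer": "William Shakespeare",
            "result": "Hamlet was written by Shakespeare.",
        })

        # Async batch grading; results come back in input order.
        batch = await eval_chain.aapply([
            {"query": "Boiling point of water at sea level (C)?", "answer": "100", "result": "100"},
            {"query": "Square root of 81?", "answer": "9", "result": "8"},
        ])
        print(single["text"], [row["text"] for row in batch])


    asyncio.run(grade_async())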
async aprep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[AsyncCallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
async arun(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
create_outputs(llm_result: LLMResult) → List[Dict[str, Any]]¶
Create outputs from response.
dict(**kwargs: Any) → Dict¶
Return dictionary representation of chain.
evaluate(examples: Sequence[dict], predictions: Sequence[dict], question_key: str = 'query', answer_key: str = 'answer', prediction_key: str = 'result', *, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None) → List[dict][source]¶
Evaluate question answering examples and predictions.
evaluate_strings(*, prediction: str, reference: Optional[str] = None, input: Optional[str] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → dict[source]¶
Evaluate Chain or LLM output, based on optional input and label.
Parameters
prediction (str) – the LLM or chain prediction to evaluate.
reference (Optional[str], optional) – the reference label to evaluate against.
input (Optional[str], optional) – the input to consider during evaluation.
callbacks (Callbacks, optional) – the callbacks to use for tracing.
**kwargs – additional keyword arguments, including callbacks, tags, etc.
Returns
The evaluation results containing the score or value.
Return type
dict
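A sketch of the two grading entry points documented above: evaluate() lines up examples and predictions by the given keys and returns one graded dict per pair, while evaluate_strings() grades a single prediction against a reference. The data, and the assumption that the grade lands under the chain's output key ("text"), are illustrative:
.. code-block:: python

    from langchain.evaluation.qa import QAEvalChain
    from langchain.llms import OpenAI

    eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))

    examples = [
        {"query": "In what year did Apollo 11 land on the Moon?", "answer": "1969"},
        {"query": "What is the chemical symbol for gold?", "answer": "Au"},
    ]
    # Predictions would normally come from another QA chain; hard-coded here.
    predictions = [
        {"result": "Apollo 11 landed on the Moon in 1969."},
        {"result": "The chemical symbol for gold is Ag."},
    ]

    graded = eval_chain.evaluate(
        examples,
        predictions,
        question_key="query",
        answer_key="answer",
        prediction_key="result",
    )
    for row in graded:
        print(row)  # each row carries the LLM's grade under the output key

    # The string-evaluator interface grades one prediction at a time.
    verdict = eval_chain.evaluate_strings(
        prediction="The chemical symbol for gold is Ag.",
        reference="Au",
        input="What is the chemical symbol for gold?",
    )
    print(verdict)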
classmethod from_llm(llm: BaseLanguageModel, prompt: PromptTemplate = PromptTemplate(input_variables=['query', 'result', 'answer'], output_parser=None, partial_variables={}, template="You are a teacher grading a quiz.\nYou are given a question, the student's answer, and the true answer, and are asked to score the student answer as either CORRECT or INCORRECT.\n\nExample Format:\nQUESTION: question here\nSTUDENT ANSWER: student's answer here\nTRUE ANSWER: true answer here\nGRADE: CORRECT or INCORRECT here\n\nGrade the student answers based ONLY on their factual accuracy. Ignore differences in punctuation and phrasing between the student answer and true answer. It is OK if the student answer contains more information than the true answer, as long as it does not contain any conflicting statements. Begin! \n\nQUESTION: {query}\nSTUDENT ANSWER: {result}\nTRUE ANSWER: {answer}\nGRADE:", template_format='f-string', validate_template=True), **kwargs: Any) → QAEvalChain[source]¶
Load QA Eval Chain from LLM.
Parameters
llm (BaseLanguageModel) – the base language model to use.
prompt (PromptTemplate, optional) – a prompt template containing the input variables
'query', 'answer' and 'result' that will be used as the prompt for evaluation.
Defaults to PROMPT.
**kwargs – additional keyword arguments.
Returns
the loaded QA eval chain.
Return type
QAEvalChain
classmethod from_string(llm: BaseLanguageModel, template: str) → LLMChain¶
Create LLMChain from LLM and template.
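from_llm is the usual constructor: it binds the grading prompt shown above to any BaseLanguageModel. A different rubric can be supplied as a PromptTemplate as long as it keeps the same input variables; the stricter template in the sketch below is purely hypothetical, not part of the library:
.. code-block:: python

    from langchain.chat_models import ChatOpenAI
    from langchain.evaluation.qa import QAEvalChain
    from langchain.prompts import PromptTemplate

    # Default grading prompt (PROMPT) with a chat model.
    default_grader = QAEvalChain.from_llm(ChatOpenAI(temperature=0))

    # Hypothetical stricter rubric; it must expose the same input variables
    # ('query', 'answer', 'result') so that evaluate() can fill them in.
    strict_prompt = PromptTemplate(
        input_variables=["query", "answer", "result"],
        template=(
            "Grade the student strictly.\n"
            "QUESTION: {query}\nTRUE ANSWER: {answer}\nSTUDENT ANSWER: {result}\n"
            "Reply with exactly one word, CORRECT or INCORRECT.\nGRADE:"
        ),
    )
    strict_grader = QAEvalChain.from_llm(ChatOpenAI(temperature=0), prompt=strict_prompt)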
generate(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → LLMResult¶
Generate LLM result from inputs.
predict(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → str¶
Format prompt with kwargs and pass to LLM.
Parameters
callbacks – Callbacks to pass to LLMChain
**kwargs – Keys to pass to prompt template.
Returns
Completion from LLM.
Example
completion = llm.predict(adjective="funny")
predict_and_parse(callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, **kwargs: Any) → Union[str, List[str], Dict[str, Any]]¶
Call predict and then parse the results.
prep_inputs(inputs: Union[Dict[str, Any], Any]) → Dict[str, str]¶
Validate and prepare chain inputs.
prep_outputs(inputs: Dict[str, str], outputs: Dict[str, str], return_only_outputs: bool = False) → Dict[str, str]¶
Validate and prepare chain outputs.
prep_prompts(input_list: List[Dict[str, Any]], run_manager: Optional[CallbackManagerForChainRun] = None) → Tuple[List[PromptValue], Optional[List[str]]]¶
Prepare prompts from inputs.
validator raise_deprecation  »  all fields¶
Raise a deprecation warning if callback_manager is used.
run(*args: Any, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, tags: Optional[List[str]] = None, **kwargs: Any) → str¶
Run the chain as text in, text out or multiple variables, text out.
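run() covers the "variables in, text out" style: keyword arguments fill the prompt and only the output string is returned. A short sketch with invented values; the exact wording of the grade depends on the model:
.. code-block:: python

    from langchain.evaluation.qa import QAEvalChain
    from langchain.llms import OpenAI

    eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))

    # With multiple input variables, run() takes them as keyword arguments
    # and returns just the graded text (no surrounding dict).
    grade = eval_chain.run(
        query="How many continents are there?",
        answer="Seven",
        result="There are seven continents.",
    )
    print(grade.strip())  # typically "CORRECT" or "INCORRECT"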
save(file_path: Union[Path, str]) → None¶
Save the chain.
Parameters
file_path – Path to file to save the chain to.
Example:
.. code-block:: python

    chain.save(file_path="path/chain.yaml")
validator set_verbose  »  verbose¶
If verbose is None, set it.
This allows users to pass in None as verbose to access the global setting.
to_json() → Union[SerializedConstructor, SerializedNotImplemented]¶
to_json_not_implemented() → SerializedNotImplemented¶
property lc_attributes: Dict¶
Return a list of attribute names that should be included in the
serialized kwargs. These attributes must be accepted by the
constructor.
property lc_namespace: List[str]¶
Return the namespace of the langchain object.
e.g. ["langchain", "llms", "openai"]
property lc_secrets: Dict[str, str]¶
Return a map of constructor argument names to secret ids.
e.g. {"openai_api_key": "OPENAI_API_KEY"}
property lc_serializable: bool¶
Return whether or not the class is serializable.
model Config¶
Bases: object
Configuration for this pydantic object.
arbitrary_types_allowed = True¶
extra = 'forbid'¶
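A short sketch of persisting the chain configuration with save() and inspecting the serialized form with to_json(). The file name is arbitrary, and whether the saved file round-trips back into a QAEvalChain (rather than a generic LLM chain) via langchain.chains.load_chain is an assumption worth verifying for your version:
.. code-block:: python

    from langchain.evaluation.qa import QAEvalChain
    from langchain.llms import OpenAI

    eval_chain = QAEvalChain.from_llm(OpenAI(temperature=0))

    # Write the chain configuration (prompt, LLM parameters, ...) to disk;
    # .json and .yaml extensions are both accepted by Chain.save().
    eval_chain.save(file_path="qa_eval_chain.json")

    # to_json() returns either a SerializedConstructor or a
    # SerializedNotImplemented placeholder, depending on lc_serializable.
    print(eval_chain.to_json())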
https://langchain.readthedocs.io/en/latest/evaluation/langchain.evaluation.qa.eval_chain.QAEvalChain.html