Add test dataset for evaluating LLM applications with the Ragas framework
- Introduced a new CSV file `testset_2.csv` containing user inputs, reference contexts, and synthesized responses (see the usage sketch below).
- Included various queries related to the integration and evaluation of Large Language Models (LLMs) in modern applications, particularly focusing on the Ragas evaluation framework.
- Enhanced the dataset with examples covering topics such as healthcare applications, synthetic data generation, and observability best practices for AI systems.
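
A minimal usage sketch (not part of this commit) of how one of these CSVs might be scored with Ragas, reusing the imports that appear in the blog excerpts quoted inside the dataset (`SingleTurnSample`, `LangchainLLMWrapper`, `Faithfulness`, `ChatOpenAI(model="gpt-4o")`). The metric selection, the use of `ast.literal_eval` for the list-valued columns, and the output handling are assumptions, not the repo's actual evaluation code:

```python
# Sketch only: load the generated eval set and score it with a couple of Ragas metrics.
# Assumes OPENAI_API_KEY is set and that list-valued columns are stored as Python-literal strings.
import ast

import pandas as pd
from langchain_openai import ChatOpenAI
from ragas import evaluate
from ragas.dataset_schema import SingleTurnSample
from ragas.evaluation import EvaluationDataset
from ragas.llms import LangchainLLMWrapper
from ragas.metrics import Faithfulness, LLMContextRecall  # metric names may vary across Ragas versions

# Judge model used by the LLM-based metrics.
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))

# Build single-turn samples from the CSV rows.
df = pd.read_csv("evals/rag_eval_2.csv")
samples = [
    SingleTurnSample(
        user_input=row["user_input"],
        retrieved_contexts=ast.literal_eval(row["retrieved_contexts"]),
        response=row["response"],
        reference=row["reference"],
    )
    for _, row in df.iterrows()
]
dataset = EvaluationDataset(samples=samples)

# Produces one score column per metric, analogous to the columns in rag_eval_result_2.csv.
results = evaluate(dataset=dataset, metrics=[Faithfulness(), LLMContextRecall()], llm=evaluator_llm)
print(results.to_pandas().head())
```

`rag_eval_result_2.csv` in this commit carries the same rows plus one column per metric (context_recall, faithfulness, factual_correctness, answer_relevancy, context_entity_recall, noise_sensitivity).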
- evals/rag_eval_2.csv +64 -0
- evals/rag_eval_result_2.csv +64 -0
- evals/testset_2.csv +11 -0
- py-src/lets_talk/agent.py +59 -5
- py-src/notebooks/05_SDG_Eval.ipynb +0 -0
- py-src/notebooks/07_Fine_Tuning_Dataset.ipynb +58 -84
evals/rag_eval_2.csv
ADDED
@@ -0,0 +1,64 @@
user_input,retrieved_contexts,reference_contexts,response,reference,synthesizer_name
"How are Large Language Models integrated into modern applications, and why is their performance evaluation considered critical according to the context?","['| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |', ""**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe'll explore each metric in detail, explaining when and how to use them effectively.\n\n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \nLearn specialized techniques for evaluating retrieval-augmented generation systems, including context precision, recall, and relevance.\n\n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \nDiscover how to create high-quality test datasets that thoroughly exercise your application's capabilities.\n\n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \nGo beyond basic metrics with custom evaluations, multi-aspect analysis, and domain-specific assessments.\n\n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \nLearn how to evaluate complex AI agents that engage in multi-turn interactions, use tools, and work toward specific goals."", ""What excites me most is that my experience in enterprise software development gives me a unique perspective on AI implementation. I understand not just the algorithms and models, but also how to integrate them into robust, production-ready systems that deliver real value.\n\n## The Best of Both Worlds\n\nComing back to AI doesn't mean leaving behind everything I learned in web and enterprise development. Quite the opposite - I believe my background gives me a particular advantage in building AI systems that are:""]","['---\ntitle: ""Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications""\ndate: 2025-04-26T18:00:00-06:00\nlayout: blog\ndescription: ""Explore the essential evaluation framework for LLM applications with Ragas. Learn how to assess performance, ensure accuracy, and improve reliability in Retrieval-Augmented Generation systems.""\ncategories: [""AI"", ""RAG"", ""Evaluation"",""Ragas""]\ncoverImage: ""https://images.unsplash.com/photo-1593642634367-d91a135587b5?q=80&w=1770&auto=format&fit=crop&ixlib=rb-4.0.3""\nreadingTime: 7\npublished: true\n---\n\nAs Large Language Models (LLMs) become fundamental components of modern applications, effectively evaluating their performance becomes increasingly critical. Whether you\'re building a question-answering system, a document retrieval tool, or a conversational agent, you need reliable metrics to assess how well your application performs. This is where Ragas steps in.\n\n## What is Ragas?']",I don't know.,"Large Language Models (LLMs) are becoming fundamental components of modern applications, such as question-answering systems, document retrieval tools, and conversational agents. 
Effectively evaluating their performance is considered increasingly critical to ensure reliable metrics for assessing how well these applications perform.",single_hop_specifc_query_synthesizer
"Howw does Ragas help evalute LLM applikations in the helthcare industree, and why is this importent for ensuring akuracy and reliabilitee?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and 
Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable.""]",I don't know.,"Ragas is an open-source evaluation framework designed for LLM applications, with strengths in Retrieval-Augmented Generation systems. It provides specialized metrics to address challenges unique to LLM-powered systems, such as determining if the application retrieves the right information and if responses are factually accurate and consistent with the retrieved context. Evaluating LLM applications is especially important in healthcare, where accuracy and reliability are critical, because LLMs can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. Proper evaluation with frameworks like Ragas is non-negotiable in healthcare to ensure the system's accuracy and reliability.",single_hop_specifc_query_synthesizer
What specialized metrics does Ragas provide for evaluating LLM applications?,"[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '---\n*This is the eighth part of a series on Ragas, a research-driven evaluation framework for LLM and RAG systems. If you missed the previous parts, check them out below:*\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**Part 8: Building Feedback Loops β _You are here_**', ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building 
Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""Evaluation serves several key purposes:\n- **Quality assurance**: Identify and fix issues before they reach users\n- **Performance tracking**: Monitor how changes impact system performance\n- **Benchmarking**: Compare different approaches objectively\n- **Continuous improvement**: Build feedback loops to enhance your application\n\n## Key Features of Ragas\n\n### π― Specialized Metrics\nRagas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications:\n\n- **Faithfulness**: Measures if the response is factually consistent with the retrieved context\n- **Context Relevancy**: Evaluates if the retrieved information is relevant to the query\n- **Answer Relevancy**: Assesses if the response addresses the user's question\n- **Topic Adherence**: Gauges how well multi-turn conversations stay on topic""]",I don't know.,"Ragas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications, including faithfulness, context relevancy, answer relevancy, and topic adherence.",single_hop_specifc_query_synthesizer
"me wanna know how LangSmith work with Ragas, like is it for test data or what, and how it help me as LLM app builder, can you tell me all about LangSmith from this info?","[""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""By extending Ragas with custom metrics, you can create evaluation frameworks that precisely measure what matters most for your LLM applications, leading to more meaningful improvements and better user experiences.\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.""]","[""### π§ͺ Test Data Generation\nCreating high-quality test data is often a bottleneck in evaluation. 
Ragas helps you generate comprehensive test datasets automatically, saving time and ensuring thorough coverage.\n\n### π Seamless Integrations\nRagas works with popular LLM frameworks and tools:\n- [LangChain](https://www.langchain.com/)\n- [LlamaIndex](https://www.llamaindex.ai/)\n- [Haystack](https://haystack.deepset.ai/)\n- [OpenAI](https://openai.com/)\n\nObservability platforms \n- [Phoenix](https://phoenix.arize.com/)\n- [LangSmith](https://python.langchain.com/docs/introduction/)\n- [Langfuse](https://www.langfuse.com/)\n\n### π Comprehensive Analysis\nBeyond simple scores, Ragas provides detailed insights into your application's strengths and weaknesses, enabling targeted improvements.\n\n## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere's a simple example of evaluating a response using Ragas:""]",I don't know.,"LangSmith is listed as one of the observability platforms that Ragas works with. The context does not provide details about how LangSmith specifically functions, but it shows that Ragas integrates with LangSmith to support observability in LLM-powered systems.",single_hop_specifc_query_synthesizer
How do I use the OPENAI API key when initializing an LLM for evaluation with Ragas?,"[""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere\'s a simple example of evaluating a response using Ragas:\n\n```python\nfrom ragas.metrics import Faithfulness\nfrom ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import SingleTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\nfrom langchain_openai import ChatOpenAI\n\n# Initialize the LLM, you are going to new OPENAI API key\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")) \n\n# Your evaluation data\ntest_data = {\n ""user_input"": ""What is the capital of France?"",\n ""retrieved_contexts"": [""Paris is the capital and most populous city of France.""],\n ""response"": ""The capital of France is Paris.""\n}\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor', '## Implementing Agent Evaluation in Practice\n\nLet\'s look at a practical example of evaluating an AI agent using these metrics:\n\n```python\nfrom ragas.metrics import AgentGoalAccuracyWithoutReference, ToolCallAccuracy, TopicAdherenceScore\nfrom ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import MultiTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\n\n# Initialize the LLM\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o""))']","['## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere\'s a simple example of evaluating a response using Ragas:\n\n```python\nfrom ragas.metrics import Faithfulness\nfrom 
ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import SingleTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\nfrom langchain_openai import ChatOpenAI\n\n# Initialize the LLM, you are going to new OPENAI API key\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")) \n\n# Your evaluation data\ntest_data = {\n ""user_input"": ""What is the capital of France?"",\n ""retrieved_contexts"": [""Paris is the capital and most populous city of France.""],\n ""response"": ""The capital of France is Paris.""\n}\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor']","To use the OPENAI API key when initializing an LLM for evaluation with Ragas, you can follow this example:
```python
from langchain_openai import ChatOpenAI
from ragas.llms import LangchainLLMWrapper
# Initialize the LLM with your OPENAI API key
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o""))
```
Make sure to replace `""gpt-4o""` with the appropriate model you wish to use. You will need to set your API key in your environment or configuration as required by the `ChatOpenAI` class.
For more details, you can refer to the following links:
- [Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)
- [Part 5: Advanced Metrics and Customization with Ragas](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)","To use the OPENAI API key when initializing an LLM for evaluation with Ragas, you need to provide your OPENAI API key during the initialization of the ChatOpenAI model, as shown in the example: evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")).",single_hop_specifc_query_synthesizer
"How does synthetic data generation contribute to the importance of data quantity in improving RAG evaluation, and what paradigm shift does this represent according to Peter Norvig's perspective?","[""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |', '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', ""**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe'll explore each metric in detail, explaining when and how to use them effectively.\n\n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \nLearn specialized techniques for evaluating retrieval-augmented generation systems, including context precision, recall, and relevance.\n\n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \nDiscover how to create high-quality test datasets that thoroughly exercise your application's capabilities.\n\n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \nGo beyond basic metrics with custom evaluations, multi-aspect analysis, and domain-specific assessments.\n\n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \nLearn how to evaluate complex AI agents that engage in multi-turn interactions, use tools, and work toward specific goals.""]","['<1-hop>\n\n## Why and How to Generate Synthetic Data for RAG Evaluation\n\nIn the world of Retrieval-Augmented Generation (RAG) and LLM-powered applications, **synthetic data generation** is a game-changer for rapid iteration and robust evaluation. This blog post explains why synthetic data is essential, and how you can generate it for your own RAG pipelinesβusing modern tools like [RAGAS](https://github.com/explodinggradients/ragas) and [LangSmith](https://smith.langchain.com/).\n\n---\n\n### Why Generate Synthetic Data?\n\n1. **Early Signal, Fast Iteration** \n Real-world data is often scarce or expensive to label. Synthetic data lets you quickly create test sets that mimic real user queries and contexts, so you can evaluate your systemβs performance before deploying to production.\n\n2. 
**Controlled Complexity** \n You can design synthetic datasets to cover edge cases, multi-hop reasoning, or specific knowledge domainsβensuring your RAG system is robust, not just good at the βeasyβ cases.', '<2-hop>\n\n## The Origin of ""Data is King""\n\nPeter Norvig famously stated, ""We don\'t have better algorithms. We just have more data."" This statement emerged during a time when Google\'s approach to machine translation was yielding surprisingly effective results not through algorithmic innovations, but through the sheer volume of multilingual data they had amassed. \n\nThis perspective represented a paradigm shift. Prior to this, the field had largely focused on crafting ever more sophisticated algorithms, with the assumption that smarter code would yield better results. Norvig\'s insight suggested something different: even relatively simple algorithms could outperform more sophisticated ones when trained on sufficiently large datasets.\n\n## The Business Imperative of Data Ownership']",I don't know.,"Synthetic data generation enables rapid creation of test sets that mimic real user queries and contexts, allowing for early evaluation and iteration of RAG systems even when real-world data is scarce or expensive to label. This approach increases the quantity of data available for evaluation, supporting robust system development. According to Peter Norvig's perspective, as described in the context, the importance of data quantity represents a paradigm shift: rather than relying solely on more sophisticated algorithms, having more dataβeven with simpler algorithmsβcan lead to better results. Thus, synthetic data generation aligns with this shift by providing the large datasets necessary to improve system performance.",multi_hop_abstract_query_synthesizer
"How does Ragas support the evaluation of both LLM applications and AI agents, and what specialized metrics are introduced for evaluating AI agents?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '---\n*This is the eighth part of a series on Ragas, a research-driven evaluation framework for LLM and RAG systems. 
If you missed the previous parts, check them out below:*\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**Part 8: Building Feedback Loops β _You are here_**']","['<1-hop>\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor\n\n# Create metric\nfaithfulness = Faithfulness(llm=evaluator_llm)\n# Calculate the score\nresult = await faithfulness.single_turn_ascore(sample)\nprint(f""Faithfulness score: {result}"")\n```\n\n> π‘ **Try it yourself:** \n> Explore the hands-on notebook for this workflow: \n> [01_Introduction_to_Ragas](https://github.com/mafzaal/intro-to-ragas/blob/master/01_Introduction_to_Ragas.ipynb)\n\n## What\'s Coming in This Blog Series\n\nThis introduction is just the beginning. In the upcoming posts, we\'ll dive deeper into all aspects of evaluating LLM applications with Ragas:\n\n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe\'ll explore each metric in detail, explaining when and how to use them effectively.', ""<2-hop>\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '<3-hop>\n\n---\ntitle: ""Part 6: Evaluating AI Agents: Beyond Simple Answers with Ragas""\ndate: 2025-04-28T06:00:00-06:00\nlayout: blog\ndescription: ""Learn how to evaluate complex AI agents using Ragas\' specialized metrics for goal accuracy, tool call accuracy, and topic adherence to build more reliable and effective agent-based applications.""\ncategories: [""AI"", ""Agents"", ""Evaluation"", ""Ragas"", ""LLM""]\ncoverImage: ""/images/ai_agent_evaluation.png"" \nreadingTime: 8\npublished: true\n---\n\nIn our previous posts, we\'ve explored how Ragas evaluates RAG systems and enables custom metrics for specialized applications. As LLMs evolve beyond simple question-answering to become powerful AI agents, evaluation needs have grown more sophisticated too. 
In this post, we\'ll explore Ragas\' specialized metrics for evaluating AI agents that engage in multi-turn interactions, use tools, and work toward specific goals.\n\n## The Challenge of Evaluating AI Agents']","Ragas is an evaluation framework designed to support both LLM (Large Language Model) applications and AI agents by providing a structured approach to assess their performance. It introduces specialized metrics tailored for evaluating AI agents, focusing on aspects beyond simple answer accuracy, such as contextual understanding and interaction quality.
For more detailed insights, you can explore the following parts of the Ragas series:
- **[Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)**
- **[Part 2: Basic Evaluation Workflow](https://thedataguy.pro/blog/basic-evaluation-workflow-with-ragas/)**
- **[Part 3: Evaluating RAG Systems with Ragas](https://thedataguy.pro/blog/evaluating-rag-systems-with-ragas/)**
- **[Part 5: Advanced Metrics and Customization](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)**
- **[Part 6: Evaluating AI Agents](https://thedataguy.pro/blog/evaluating-ai-agents-with-ragas/)**
These resources will provide a comprehensive understanding of how Ragas enhances evaluation processes for both LLMs and AI agents.","Ragas supports the evaluation of LLM applications by providing metrics such as faithfulness, which can be applied to single-turn samples as shown in the introductory workflow. For AI agents, Ragas introduces specialized metrics designed to assess more complex behaviors, including goal accuracy, tool call accuracy, and topic adherence. These metrics enable the evaluation of AI agents that perform multi-turn interactions, utilize tools, and work toward specific goals, thus addressing the advanced requirements of evaluating agent-based applications.",multi_hop_abstract_query_synthesizer
"How does Metric-Driven Development (MDD) utilize combined performance metrics to guide project outcomes, and what are some examples of such metrics in practice?","[""Reward functions embody the core MDD idea: set a clear, measurable goal, and let it guide actions towards success.\n\n## Metric-Driven Development Across Industries: Real-World Examples\n\nMDD isn't just for software. Here's how different fields use it:"", '* **Team A** is stuck in debates. Should they focus on improving video load speed or making the recommendation engine more accurate? One engineer insists, ""Faster videos keep users from leaving!"" Another counters, ""But better recommendations are what make them subscribe!"" They argue based on gut feelings.\n* **Team B** operates differently. They have a clear, agreed-upon goal: ***Improve the average ""Watch Time per User"" metric, while ensuring video buffering times stay below 2 seconds.*** They rapidly test ideas, measuring the impact of each change against this specific target.\n\nWhich team do you think will make faster, smarter progress?\n\n\nTeam B has the edge because they\'re using **Metric-Driven Development (MDD)**. This is a powerful strategy where teams unite around measurable goals to eliminate guesswork and make real strides. Let\'s break down how it works, what makes a metric truly useful, and see how industries from healthcare to e-commerce use it to succeed.', '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', 'MDD reminds us to track both:\n* **Leading indicators** (like training loss, code coverage) to monitor progress during development.\n* **Lagging indicators** (like user engagement, revenue, customer support tickets) to measure the actual impact.\n\n## The Takeaway: Use Metrics as Your Compass\nMetric-Driven Development isn\'t a complex theory reserved for tech giants. It\'s a fundamental mindset applicable everywhere:\n\n* A local bakery might track *""Daily Units Sold per Pastry Type""* to optimize baking schedules.\n* A city planner could use *""Average Commute Time Reduction""* to evaluate the success of new traffic light patterns.\n* A project manager might measure progress through *""Sprint Velocity""* or *""Percentage of On-Time Task Completions""* rather than subjective assessments of how ""busy"" the team appears.\n\n\nBy choosing metrics that are **measurable, actionable, aligned, simple, and robust**, you transform ambiguity into clarity and opinion into evidence.']","['<1-hop>\n\n## What Exactly is Metric-Driven Development?\n\nMetric-Driven Development (MDD) is a simple but effective framework where teams:\n\n1. **Define Clear, Measurable Goals:** Set specific numerical targets (e.g., ""Increase user sign-ups by 20% this quarter"").\n2. **Base Decisions on Data:** Rely on evidence and measurements, not just opinions or assumptions.\n3. **Iterate and Learn Quickly:** Continuously measure the impact of changes to see what works and what doesn\'t.\n\nThink of MDD as a **GPS for your project**. Without clear metrics, you\'re driving in the fog, hoping you\'re heading in the right direction. 
With MDD, you get real-time feedback, ensuring you\'re moving towards your destination efficiently.\n\n## Why Teams Struggle Without Clear Metrics\n\nWithout a metric-driven approach, teams often fall into common traps:', '<2-hop>\n\n* **Metric:** A combined score, e.g., `Points Scored - (Time Taken * Penalty Factor)`.\n * **Impact:** Trains AI opponents that are challenging but fair, balancing speed and skill.\n* **Autonomous Vehicles: Safety & Comfort Score**\n * **Metric:** Combination of factors like smooth acceleration/braking, lane adherence, and deductions for interventions or near-misses.\n * **Impact:** Guides development towards vehicles that are not only safe but also provide a comfortable ride.']","Metric-Driven Development (MDD) utilizes combined performance metrics to provide clear, measurable goals that guide project outcomes. By focusing on specific metrics, teams can eliminate guesswork and make informed decisions based on data rather than opinions.
### Key Aspects of MDD:
- **Leading Indicators**: Metrics like training loss and code coverage help monitor progress during development.
- **Lagging Indicators**: Metrics such as user engagement, revenue, and customer support tickets measure the actual impact of changes made.
### Examples of Metrics in Practice:
- A bakery might track **""Daily Units Sold per Pastry Type""** to optimize baking schedules.
- A city planner could use **""Average Commute Time Reduction""** to evaluate the success of new traffic light patterns.
- A project manager might measure progress through **""Sprint Velocity""** or **""Percentage of On-Time Task Completions""**.
These metrics transform ambiguity into clarity, allowing teams to focus on actionable outcomes.
For more information, you can explore the following links:
- [Metric-Driven Development: Make Smarter Decisions, Faster](https://thedataguy.pro/blog/metric-driven-development/)","Metric-Driven Development (MDD) utilizes combined performance metrics by defining clear, measurable goals and basing decisions on data rather than assumptions. This approach ensures that teams receive real-time feedback and can iterate quickly to improve outcomes. Examples of combined performance metrics in practice include a score such as 'Points Scored - (Time Taken * Penalty Factor)' to train AI opponents that balance speed and skill, and a 'Safety & Comfort Score' for autonomous vehicles, which combines factors like smooth acceleration, lane adherence, and deductions for interventions or near-misses. These combined metrics help guide development towards achieving specific, balanced objectives.",multi_hop_abstract_query_synthesizer
"How does Ragas provide specialized evaluation metrics for LLMs, and what steps are involved in creating a custom metric to assess technical accuracy in programming explanations?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""By extending Ragas with custom metrics, you can create evaluation frameworks that precisely measure what matters most for your LLM applications, leading to more meaningful improvements and better user experiences.\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows."", ""In our next post, we'll delve deeper into specialized evaluation techniques for RAG systems, exploring advanced metrics and evaluation strategies for retrieval-augmented generation applications.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**Part 2: Basic Evaluation Workflow β _You are here_** \n*Next up in the series:* \n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""<1-hop>\n\n## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. 
Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable."", '<2-hop>\n\n## Creating Your First Custom Metric\n\nLet\'s create a custom metric that evaluates technical accuracy in programming explanations:\n\n```python\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Optional, Set\nimport typing as t\n\nfrom ragas.metrics.base import MetricWithLLM, SingleTurnMetric\nfrom ragas.prompt import PydanticPrompt\nfrom ragas.metrics import MetricType, MetricOutputType\nfrom pydantic import BaseModel\n\n# Define input/output models for the prompt\nclass TechnicalAccuracyInput(BaseModel):\n question: str\n context: str\n response: str\n programming_language: str = ""python""\n\nclass TechnicalAccuracyOutput(BaseModel):\n score: float\n feedback: str']","Ragas provides specialized evaluation metrics for LLMs by allowing users to extend its framework with custom metrics tailored to specific evaluation needs. This enables the measurement of aspects that are most relevant for particular applications, such as technical accuracy in programming explanations.
To create a custom metric for assessing technical accuracy, the following steps are typically involved:
1. **Define the Evaluation Criteria**: Identify what constitutes technical accuracy in the context of programming explanations.
2. **Develop the Metric**: Create a formula or algorithm that quantifies the defined criteria.
3. **Integrate with Ragas**: Implement the custom metric within the Ragas framework to ensure it can be used in evaluations.
4. **Test and Validate**: Run evaluations using the custom metric to ensure it accurately reflects the technical accuracy of programming explanations.
For more detailed information, you can explore the following links:
- [Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)
- [Part 2: Basic Evaluation Workflow](https://thedataguy.pro/blog/basic-evaluation-workflow-with-ragas/)
- [Part 3: Evaluating RAG Systems](https://thedataguy.pro/blog/evaluating-rag-systems-with-ragas/)
- [Part 5: Advanced Metrics and Customization with Ragas](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)
- [Part 6: Evaluating AI Agents](https://thedataguy.pro/blog/evaluating-ai-agents-with-ragas/)","Ragas is an open-source evaluation framework specifically designed for LLM applications, offering specialized metrics that address challenges unique to LLM-powered systems, such as ensuring factual accuracy, consistency with retrieved context, and appropriate query handling. To create a custom metric for evaluating technical accuracy in programming explanations, Ragas allows developers to define input and output models (for example, using Pydantic BaseModel classes for technical accuracy input and output), and implement the metric logic using its extensible metric classes. This enables tailored evaluation beyond traditional NLP metrics, supporting the needs of high-stakes LLM applications.",multi_hop_abstract_query_synthesizer
How do observability best practices contribute to building production-ready AI systems?,"[""What excites me most is that my experience in enterprise software development gives me a unique perspective on AI implementation. I understand not just the algorithms and models, but also how to integrate them into robust, production-ready systems that deliver real value.\n\n## The Best of Both Worlds\n\nComing back to AI doesn't mean leaving behind everything I learned in web and enterprise development. Quite the opposite - I believe my background gives me a particular advantage in building AI systems that are:"", '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', ""- **Production-ready**: Understanding software engineering best practices helps create AI systems that can operate reliably at scale.\n- **User-focused**: Experience with UX principles ensures AI solutions are designed with actual human users in mind.\n- **Integrated**: Knowledge of enterprise systems makes it easier to connect AI capabilities with existing business processes.\n- **Simplified**: My experience in streamlining complex business processes helps me identify where AI can have the greatest impact through intelligent automation.\n- **Business-oriented**: I understand that AI isn't just about the technologyβit's about solving real business problems and creating measurable value.\n- **Practical**: I focus on practical applications that deliver immediate benefits rather than getting caught up in theoretical possibilities.\n\n## What's Next\n\nAs I return to my AI roots, I'm excited to share this journey with you through this blog. In the coming months, I plan to write about:"", '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |']","['<1-hop>\n\n## Best Practices for Observability\n\n1. **Define clear thresholds**: Establish performance baselines and alert thresholds for each metric\n2. **Segment evaluations**: Break down results by query type, data source, or other relevant factors\n3. **Historical tracking**: Maintain historical evaluation data to identify trends and regressions\n4. **Correlation analysis**: Link evaluation metrics to user feedback and business outcomes\n5. **Regular benchmarking**: Periodically evaluate against fixed test sets to ensure consistency\n6. **Alert on regressions**: Implement automated alerts when metrics drop below thresholds\n7. **Contextualize metrics**: Include example failures alongside aggregate metrics for better understanding\n\n## Building a Feedback Loop\n\nThe ultimate goal of evaluation is to drive improvements. 
Establish a feedback loop:', ""<2-hop>\n\n- **Production-ready**: Understanding software engineering best practices helps create AI systems that can operate reliably at scale.\n- **User-focused**: Experience with UX principles ensures AI solutions are designed with actual human users in mind.\n- **Integrated**: Knowledge of enterprise systems makes it easier to connect AI capabilities with existing business processes.\n- **Simplified**: My experience in streamlining complex business processes helps me identify where AI can have the greatest impact through intelligent automation.\n- **Business-oriented**: I understand that AI isn't just about the technologyβit's about solving real business problems and creating measurable value.\n- **Practical**: I focus on practical applications that deliver immediate benefits rather than getting caught up in theoretical possibilities.\n\n## What's Next\n\nAs I return to my AI roots, I'm excited to share this journey with you through this blog. In the coming months, I plan to write about:""]",I don't know.,"Observability best practices, such as defining clear thresholds, segmenting evaluations, maintaining historical tracking, and alerting on regressions, ensure that AI systems are continuously monitored and improved. These practices are essential for creating production-ready AI systems that can operate reliably at scale, as they enable consistent performance evaluation and rapid response to issues.",multi_hop_abstract_query_synthesizer
evals/rag_eval_result_2.csv
ADDED
@@ -0,0 +1,64 @@
user_input,retrieved_contexts,reference_contexts,response,reference,context_recall,faithfulness,factual_correctness(mode=f1),answer_relevancy,context_entity_recall,noise_sensitivity(mode=relevant)
"How are Large Language Models integrated into modern applications, and why is their performance evaluation considered critical according to the context?","['| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |', ""**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe'll explore each metric in detail, explaining when and how to use them effectively.\n\n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \nLearn specialized techniques for evaluating retrieval-augmented generation systems, including context precision, recall, and relevance.\n\n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \nDiscover how to create high-quality test datasets that thoroughly exercise your application's capabilities.\n\n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \nGo beyond basic metrics with custom evaluations, multi-aspect analysis, and domain-specific assessments.\n\n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \nLearn how to evaluate complex AI agents that engage in multi-turn interactions, use tools, and work toward specific goals."", ""What excites me most is that my experience in enterprise software development gives me a unique perspective on AI implementation. I understand not just the algorithms and models, but also how to integrate them into robust, production-ready systems that deliver real value.\n\n## The Best of Both Worlds\n\nComing back to AI doesn't mean leaving behind everything I learned in web and enterprise development. Quite the opposite - I believe my background gives me a particular advantage in building AI systems that are:""]","['---\ntitle: ""Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications""\ndate: 2025-04-26T18:00:00-06:00\nlayout: blog\ndescription: ""Explore the essential evaluation framework for LLM applications with Ragas. Learn how to assess performance, ensure accuracy, and improve reliability in Retrieval-Augmented Generation systems.""\ncategories: [""AI"", ""RAG"", ""Evaluation"",""Ragas""]\ncoverImage: ""https://images.unsplash.com/photo-1593642634367-d91a135587b5?q=80&w=1770&auto=format&fit=crop&ixlib=rb-4.0.3""\nreadingTime: 7\npublished: true\n---\n\nAs Large Language Models (LLMs) become fundamental components of modern applications, effectively evaluating their performance becomes increasingly critical. Whether you\'re building a question-answering system, a document retrieval tool, or a conversational agent, you need reliable metrics to assess how well your application performs. This is where Ragas steps in.\n\n## What is Ragas?']",I don't know.,"Large Language Models (LLMs) are becoming fundamental components of modern applications, such as question-answering systems, document retrieval tools, and conversational agents. Effectively evaluating their performance is considered increasingly critical to ensure reliable metrics for assessing how well these applications perform.",0.5,1.0,0.0,0.0,0.0,0.0
|
3 |
+
"Howw does Ragas help evalute LLM applikations in the helthcare industree, and why is this importent for ensuring akuracy and reliabilitee?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and 
Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable.""]",I don't know.,"Ragas is an open-source evaluation framework designed for LLM applications, with strengths in Retrieval-Augmented Generation systems. It provides specialized metrics to address challenges unique to LLM-powered systems, such as determining if the application retrieves the right information and if responses are factually accurate and consistent with the retrieved context. Evaluating LLM applications is especially important in healthcare, where accuracy and reliability are critical, because LLMs can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. Proper evaluation with frameworks like Ragas is non-negotiable in healthcare to ensure the system's accuracy and reliability.",0.0,1.0,0.0,0.0,0.16666666638888888,
|
4 |
+
What specialized metrics does Ragas provide for evaluating LLM applications?,"[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '---\n*This is the eighth part of a series on Ragas, a research-driven evaluation framework for LLM and RAG systems. If you missed the previous parts, check them out below:*\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**Part 8: Building Feedback Loops β _You are here_**', ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building 
Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""Evaluation serves several key purposes:\n- **Quality assurance**: Identify and fix issues before they reach users\n- **Performance tracking**: Monitor how changes impact system performance\n- **Benchmarking**: Compare different approaches objectively\n- **Continuous improvement**: Build feedback loops to enhance your application\n\n## Key Features of Ragas\n\n### π― Specialized Metrics\nRagas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications:\n\n- **Faithfulness**: Measures if the response is factually consistent with the retrieved context\n- **Context Relevancy**: Evaluates if the retrieved information is relevant to the query\n- **Answer Relevancy**: Assesses if the response addresses the user's question\n- **Topic Adherence**: Gauges how well multi-turn conversations stay on topic""]",I don't know.,"Ragas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications, including faithfulness, context relevancy, answer relevancy, and topic adherence.",0.0,1.0,0.0,0.0,0.12499999984374999,0.0
|
5 |
+
"me wanna know how LangSmith work with Ragas, like is it for test data or what, and how it help me as LLM app builder, can you tell me all about LangSmith from this info?","[""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""By extending Ragas with custom metrics, you can create evaluation frameworks that precisely measure what matters most for your LLM applications, leading to more meaningful improvements and better user experiences.\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.""]","[""### π§ͺ Test Data Generation\nCreating high-quality test data is often a bottleneck in evaluation. 
Ragas helps you generate comprehensive test datasets automatically, saving time and ensuring thorough coverage.\n\n### π Seamless Integrations\nRagas works with popular LLM frameworks and tools:\n- [LangChain](https://www.langchain.com/)\n- [LlamaIndex](https://www.llamaindex.ai/)\n- [Haystack](https://haystack.deepset.ai/)\n- [OpenAI](https://openai.com/)\n\nObservability platforms \n- [Phoenix](https://phoenix.arize.com/)\n- [LangSmith](https://python.langchain.com/docs/introduction/)\n- [Langfuse](https://www.langfuse.com/)\n\n### π Comprehensive Analysis\nBeyond simple scores, Ragas provides detailed insights into your application's strengths and weaknesses, enabling targeted improvements.\n\n## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere's a simple example of evaluating a response using Ragas:""]",I don't know.,"LangSmith is listed as one of the observability platforms that Ragas works with. The context does not provide details about how LangSmith specifically functions, but it shows that Ragas integrates with LangSmith to support observability in LLM-powered systems.",0.0,,0.5,0.0,0.249999999375,0.25
|
6 |
+
How do I use the OPENAI API key when initializing an LLM for evaluation with Ragas?,"[""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere\'s a simple example of evaluating a response using Ragas:\n\n```python\nfrom ragas.metrics import Faithfulness\nfrom ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import SingleTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\nfrom langchain_openai import ChatOpenAI\n\n# Initialize the LLM, you are going to new OPENAI API key\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")) \n\n# Your evaluation data\ntest_data = {\n ""user_input"": ""What is the capital of France?"",\n ""retrieved_contexts"": [""Paris is the capital and most populous city of France.""],\n ""response"": ""The capital of France is Paris.""\n}\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor', '## Implementing Agent Evaluation in Practice\n\nLet\'s look at a practical example of evaluating an AI agent using these metrics:\n\n```python\nfrom ragas.metrics import AgentGoalAccuracyWithoutReference, ToolCallAccuracy, TopicAdherenceScore\nfrom ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import MultiTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\n\n# Initialize the LLM\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o""))']","['## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere\'s a simple example of evaluating a response using Ragas:\n\n```python\nfrom ragas.metrics import Faithfulness\nfrom 
ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import SingleTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\nfrom langchain_openai import ChatOpenAI\n\n# Initialize the LLM, you are going to new OPENAI API key\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")) \n\n# Your evaluation data\ntest_data = {\n ""user_input"": ""What is the capital of France?"",\n ""retrieved_contexts"": [""Paris is the capital and most populous city of France.""],\n ""response"": ""The capital of France is Paris.""\n}\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor']","To use the OPENAI API key when initializing an LLM for evaluation with Ragas, you can follow this example:
|
7 |
+
|
8 |
+
```python
|
9 |
+
from langchain_openai import ChatOpenAI
|
10 |
+
from ragas.llms import LangchainLLMWrapper
|
11 |
+
|
12 |
+
# Initialize the LLM with your OPENAI API key
|
13 |
+
evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o""))
|
14 |
+
```
|
15 |
+
|
16 |
+
Make sure to replace `""gpt-4o""` with the appropriate model you wish to use. You will need to set your API key in your environment or configuration as required by the `ChatOpenAI` class.
|
17 |
+
|
18 |
+
For more details, you can refer to the following links:
|
19 |
+
- [Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)
|
20 |
+
- [Part 5: Advanced Metrics and Customization with Ragas](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)","To use the OPENAI API key when initializing an LLM for evaluation with Ragas, you need to provide your OPENAI API key during the initialization of the ChatOpenAI model, as shown in the example: evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")).",,,0.57,0.9973366591821318,,0.25
|
21 |
+
"How does synthetic data generation contribute to the importance of data quantity in improving RAG evaluation, and what paradigm shift does this represent according to Peter Norvig's perspective?","[""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |', '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', ""**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe'll explore each metric in detail, explaining when and how to use them effectively.\n\n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \nLearn specialized techniques for evaluating retrieval-augmented generation systems, including context precision, recall, and relevance.\n\n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \nDiscover how to create high-quality test datasets that thoroughly exercise your application's capabilities.\n\n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \nGo beyond basic metrics with custom evaluations, multi-aspect analysis, and domain-specific assessments.\n\n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \nLearn how to evaluate complex AI agents that engage in multi-turn interactions, use tools, and work toward specific goals.""]","['<1-hop>\n\n## Why and How to Generate Synthetic Data for RAG Evaluation\n\nIn the world of Retrieval-Augmented Generation (RAG) and LLM-powered applications, **synthetic data generation** is a game-changer for rapid iteration and robust evaluation. This blog post explains why synthetic data is essential, and how you can generate it for your own RAG pipelinesβusing modern tools like [RAGAS](https://github.com/explodinggradients/ragas) and [LangSmith](https://smith.langchain.com/).\n\n---\n\n### Why Generate Synthetic Data?\n\n1. **Early Signal, Fast Iteration** \n Real-world data is often scarce or expensive to label. Synthetic data lets you quickly create test sets that mimic real user queries and contexts, so you can evaluate your systemβs performance before deploying to production.\n\n2. 
**Controlled Complexity** \n You can design synthetic datasets to cover edge cases, multi-hop reasoning, or specific knowledge domainsβensuring your RAG system is robust, not just good at the βeasyβ cases.', '<2-hop>\n\n## The Origin of ""Data is King""\n\nPeter Norvig famously stated, ""We don\'t have better algorithms. We just have more data."" This statement emerged during a time when Google\'s approach to machine translation was yielding surprisingly effective results not through algorithmic innovations, but through the sheer volume of multilingual data they had amassed. \n\nThis perspective represented a paradigm shift. Prior to this, the field had largely focused on crafting ever more sophisticated algorithms, with the assumption that smarter code would yield better results. Norvig\'s insight suggested something different: even relatively simple algorithms could outperform more sophisticated ones when trained on sufficiently large datasets.\n\n## The Business Imperative of Data Ownership']",I don't know.,"Synthetic data generation enables rapid creation of test sets that mimic real user queries and contexts, allowing for early evaluation and iteration of RAG systems even when real-world data is scarce or expensive to label. This approach increases the quantity of data available for evaluation, supporting robust system development. According to Peter Norvig's perspective, as described in the context, the importance of data quantity represents a paradigm shift: rather than relying solely on more sophisticated algorithms, having more dataβeven with simpler algorithmsβcan lead to better results. Thus, synthetic data generation aligns with this shift by providing the large datasets necessary to improve system performance.",,,0.0,0.0,,0.0
|
22 |
+
"How does Ragas support the evaluation of both LLM applications and AI agents, and what specialized metrics are introduced for evaluating AI agents?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**Part 6: Evaluating AI Agents β _You are here_** \n*Next up in the series:* \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '---\n*This is the eighth part of a series on Ragas, a research-driven evaluation framework for LLM and RAG systems. 
If you missed the previous parts, check them out below:*\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Metrics and Customization](/blog/advanced-metrics-and-customization-with-ragas/)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**Part 8: Building Feedback Loops β _You are here_**']","['<1-hop>\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor\n\n# Create metric\nfaithfulness = Faithfulness(llm=evaluator_llm)\n# Calculate the score\nresult = await faithfulness.single_turn_ascore(sample)\nprint(f""Faithfulness score: {result}"")\n```\n\n> π‘ **Try it yourself:** \n> Explore the hands-on notebook for this workflow: \n> [01_Introduction_to_Ragas](https://github.com/mafzaal/intro-to-ragas/blob/master/01_Introduction_to_Ragas.ipynb)\n\n## What\'s Coming in This Blog Series\n\nThis introduction is just the beginning. In the upcoming posts, we\'ll dive deeper into all aspects of evaluating LLM applications with Ragas:\n\n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe\'ll explore each metric in detail, explaining when and how to use them effectively.', ""<2-hop>\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '<3-hop>\n\n---\ntitle: ""Part 6: Evaluating AI Agents: Beyond Simple Answers with Ragas""\ndate: 2025-04-28T06:00:00-06:00\nlayout: blog\ndescription: ""Learn how to evaluate complex AI agents using Ragas\' specialized metrics for goal accuracy, tool call accuracy, and topic adherence to build more reliable and effective agent-based applications.""\ncategories: [""AI"", ""Agents"", ""Evaluation"", ""Ragas"", ""LLM""]\ncoverImage: ""/images/ai_agent_evaluation.png"" \nreadingTime: 8\npublished: true\n---\n\nIn our previous posts, we\'ve explored how Ragas evaluates RAG systems and enables custom metrics for specialized applications. As LLMs evolve beyond simple question-answering to become powerful AI agents, evaluation needs have grown more sophisticated too. 
In this post, we\'ll explore Ragas\' specialized metrics for evaluating AI agents that engage in multi-turn interactions, use tools, and work toward specific goals.\n\n## The Challenge of Evaluating AI Agents']","Ragas is an evaluation framework designed to support both LLM (Large Language Model) applications and AI agents by providing a structured approach to assess their performance. It introduces specialized metrics tailored for evaluating AI agents, focusing on aspects beyond simple answer accuracy, such as contextual understanding and interaction quality.
|
23 |
+
|
24 |
+
For more detailed insights, you can explore the following parts of the Ragas series:
|
25 |
+
|
26 |
+
- **[Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)**
|
27 |
+
- **[Part 2: Basic Evaluation Workflow](https://thedataguy.pro/blog/basic-evaluation-workflow-with-ragas/)**
|
28 |
+
- **[Part 3: Evaluating RAG Systems with Ragas](https://thedataguy.pro/blog/evaluating-rag-systems-with-ragas/)**
|
29 |
+
- **[Part 5: Advanced Metrics and Customization](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)**
|
30 |
+
- **[Part 6: Evaluating AI Agents](https://thedataguy.pro/blog/evaluating-ai-agents-with-ragas/)**
|
31 |
+
|
32 |
+
These resources will provide a comprehensive understanding of how Ragas enhances evaluation processes for both LLMs and AI agents.","Ragas supports the evaluation of LLM applications by providing metrics such as faithfulness, which can be applied to single-turn samples as shown in the introductory workflow. For AI agents, Ragas introduces specialized metrics designed to assess more complex behaviors, including goal accuracy, tool call accuracy, and topic adherence. These metrics enable the evaluation of AI agents that perform multi-turn interactions, utilize tools, and work toward specific goals, thus addressing the advanced requirements of evaluating agent-based applications.",,,0.4,0.9715727542204604,,0.0
|
33 |
+
"How does Metric-Driven Development (MDD) utilize combined performance metrics to guide project outcomes, and what are some examples of such metrics in practice?","[""Reward functions embody the core MDD idea: set a clear, measurable goal, and let it guide actions towards success.\n\n## Metric-Driven Development Across Industries: Real-World Examples\n\nMDD isn't just for software. Here's how different fields use it:"", '* **Team A** is stuck in debates. Should they focus on improving video load speed or making the recommendation engine more accurate? One engineer insists, ""Faster videos keep users from leaving!"" Another counters, ""But better recommendations are what make them subscribe!"" They argue based on gut feelings.\n* **Team B** operates differently. They have a clear, agreed-upon goal: ***Improve the average ""Watch Time per User"" metric, while ensuring video buffering times stay below 2 seconds.*** They rapidly test ideas, measuring the impact of each change against this specific target.\n\nWhich team do you think will make faster, smarter progress?\n\n\nTeam B has the edge because they\'re using **Metric-Driven Development (MDD)**. This is a powerful strategy where teams unite around measurable goals to eliminate guesswork and make real strides. Let\'s break down how it works, what makes a metric truly useful, and see how industries from healthcare to e-commerce use it to succeed.', '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', 'MDD reminds us to track both:\n* **Leading indicators** (like training loss, code coverage) to monitor progress during development.\n* **Lagging indicators** (like user engagement, revenue, customer support tickets) to measure the actual impact.\n\n## The Takeaway: Use Metrics as Your Compass\nMetric-Driven Development isn\'t a complex theory reserved for tech giants. It\'s a fundamental mindset applicable everywhere:\n\n* A local bakery might track *""Daily Units Sold per Pastry Type""* to optimize baking schedules.\n* A city planner could use *""Average Commute Time Reduction""* to evaluate the success of new traffic light patterns.\n* A project manager might measure progress through *""Sprint Velocity""* or *""Percentage of On-Time Task Completions""* rather than subjective assessments of how ""busy"" the team appears.\n\n\nBy choosing metrics that are **measurable, actionable, aligned, simple, and robust**, you transform ambiguity into clarity and opinion into evidence.']","['<1-hop>\n\n## What Exactly is Metric-Driven Development?\n\nMetric-Driven Development (MDD) is a simple but effective framework where teams:\n\n1. **Define Clear, Measurable Goals:** Set specific numerical targets (e.g., ""Increase user sign-ups by 20% this quarter"").\n2. **Base Decisions on Data:** Rely on evidence and measurements, not just opinions or assumptions.\n3. **Iterate and Learn Quickly:** Continuously measure the impact of changes to see what works and what doesn\'t.\n\nThink of MDD as a **GPS for your project**. Without clear metrics, you\'re driving in the fog, hoping you\'re heading in the right direction. 
With MDD, you get real-time feedback, ensuring you\'re moving towards your destination efficiently.\n\n## Why Teams Struggle Without Clear Metrics\n\nWithout a metric-driven approach, teams often fall into common traps:', '<2-hop>\n\n* **Metric:** A combined score, e.g., `Points Scored - (Time Taken * Penalty Factor)`.\n * **Impact:** Trains AI opponents that are challenging but fair, balancing speed and skill.\n* **Autonomous Vehicles: Safety & Comfort Score**\n * **Metric:** Combination of factors like smooth acceleration/braking, lane adherence, and deductions for interventions or near-misses.\n * **Impact:** Guides development towards vehicles that are not only safe but also provide a comfortable ride.']","Metric-Driven Development (MDD) utilizes combined performance metrics to provide clear, measurable goals that guide project outcomes. By focusing on specific metrics, teams can eliminate guesswork and make informed decisions based on data rather than opinions.
|
34 |
+
|
35 |
+
### Key Aspects of MDD:
|
36 |
+
- **Leading Indicators**: Metrics like training loss and code coverage help monitor progress during development.
|
37 |
+
- **Lagging Indicators**: Metrics such as user engagement, revenue, and customer support tickets measure the actual impact of changes made.
|
38 |
+
|
39 |
+
### Examples of Metrics in Practice:
|
40 |
+
- A bakery might track **""Daily Units Sold per Pastry Type""** to optimize baking schedules.
|
41 |
+
- A city planner could use **""Average Commute Time Reduction""** to evaluate the success of new traffic light patterns.
|
42 |
+
- A project manager might measure progress through **""Sprint Velocity""** or **""Percentage of On-Time Task Completions""**.
|
43 |
+
|
44 |
+
These metrics transform ambiguity into clarity, allowing teams to focus on actionable outcomes.
|
45 |
+
|
46 |
+
For more information, you can explore the following links:
|
47 |
+
- [Metric-Driven Development: Make Smarter Decisions, Faster](https://thedataguy.pro/blog/metric-driven-development/)","Metric-Driven Development (MDD) utilizes combined performance metrics by defining clear, measurable goals and basing decisions on data rather than assumptions. This approach ensures that teams receive real-time feedback and can iterate quickly to improve outcomes. Examples of combined performance metrics in practice include a score such as 'Points Scored - (Time Taken * Penalty Factor)' to train AI opponents that balance speed and skill, and a 'Safety & Comfort Score' for autonomous vehicles, which combines factors like smooth acceleration, lane adherence, and deductions for interventions or near-misses. These combined metrics help guide development towards achieving specific, balanced objectives.",0.5,,0.4,0.9751235853132809,0.24999999968749997,0.7857142857142857
|
48 |
+
"How does Ragas provide specialized evaluation metrics for LLMs, and what steps are involved in creating a custom metric to assess technical accuracy in programming explanations?","[""In our next post, we'll explore advanced metrics and customization techniques for specialized evaluation needs.\n\n---\n\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**Part 4: Test Data Generation β _You are here_** \n*Next up in the series:* \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""By extending Ragas with custom metrics, you can create evaluation frameworks that precisely measure what matters most for your LLM applications, leading to more meaningful improvements and better user experiences.\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows."", ""In our next post, we'll delve deeper into specialized evaluation techniques for RAG systems, exploring advanced metrics and evaluation strategies for retrieval-augmented generation applications.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**Part 2: Basic Evaluation Workflow β _You are here_** \n*Next up in the series:* \n**[Part 3: Evaluating RAG Systems](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", ""In our next post, we'll explore how to generate high-quality test datasets for comprehensive RAG evaluation, addressing the common challenge of limited test data.\n\n---\n\n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**Part 3: Evaluating RAG Systems with Ragas β _You are here_** \n*Next up in the series:* \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas/)** \n**[Part 5: Advanced Evaluation Techniques](/blog/advanced-metrics-and-customization-with-ragas)** \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**""]","[""<1-hop>\n\n## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. 
Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable."", '<2-hop>\n\n## Creating Your First Custom Metric\n\nLet\'s create a custom metric that evaluates technical accuracy in programming explanations:\n\n```python\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Optional, Set\nimport typing as t\n\nfrom ragas.metrics.base import MetricWithLLM, SingleTurnMetric\nfrom ragas.prompt import PydanticPrompt\nfrom ragas.metrics import MetricType, MetricOutputType\nfrom pydantic import BaseModel\n\n# Define input/output models for the prompt\nclass TechnicalAccuracyInput(BaseModel):\n question: str\n context: str\n response: str\n programming_language: str = ""python""\n\nclass TechnicalAccuracyOutput(BaseModel):\n score: float\n feedback: str']","Ragas provides specialized evaluation metrics for LLMs by allowing users to extend its framework with custom metrics tailored to specific evaluation needs. This enables the measurement of aspects that are most relevant for particular applications, such as technical accuracy in programming explanations.
|
49 |
+
|
50 |
+
To create a custom metric for assessing technical accuracy, the following steps are typically involved:
|
51 |
+
|
52 |
+
1. **Define the Evaluation Criteria**: Identify what constitutes technical accuracy in the context of programming explanations.
|
53 |
+
2. **Develop the Metric**: Create a formula or algorithm that quantifies the defined criteria.
|
54 |
+
3. **Integrate with Ragas**: Implement the custom metric within the Ragas framework to ensure it can be used in evaluations.
|
55 |
+
4. **Test and Validate**: Run evaluations using the custom metric to ensure it accurately reflects the technical accuracy of programming explanations.
|
56 |
+
|
57 |
+
For more detailed information, you can explore the following links:
|
58 |
+
|
59 |
+
- [Part 1: Introduction to Ragas](https://thedataguy.pro/blog/introduction-to-ragas/)
|
60 |
+
- [Part 2: Basic Evaluation Workflow](https://thedataguy.pro/blog/basic-evaluation-workflow-with-ragas/)
|
61 |
+
- [Part 3: Evaluating RAG Systems](https://thedataguy.pro/blog/evaluating-rag-systems-with-ragas/)
|
62 |
+
- [Part 5: Advanced Metrics and Customization with Ragas](https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/)
|
63 |
+
- [Part 6: Evaluating AI Agents](https://thedataguy.pro/blog/evaluating-ai-agents-with-ragas/)","Ragas is an open-source evaluation framework specifically designed for LLM applications, offering specialized metrics that address challenges unique to LLM-powered systems, such as ensuring factual accuracy, consistency with retrieved context, and appropriate query handling. To create a custom metric for evaluating technical accuracy in programming explanations, Ragas allows developers to define input and output models (for example, using Pydantic BaseModel classes for technical accuracy input and output), and implement the metric logic using its extensible metric classes. This enables tailored evaluation beyond traditional NLP metrics, supporting the needs of high-stakes LLM applications.",0.3333333333333333,0.2727272727272727,0.62,0.9475544872608218,0.11764705875432525,0.0
|
64 |
+
How do observability best practices contribute to building production-ready AI systems?,"[""What excites me most is that my experience in enterprise software development gives me a unique perspective on AI implementation. I understand not just the algorithms and models, but also how to integrate them into robust, production-ready systems that deliver real value.\n\n## The Best of Both Worlds\n\nComing back to AI doesn't mean leaving behind everything I learned in web and enterprise development. Quite the opposite - I believe my background gives me a particular advantage in building AI systems that are:"", '| **Harmfulness** | | β | |\n| **Coherence** | | β | |\n| **Context Relevancy** | | | β |', ""- **Production-ready**: Understanding software engineering best practices helps create AI systems that can operate reliably at scale.\n- **User-focused**: Experience with UX principles ensures AI solutions are designed with actual human users in mind.\n- **Integrated**: Knowledge of enterprise systems makes it easier to connect AI capabilities with existing business processes.\n- **Simplified**: My experience in streamlining complex business processes helps me identify where AI can have the greatest impact through intelligent automation.\n- **Business-oriented**: I understand that AI isn't just about the technologyβit's about solving real business problems and creating measurable value.\n- **Practical**: I focus on practical applications that deliver immediate benefits rather than getting caught up in theoretical possibilities.\n\n## What's Next\n\nAs I return to my AI roots, I'm excited to share this journey with you through this blog. In the coming months, I plan to write about:"", '| **Metric** | **Comprehensive RAG Evaluation** | **Content Quality Evaluation** | **Retrieval Quality Evaluation** |\n|-----------------------------|----------------------------------|---------------------------------|-----------------------------------|\n| **Faithfulness** | β | β | |\n| **Answer Relevancy** | β | β | |\n| **Context Recall** | β | | β |\n| **Context Precision** | β | | β |\n| **Harmfulness** | | β | |']","['<1-hop>\n\n## Best Practices for Observability\n\n1. **Define clear thresholds**: Establish performance baselines and alert thresholds for each metric\n2. **Segment evaluations**: Break down results by query type, data source, or other relevant factors\n3. **Historical tracking**: Maintain historical evaluation data to identify trends and regressions\n4. **Correlation analysis**: Link evaluation metrics to user feedback and business outcomes\n5. **Regular benchmarking**: Periodically evaluate against fixed test sets to ensure consistency\n6. **Alert on regressions**: Implement automated alerts when metrics drop below thresholds\n7. **Contextualize metrics**: Include example failures alongside aggregate metrics for better understanding\n\n## Building a Feedback Loop\n\nThe ultimate goal of evaluation is to drive improvements. 
Establish a feedback loop:', ""<2-hop>\n\n- **Production-ready**: Understanding software engineering best practices helps create AI systems that can operate reliably at scale.\n- **User-focused**: Experience with UX principles ensures AI solutions are designed with actual human users in mind.\n- **Integrated**: Knowledge of enterprise systems makes it easier to connect AI capabilities with existing business processes.\n- **Simplified**: My experience in streamlining complex business processes helps me identify where AI can have the greatest impact through intelligent automation.\n- **Business-oriented**: I understand that AI isn't just about the technologyβit's about solving real business problems and creating measurable value.\n- **Practical**: I focus on practical applications that deliver immediate benefits rather than getting caught up in theoretical possibilities.\n\n## What's Next\n\nAs I return to my AI roots, I'm excited to share this journey with you through this blog. In the coming months, I plan to write about:""]",I don't know.,"Observability best practices, such as defining clear thresholds, segmenting evaluations, maintaining historical tracking, and alerting on regressions, ensure that AI systems are continuously monitored and improved. These practices are essential for creating production-ready AI systems that can operate reliably at scale, as they enable consistent performance evaluation and rapid response to issues.",0.0,1.0,0.0,0.0,0.14285714265306124,1.0
|
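
For readers skimming this diff, here is a small sketch of how the new evaluation export above might be consumed. It is not part of the commit: the file path is taken from the listing above, and the assumption that the trailing numeric columns hold the per-sample Ragas metric scores (with blank cells where a metric produced no score) is illustrative only.

```python
# Minimal sketch (not part of this commit): load the evaluation export and
# average the per-sample metric scores to get a quick view of the run.
import pandas as pd

# Path from the file listing in this commit; adjust if your layout differs.
df = pd.read_csv("evals/rag_eval_2.csv")

# Assumption: the numeric columns appended to each row are the Ragas metric
# scores. Empty cells are parsed as NaN and are skipped by mean() by default.
metric_cols = df.select_dtypes(include="number")
print(metric_cols.mean())
print(f"{len(df)} evaluated samples")
```

Averaging per metric like this is only a coarse signal for comparing eval runs; inspecting individual rows (for example, the "I don't know." responses above) is usually more useful for debugging retrieval gaps.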
evals/testset_2.csv
ADDED
@@ -0,0 +1,11 @@
1 |
+
user_input,reference_contexts,reference,synthesizer_name
|
2 |
+
"How are Large Language Models integrated into modern applications, and why is their performance evaluation considered critical according to the context?","['---\ntitle: ""Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications""\ndate: 2025-04-26T18:00:00-06:00\nlayout: blog\ndescription: ""Explore the essential evaluation framework for LLM applications with Ragas. Learn how to assess performance, ensure accuracy, and improve reliability in Retrieval-Augmented Generation systems.""\ncategories: [""AI"", ""RAG"", ""Evaluation"",""Ragas""]\ncoverImage: ""https://images.unsplash.com/photo-1593642634367-d91a135587b5?q=80&w=1770&auto=format&fit=crop&ixlib=rb-4.0.3""\nreadingTime: 7\npublished: true\n---\n\nAs Large Language Models (LLMs) become fundamental components of modern applications, effectively evaluating their performance becomes increasingly critical. Whether you\'re building a question-answering system, a document retrieval tool, or a conversational agent, you need reliable metrics to assess how well your application performs. This is where Ragas steps in.\n\n## What is Ragas?']","Large Language Models (LLMs) are becoming fundamental components of modern applications, such as question-answering systems, document retrieval tools, and conversational agents. Effectively evaluating their performance is considered increasingly critical to ensure reliable metrics for assessing how well these applications perform.",single_hop_specifc_query_synthesizer
|
3 |
+
"Howw does Ragas help evalute LLM applikations in the helthcare industree, and why is this importent for ensuring akuracy and reliabilitee?","[""## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable.""]","Ragas is an open-source evaluation framework designed for LLM applications, with strengths in Retrieval-Augmented Generation systems. It provides specialized metrics to address challenges unique to LLM-powered systems, such as determining if the application retrieves the right information and if responses are factually accurate and consistent with the retrieved context. Evaluating LLM applications is especially important in healthcare, where accuracy and reliability are critical, because LLMs can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. Proper evaluation with frameworks like Ragas is non-negotiable in healthcare to ensure the system's accuracy and reliability.",single_hop_specifc_query_synthesizer
|
4 |
+
What specialized metrics does Ragas provide for evaluating LLM applications?,"[""Evaluation serves several key purposes:\n- **Quality assurance**: Identify and fix issues before they reach users\n- **Performance tracking**: Monitor how changes impact system performance\n- **Benchmarking**: Compare different approaches objectively\n- **Continuous improvement**: Build feedback loops to enhance your application\n\n## Key Features of Ragas\n\n### π― Specialized Metrics\nRagas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications:\n\n- **Faithfulness**: Measures if the response is factually consistent with the retrieved context\n- **Context Relevancy**: Evaluates if the retrieved information is relevant to the query\n- **Answer Relevancy**: Assesses if the response addresses the user's question\n- **Topic Adherence**: Gauges how well multi-turn conversations stay on topic""]","Ragas offers both LLM-based and computational metrics tailored to evaluate different aspects of LLM applications, including faithfulness, context relevancy, answer relevancy, and topic adherence.",single_hop_specifc_query_synthesizer
|
5 |
+
"me wanna know how LangSmith work with Ragas, like is it for test data or what, and how it help me as LLM app builder, can you tell me all about LangSmith from this info?","[""### π§ͺ Test Data Generation\nCreating high-quality test data is often a bottleneck in evaluation. Ragas helps you generate comprehensive test datasets automatically, saving time and ensuring thorough coverage.\n\n### π Seamless Integrations\nRagas works with popular LLM frameworks and tools:\n- [LangChain](https://www.langchain.com/)\n- [LlamaIndex](https://www.llamaindex.ai/)\n- [Haystack](https://haystack.deepset.ai/)\n- [OpenAI](https://openai.com/)\n\nObservability platforms \n- [Phoenix](https://phoenix.arize.com/)\n- [LangSmith](https://python.langchain.com/docs/introduction/)\n- [Langfuse](https://www.langfuse.com/)\n\n### π Comprehensive Analysis\nBeyond simple scores, Ragas provides detailed insights into your application's strengths and weaknesses, enabling targeted improvements.\n\n## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere's a simple example of evaluating a response using Ragas:""]","LangSmith is listed as one of the observability platforms that Ragas works with. The context does not provide details about how LangSmith specifically functions, but it shows that Ragas integrates with LangSmith to support observability in LLM-powered systems.",single_hop_specifc_query_synthesizer
|
6 |
+
How do I use the OPENAI API key when initializing an LLM for evaluation with Ragas?,"['## Getting Started with Ragas\n\nInstalling Ragas is straightforward:\n\n```bash\nuv init && uv add ragas\n```\n\nHere\'s a simple example of evaluating a response using Ragas:\n\n```python\nfrom ragas.metrics import Faithfulness\nfrom ragas.evaluation import EvaluationDataset\nfrom ragas.dataset_schema import SingleTurnSample\nfrom langchain_openai import ChatOpenAI\nfrom ragas.llms import LangchainLLMWrapper\nfrom langchain_openai import ChatOpenAI\n\n# Initialize the LLM, you are going to new OPENAI API key\nevaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")) \n\n# Your evaluation data\ntest_data = {\n ""user_input"": ""What is the capital of France?"",\n ""retrieved_contexts"": [""Paris is the capital and most populous city of France.""],\n ""response"": ""The capital of France is Paris.""\n}\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor']","To use the OPENAI API key when initializing an LLM for evaluation with Ragas, you need to provide your OPENAI API key during the initialization of the ChatOpenAI model, as shown in the example: evaluator_llm = LangchainLLMWrapper(ChatOpenAI(model=""gpt-4o"")).",single_hop_specifc_query_synthesizer
|
7 |
+
"How does synthetic data generation contribute to the importance of data quantity in improving RAG evaluation, and what paradigm shift does this represent according to Peter Norvig's perspective?","['<1-hop>\n\n## Why and How to Generate Synthetic Data for RAG Evaluation\n\nIn the world of Retrieval-Augmented Generation (RAG) and LLM-powered applications, **synthetic data generation** is a game-changer for rapid iteration and robust evaluation. This blog post explains why synthetic data is essential, and how you can generate it for your own RAG pipelinesβusing modern tools like [RAGAS](https://github.com/explodinggradients/ragas) and [LangSmith](https://smith.langchain.com/).\n\n---\n\n### Why Generate Synthetic Data?\n\n1. **Early Signal, Fast Iteration** \n Real-world data is often scarce or expensive to label. Synthetic data lets you quickly create test sets that mimic real user queries and contexts, so you can evaluate your systemβs performance before deploying to production.\n\n2. **Controlled Complexity** \n You can design synthetic datasets to cover edge cases, multi-hop reasoning, or specific knowledge domainsβensuring your RAG system is robust, not just good at the βeasyβ cases.', '<2-hop>\n\n## The Origin of ""Data is King""\n\nPeter Norvig famously stated, ""We don\'t have better algorithms. We just have more data."" This statement emerged during a time when Google\'s approach to machine translation was yielding surprisingly effective results not through algorithmic innovations, but through the sheer volume of multilingual data they had amassed. \n\nThis perspective represented a paradigm shift. Prior to this, the field had largely focused on crafting ever more sophisticated algorithms, with the assumption that smarter code would yield better results. Norvig\'s insight suggested something different: even relatively simple algorithms could outperform more sophisticated ones when trained on sufficiently large datasets.\n\n## The Business Imperative of Data Ownership']","Synthetic data generation enables rapid creation of test sets that mimic real user queries and contexts, allowing for early evaluation and iteration of RAG systems even when real-world data is scarce or expensive to label. This approach increases the quantity of data available for evaluation, supporting robust system development. According to Peter Norvig's perspective, as described in the context, the importance of data quantity represents a paradigm shift: rather than relying solely on more sophisticated algorithms, having more dataβeven with simpler algorithmsβcan lead to better results. Thus, synthetic data generation aligns with this shift by providing the large datasets necessary to improve system performance.",multi_hop_abstract_query_synthesizer
|
8 |
+
"How does Ragas support the evaluation of both LLM applications and AI agents, and what specialized metrics are introduced for evaluating AI agents?","['<1-hop>\n\n# Create a sample\nsample = SingleTurnSample(**test_data) # Unpack the dictionary into the constructor\n\n# Create metric\nfaithfulness = Faithfulness(llm=evaluator_llm)\n# Calculate the score\nresult = await faithfulness.single_turn_ascore(sample)\nprint(f""Faithfulness score: {result}"")\n```\n\n> π‘ **Try it yourself:** \n> Explore the hands-on notebook for this workflow: \n> [01_Introduction_to_Ragas](https://github.com/mafzaal/intro-to-ragas/blob/master/01_Introduction_to_Ragas.ipynb)\n\n## What\'s Coming in This Blog Series\n\nThis introduction is just the beginning. In the upcoming posts, we\'ll dive deeper into all aspects of evaluating LLM applications with Ragas:\n\n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \nWe\'ll explore each metric in detail, explaining when and how to use them effectively.', ""<2-hop>\n\nIn our next post, we'll explore how to integrate Ragas with popular frameworks and observability tools for seamless evaluation workflows.\n\n---\n \n**[Part 1: Introduction to Ragas: The Essential Evaluation Framework for LLM Applications](/blog/introduction-to-ragas/)** \n**[Part 2: Basic Evaluation Workflow](/blog/basic-evaluation-workflow-with-ragas/)** \n**[Part 3: Evaluating RAG Systems with Ragas](/blog/evaluating-rag-systems-with-ragas/)** \n**[Part 4: Test Data Generation](/blog/generating-test-data-with-ragas)** \n**Part 5: Advanced Evaluation Techniques β _You are here_** \n*Next up in the series:* \n**[Part 6: Evaluating AI Agents](/blog/evaluating-ai-agents-with-ragas/)** \n**[Part 7: Integrations and Observability](/blog/integrations-and-observability-with-ragas/)** \n**[Part 8: Building Feedback Loops](/blog/building-feedback-loops-with-ragas/)**"", '<3-hop>\n\n---\ntitle: ""Part 6: Evaluating AI Agents: Beyond Simple Answers with Ragas""\ndate: 2025-04-28T06:00:00-06:00\nlayout: blog\ndescription: ""Learn how to evaluate complex AI agents using Ragas\' specialized metrics for goal accuracy, tool call accuracy, and topic adherence to build more reliable and effective agent-based applications.""\ncategories: [""AI"", ""Agents"", ""Evaluation"", ""Ragas"", ""LLM""]\ncoverImage: ""/images/ai_agent_evaluation.png"" \nreadingTime: 8\npublished: true\n---\n\nIn our previous posts, we\'ve explored how Ragas evaluates RAG systems and enables custom metrics for specialized applications. As LLMs evolve beyond simple question-answering to become powerful AI agents, evaluation needs have grown more sophisticated too. In this post, we\'ll explore Ragas\' specialized metrics for evaluating AI agents that engage in multi-turn interactions, use tools, and work toward specific goals.\n\n## The Challenge of Evaluating AI Agents']","Ragas supports the evaluation of LLM applications by providing metrics such as faithfulness, which can be applied to single-turn samples as shown in the introductory workflow. For AI agents, Ragas introduces specialized metrics designed to assess more complex behaviors, including goal accuracy, tool call accuracy, and topic adherence. These metrics enable the evaluation of AI agents that perform multi-turn interactions, utilize tools, and work toward specific goals, thus addressing the advanced requirements of evaluating agent-based applications.",multi_hop_abstract_query_synthesizer
|
9 |
+
"How does Metric-Driven Development (MDD) utilize combined performance metrics to guide project outcomes, and what are some examples of such metrics in practice?","['<1-hop>\n\n## What Exactly is Metric-Driven Development?\n\nMetric-Driven Development (MDD) is a simple but effective framework where teams:\n\n1. **Define Clear, Measurable Goals:** Set specific numerical targets (e.g., ""Increase user sign-ups by 20% this quarter"").\n2. **Base Decisions on Data:** Rely on evidence and measurements, not just opinions or assumptions.\n3. **Iterate and Learn Quickly:** Continuously measure the impact of changes to see what works and what doesn\'t.\n\nThink of MDD as a **GPS for your project**. Without clear metrics, you\'re driving in the fog, hoping you\'re heading in the right direction. With MDD, you get real-time feedback, ensuring you\'re moving towards your destination efficiently.\n\n## Why Teams Struggle Without Clear Metrics\n\nWithout a metric-driven approach, teams often fall into common traps:', '<2-hop>\n\n* **Metric:** A combined score, e.g., `Points Scored - (Time Taken * Penalty Factor)`.\n * **Impact:** Trains AI opponents that are challenging but fair, balancing speed and skill.\n* **Autonomous Vehicles: Safety & Comfort Score**\n * **Metric:** Combination of factors like smooth acceleration/braking, lane adherence, and deductions for interventions or near-misses.\n * **Impact:** Guides development towards vehicles that are not only safe but also provide a comfortable ride.']","Metric-Driven Development (MDD) utilizes combined performance metrics by defining clear, measurable goals and basing decisions on data rather than assumptions. This approach ensures that teams receive real-time feedback and can iterate quickly to improve outcomes. Examples of combined performance metrics in practice include a score such as 'Points Scored - (Time Taken * Penalty Factor)' to train AI opponents that balance speed and skill, and a 'Safety & Comfort Score' for autonomous vehicles, which combines factors like smooth acceleration, lane adherence, and deductions for interventions or near-misses. These combined metrics help guide development towards achieving specific, balanced objectives.",multi_hop_abstract_query_synthesizer
|
10 |
+
"How does Ragas provide specialized evaluation metrics for LLMs, and what steps are involved in creating a custom metric to assess technical accuracy in programming explanations?","[""<1-hop>\n\n## What is Ragas?\n\n[Ragas](https://docs.ragas.io/en/stable/) is an open-source evaluation framework specifically designed for LLM applications, with particular strengths in Retrieval-Augmented Generation (RAG) systems. Unlike traditional NLP evaluation methods, Ragas provides specialized metrics that address the unique challenges of LLM-powered systems.\n\nAt its core, Ragas helps answer crucial questions:\n- Is my application retrieving the right information?\n- Are the responses factually accurate and consistent with the retrieved context?\n- Does the system appropriately address the user's query?\n- How well does my application handle multi-turn conversations?\n\n## Why Evaluate LLM Applications?\n\nLLMs are powerful but imperfect. They can hallucinate facts, misinterpret queries, or generate convincing but incorrect responses. For applications where accuracy and reliability matterβlike healthcare, finance, or educationβproper evaluation is non-negotiable."", '<2-hop>\n\n## Creating Your First Custom Metric\n\nLet\'s create a custom metric that evaluates technical accuracy in programming explanations:\n\n```python\nfrom dataclasses import dataclass, field\nfrom typing import Dict, Optional, Set\nimport typing as t\n\nfrom ragas.metrics.base import MetricWithLLM, SingleTurnMetric\nfrom ragas.prompt import PydanticPrompt\nfrom ragas.metrics import MetricType, MetricOutputType\nfrom pydantic import BaseModel\n\n# Define input/output models for the prompt\nclass TechnicalAccuracyInput(BaseModel):\n question: str\n context: str\n response: str\n programming_language: str = ""python""\n\nclass TechnicalAccuracyOutput(BaseModel):\n score: float\n feedback: str']","Ragas is an open-source evaluation framework specifically designed for LLM applications, offering specialized metrics that address challenges unique to LLM-powered systems, such as ensuring factual accuracy, consistency with retrieved context, and appropriate query handling. To create a custom metric for evaluating technical accuracy in programming explanations, Ragas allows developers to define input and output models (for example, using Pydantic BaseModel classes for technical accuracy input and output), and implement the metric logic using its extensible metric classes. This enables tailored evaluation beyond traditional NLP metrics, supporting the needs of high-stakes LLM applications.",multi_hop_abstract_query_synthesizer
|
11 |
+
How do observability best practices contribute to building production-ready AI systems?,"['<1-hop>\n\n## Best Practices for Observability\n\n1. **Define clear thresholds**: Establish performance baselines and alert thresholds for each metric\n2. **Segment evaluations**: Break down results by query type, data source, or other relevant factors\n3. **Historical tracking**: Maintain historical evaluation data to identify trends and regressions\n4. **Correlation analysis**: Link evaluation metrics to user feedback and business outcomes\n5. **Regular benchmarking**: Periodically evaluate against fixed test sets to ensure consistency\n6. **Alert on regressions**: Implement automated alerts when metrics drop below thresholds\n7. **Contextualize metrics**: Include example failures alongside aggregate metrics for better understanding\n\n## Building a Feedback Loop\n\nThe ultimate goal of evaluation is to drive improvements. Establish a feedback loop:', ""<2-hop>\n\n- **Production-ready**: Understanding software engineering best practices helps create AI systems that can operate reliably at scale.\n- **User-focused**: Experience with UX principles ensures AI solutions are designed with actual human users in mind.\n- **Integrated**: Knowledge of enterprise systems makes it easier to connect AI capabilities with existing business processes.\n- **Simplified**: My experience in streamlining complex business processes helps me identify where AI can have the greatest impact through intelligent automation.\n- **Business-oriented**: I understand that AI isn't just about the technologyβit's about solving real business problems and creating measurable value.\n- **Practical**: I focus on practical applications that deliver immediate benefits rather than getting caught up in theoretical possibilities.\n\n## What's Next\n\nAs I return to my AI roots, I'm excited to share this journey with you through this blog. In the coming months, I plan to write about:""]","Observability best practices, such as defining clear thresholds, segmenting evaluations, maintaining historical tracking, and alerting on regressions, ensure that AI systems are continuously monitored and improved. These practices are essential for creating production-ready AI systems that can operate reliably at scale, as they enable consistent performance evaluation and rapid response to issues.",multi_hop_abstract_query_synthesizer
|
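The rows above are the raw testset records. As a hedged sketch (not part of this commit), such a generated CSV can be loaded back into a Ragas `EvaluationDataset` for scoring; the path `evals/testset_2.csv`, the column layout, and the `from_list` helper are assumptions based on the rows shown here.

```python
# Hedged sketch: load a generated testset CSV back into a Ragas EvaluationDataset.
# Assumptions: the file is evals/testset_2.csv and its columns match the rows above
# (user_input, reference_contexts, response, reference, synthesizer_name).
import ast

import pandas as pd
from ragas import EvaluationDataset

df = pd.read_csv("evals/testset_2.csv")

# reference_contexts is stored as a stringified Python list in the CSV,
# so parse it back into a real list of strings.
df["reference_contexts"] = df["reference_contexts"].apply(ast.literal_eval)

# Drop generator metadata that is not part of an evaluation sample.
records = df.drop(columns=["synthesizer_name"]).to_dict(orient="records")

eval_dataset = EvaluationDataset.from_list(records)
print(f"Loaded {len(records)} samples")
```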
py-src/lets_talk/agent.py
CHANGED
@@ -1,4 +1,4 @@
|
|
1 |
-
|
2 |
from typing import TypedDict, Annotated, Dict, Any, Literal, Union, cast, List, Optional
|
3 |
from langchain_openai import ChatOpenAI
|
4 |
from langchain_core.tools import Tool
|
@@ -6,6 +6,7 @@ from langchain_core.messages import BaseMessage, SystemMessage, HumanMessage
|
|
6 |
from langchain_core.documents import Document
|
7 |
from langgraph.graph.message import add_messages
|
8 |
from langgraph.graph import StateGraph, END
|
|
|
9 |
from langgraph.prebuilt import ToolNode
|
10 |
from lets_talk.models import RAGQueryInput
|
11 |
from lets_talk.config import LLM_MODEL, LLM_TEMPERATURE
|
@@ -14,7 +15,7 @@ from datetime import datetime
|
|
14 |
import lets_talk.rag as rag
|
15 |
|
16 |
|
17 |
-
class ResearchAgentState(TypedDict):
|
18 |
"""
|
19 |
State definition for the Research Agent using LangGraph.
|
20 |
|
@@ -25,13 +26,14 @@ class ResearchAgentState(TypedDict):
|
|
25 |
"""
|
26 |
messages: Annotated[list[BaseMessage], add_messages]
|
27 |
context: str
|
|
|
|
|
|
|
28 |
|
29 |
|
30 |
rag_prompt_template = """\
|
31 |
You are a helpful assistant that answers questions based on the context provided.
|
32 |
Generate a concise answer to the question in markdown format and include a list of relevant links to the context.
|
33 |
-
Use links from context to help user to navigate to to find more information.
|
34 |
-
|
35 |
You have access to the following information:
|
36 |
|
37 |
Context:
|
@@ -215,6 +217,58 @@ def parse_output(input_state: Dict[str, Any]) -> str:
|
|
215 |
return "I encountered an error while processing your request."
|
216 |
|
217 |
|
218 |
def build_agent() -> StateGraph:
|
219 |
|
220 |
tools = create_search_tools(5)
|
@@ -240,7 +294,7 @@ def build_agent() -> StateGraph:
|
|
240 |
tool_node = ToolNode(tools)
|
241 |
|
242 |
# Initialize the graph with our state type
|
243 |
-
uncompiled_graph = StateGraph(ResearchAgentState)
|
244 |
|
245 |
# Define model node factory with bound model
|
246 |
def call_model_node(state):
|
|
|
1 |
+
from operator import itemgetter
|
2 |
from typing import TypedDict, Annotated, Dict, Any, Literal, Union, cast, List, Optional
|
3 |
from langchain_openai import ChatOpenAI
|
4 |
from langchain_core.tools import Tool
|
|
|
6 |
from langchain_core.documents import Document
|
7 |
from langgraph.graph.message import add_messages
|
8 |
from langgraph.graph import StateGraph, END
|
9 |
+
from langchain_core.prompts import ChatPromptTemplate
|
10 |
from langgraph.prebuilt import ToolNode
|
11 |
from lets_talk.models import RAGQueryInput
|
12 |
from lets_talk.config import LLM_MODEL, LLM_TEMPERATURE
|
|
|
15 |
import lets_talk.rag as rag
|
16 |
|
17 |
|
18 |
+
class InputState(TypedDict):
|
19 |
"""
|
20 |
State definition for the Research Agent using LangGraph.
|
21 |
|
|
|
26 |
"""
|
27 |
messages: Annotated[list[BaseMessage], add_messages]
|
28 |
context: str
|
29 |
+
question: str
|
30 |
+
is_rude: bool = False
|
31 |
+
documents: Optional[list[Document]]
|
32 |
|
33 |
|
34 |
rag_prompt_template = """\
|
35 |
You are a helpful assistant that answers questions based on the context provided.
|
36 |
Generate a concise answer to the question in markdown format and include a list of relevant links to the context.
|
|
|
|
|
37 |
You have access to the following information:
|
38 |
|
39 |
Context:
|
|
|
217 |
return "I encountered an error while processing your request."
|
218 |
|
219 |
|
220 |
+
tone_check_prompt_template = """\
|
221 |
+
Check if the input query is rude, derogatory, disrespectful, or negative, and respond with "YES" or "NO".
|
222 |
+
|
223 |
+
Query:
|
224 |
+
{query}
|
225 |
+
# Output Format
|
226 |
+
|
227 |
+
Respond only with "YES" or "NO".
|
228 |
+
"""
|
229 |
+
|
230 |
+
def check_query_tone(state: Dict[str, Any]) -> Dict[str, str]:
|
231 |
+
"""
|
232 |
+
Check the tone of the user's query and adjust the state accordingly.
|
233 |
+
|
234 |
+
Args:
|
235 |
+
state: Current state containing messages and context
|
236 |
+
Returns:
|
237 |
+
Updated state with tone information
|
238 |
+
"""
|
239 |
+
last_message = state["messages"][-1]
|
240 |
+
|
241 |
+
if isinstance(last_message, HumanMessage):
|
242 |
+
# Check the tone of the last message
|
243 |
+
state["is_rude"] = check_query_rudeness(last_message.content)
|
244 |
+
|
245 |
+
return state
|
246 |
+
|
247 |
+
|
248 |
+
def check_query_rudeness(query: str) -> bool:
|
249 |
+
"""
|
250 |
+
Check if the query is rude or negative.
|
251 |
+
|
252 |
+
Args:
|
253 |
+
query: The user's query
|
254 |
+
Returns:
|
255 |
+
True if the query is rude, False otherwise
|
256 |
+
"""
|
257 |
+
|
258 |
+
tone_prompt = ChatPromptTemplate.from_template(tone_check_prompt_template)
|
259 |
+
llm = ChatOpenAI(model=LLM_MODEL, temperature=LLM_TEMPERATURE)
|
260 |
+
|
261 |
+
# Create chain
|
262 |
+
tone_chain = (
|
263 |
+
{"query": itemgetter("question")}
|
264 |
+
| tone_prompt
|
265 |
+
| llm
|
266 |
+
)
|
267 |
+
response = tone_chain.invoke({"query": query})
|
268 |
+
return response.content.strip().lower() == "yes"
|
269 |
+
|
270 |
+
|
271 |
+
|
272 |
def build_agent() -> StateGraph:
|
273 |
|
274 |
tools = create_search_tools(5)
|
|
|
294 |
tool_node = ToolNode(tools)
|
295 |
|
296 |
# Initialize the graph with our state type
|
297 |
+
uncompiled_graph = StateGraph(InputState)
|
298 |
|
299 |
# Define model node factory with bound model
|
300 |
def call_model_node(state):
|
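The hunk above adds the tone check, but the updated `build_agent` wiring is not fully visible in this diff. A minimal sketch, assuming node names `check_tone`, `call_model`, and a hypothetical `polite_reply` node (none of which are confirmed by the diff), of how the `is_rude` flag could drive a conditional edge in the LangGraph `StateGraph`:

```python
# Hedged sketch only: the node names and the polite_reply behaviour are assumptions,
# not the wiring actually used in build_agent().
from langchain_core.messages import AIMessage

def route_on_tone(state: dict) -> str:
    """Pick the next node based on the is_rude flag set by check_query_tone."""
    return "polite_reply" if state.get("is_rude") else "call_model"

def polite_reply_node(state: dict) -> dict:
    """Hypothetical node that answers rude queries without running retrieval."""
    return {"messages": [AIMessage(content="Happy to help if we keep things respectful. What would you like to know?")]}

# Possible wiring (uncompiled_graph as created in build_agent):
# uncompiled_graph.add_node("check_tone", check_query_tone)
# uncompiled_graph.add_node("polite_reply", polite_reply_node)
# uncompiled_graph.set_entry_point("check_tone")
# uncompiled_graph.add_conditional_edges(
#     "check_tone",
#     route_on_tone,
#     {"polite_reply": "polite_reply", "call_model": "call_model"},
# )
# uncompiled_graph.add_edge("polite_reply", END)
```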
py-src/notebooks/05_SDG_Eval.ipynb
CHANGED
The diff for this file is too large to render.
See raw diff
|
|
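Since the 05_SDG_Eval.ipynb diff cannot be rendered, here is a minimal sketch of the kind of Ragas test data generation that produces a testset with a `synthesizer_name` column like the rows above. The ragas 0.2-style API, the `docs` variable, and the output path are assumptions, not the notebook's actual code.

```python
# Hedged sketch of synthetic test data generation with Ragas (not the notebook's code).
# Assumes `docs` is a list of LangChain Documents built from the blog posts and that
# an OpenAI API key is configured in the environment.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.embeddings import LangchainEmbeddingsWrapper
from ragas.llms import LangchainLLMWrapper
from ragas.testset import TestsetGenerator

generator_llm = LangchainLLMWrapper(ChatOpenAI(model="gpt-4o"))
generator_embeddings = LangchainEmbeddingsWrapper(OpenAIEmbeddings())

generator = TestsetGenerator(llm=generator_llm, embedding_model=generator_embeddings)

# testset = generator.generate_with_langchain_docs(docs, testset_size=10)
# testset.to_pandas().to_csv("evals/testset_2.csv", index=False)
```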
py-src/notebooks/07_Fine_Tuning_Dataset.ipynb
CHANGED
@@ -37,19 +37,18 @@
|
|
37 |
},
|
38 |
{
|
39 |
"cell_type": "code",
|
40 |
-
"execution_count":
|
41 |
"id": "15e97530",
|
42 |
"metadata": {},
|
43 |
"outputs": [],
|
44 |
"source": [
|
45 |
"import nest_asyncio\n",
|
46 |
-
"\n",
|
47 |
"nest_asyncio.apply()"
|
48 |
]
|
49 |
},
|
50 |
{
|
51 |
"cell_type": "code",
|
52 |
-
"execution_count":
|
53 |
"id": "b4f2ddc0",
|
54 |
"metadata": {},
|
55 |
"outputs": [],
|
@@ -60,7 +59,7 @@
|
|
60 |
},
|
61 |
{
|
62 |
"cell_type": "code",
|
63 |
-
"execution_count":
|
64 |
"id": "123779af",
|
65 |
"metadata": {},
|
66 |
"outputs": [
|
@@ -68,7 +67,7 @@
|
|
68 |
"name": "stderr",
|
69 |
"output_type": "stream",
|
70 |
"text": [
|
71 |
-
"100%|ββββββββββ| 14/14 [00:00<00:00,
|
72 |
]
|
73 |
},
|
74 |
{
|
@@ -95,8 +94,8 @@
|
|
95 |
},
|
96 |
{
|
97 |
"cell_type": "code",
|
98 |
-
"execution_count":
|
99 |
-
"id": "
|
100 |
"metadata": {},
|
101 |
"outputs": [],
|
102 |
"source": [
|
@@ -124,67 +123,10 @@
|
|
124 |
"question_generation_chain = qa_prompt_template | qa_chat_model"
|
125 |
]
|
126 |
},
|
127 |
-
{
|
128 |
-
"cell_type": "code",
|
129 |
-
"execution_count": 10,
|
130 |
-
"id": "adb3ae7b",
|
131 |
-
"metadata": {},
|
132 |
-
"outputs": [],
|
133 |
-
"source": [
|
134 |
-
"context = split_docs[0].page_content\n",
|
135 |
-
"n_questions = 3\n",
|
136 |
-
"response = question_generation_chain.invoke({\"context\": context, \"n_questions\": n_questions})\n"
|
137 |
-
]
|
138 |
-
},
|
139 |
-
{
|
140 |
-
"cell_type": "code",
|
141 |
-
"execution_count": 12,
|
142 |
-
"id": "6df35cfd",
|
143 |
-
"metadata": {},
|
144 |
-
"outputs": [
|
145 |
-
{
|
146 |
-
"data": {
|
147 |
-
"text/plain": [
|
148 |
-
"'1. What is the primary purpose of the Ragas evaluation framework in LLM applications? \\n2. Why is it important to have reliable metrics when assessing the performance of LLM-based systems? \\n3. In what types of applications can Ragas be used to evaluate performance?'"
|
149 |
-
]
|
150 |
-
},
|
151 |
-
"execution_count": 12,
|
152 |
-
"metadata": {},
|
153 |
-
"output_type": "execute_result"
|
154 |
-
}
|
155 |
-
],
|
156 |
-
"source": [
|
157 |
-
"response.content"
|
158 |
-
]
|
159 |
-
},
|
160 |
{
|
161 |
"cell_type": "code",
|
162 |
"execution_count": null,
|
163 |
-
"id": "
|
164 |
-
"metadata": {},
|
165 |
-
"outputs": [
|
166 |
-
{
|
167 |
-
"name": "stdout",
|
168 |
-
"output_type": "stream",
|
169 |
-
"text": [
|
170 |
-
"Extracted questions:\n",
|
171 |
-
"1 - What is the primary purpose of the Ragas evaluation framework in LLM applications?\n",
|
172 |
-
"2 - Why is it important to have reliable metrics when assessing the performance of LLM-based systems?\n",
|
173 |
-
"3 - In what types of applications can Ragas be used to evaluate performance?\n"
|
174 |
-
]
|
175 |
-
}
|
176 |
-
],
|
177 |
-
"source": [
|
178 |
-
"questions = extract_questions(response.content)\n",
|
179 |
-
"print(\"Extracted questions:\")\n",
|
180 |
-
"for i, question in enumerate(questions):\n",
|
181 |
-
" print(f\"{i + 1}. {question}\")"
|
182 |
-
]
|
183 |
-
},
|
184 |
-
{
|
185 |
-
"cell_type": "code",
|
186 |
-
"execution_count": 19,
|
187 |
-
"id": "e78f93b8",
|
188 |
"metadata": {},
|
189 |
"outputs": [],
|
190 |
"source": [
|
@@ -192,7 +134,7 @@
|
|
192 |
"import asyncio\n",
|
193 |
"\n",
|
194 |
"\n",
|
195 |
-
"def extract_questions(response_text):\n",
|
196 |
" # Split the response text into lines\n",
|
197 |
" lines = response_text.strip().split('\\n')\n",
|
198 |
"\n",
|
@@ -207,49 +149,79 @@
|
|
207 |
"\n",
|
208 |
" return extracted_questions\n",
|
209 |
"\n",
|
210 |
-
"
|
211 |
-
"\n",
|
212 |
-
"\n",
|
213 |
-
"\n",
|
214 |
-
"async def create_questions(documents, n_questions):\n",
|
215 |
" question_set = []\n",
|
216 |
" \n",
|
217 |
-
"\n",
|
218 |
" for doc in tqdm.tqdm(documents):\n",
|
219 |
" \n",
|
220 |
" context = doc.page_content\n",
|
221 |
"\n",
|
222 |
" # Generate questions using the question generation chain\n",
|
223 |
-
" response = await
|
224 |
" \"context\": context,\n",
|
225 |
" \"n_questions\": n_questions\n",
|
226 |
" })\n",
|
227 |
"\n",
|
228 |
-
" questions = extract_questions(response.content)\n",
|
229 |
" \n",
|
230 |
" for i, question in enumerate(questions):\n",
|
231 |
" questions.append({\"question\":question, \"context\": context})\n",
|
232 |
-
" \n",
|
233 |
-
"\n",
|
234 |
-
" \n",
|
235 |
-
"\n",
|
236 |
" return question_set"
|
237 |
]
|
238 |
},
|
239 |
{
|
240 |
"cell_type": "code",
|
241 |
-
"execution_count":
|
242 |
-
"id": "
|
243 |
"metadata": {},
|
244 |
-
"outputs": [
|
245 |
"source": [
|
246 |
-
"
|
|
|
|
|
|
|
|
|
|
|
|
|
247 |
]
|
248 |
},
|
249 |
{
|
250 |
"cell_type": "code",
|
251 |
"execution_count": null,
|
252 |
-
"id": "
|
253 |
"metadata": {},
|
254 |
"outputs": [
|
255 |
{
|
@@ -271,7 +243,9 @@
|
|
271 |
]
|
272 |
}
|
273 |
],
|
274 |
-
"source": [
|
|
|
|
|
275 |
}
|
276 |
],
|
277 |
"metadata": {
|
|
|
37 |
},
|
38 |
{
|
39 |
"cell_type": "code",
|
40 |
+
"execution_count": 2,
|
41 |
"id": "15e97530",
|
42 |
"metadata": {},
|
43 |
"outputs": [],
|
44 |
"source": [
|
45 |
"import nest_asyncio\n",
|
|
|
46 |
"nest_asyncio.apply()"
|
47 |
]
|
48 |
},
|
49 |
{
|
50 |
"cell_type": "code",
|
51 |
+
"execution_count": 3,
|
52 |
"id": "b4f2ddc0",
|
53 |
"metadata": {},
|
54 |
"outputs": [],
|
|
|
59 |
},
|
60 |
{
|
61 |
"cell_type": "code",
|
62 |
+
"execution_count": 4,
|
63 |
"id": "123779af",
|
64 |
"metadata": {},
|
65 |
"outputs": [
|
|
|
67 |
"name": "stderr",
|
68 |
"output_type": "stream",
|
69 |
"text": [
|
70 |
+
"100%|ββββββββββ| 14/14 [00:00<00:00, 3411.39it/s]"
|
71 |
]
|
72 |
},
|
73 |
{
|
|
|
94 |
},
|
95 |
{
|
96 |
"cell_type": "code",
|
97 |
+
"execution_count": 5,
|
98 |
+
"id": "0b742838",
|
99 |
"metadata": {},
|
100 |
"outputs": [],
|
101 |
"source": [
|
|
|
123 |
"question_generation_chain = qa_prompt_template | qa_chat_model"
|
124 |
]
|
125 |
},
|
126 |
{
|
127 |
"cell_type": "code",
|
128 |
"execution_count": null,
|
129 |
+
"id": "5488c3d3",
|
130 |
"metadata": {},
|
131 |
"outputs": [],
|
132 |
"source": [
|
|
|
134 |
"import asyncio\n",
|
135 |
"\n",
|
136 |
"\n",
|
137 |
+
"def extract_questions(response_text,n_questions):\n",
|
138 |
" # Split the response text into lines\n",
|
139 |
" lines = response_text.strip().split('\\n')\n",
|
140 |
"\n",
|
|
|
149 |
"\n",
|
150 |
" return extracted_questions\n",
|
151 |
"\n",
|
152 |
+
"async def create_questions(documents, n_questions, chain):\n",
|
|
|
|
|
|
|
|
|
153 |
" question_set = []\n",
|
154 |
" \n",
|
|
|
155 |
" for doc in tqdm.tqdm(documents):\n",
|
156 |
" \n",
|
157 |
" context = doc.page_content\n",
|
158 |
"\n",
|
159 |
" # Generate questions using the question generation chain\n",
|
160 |
+
" response = await chain.ainvoke({\n",
|
161 |
" \"context\": context,\n",
|
162 |
" \"n_questions\": n_questions\n",
|
163 |
" })\n",
|
164 |
"\n",
|
165 |
+
" questions = extract_questions(response.content,n_questions)\n",
|
166 |
" \n",
|
167 |
" for i, question in enumerate(questions):\n",
|
168 |
" questions.append({\"question\":question, \"context\": context})\n",
|
|
|
|
|
|
|
|
|
169 |
" return question_set"
|
170 |
]
|
171 |
},
|
172 |
{
|
173 |
"cell_type": "code",
|
174 |
+
"execution_count": 9,
|
175 |
+
"id": "adb3ae7b",
|
176 |
"metadata": {},
|
177 |
+
"outputs": [
|
178 |
+
{
|
179 |
+
"name": "stdout",
|
180 |
+
"output_type": "stream",
|
181 |
+
"text": [
|
182 |
+
"Extracted questions:\n",
|
183 |
+
"1. What is the primary purpose of the Ragas evaluation framework in LLM applications?\n",
|
184 |
+
"2. Why is it important to have reliable metrics when assessing the performance of LLM-based systems?\n",
|
185 |
+
"3. What types of applications can benefit from using the Ragas framework for evaluation?\n"
|
186 |
+
]
|
187 |
+
}
|
188 |
+
],
|
189 |
"source": [
|
190 |
+
"context = split_docs[0].page_content\n",
|
191 |
+
"n_questions = 3\n",
|
192 |
+
"response = question_generation_chain.invoke({\"context\": context, \"n_questions\": n_questions})\n",
|
193 |
+
"questions = extract_questions(response.content, n_questions)\n",
|
194 |
+
"print(\"Extracted questions:\")\n",
|
195 |
+
"for i, question in enumerate(questions):\n",
|
196 |
+
" print(f\"{i + 1}. {question}\")"
|
197 |
]
|
198 |
},
|
199 |
{
|
200 |
"cell_type": "code",
|
201 |
"execution_count": null,
|
202 |
+
"id": "7c4a75f9",
|
203 |
+
"metadata": {},
|
204 |
+
"outputs": [
|
205 |
+
{
|
206 |
+
"data": {
|
207 |
+
"text/plain": [
|
208 |
+
"2"
|
209 |
+
]
|
210 |
+
},
|
211 |
+
"execution_count": 12,
|
212 |
+
"metadata": {},
|
213 |
+
"output_type": "execute_result"
|
214 |
+
}
|
215 |
+
],
|
216 |
+
"source": [
|
217 |
+
"documents = split_docs[:2]\n",
|
218 |
+
"len(documents)"
|
219 |
+
]
|
220 |
+
},
|
221 |
+
{
|
222 |
+
"cell_type": "code",
|
223 |
+
"execution_count": 16,
|
224 |
+
"id": "b1ece53b",
|
225 |
"metadata": {},
|
226 |
"outputs": [
|
227 |
{
|
|
|
243 |
]
|
244 |
}
|
245 |
],
|
246 |
+
"source": [
|
247 |
+
"ds = await create_questions(documents=docs, n_questions=3, chain=question_generation_chain)"
|
248 |
+
]
|
249 |
}
|
250 |
],
|
251 |
"metadata": {
|