{ "cells": [ { "cell_type": "code", "execution_count": 1, "id": "c5fe74d8", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Adding package root to sys.path: /home/mafzaal/source/lets-talk/py-src\n", "Current notebook directory: /home/mafzaal/source/lets-talk/py-src/notebooks\n", "Project root: /home/mafzaal/source/lets-talk\n" ] } ], "source": [ "import sys\n", "import os\n", "\n", "# Add the project root to the Python path\n", "package_root = os.path.abspath(os.path.join(os.getcwd(), \"../\"))\n", "print(f\"Adding package root to sys.path: {package_root}\")\n", "if package_root not in sys.path:\n", "\tsys.path.append(package_root)\n", "\n", "\n", "notebook_dir = os.getcwd()\n", "print(f\"Current notebook directory: {notebook_dir}\")\n", "# change to the directory to the root of the project\n", "project_root = os.path.abspath(os.path.join(os.getcwd(), \"../../\"))\n", "print(f\"Project root: {project_root}\")\n", "os.chdir(project_root)" ] }, { "cell_type": "code", "execution_count": 3, "id": "1168fdc5", "metadata": {}, "outputs": [], "source": [ "import nest_asyncio\n", "nest_asyncio.apply()" ] }, { "cell_type": "code", "execution_count": 4, "id": "12ecd1db", "metadata": {}, "outputs": [], "source": [ "import lets_talk.chains as chains\n", "import lets_talk.prompts as prompts" ] }, { "cell_type": "code", "execution_count": 40, "id": "1b6bbe57", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 40, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# hot reload the module\n", "import importlib\n", "importlib.reload(chains)\n", "importlib.reload(prompts)\n" ] }, { "cell_type": "code", "execution_count": 6, "id": "65e5ea03", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "False" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "reponse = chains.tone_check_chain.invoke({\"question\": \"I am so happy to be here!\"})\n", "reponse.content.lower() == \"yes\"" ] }, { "cell_type": "code", "execution_count": 7, "id": "119cf326", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "True" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "reponse = chains.tone_check_chain.invoke({\"question\": \"Go to hell!\"})\n", "reponse.content.lower() == \"yes\"" ] }, { "cell_type": "code", "execution_count": 18, "id": "ada70fe7", "metadata": {}, "outputs": [], "source": [ "from lets_talk.rag import rag_chain\n", "reponse = rag_chain.invoke({\"question\": \"Who is the data guy?\"})" ] }, { "cell_type": "code", "execution_count": 20, "id": "5d878cf2", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"The Data Guy refers to Muhammad Afzaal, a data science expert and the author of the blog at [thedataguy.pro](https://thedataguy.pro). 
His work focuses on various topics in data science, AI evaluation, RAG systems, and metric-driven development, providing practical insights and frameworks for implementing these concepts effectively.\\n\\nIf you're interested in specific topics such as RAG systems, AI research agents, or data strategy, feel free to ask!\"" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "response[\"response\"].content" ] }, { "cell_type": "code", "execution_count": 13, "id": "9689e103", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I don't know the speed of light, but I can help you with topics related to data science, AI evaluation, RAG systems, and more. If you're interested in understanding how to evaluate AI agents or implement RAG systems, feel free to ask! You can also explore more on these topics at [TheDataGuy's blog](https://thedataguy.pro).\"" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "# an off-topic question: the RAG chain should decline and redirect to blog topics\n", "from lets_talk.rag import rag_chain\n", "response = rag_chain.invoke({\"question\": \"What is speed of light!\"})\n", "response[\"response\"].content" ] }, { "cell_type": "code", "execution_count": 8, "id": "80979f2c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I can sense that you're feeling really frustrated right now, and that's completely valid. We all have moments like that. 🌧️ \\n\\nIf there's something specific on your mind, I'm here to listen and help in any way I can. Let's turn this around and find a brighter perspective together! 🌈\"" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from lets_talk.chains import rude_query_answer_chain\n", "response = rude_query_answer_chain.invoke({\"question\": \"Go to hell!\"})\n", "\n", "response.content" ] }, { "cell_type": "code", "execution_count": 10, "id": "21a54913", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I understand that you might be feeling frustrated or disappointed, and that's completely valid. It's okay to express those feelings! Let's focus on finding something positive together. What’s something that brings you joy or makes you smile? 🌈\"" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from lets_talk.chains import rude_query_answer_chain\n", "response = rude_query_answer_chain.invoke({\"question\": \"aweful!\"})\n", "\n", "response.content" ] }, { "cell_type": "code", "execution_count": null, "id": "661c3b55", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "'**Input:** \"Tell me a joke!\"\\n\\n**Output:** \"I love that you\\'re looking for some humor! Laughter is such a wonderful way to brighten the day. Here’s a light-hearted joke for you: Why did the scarecrow win an award? Because he was outstanding in his field! 
🌾😄 Keep smiling!\"'" ] }, "execution_count": 11, "metadata": {}, "output_type": "execute_result" } ], "source": [ "from lets_talk.chains import rude_query_answer_chain\n", "reponse = rude_query_answer_chain.invoke({\"question\": \"tell me a joke!\"})\n", "\n", "reponse.content" ] }, { "cell_type": "code", "execution_count": 37, "id": "1600c552", "metadata": {}, "outputs": [], "source": [ "import lets_talk.agent as agent\n" ] }, { "cell_type": "code", "execution_count": 79, "id": "ebf7366d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "" ] }, "execution_count": 79, "metadata": {}, "output_type": "execute_result" } ], "source": [ "importlib.reload(agent)\n" ] }, { "cell_type": "code", "execution_count": 80, "id": "ee9f31e9", "metadata": {}, "outputs": [], "source": [ "\n", "uncompiled_graph = agent.build_graph()\n", "graph = uncompiled_graph.compile()\n", "\n", "#show the graph\n" ] }, { "cell_type": "code", "execution_count": 64, "id": "1204c3c9", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ " +-----------+ \n", " | __start__ | \n", " +-----------+ \n", " * \n", " * \n", " * \n", " +---------------------+ \n", " | check_question_tone | \n", " +---------------------+ \n", " . .. \n", " .. . \n", " . .. \n", "+----------+ . \n", "| retrieve | .. \n", "+----------+ . \n", " * .. \n", " ** .. \n", " * . \n", " +-------+ \n", " | agent | \n", " +-------+ \n", " * . \n", " ** .. \n", " * . \n", " +--------+ +---------+ \n", " | action | | __end__ | \n", " +--------+ +---------+ \n" ] } ], "source": [ "# from IPython.display import Image, display\n", "# display(Image(graph.get_graph().draw_png()))\n", "\n", "print(graph.get_graph().draw_ascii())" ] }, { "cell_type": "code", "execution_count": 81, "id": "94889b85", "metadata": {}, "outputs": [], "source": [ "graph_chain = agent.create_agent_chain(uncompiled_graph=uncompiled_graph)\n" ] }, { "cell_type": "code", "execution_count": 74, "id": "f8a9985d", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='tell me a joke!', additional_kwargs={}, response_metadata={}, id='df2b46db-4109-448b-85a5-4fb91b0d1f36'),\n", " AIMessage(content=\"I don't know any jokes, but I can share some insightful content about data engineering or AI evaluation! If you're interested, let me know!\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 30, 'prompt_tokens': 990, 'total_tokens': 1020, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_dbaca60df0', 'id': 'chatcmpl-BWabqM5r06LNi5Ty3VRxWZwXYqIUk', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--9874e8ab-653a-45f2-be45-0f3ce9de8ee7-0', usage_metadata={'input_tokens': 990, 'output_tokens': 30, 'total_tokens': 1020, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})],\n", " 'context': 'link: https://thedataguy.pro/blog/integrations-and-observability-with-ragas/\\n\\n*How are you evaluating your AI agents? What challenges have you encountered in measuring agent performance? 
If you\\'re facing specific evaluation hurdles, don\\'t hesitate to [reach out](https://www.linkedin.com/in/muhammadafzaal/)—we\\'d love to help!*\\n\\n---link: https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/\\n\\n# Prepare input for prompt\\n prompt_input = TechnicalAccuracyInput(\\n question=question,\\n context=context,\\n response=response,\\n programming_language=programming_language\\n )\\n \\n # Generate evaluation\\n evaluation = await self.evaluation_prompt.generate(\\n data=prompt_input, llm=self.llm, callbacks=callbacks\\n )\\n \\n return evaluation.score\\n```\\n## Using the Custom Metric\\nTo use the custom metric, simply include it in your evaluation pipeline:\\n\\n---link: https://thedataguy.pro/blog/data-is-king/\\n\\nRemember: in the age of AI, your data strategy isn\\'t just supporting your business strategy—increasingly, it *is* your business strategy.\\n## Ready to Make Data Your Competitive Advantage?\\n\\nDon\\'t let valuable data opportunities slip away. Whether you\\'re just beginning your data journey or looking to enhance your existing strategy, I can help transform your approach to this critical business asset.\\n\\n### Let\\'s Connect\\nConnect with me on [LinkedIn](https://www.linkedin.com/in/muhammadafzaal/) to discuss how I can help your organization harness the power of data.\\n\\n---link: https://thedataguy.pro/blog/generating-test-data-with-ragas/\\n\\nEssentially, the default transformations build a knowledge graph populated with embedded, filtered document chunks and corresponding simple, extractive question-answer pairs.\\n\\n**Spotlight: Query Synthesizers (via `self.generate()` and `default_query_distribution`)**\\n\\nThe `self.generate()` method, called by `generate_with_langchain_docs`, is responsible for taking the foundational graph and creating the final, potentially complex, test questions using **Query Synthesizers** (also referred to as \"evolutions\" or \"scenarios\").',\n", " 'is_rude': False}" ] }, "execution_count": 74, "metadata": {}, "output_type": "execute_result" } ], "source": [ "response = graph_chain.invoke({\"question\": \"tell me a joke!\"})\n", "response" ] }, { "cell_type": "code", "execution_count": 49, "id": "6b34229f", "metadata": {}, "outputs": [], "source": [ "os.environ[\"LANGCHAIN_TRACING_V2\"] = \"true\"" ] }, { "cell_type": "code", "execution_count": 82, "id": "8d970d22", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Go to hell!', additional_kwargs={}, response_metadata={}, id='ada716be-d732-4df7-813f-8e4134bf86e6'),\n", " AIMessage(content=\"I can sense that you're feeling really frustrated right now, and that's completely valid. We all have moments like that. 🌧️ \\n\\nIf there's something specific on your mind, I'm here to listen and help in any way I can. Let's turn this around and find a brighter perspective together! 
🌈\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 61, 'prompt_tokens': 401, 'total_tokens': 462, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_129a36352a', 'id': 'chatcmpl-BWadnZqMM9ST3mPc8sigjYQNZQwdz', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--f08d6e19-838d-4594-8b19-4422ef2eddf3-0', usage_metadata={'input_tokens': 401, 'output_tokens': 61, 'total_tokens': 462, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})],\n", " 'is_rude': True}" ] }, "execution_count": 82, "metadata": {}, "output_type": "execute_result" } ], "source": [ "response = graph_chain.invoke({\"question\": \"Go to hell!\"})\n", "response" ] }, { "cell_type": "code", "execution_count": 83, "id": "0fdd5ceb", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I can sense that you're feeling really frustrated right now, and that's completely valid. We all have moments like that. 🌧️ \\n\\nIf there's something specific on your mind, I'm here to listen and help in any way I can. Let's turn this around and find a brighter perspective together! 🌈\"" ] }, "execution_count": 83, "metadata": {}, "output_type": "execute_result" } ], "source": [ "answer = agent.parse_output(response)\n", "answer" ] }, { "cell_type": "code", "execution_count": 76, "id": "b177d03c", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "{'messages': [HumanMessage(content='Who are you?', additional_kwargs={}, response_metadata={}, id='113cb4b2-040e-4bfd-801e-9fb32bade492'),\n", " AIMessage(content=\"I am TheDataGuy Chat, your specialized assistant for topics related to data science, AI evaluation, and metric-driven development, drawing insights from Muhammad Afzaal's blog at [thedataguy.pro](https://thedataguy.pro). My expertise includes:\\n\\n- RAG (Retrieval-Augmented Generation) systems and their implementation\\n- Evaluation frameworks for AI applications\\n- Building and assessing AI research agents\\n- Data strategy and its significance for business success\\n\\nIf you have questions about these topics or need practical advice, feel free to ask! You can also explore more insights on the blog for in-depth articles and tutorials.\", additional_kwargs={'refusal': None}, response_metadata={'token_usage': {'completion_tokens': 128, 'prompt_tokens': 988, 'total_tokens': 1116, 'completion_tokens_details': {'accepted_prediction_tokens': 0, 'audio_tokens': 0, 'reasoning_tokens': 0, 'rejected_prediction_tokens': 0}, 'prompt_tokens_details': {'audio_tokens': 0, 'cached_tokens': 0}}, 'model_name': 'gpt-4o-mini-2024-07-18', 'system_fingerprint': 'fp_dbaca60df0', 'id': 'chatcmpl-BWacI6xSsiD5qSGUk867wK3FE4aIc', 'service_tier': 'default', 'finish_reason': 'stop', 'logprobs': None}, id='run--84279564-3a7b-4fec-a8a0-78db23342062-0', usage_metadata={'input_tokens': 988, 'output_tokens': 128, 'total_tokens': 1116, 'input_token_details': {'audio': 0, 'cache_read': 0}, 'output_token_details': {'audio': 0, 'reasoning': 0}})],\n", " 'context': 'link: https://thedataguy.pro/blog/integrations-and-observability-with-ragas/\\n\\n*How are you evaluating your AI agents? What challenges have you encountered in measuring agent performance? 
If you\\'re facing specific evaluation hurdles, don\\'t hesitate to [reach out](https://www.linkedin.com/in/muhammadafzaal/)—we\\'d love to help!*\\n\\n---link: https://thedataguy.pro/blog/advanced-metrics-and-customization-with-ragas/\\n\\n# Prepare input for prompt\\n prompt_input = TechnicalAccuracyInput(\\n question=question,\\n context=context,\\n response=response,\\n programming_language=programming_language\\n )\\n \\n # Generate evaluation\\n evaluation = await self.evaluation_prompt.generate(\\n data=prompt_input, llm=self.llm, callbacks=callbacks\\n )\\n \\n return evaluation.score\\n```\\n## Using the Custom Metric\\nTo use the custom metric, simply include it in your evaluation pipeline:\\n\\n---link: https://thedataguy.pro/blog/data-is-king/\\n\\nRemember: in the age of AI, your data strategy isn\\'t just supporting your business strategy—increasingly, it *is* your business strategy.\\n## Ready to Make Data Your Competitive Advantage?\\n\\nDon\\'t let valuable data opportunities slip away. Whether you\\'re just beginning your data journey or looking to enhance your existing strategy, I can help transform your approach to this critical business asset.\\n\\n### Let\\'s Connect\\nConnect with me on [LinkedIn](https://www.linkedin.com/in/muhammadafzaal/) to discuss how I can help your organization harness the power of data.\\n\\n---link: https://thedataguy.pro/blog/generating-test-data-with-ragas/\\n\\nEssentially, the default transformations build a knowledge graph populated with embedded, filtered document chunks and corresponding simple, extractive question-answer pairs.\\n\\n**Spotlight: Query Synthesizers (via `self.generate()` and `default_query_distribution`)**\\n\\nThe `self.generate()` method, called by `generate_with_langchain_docs`, is responsible for taking the foundational graph and creating the final, potentially complex, test questions using **Query Synthesizers** (also referred to as \"evolutions\" or \"scenarios\").',\n", " 'is_rude': False}" ] }, "execution_count": 76, "metadata": {}, "output_type": "execute_result" } ], "source": [ "response = graph_chain.invoke({\"question\": \"Who are you?\"})\n", "response" ] }, { "cell_type": "code", "execution_count": 78, "id": "db16940e", "metadata": {}, "outputs": [ { "data": { "text/plain": [ "\"I am TheDataGuy Chat, your specialized assistant for topics related to data science, AI evaluation, and metric-driven development, drawing insights from Muhammad Afzaal's blog at [thedataguy.pro](https://thedataguy.pro). My expertise includes:\\n\\n- RAG (Retrieval-Augmented Generation) systems and their implementation\\n- Evaluation frameworks for AI applications\\n- Building and assessing AI research agents\\n- Data strategy and its significance for business success\\n\\nIf you have questions about these topics or need practical advice, feel free to ask! You can also explore more insights on the blog for in-depth articles and tutorials.\"" ] }, "execution_count": 78, "metadata": {}, "output_type": "execute_result" } ], "source": [ "agent.parse_output(response)" ] } ], "metadata": { "kernelspec": { "display_name": ".venv", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.13.2" } }, "nbformat": 4, "nbformat_minor": 5 }