gperdrizet committed · Commit d366b45 · verified · 1 Parent(s): e3f6e50

Added fence to check for empty output.
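The "fence" here is the `if response is not None:` guard added around the character-streaming loop in `send_message` (see the rss_client.py diff below). A minimal standalone sketch of the same pattern, using a hypothetical `stream_reply` helper:

```python
import time
from typing import Iterator, Optional


def stream_reply(response: Optional[str], chat_history: list) -> Iterator[list]:
    '''Append an empty assistant message, then stream characters into it.

    Hypothetical helper mirroring send_message: the None check is the
    fence -- with empty output, the loop is skipped and nothing is yielded.
    '''
    chat_history.append({'role': 'assistant', 'content': ''})

    if response is not None:
        for character in response:
            chat_history[-1]['content'] += character
            time.sleep(0.005)
            yield chat_history
```

Without the fence, `for character in response` raises `TypeError` when the LLM bridge returns `None`; with it, the UI simply shows an empty assistant turn.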

client/experimental_prompts.py ADDED
@@ -0,0 +1,145 @@
+'''Collection of prompts for Claude API calls in different
+conversational contexts.'''
+
+from string import Template
+
+DEFAULT_SYSTEM_PROMPT = 'You are a helpful tool-using assistant.'
+
+REPROMPTING_SYSTEM_PROMPT = '''
+You are a helpful AI assistant designed to facilitate interactions between a human user and an LLM agent. Your primary task is to interpret the user's request, use the available functions to gather necessary information, and provide a comprehensive answer.
+
+Here are the functions available to you:
+
+<available_functions>
+{{AVAILABLE_FUNCTIONS}}
+</available_functions>
+
+To use a function, format your call like this:
+<function_call>function_name(parameter1="value1", parameter2="value2")</function_call>
+
+The result of the function call will be provided to you in a <function_return> tag. Use this information to formulate your response or to determine if additional function calls are necessary.
+
+If multiple function calls are required to fulfill the user's request, make them sequentially. Use the output of each function to inform your next steps. Continue this process until you have gathered all the information needed to provide a complete answer.
+
+Always aim to satisfy the user's request with the minimum number of function calls necessary. If a single function call is sufficient, use only that.
+
+Format your final answer to the user within <answer> tags. Ensure your response is clear, concise, and directly addresses the user's request.
+
+Here is the user's request:
+<user_request>
+{{USER_REQUEST}}
+</user_request>
+
+Begin by analyzing the request and determining which function(s) you need to call. Then proceed with your function calls and formulate your response.
+'''
+
+GET_FEED_PROMPT = Template(
+'''You are an AI assistant tasked with providing a human-readable summary of RSS feed content from a specific website. The user has requested new content, and you have access to the RSS feed data in JSON format. Your goal is to create a concise and informative reply based on this data.
+
+Here's the initial exchange:
+
+User query: <user_query>$user_query</user_query>
+
+Intermediate reply: <intermediate_reply>$intermediate_reply</intermediate_reply>
+
+The RSS feed content from $website has been retrieved and is provided in JSON format:
+
+<articles>
+$articles
+</articles>
+
+To complete this task, follow these steps:
+
+1. Analyze the JSON data to extract relevant information such as article titles, publication dates, and brief descriptions or excerpts.
+
+2. Identify the most recent and relevant articles from the feed, focusing on the top 3-5 items.
+
+3. For each selected article, prepare a short summary including:
+   - Title
+   - Publication date (in a reader-friendly format)
+   - A brief description or the first few sentences of the article
+
+4. Craft a human-readable reply that:
+   - Acknowledges the user's request
+   - Mentions the source website
+   - Introduces the summarized content
+   - Presents the article summaries in a clear, easy-to-read format
+
+5. If there are no new articles or if the feed is empty, create an appropriate response informing the user of this situation.
+
+6. Ensure your reply is conversational and engaging, maintaining a helpful and informative tone.
+
+Write your final response inside <reply> tags. The reply should be written as if you are directly addressing the user, without mentioning these instructions or the process of analyzing the JSON data.'''
+)
+
+OTHER_TOOL_PROMPT = Template(
+'''You are an AI assistant tasked with completing a user's request by either calling functions to gather more information or generating a final answer. You will be given an ongoing exchange between a user and an agent, along with the most recent function call and its result. Your job is to determine the next step in the process.
+
+Here's the context of the exchange so far:
+
+Agent's prior reply:
+<prior_reply>
+$prior_reply
+</prior_reply>
+
+User's query:
+<user_query>
+$user_query
+</user_query>
+
+Agent's intermediate reply:
+<intermediate_reply>
+$intermediate_reply
+</intermediate_reply>
+
+Most recent function call:
+<function_call>
+$tool_name($tool_parameters)
+</function_call>
+
+Function return:
+<function_return>
+$tool_result
+</function_return>
+
+Analyze the current state of the exchange and the information available. Consider whether you have enough information to generate a final answer for the user or if you need to call another function to gather more data.
+
+If you determine that more information is needed:
+1. Identify the most appropriate function to call next.
+2. Provide the function call in the following format:
+<next_function_call>
+function_name(parameter1=value1, parameter2=value2, ...)
+</next_function_call>
+
+If you have enough information to generate a final answer:
+1. Synthesize the available information to create a comprehensive and accurate response to the user's query.
+2. Provide the final answer in the following format:
+<final_answer>
+Your detailed response to the user's query, incorporating all relevant information gathered throughout the exchange.
+</final_answer>
+
+Remember to base your decision and response solely on the information provided in the exchange and function calls. Do not introduce external information or assumptions beyond what has been explicitly given.
+
+Provide your response (either a next function call or a final answer) immediately without any additional explanation or commentary.
+'''
+)
+
+
+'''Here is an example exchange between the user and agent using a single function call:
+
+user: Give me a summary of the article "Apple announces Foundation Models and
+Containerization frameworks"?
+
+agent: OK, I will summarize the article.
+
+function call: get_summary("Apple announces Foundation Models and Containerization frameworks")
+
+function return: {"summary": "Apple announced new technologies and enhancements to its
+developer tools to help create more beautiful, intelligent, and engaging app experiences
+across Apple platforms, including a new software design and access to on-device Apple
+Intelligence and large language models."}
+
+assistant: Apple announced new technologies and enhancements to its developer tools to
+help create more beautiful, intelligent, and engaging app experiences across Apple
+platforms, including a new software design and access to on-device Apple Intelligence
+and large language models.'''
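The prompts above rely on `string.Template`'s `$placeholder` substitution (`GET_FEED_PROMPT` and `OTHER_TOOL_PROMPT` use `$user_query`, `$articles`, etc., while `REPROMPTING_SYSTEM_PROMPT` uses `{{...}}` slots filled elsewhere). A toy sketch of how the Template-based prompts resolve; the template text here is illustrative, not the real prompt:

```python
from string import Template

# Toy stand-in for GET_FEED_PROMPT: same $-placeholder mechanism, shorter body.
FEED_PROMPT = Template(
    'user: $user_query\n'
    'agent: $intermediate_reply\n'
    'function return: $articles'
)

# substitute() raises KeyError if any placeholder is left unfilled,
# which catches prompt-assembly bugs early.
prompt = FEED_PROMPT.substitute(
    user_query='Are there any new posts on Slashdot?',
    intermediate_reply='I will check the Slashdot RSS feed for you.',
    articles='[{"title": "Example article"}]',
)
```

`safe_substitute` would instead leave unknown `$names` in place, which is useful when a prompt is filled in stages.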
client/interface.py CHANGED
@@ -1,14 +1,13 @@
 '''Functions for controlling chat flow between Gradio and Anthropic/MCP'''

-import json
 import logging
 import queue
-from anthropic.types import text_block
 from gradio.components.chatbot import ChatMessage

 from client import prompts
 from client.anthropic_bridge import AnthropicBridge
 import client.gradio_functions as gradio_funcs
+import client.tool_workflows as tool_funcs

 # Create dialog logger
 dialog = gradio_funcs.get_dialog_logger(clear = True)
@@ -26,6 +25,13 @@ async def agent_input(
     reply = 'No reply from LLM'

     user_query = chat_history[-1]['content']
+
+    if len(chat_history) > 1:
+        prior_reply = chat_history[-2]['content']
+
+    else:
+        prior_reply = ''
+
     dialog.info('User: %s', user_query)

     input_messages = format_chat_history(chat_history)
@@ -34,70 +40,29 @@
         input_messages
     )

-    logger.debug(result)
-
     if result['tool_result']:
-        tool_call = result['tool_call']
-        tool_name = tool_call['name']
-
-        if tool_name == 'rss_mcp_server_get_feed':
-
-            tool_parameters = tool_call['parameters']
-            website = tool_parameters['website']
-            response_content = result['llm_response'].content[0]
-
-            if isinstance(response_content, text_block.TextBlock):
-                intermediate_reply = response_content.text
-            else:
-                intermediate_reply = f'I Will check the {website} RSS feed for you'
-
-            output_queue.put(intermediate_reply)
-            dialog.info('LLM: %s', intermediate_reply)
-            dialog.info('LLM: called %s on %s', tool_name, website)
-
-            articles = json.loads(result['tool_result'].content)['text']
-
-            prompt = prompts.GET_FEED_PROMPT.substitute(
-                website=website,
-                user_query=user_query,
-                intermediate_reply=intermediate_reply,
-                articles=articles
-            )
-
-            input_message = [{
-                'role': 'user',
-                'content': prompt
-            }]
-
-            dialog.info('System: re-prompting LLM with return from %s call', tool_name)
-            dialog.info('New prompt: %s ...', prompt[:75])
-
-            logger.info('Re-prompting input %s', input_message)
-            result = await bridge.process_query(
-                prompts.GET_FEED_SYSTEM_PROMPT,
-                input_message
-            )
-
-            try:
-                reply = result['llm_response'].content[0].text
-
-            except (IndexError, AttributeError):
-                reply = 'No final reply from model'
-
-            logger.info('LLM final reply: %s', reply)
+        logger.info('LLM called tool, entering tool loop.')
+        await tool_funcs.tool_loop(
+            user_query,
+            prior_reply,
+            result,
+            bridge,
+            output_queue,
+            dialog
+        )

     else:
+        logger.info('LLM replied directly.')
+
         try:
             reply = result['llm_response'].content[0].text

         except AttributeError:
             reply = 'Bad reply - could not parse'

-        logger.info('Direct, no-tool reply: %s', reply)
+        logger.info('Reply: %s', reply)
+        output_queue.put(reply)

-    dialog.info('LLM: %s ...', reply[:75])
-    output_queue.put(reply)
     output_queue.put('bot-finished')
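The `prior_reply` logic added to `agent_input` above reads the second-to-last chat message only when the history is long enough. In isolation, the guard looks like this (hypothetical helper name):

```python
def get_prior_reply(chat_history: list) -> str:
    '''Return the assistant's previous message content, or '' on a fresh chat.

    Mirrors the guard in agent_input: chat_history[-2] only exists
    when there is more than one message in the history.
    '''
    if len(chat_history) > 1:
        return chat_history[-2]['content']

    return ''
```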
client/prompts.py CHANGED
@@ -5,18 +5,88 @@ from string import Template

 DEFAULT_SYSTEM_PROMPT = 'You are a helpful tool-using assistant.'

-GET_FEED_SYSTEM_PROMPT = '''You are a helpful assistant. Your job is to facilitate interactions between Human users and LLM agents.'''
+REPROMPTING_SYSTEM_PROMPT = '''
+You are a helpful assistant. Your job is to facilitate interactions between a Human
+user and an LLM agent. To complete the user's request or answer their question, you may
+need to call multiple functions sequentially and use each output to formulate the next
+function call until you arrive at the final answer. But if you can satisfy the request
+with a single function call, you should do so.
+
+Here is an example exchange between the user and agent using multiple function calls:
+
+user: Can you give me a link to the article about the FAA modernizing air traffic control technology?
+
+agent: OK, let me find the article you are referring to.
+
+function call: find_article("FAA modernizing air traffic control technology")
+
+function return: {"title": "FAA To Eliminate Floppy Disks Used In Air Traffic Control Systems"}
+
+function call: get_link("FAA To Eliminate Floppy Disks Used In Air Traffic Control Systems")
+
+function return: {"link": "https://www.tomshardware.com/the-faa-seeks-to-eliminate-floppy-disk-usage-in-air-traffic-control-systems"}
+
+assistant: Here is the link to the article: [FAA To Eliminate Floppy Disks Used In Air Traffic Control Systems](https://www.tomshardware.com/the-faa-seeks-to-eliminate-floppy-disk-usage-in-air-traffic-control-systems)
+'''

 GET_FEED_PROMPT = Template(
-'''Below is an exchange between a user and an agent. The user has asked the agent to get new content from the $website RSS feed. In order to complete the request, the agent has called a function which returned the RSS feed content from $website in JSON format. Your job is to complete the exchange by using the returned JSON RSS feed data to write a human readable reply to the user.
+'''Below is an exchange between a user and an agent. The user has asked the agent
+to get new content from the $website RSS feed. In order to complete the request,
+the agent has called a function which returned the RSS feed content from $website
+in JSON format. Your job is to complete the exchange by using the returned JSON
+RSS feed data to write a human readable reply to the user.

 user: $user_query

 agent: $intermediate_reply

-function call: get_feed_content($website)
+function call: get_feed_content("$website")

 function return: $articles

 assistant:'''
 )
+
+OTHER_TOOL_PROMPT = Template(
+'''Below is an exchange between a user and an agent. The user has asked the agent
+"$user_query". The agent is completing the user's request by calling a function or
+functions. Complete the exchange by either:
+
+1. Calling the next function needed to get the information necessary to generate a
+final answer for the user.
+2. Generating the final answer if you have enough information to do so already.
+
+If no more information is needed to generate the final answer, do so without calling
+additional tools.
+
+agent: $prior_reply
+
+user: $user_query
+
+agent: $intermediate_reply
+
+function call: $tool_name($tool_parameters)
+
+function return: $tool_result
+'''
+)
+
+
+'''Here is an example exchange between the user and agent using a single function call:
+
+user: Give me a summary of the article "Apple announces Foundation Models and
+Containerization frameworks"?
+
+agent: OK, I will summarize the article.
+
+function call: get_summary("Apple announces Foundation Models and Containerization frameworks")
+
+function return: {"summary": "Apple announced new technologies and enhancements to its
+developer tools to help create more beautiful, intelligent, and engaging app experiences
+across Apple platforms, including a new software design and access to on-device Apple
+Intelligence and large language models."}
+
+assistant: Apple announced new technologies and enhancements to its developer tools to
+help create more beautiful, intelligent, and engaging app experiences across Apple
+platforms, including a new software design and access to on-device Apple Intelligence
+and large language models.'''
rss_client.py CHANGED
@@ -23,7 +23,7 @@ Path('logs').mkdir(parents=True, exist_ok=True)
 # Clear old logs if present
 gradio_funcs.delete_old_logs('logs', 'rss_client')

-# Configure
+# Configure the root logger
 logging.basicConfig(
     handlers=[RotatingFileHandler(
         'logs/rss_client.log',
@@ -31,7 +31,7 @@ logging.basicConfig(
         backupCount=10,
         mode='w'
     )],
-    level=logging.DEBUG,
+    level=logging.INFO,
     format='%(levelname)s - %(name)s - %(message)s'
 )
@@ -40,8 +40,8 @@ logger = logging.getLogger(__name__)

 # Handle MCP server connection and interactions
 RSS_CLIENT = MCPClientWrapper(
-    'https://agents-mcp-hackathon-rss-mcp-server.hf.space/gradio_api/mcp/sse',
-    #'http://127.0.0.1:7861/gradio_api/mcp/sse'
+    #'https://agents-mcp-hackathon-rss-mcp-server.hf.space/gradio_api/mcp/sse',
+    'http://127.0.0.1:7860/gradio_api/mcp/sse'
 )
 logger.info('Started MCP client')
@@ -93,11 +93,13 @@ def send_message(chat_history: list):

     chat_history.append({'role': 'assistant', 'content': ''})

-    for character in response:
-        chat_history[-1]['content'] += character
-        time.sleep(0.005)
+    if response is not None:

-        yield chat_history
+        for character in response:
+            chat_history[-1]['content'] += character
+            time.sleep(0.005)
+
+            yield chat_history


 with gr.Blocks(title='MCP RSS client') as demo:
@@ -108,14 +110,14 @@ with gr.Blocks(title='MCP RSS client') as demo:

     # MCP connection/tool dump
     connect_btn = gr.Button('Connect to MCP server')
-    status = gr.Textbox(label='MCP server tool dump', interactive=False, lines=4)
+    status = gr.Textbox(label='MCP server tool dump', interactive=False, lines=5, max_lines=5)
     connect_btn.click(# pylint: disable=no-member
         RSS_CLIENT.list_tools,
         outputs=status
     )

     # Dialog log output
-    dialog_output = gr.Textbox(label='Internal dialog', lines=10, max_lines=100)
+    dialog_output = gr.Textbox(label='Internal dialog', lines=5, max_lines=5)
     timer = gr.Timer(0.5, active=True)

     timer.tick( # pylint: disable=no-member
@@ -126,15 +128,15 @@ with gr.Blocks(title='MCP RSS client') as demo:
     # Chat interface
     chatbot = gr.Chatbot(
         value=[],
-        height=500,
+        height=250,
         type='messages',
         show_copy_button=True
     )

     msg = gr.Textbox(
-        'Are there any new posts on Hacker News?',
+        'Are there any new posts on Slashdot?',
         label='Ask about content or articles on a site or platform',
-        placeholder='Is there anything new on Hacker News?',
+        placeholder='Is there anything new on Slashdot?',
         scale=4
     )
@@ -148,6 +150,7 @@ with gr.Blocks(title='MCP RSS client') as demo:
 if __name__ == '__main__':

     current_directory = os.getcwd()
+    logger.info(current_directory)

     if 'pyrite' in current_directory:
         logger.info('Starting RASS on LAN')