Thanush committed · Commit 01a984c · 1 Parent(s): 43e5827

Refactor SYSTEM_PROMPT in app.py to enhance the medical consultation process with a structured approach for gathering patient information. Update follow-up question guidelines and improve response generation logic for more intelligent interactions.

Files changed (1)
  1. app.py +21 -32
app.py CHANGED
@@ -9,24 +9,27 @@ import re
  LLAMA_MODEL = "meta-llama/Llama-2-7b-chat-hf"
  MEDITRON_MODEL = "epfl-llm/meditron-7b"

- SYSTEM_PROMPT = """You are a professional virtual doctor. Your goal is to collect detailed information about the user's health condition, symptoms, medical history, medications, lifestyle, and other relevant data.
-
- **IMPORTANT** Ask 1-2 follow-up questions at a time to gather more details about:
- - Detailed description of symptoms
- - Duration (when did it start?)
- - Severity (scale of 1-10)
- - Aggravating or alleviating factors
- - Related symptoms
- - Medical history
- - Current medications and allergies
-
- After collecting sufficient information, summarize findings, provide a likely diagnosis (if possible), and suggest when they should seek professional care.
-
- If enough information is collected, provide a concise, general diagnosis and a practical over-the-counter medicine and home remedy suggestion.
-
  Do NOT make specific prescriptions for prescription-only drugs.

- Respond empathetically and clearly. Always be professional and thorough."""

  MEDITRON_PROMPT = """<|im_start|>system
  You are a board-certified physician with extensive clinical experience. Your role is to provide evidence-based medical assessment and recommendations following standard medical practice.
@@ -144,29 +147,20 @@ def generate_response(message, history):
     if len(history) >= 3:  # Skip name/age exchanges
         medical_history = history[3:]

-    # Define follow-up questions based on turn number
-    followup_questions = [
-        "Can you describe your symptoms in more detail? What exactly are you experiencing?",
-        "How long have you been experiencing these symptoms? When did they first start?",
-        "On a scale of 1-10, how would you rate the severity of your symptoms?",
-        "Have you noticed anything that makes your symptoms better or worse?",
-        "Do you have any other symptoms, medical history, or are you taking any medications?"
-    ]
-
     # Build the prompt for medical consultation
     if conversation_state['medical_turns'] <= 5:
-        # Still gathering information
         prompt = build_simple_prompt(SYSTEM_PROMPT, medical_history, message)

-        # Generate response
         inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
         with torch.no_grad():
             outputs = model.generate(
                 inputs.input_ids,
                 attention_mask=inputs.attention_mask,
-                max_new_tokens=256,
-                temperature=0.7,
-                top_p=0.9,
                 do_sample=True,
                 pad_token_id=tokenizer.eos_token_id
             )
@@ -174,12 +168,7 @@ def generate_response(message, history):
         full_response = tokenizer.decode(outputs[0], skip_special_tokens=True)
         llama_response = full_response.split('[/INST]')[-1].strip()

-        # Add a specific follow-up question
-        if conversation_state['medical_turns'] < len(followup_questions):
-            next_question = followup_questions[conversation_state['medical_turns']]
-            return f"{llama_response}\n\n{next_question}"
-        else:
-            return llama_response

     else:
         # Time for diagnosis and treatment (after 5+ turns)
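The gathering branch above assembles its prompt with `build_simple_prompt` and later isolates the model's reply by splitting the decoded text on `'[/INST]'`, which implies the Llama-2 chat template. The helper itself is not shown in this diff; the following is a hypothetical reconstruction, assuming `history` is a list of `(user, assistant)` pairs:

```python
def build_simple_prompt(system_prompt, history, message):
    """Assemble a Llama-2-chat style prompt: system block, prior turns,
    then the new user message. Hypothetical sketch -- the real helper
    is defined elsewhere in app.py and may differ."""
    prompt = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for user_msg, assistant_msg in history:
        # Each completed turn closes with [/INST], the reply, and a new [INST]
        prompt += f"{user_msg} [/INST] {assistant_msg} </s><s>[INST] "
    prompt += f"{message} [/INST]"
    return prompt
```

Because every user turn in this format ends with `[/INST]`, taking `split('[/INST]')[-1]` on the decoded output keeps only the text generated after the final user message, which is why that parsing step works in `generate_response`.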
 
  LLAMA_MODEL = "meta-llama/Llama-2-7b-chat-hf"
  MEDITRON_MODEL = "epfl-llm/meditron-7b"

+ SYSTEM_PROMPT = """You are a professional virtual doctor conducting a medical consultation. Your role is to gather comprehensive information about the patient's condition through intelligent questioning.
+
+ **CONSULTATION APPROACH:**
+ - Ask thoughtful, relevant follow-up questions based on the patient's responses
+ - Prioritize gathering information about: symptom details, duration, severity, triggers, related symptoms, medical history, medications, and lifestyle factors
+ - Ask 1-2 specific questions at a time that build naturally on their previous answers
+ - Be empathetic, professional, and thorough in your questioning
+ - Adapt your questions based on the symptoms they describe
+
+ **IMPORTANT GUIDELINES:**
+ - Generate intelligent follow-up questions that are contextually relevant to their specific symptoms
+ - Don't ask generic questions - tailor each question to their particular situation
+ - If they mention pain, ask about location, type, and triggers
+ - If they mention duration, ask about progression or changes
+ - Build each question logically from their previous responses
+
+ After 4-5 meaningful exchanges, provide assessment and recommendations.
+
  Do NOT make specific prescriptions for prescription-only drugs.

+ Always maintain a professional, caring tone throughout the consultation."""

  MEDITRON_PROMPT = """<|im_start|>system
  You are a board-certified physician with extensive clinical experience. Your role is to provide evidence-based medical assessment and recommendations following standard medical practice.
 
     if len(history) >= 3:  # Skip name/age exchanges
         medical_history = history[3:]

     # Build the prompt for medical consultation
     if conversation_state['medical_turns'] <= 5:
+        # Still gathering information - let LLM ask intelligent follow-up questions
         prompt = build_simple_prompt(SYSTEM_PROMPT, medical_history, message)

+        # Generate response with intelligent follow-up questions
         inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
         with torch.no_grad():
             outputs = model.generate(
                 inputs.input_ids,
                 attention_mask=inputs.attention_mask,
+                max_new_tokens=384,
+                temperature=0.8,
+                top_p=0.95,
                 do_sample=True,
                 pad_token_id=tokenizer.eos_token_id
             )

         full_response = tokenizer.decode(outputs[0], skip_special_tokens=True)
         llama_response = full_response.split('[/INST]')[-1].strip()

+        return llama_response

     else:
         # Time for diagnosis and treatment (after 5+ turns)
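The refactor removes the hard-coded question list but keeps the turn-based routing: up to five information-gathering turns driven by SYSTEM_PROMPT, then a switch to the diagnosis branch. A minimal sketch of that control flow, with the model calls injected as callables so the routing logic can be shown (and tested) in isolation; the function name and signature are illustrative, not the app's actual code:

```python
def route_turn(conversation_state, generate_gathering, generate_diagnosis):
    """Route one consultation turn: gather information for the first five
    medical turns, then switch to diagnosis/treatment. A sketch of the
    branching in generate_response, assuming conversation_state tracks
    'medical_turns' as in the diff above."""
    if conversation_state['medical_turns'] <= 5:
        # Still gathering: the system prompt drives the follow-up questions
        reply = generate_gathering()
    else:
        # Enough exchanges: hand off to the diagnosis/treatment path
        reply = generate_diagnosis()
    conversation_state['medical_turns'] += 1
    return reply
```

With this shape, the old per-turn question list is unnecessary: the LLM produces contextual follow-ups inside `generate_gathering`, and the counter alone decides when the consultation moves to assessment.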