update tavily search
- README.md +76 -11
- app.py +136 -12
- requirements.txt +2 -1
README.md
CHANGED
@@ -13,19 +13,84 @@ hf_oauth: true

 Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

-#
-

 ## Features

-- **
-- **
-- **
-- **
-- **History
-- **
-

 ## Project Structure

@@ -41,7 +106,7 @@ anycoder/

 1. Set your Hugging Face API token:
 ```bash
-export HF_TOKEN="
 ```

 2. Install dependencies:

@@ -117,7 +182,7 @@ pip install -r requirements.txt

 2. Set your Hugging Face API token as an environment variable:
 ```bash
-export HF_TOKEN="
 ```

 3. Run the application:

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

+# AnyCoder - AI Code Generator

+AnyCoder is an AI-powered code generator that helps you create applications by describing them in plain English. It supports multiple AI models and can generate HTML/CSS/JavaScript code for web applications.

## Features

+- **Multi-Model Support**: Choose from various AI models including DeepSeek, ERNIE-4.5-VL, MiniMax, and Qwen
+- **Image-to-Code**: Upload UI design images and get corresponding HTML/CSS code (ERNIE-4.5-VL model)
+- **Live Preview**: See your generated code in action with the built-in sandbox
+- **Web Search Integration**: Enable real-time web search to get the latest information and best practices
+- **Chat History**: Keep track of your conversations and generated code
+- **Quick Examples**: Pre-built examples to get you started quickly

+## Installation

+1. Clone the repository:
+```bash
+git clone <repository-url>
+cd anycoder
+```

+2. Install dependencies:
+```bash
+pip install -r requirements.txt
+```

+3. Set up environment variables:
+```bash
+export HF_TOKEN="your_huggingface_token"
+export TAVILY_API_KEY="your_tavily_api_key"  # Optional, for web search feature
+```

+## Usage

+1. Run the application:
+```bash
+python app.py
+```

+2. Open your browser and navigate to the provided URL

+3. Describe your application in the text input field

+4. Optionally:
+   - Upload a UI design image (for ERNIE-4.5-VL model)
+   - Enable web search to get the latest information
+   - Choose a different AI model

+5. Click "Generate" to create your code

+6. View the generated code in the Code Editor tab or see it in action in the Live Preview tab

+## Web Search Feature

+The web search feature uses Tavily to provide real-time information when generating code. To enable this feature:

+1. Get a free Tavily API key from [Tavily Platform](https://tavily.com/)
+2. Set the `TAVILY_API_KEY` environment variable
+3. Toggle the "🔍 Enable Web Search" checkbox in the sidebar

+When enabled, the AI will search the web for the latest information, best practices, and technologies related to your request.

+## Available Models

+- **DeepSeek V3**: Advanced code generation model
+- **DeepSeek R1**: Specialized for code generation tasks
+- **ERNIE-4.5-VL**: Multimodal model with image support
+- **MiniMax M1**: General-purpose AI model
+- **Qwen3-235B-A22B**: Large language model for code generation

+## Environment Variables

+- `HF_TOKEN`: Your Hugging Face API token (required)
+- `TAVILY_API_KEY`: Your Tavily API key (optional, for web search)

+## License

+[Add your license information here]

## Project Structure

1. Set your Hugging Face API token:
```bash
+export HF_TOKEN="your_huggingface_token"
```

2. Install dependencies:

2. Set your Hugging Face API token as an environment variable:
```bash
+export HF_TOKEN="your_huggingface_token"
```

3. Run the application:
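The README's web search feature boils down to formatting Tavily results into prompt text. The formatting step can be exercised without network access; the `response` dict below is a stand-in for what `tavily_client.search(...)` returns in this app (its shape is inferred from the diff, not from Tavily's official docs):

```python
# Format a Tavily-style search response into the prompt text the app injects.
def format_search_results(response: dict) -> str:
    answer = response.get("answer")
    formatted_answer = f"**AI Answer:**\n{answer}\n\n" if answer else ""
    blocks = []
    for result in response.get("results", []):
        title = result.get("title", "No title")
        url = result.get("url", "No URL")
        content = result.get("content", "No content")
        blocks.append(f"Title: {title}\nURL: {url}\nContent: {content}\n")
    if blocks:
        return formatted_answer + "Web Search Results:\n\n" + "\n---\n".join(blocks)
    return formatted_answer + "No search results found."

# Hypothetical sample response, for illustration only.
sample = {
    "answer": "Use CSS grid for responsive layouts.",
    "results": [
        {"title": "CSS Grid Guide", "url": "https://example.com/grid",
         "content": "A complete guide to CSS grid."},
    ],
}
print(format_search_results(sample))
```

When `TAVILY_API_KEY` is unset, the app skips this path entirely and sends the raw query.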
app.py
CHANGED
@@ -6,6 +6,7 @@ import base64
 import gradio as gr
 from huggingface_hub import InferenceClient

 # Configuration
 SystemPrompt = """You are a helpful coding assistant. You help users create applications by generating code based on their requirements.

@@ -22,6 +23,22 @@ Always respond with code that can be executed or rendered directly.
 Always output only the HTML code inside a ```html ... ``` code block, and do not include any explanations or extra text."""

 # Available models
 AVAILABLE_MODELS = [
     {

@@ -102,6 +119,16 @@ client = InferenceClient(
     bill_to="huggingface"
 )

 History = List[Tuple[str, str]]
 Messages = List[Dict[str, str]]

@@ -138,6 +165,22 @@ def messages_to_history(messages: Messages) -> Tuple[str, History]:
         history.append([user_content, r['content']])
     return history

 def remove_code_block(text):
     # Try to match code blocks with language markers
     patterns = [

@@ -159,7 +202,7 @@ def history_render(history: History):
     return gr.update(visible=True), history

 def clear_history():
-    return []

 def update_image_input_visibility(model):
     """Update image input visibility based on selected model"""

@@ -206,6 +249,64 @@ def create_multimodal_message(text, image=None):
     return {"role": "user", "content": content}

 def send_to_sandbox(code):
     # Add a wrapper to inject necessary permissions and ensure full HTML
     wrapped_code = f"""

@@ -268,16 +369,23 @@ def demo_card_click(e: gr.EventData):
     # Return the first demo description as fallback
     return DEMO_LIST[0]['description']

-def generation_code(query: Optional[str], image: Optional[gr.Image], _setting: Dict[str, str], _history: Optional[History], _current_model: Dict):
     if query is None:
         query = ''
     if _history is None:
         _history = []
-
     if image is not None:
-        messages.append(create_multimodal_message(
     else:
-        messages.append({'role': 'user', 'content':
     try:
         completion = client.chat.completions.create(
             model=_current_model["id"],

@@ -290,10 +398,11 @@ def generation_code(query: Optional[str], image: Optional[gr.Image], _setting: D
             if chunk.choices[0].delta.content:
                 content += chunk.choices[0].delta.content
         clean_code = remove_code_block(content)
         yield {
             code_output: clean_code,
-            status_indicator: '<div class="status-indicator generating" id="status">Generating code...</div>',
-            history_output: _history,
         }
         _history = messages_to_history(messages + [{
             'role': 'assistant',

@@ -304,14 +413,14 @@ def generation_code(query: Optional[str], image: Optional[gr.Image], _setting: D
             history: _history,
             sandbox: send_to_sandbox(remove_code_block(content)),
             status_indicator: '<div class="status-indicator success" id="status">Code generated successfully!</div>',
-            history_output: _history,
         }
     except Exception as e:
         error_message = f"Error: {str(e)}"
         yield {
             code_output: error_message,
             status_indicator: '<div class="status-indicator error" id="status">Error generating code</div>',
-            history_output: _history,
         }

 # Main application

@@ -326,6 +435,7 @@ with gr.Blocks(theme=gr.themes.Base(), title="AnyCoder - AI Code Generator") as
     with gr.Sidebar():
         gr.Markdown("# AnyCoder\nAI-Powered Code Generator")
         gr.Markdown("""Describe your app or UI in plain English. Optionally upload a UI image (for ERNIE model). Click Generate to get code and preview.""")
         input = gr.Textbox(
             label="Describe your application",
             placeholder="e.g., Create a todo app with add, delete, and mark as complete functionality",

@@ -338,6 +448,20 @@ with gr.Blocks(theme=gr.themes.Base(), title="AnyCoder - AI Code Generator") as
         with gr.Row():
             btn = gr.Button("Generate", variant="primary", size="sm")
             clear_btn = gr.Button("Clear", variant="secondary", size="sm")
         gr.Markdown("### Quick Examples")
         for i, demo_item in enumerate(DEMO_LIST[:5]):
             demo_card = gr.Button(

@@ -390,7 +514,7 @@ with gr.Blocks(theme=gr.themes.Base(), title="AnyCoder - AI Code Generator") as
             with gr.Tab("Live Preview"):
                 sandbox = gr.HTML(label="Live Preview")
             with gr.Tab("History"):
-                history_output = gr.Chatbot(show_label=False, height=400)
             status_indicator = gr.Markdown(
                 'Ready to generate code',
             )

@@ -398,10 +522,10 @@ with gr.Blocks(theme=gr.themes.Base(), title="AnyCoder - AI Code Generator") as
     # Event handlers
     btn.click(
         generation_code,
-        inputs=[input, image_input, setting, history, current_model],
         outputs=[code_output, history, sandbox, status_indicator, history_output]
     )
-    clear_btn.click(clear_history, outputs=[history])

 if __name__ == "__main__":
     demo.queue(default_concurrency_limit=20).launch(ssr_mode=False)

import gradio as gr
from huggingface_hub import InferenceClient
+from tavily import TavilyClient

# Configuration
SystemPrompt = """You are a helpful coding assistant. You help users create applications by generating code based on their requirements.

Always output only the HTML code inside a ```html ... ``` code block, and do not include any explanations or extra text."""

+# System prompt with search capability
+SystemPromptWithSearch = """You are a helpful coding assistant with access to real-time web search. You help users create applications by generating code based on their requirements.
+When asked to create an application, you should:
+1. Understand the user's requirements
+2. Use web search when needed to find the latest information, best practices, or specific technologies
+3. Generate clean, working code
+4. Provide HTML output when appropriate for web applications
+5. Include necessary comments and documentation
+6. Ensure the code is functional and follows best practices
+
+If an image is provided, analyze it and use the visual information to better understand the user's requirements.
+
+Always respond with code that can be executed or rendered directly.
+
+Always output only the HTML code inside a ```html ... ``` code block, and do not include any explanations or extra text."""
+
# Available models
AVAILABLE_MODELS = [
    {

    bill_to="huggingface"
)

+# Tavily Search Client
+TAVILY_API_KEY = os.getenv('TAVILY_API_KEY')
+tavily_client = None
+if TAVILY_API_KEY:
+    try:
+        tavily_client = TavilyClient(api_key=TAVILY_API_KEY)
+    except Exception as e:
+        print(f"Failed to initialize Tavily client: {e}")
+        tavily_client = None
+
History = List[Tuple[str, str]]
Messages = List[Dict[str, str]]

        history.append([user_content, r['content']])
    return history

+def history_to_chatbot_messages(history: History) -> List[Dict[str, str]]:
+    """Convert history tuples to chatbot message format"""
+    messages = []
+    for user_msg, assistant_msg in history:
+        # Handle multimodal content
+        if isinstance(user_msg, list):
+            text_content = ""
+            for item in user_msg:
+                if isinstance(item, dict) and item.get("type") == "text":
+                    text_content += item.get("text", "")
+            user_msg = text_content if text_content else str(user_msg)
+
+        messages.append({"role": "user", "content": user_msg})
+        messages.append({"role": "assistant", "content": assistant_msg})
+    return messages
+
def remove_code_block(text):
    # Try to match code blocks with language markers
    patterns = [

    return gr.update(visible=True), history

def clear_history():
+    return [], []  # Empty lists for both tuple format and chatbot messages

def update_image_input_visibility(model):
    """Update image input visibility based on selected model"""

    return {"role": "user", "content": content}

+# Updated for faster Tavily search and closer prompt usage
+# Uses 'basic' search_depth and auto_parameters=True for speed and relevance
+
+def perform_web_search(query: str, max_results: int = 5, include_domains=None, exclude_domains=None) -> str:
+    """Perform web search using Tavily and return formatted results (fast, prompt-focused)"""
+    if not tavily_client:
+        return "Web search is not available. Please set the TAVILY_API_KEY environment variable."
+
+    try:
+        # Use basic search for speed, auto_parameters for prompt intent
+        search_params = {
+            "auto_parameters": True,
+            "search_depth": "basic",
+            "max_results": min(max(1, max_results), 20),
+            "include_answer": True
+        }
+        if include_domains is not None:
+            search_params["include_domains"] = include_domains
+        if exclude_domains is not None:
+            search_params["exclude_domains"] = exclude_domains
+
+        response = tavily_client.search(query, **search_params)
+
+        answer = response.get('answer')
+        formatted_answer = f"**AI Answer:**\n{answer}\n\n" if answer else ""
+
+        search_results = []
+        for result in response.get('results', []):
+            title = result.get('title', 'No title')
+            url = result.get('url', 'No URL')
+            content = result.get('content', 'No content')
+            search_results.append(f"Title: {title}\nURL: {url}\nContent: {content}\n")
+
+        if search_results:
+            return formatted_answer + "Web Search Results:\n\n" + "\n---\n".join(search_results)
+        else:
+            return formatted_answer + "No search results found."
+
+    except Exception as e:
+        return f"Search error: {str(e)}"
+
+def enhance_query_with_search(query: str, enable_search: bool) -> str:
+    """Enhance the query with web search results if search is enabled"""
+    if not enable_search or not tavily_client:
+        return query
+
+    # Perform search to get relevant information
+    search_results = perform_web_search(query)
+
+    # Combine original query with search results
+    enhanced_query = f"""Original Query: {query}
+
+{search_results}
+
+Please use the search results above to help create the requested application with the most up-to-date information and best practices."""
+
+    return enhanced_query
+
def send_to_sandbox(code):
    # Add a wrapper to inject necessary permissions and ensure full HTML
    wrapped_code = f"""

    # Return the first demo description as fallback
    return DEMO_LIST[0]['description']

+def generation_code(query: Optional[str], image: Optional[gr.Image], _setting: Dict[str, str], _history: Optional[History], _current_model: Dict, enable_search: bool = False):
    if query is None:
        query = ''
    if _history is None:
        _history = []
+
+    # Choose system prompt based on search setting
+    system_prompt = SystemPromptWithSearch if enable_search else _setting['system']
+    messages = history_to_messages(_history, system_prompt)
+
+    # Enhance query with search if enabled
+    enhanced_query = enhance_query_with_search(query, enable_search)
+
    if image is not None:
+        messages.append(create_multimodal_message(enhanced_query, image))
    else:
+        messages.append({'role': 'user', 'content': enhanced_query})
    try:
        completion = client.chat.completions.create(
            model=_current_model["id"],

            if chunk.choices[0].delta.content:
                content += chunk.choices[0].delta.content
        clean_code = remove_code_block(content)
+        search_status = " (with web search)" if enable_search and tavily_client else ""
        yield {
            code_output: clean_code,
+            status_indicator: f'<div class="status-indicator generating" id="status">Generating code{search_status}...</div>',
+            history_output: history_to_chatbot_messages(_history),
        }
        _history = messages_to_history(messages + [{
            'role': 'assistant',

            history: _history,
            sandbox: send_to_sandbox(remove_code_block(content)),
            status_indicator: '<div class="status-indicator success" id="status">Code generated successfully!</div>',
+            history_output: history_to_chatbot_messages(_history),
        }
    except Exception as e:
        error_message = f"Error: {str(e)}"
        yield {
            code_output: error_message,
            status_indicator: '<div class="status-indicator error" id="status">Error generating code</div>',
+            history_output: history_to_chatbot_messages(_history),
        }

# Main application

    with gr.Sidebar():
        gr.Markdown("# AnyCoder\nAI-Powered Code Generator")
        gr.Markdown("""Describe your app or UI in plain English. Optionally upload a UI image (for ERNIE model). Click Generate to get code and preview.""")
+        gr.Markdown("**Tip:** For best search results about people or entities, include details like profession, company, or location. Example: 'John Smith software engineer at Google.'")
        input = gr.Textbox(
            label="Describe your application",
            placeholder="e.g., Create a todo app with add, delete, and mark as complete functionality",

        with gr.Row():
            btn = gr.Button("Generate", variant="primary", size="sm")
            clear_btn = gr.Button("Clear", variant="secondary", size="sm")
+
+        # Search toggle
+        search_toggle = gr.Checkbox(
+            label="🔍 Enable Web Search",
+            value=False,
+            info="Enable real-time web search to get the latest information and best practices"
+        )
+
+        # Search status indicator
+        if not tavily_client:
+            gr.Markdown("⚠️ **Web Search Unavailable**: Set `TAVILY_API_KEY` environment variable to enable search")
+        else:
+            gr.Markdown("✅ **Web Search Available**: Toggle above to enable real-time search")
+
        gr.Markdown("### Quick Examples")
        for i, demo_item in enumerate(DEMO_LIST[:5]):
            demo_card = gr.Button(

            with gr.Tab("Live Preview"):
                sandbox = gr.HTML(label="Live Preview")
            with gr.Tab("History"):
+                history_output = gr.Chatbot(show_label=False, height=400, type="messages")
            status_indicator = gr.Markdown(
                'Ready to generate code',
            )

    # Event handlers
    btn.click(
        generation_code,
+        inputs=[input, image_input, setting, history, current_model, search_toggle],
        outputs=[code_output, history, sandbox, status_indicator, history_output]
    )
+    clear_btn.click(clear_history, outputs=[history, history_output])

if __name__ == "__main__":
    demo.queue(default_concurrency_limit=20).launch(ssr_mode=False)

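Because the Chatbot is switched to `type="messages"`, the new `history_to_chatbot_messages` helper flattens the app's `[(user, assistant)]` tuples into role/content dicts, keeping only the text parts of multimodal user turns. Its behavior can be checked in isolation (this sketch mirrors the function in the diff):

```python
def history_to_chatbot_messages(history):
    """Convert [(user, assistant)] tuples to the Gradio 'messages' format."""
    messages = []
    for user_msg, assistant_msg in history:
        # Multimodal user turns arrive as a list of {"type": ..., ...} parts;
        # keep only the text parts for display.
        if isinstance(user_msg, list):
            text = "".join(p.get("text", "") for p in user_msg
                           if isinstance(p, dict) and p.get("type") == "text")
            user_msg = text if text else str(user_msg)
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": assistant_msg})
    return messages

# A multimodal turn (text + image part) collapses to its text for the chatbot.
history = [([{"type": "text", "text": "make a landing page"},
             {"type": "image_url", "image_url": {"url": "data:..."}}],
            "<html></html>")]
```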
requirements.txt
CHANGED
@@ -1,2 +1,3 @@
 git+https://github.com/huggingface/huggingface_hub.git
-gradio[oauth]

 git+https://github.com/huggingface/huggingface_hub.git
+gradio[oauth]
+tavily-python
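The streaming loop in app.py strips the model's fenced response with `remove_code_block` before sending it to the sandbox. The diff truncates the actual pattern list, so the single regex below is an illustrative assumption of that extraction, not the app's exact code (the fence string is built programmatically only to keep this block readable):

```python
import re

FENCE = "`" * 3  # the literal triple-backtick fence marker

def remove_code_block(text: str) -> str:
    """Extract the body of a FENCE-html ... FENCE block from a model response."""
    match = re.search(FENCE + r"html\s*\n(.*?)" + FENCE, text, re.DOTALL)
    if match:
        return match.group(1).strip()
    return text.strip()  # no fence found: fall back to the raw text

# A typical model response wraps the HTML in an html-tagged fence.
resp = FENCE + "html\n<h1>Hello</h1>\n" + FENCE
```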