---
title: Mcp Discussion Bot
emoji: 🤖
colorFrom: purple
colorTo: yellow
sdk: gradio
sdk_version: 5.31.0
app_file: app.py
pinned: false
---
# 🤖 Hugging Face Discussion Bot
A FastAPI and Gradio application that automatically responds to Hugging Face Hub discussion comments, generating AI replies through the Hugging Face Inference API with MCP integration.
## ✨ Features
- **Webhook Integration**: Receives real-time webhooks from Hugging Face Hub when new discussion comments are posted
- **AI-Powered Responses**: Uses Hugging Face Inference API with MCP support for intelligent, context-aware responses
- **Interactive Dashboard**: Beautiful Gradio interface to monitor comments and test functionality
- **Automatic Posting**: Posts AI responses back to the original discussion thread
- **Testing Tools**: Built-in webhook simulation and AI testing capabilities
- **MCP Server**: Includes a Model Context Protocol server for advanced tool integration
## 🚀 Quick Start
### 1. Installation
```bash
# Clone the repository
git clone <your-repo-url>
cd mcp-course-unit3-example
# Install dependencies
pip install -e .
```
### 2. Environment Setup
Copy the example environment file and configure your API keys:
```bash
cp env.example .env
```
Edit `.env` with your credentials:
```env
# Webhook Configuration
WEBHOOK_SECRET=your-secure-webhook-secret
# Hugging Face Configuration
HF_TOKEN=hf_your_hugging_face_token_here
# Model Configuration (optional)
HF_MODEL=microsoft/DialoGPT-medium
HF_PROVIDER=huggingface
```
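At startup the server is expected to read these values from the environment. A minimal sketch of how that typically looks with `python-dotenv` (assuming that package is among the dependencies; variable names follow the `.env` example above):
```python
import os

from dotenv import load_dotenv

load_dotenv()  # load key=value pairs from .env into the process environment

WEBHOOK_SECRET = os.getenv("WEBHOOK_SECRET", "")
HF_TOKEN = os.getenv("HF_TOKEN", "")
HF_MODEL = os.getenv("HF_MODEL", "microsoft/DialoGPT-medium")  # optional override

if not WEBHOOK_SECRET or not HF_TOKEN:
    raise RuntimeError("WEBHOOK_SECRET and HF_TOKEN must be set in .env")
```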
### 3. Run the Application
```bash
python server.py
```
The application will start on `http://localhost:8000` with:
- 📊 **Gradio Dashboard**: `http://localhost:8000/gradio`
- 🔗 **Webhook Endpoint**: `http://localhost:8000/webhook`
- 📚 **API Documentation**: `http://localhost:8000/docs`
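All three endpoints share one port because the Gradio dashboard is mounted onto the FastAPI app. A hedged sketch of that pattern using Gradio's `mount_gradio_app` (illustrative only, not the verbatim contents of `server.py`):
```python
import gradio as gr
import uvicorn
from fastapi import FastAPI

app = FastAPI(title="HF Discussion Bot")

with gr.Blocks() as dashboard:
    gr.Markdown("# 🤖 Discussion Bot Dashboard")

# Serve the Gradio UI under /gradio on the same port as the API
app = gr.mount_gradio_app(app, dashboard, path="/gradio")

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```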
## 🔧 Configuration
### Hugging Face Hub Webhook Setup
1. Go to your Hugging Face repository settings
2. Navigate to the "Webhooks" section
3. Create a new webhook with:
- **URL**: `https://your-domain.com/webhook`
- **Secret**: Same as `WEBHOOK_SECRET` in your `.env`
- **Events**: Subscribe to "Community (PR & discussions)"
### Required API Keys
#### Hugging Face Token
1. Go to [Hugging Face Settings](https://huggingface.co/settings/tokens)
2. Create a new token with "Write" permissions
3. Add it to your `.env` as `HF_TOKEN`
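To confirm the token works before wiring everything up, you can call `whoami` from the `huggingface_hub` library:
```python
from huggingface_hub import HfApi

# Prints your account details if the token is valid; raises an error otherwise
api = HfApi(token="hf_your_hugging_face_token_here")
print(api.whoami())
```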
## 📊 Dashboard Features
### Recent Comments Tab
- View all processed discussion comments
- See AI responses in real-time
- Refresh and filter capabilities
### Test HF Inference Tab
- Direct testing of the Hugging Face Inference API
- Custom prompt input
- Response preview
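At its core this tab performs a single Inference API call. A minimal sketch with `huggingface_hub.InferenceClient`, assuming the configured model supports the chat-completion task:
```python
from huggingface_hub import InferenceClient

client = InferenceClient(token="hf_your_hugging_face_token_here")

# Ask the configured model to draft a reply to a discussion comment
response = client.chat_completion(
    model="microsoft/DialoGPT-medium",  # value of HF_MODEL from .env
    messages=[{"role": "user", "content": "How do I use this model?"}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```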
### Simulate Webhook Tab
- Test webhook processing without real HF events
- Mock discussion scenarios
- Validate AI response generation
### Configuration Tab
- View current setup status
- Check API key configuration
- Monitor processing statistics
## 🌐 API Endpoints
### POST `/webhook`
Receives webhooks from Hugging Face Hub.
**Headers:**
- `X-Webhook-Secret`: Your webhook secret
**Body:** HF Hub webhook payload
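A hedged sketch of the handler behind this endpoint (field names follow the simulated payload in the Testing section below; the actual code in `server.py` may differ):
```python
from fastapi import FastAPI, Header, HTTPException, Request

app = FastAPI()

WEBHOOK_SECRET = "your-secure-webhook-secret"  # normally loaded from .env

@app.post("/webhook")
async def webhook(request: Request, x_webhook_secret: str = Header(None)):
    # FastAPI maps the X-Webhook-Secret header to this parameter
    if x_webhook_secret != WEBHOOK_SECRET:
        raise HTTPException(status_code=401, detail="Invalid webhook secret")

    payload = await request.json()
    event = payload.get("event", {})

    # React only to newly created discussion comments
    if event.get("action") == "create" and event.get("scope") == "discussion.comment":
        comment = payload["comment"]["content"]
        # ... generate an AI response and post it back to the thread ...

    return {"status": "ok"}
```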
### GET `/comments`
Returns all processed comments and responses.
### GET `/`
Basic API information and available endpoints.
## 🤖 MCP Server
The application includes a Model Context Protocol (MCP) server that provides tools for:
- **get_discussions**: Retrieve discussions from HF repositories
- **get_discussion_details**: Get detailed information about specific discussions
- **comment_on_discussion**: Add comments to discussions
- **generate_ai_response**: Generate AI responses using HF Inference
- **respond_to_discussion**: Generate and post AI responses automatically
### Running the MCP Server
```bash
python mcp_server.py
```
The MCP server uses stdio transport and can be integrated with MCP clients following the [Tiny Agents pattern](https://huggingface.co/blog/python-tiny-agents).
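As an illustration, here is how one of the listed tools could be exposed with the official `mcp` Python SDK's `FastMCP` helper; the underlying `huggingface_hub.get_repo_discussions` call is a real library function, but the exact tool code in `mcp_server.py` may differ:
```python
from huggingface_hub import get_repo_discussions
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("hf-discussion-bot")

@mcp.tool()
def get_discussions(repo_id: str) -> list[str]:
    """List discussion titles for a Hugging Face repository."""
    return [d.title for d in get_repo_discussions(repo_id=repo_id)]

if __name__ == "__main__":
    mcp.run(transport="stdio")  # stdio transport, as noted above
```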
## 🧪 Testing
### Local Testing
Use the "Simulate Webhook" tab in the Gradio dashboard to test without real webhooks.
### Webhook Testing
You can test the webhook endpoint directly:
```bash
curl -X POST http://localhost:8000/webhook \
-H "Content-Type: application/json" \
-H "X-Webhook-Secret: your-webhook-secret" \
-d '{
"event": {"action": "create", "scope": "discussion.comment"},
"comment": {
"content": "@discussion-bot How do I use this model?",
"author": "test-user",
"created_at": "2024-01-01T00:00:00Z"
},
"discussion": {
"title": "Test Discussion",
"num": 1,
"url": {"api": "https://huggingface.co/api/repos/test/repo/discussions"}
},
"repo": {"name": "test/repo"}
}'
```
## 🏗️ Architecture
```
┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
│     HF Hub      │─────▶│     FastAPI     │─────▶│  HF Inference   │
│     Webhook     │      │     Server      │      │       API       │
└─────────────────┘      └─────────────────┘      └─────────────────┘
                                  │
                                  ▼
                         ┌─────────────────┐
                         │     Gradio      │
                         │    Dashboard    │
                         └─────────────────┘
                                  │
                                  ▼
                         ┌─────────────────┐
                         │   MCP Server    │
                         │     (Tools)     │
                         └─────────────────┘
```
## 🔒 Security
- Webhook secret verification prevents unauthorized requests
- Environment variables keep sensitive data secure
- CORS middleware configured for safe cross-origin requests
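For the secret check in particular, a constant-time comparison avoids leaking information through response timing; a small sketch using only the standard library:
```python
import hmac

def secret_is_valid(received: str, expected: str) -> bool:
    """Compare webhook secrets in constant time to resist timing attacks."""
    return hmac.compare_digest(received.encode(), expected.encode())
```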
## 🚀 Deployment
### Using Docker (Recommended)
```dockerfile
FROM python:3.11-slim

WORKDIR /app
COPY . .

# Install the project and its dependencies
RUN pip install -e .

EXPOSE 8000
CMD ["python", "server.py"]
```
### Using Cloud Platforms
The application can be deployed on:
- **Hugging Face Spaces** (recommended for HF integration)
- **Railway**
- **Render**
- **Heroku**
- **AWS/GCP/Azure**
## 🤝 Contributing
1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request
## 📝 License
This project is licensed under the MIT License.
## 🆘 Support
If you encounter issues:
1. Check the Configuration tab in the dashboard
2. Verify your API keys are correct
3. Ensure your webhook URL is publicly accessible
4. Check the application logs
For additional help, please open an issue in the repository.
## 🔗 Related Links
- [Hugging Face Webhooks Guide](https://huggingface.co/docs/hub/en/webhooks-guide-discussion-bot)
- [Hugging Face Hub Python Library](https://huggingface.co/docs/huggingface_hub/en/guides/community)
- [Tiny Agents in Python Blog Post](https://huggingface.co/blog/python-tiny-agents)
- [FastAPI Documentation](https://fastapi.tiangolo.com/)
- [Gradio Documentation](https://gradio.app/)
- [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)