---
title: Sentiment Analysis Server
emoji: 📊
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.33.2
app_file: app.py
pinned: false
---
# MCP Sentiment Analysis Server
A complete **Model Context Protocol (MCP) Server** implementation for sentiment analysis with an integrated **Gradio Web Interface**. This project provides both AI model integration through the JSON-RPC 2.0 protocol and a human-friendly web interface for sentiment analysis using TextBlob and Transformers.
## Features
### Core Capabilities
- **Dual Backend Support**: TextBlob (fast) and Transformers (accurate)
- **MCP Protocol Compliance**: Full JSON-RPC 2.0 implementation
- **Modern Web Interface**: Responsive Gradio UI with real-time analysis
- **Batch Processing**: Analyze multiple texts simultaneously
- **Analysis History**: Track and visualize analysis results
- **Confidence Scoring**: Detailed confidence metrics for all predictions
### Technical Features
- **Async Architecture**: Non-blocking request processing
- **Auto Backend Selection**: Intelligent fallback between analysis engines
- **Comprehensive Error Handling**: Graceful failure management
- **Performance Optimization**: Model caching and uvloop integration
- **Type Safety**: Full type annotations throughout codebase
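The confidence scoring described above implies a structured result object. A minimal sketch of what such a result might look like, mirroring the `result.to_dict()` and `result.label.value` calls in the Python API examples; the field names and `SentimentLabel` enum here are illustrative assumptions, not the project's actual definitions:

```python
from dataclasses import dataclass, asdict
from enum import Enum

class SentimentLabel(Enum):
    POSITIVE = "positive"
    NEGATIVE = "negative"
    NEUTRAL = "neutral"

@dataclass
class SentimentResult:
    label: SentimentLabel
    confidence: float  # 0.0 to 1.0
    backend: str

    def to_dict(self) -> dict:
        # Flatten the enum to its string value for JSON-friendly output.
        d = asdict(self)
        d["label"] = self.label.value
        return d

result = SentimentResult(SentimentLabel.POSITIVE, 0.97, "transformers")
print(result.to_dict())
# {'label': 'positive', 'confidence': 0.97, 'backend': 'transformers'}
```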
## Requirements
- Python 3.8+
- 4GB+ RAM (for Transformers models)
- Internet connection (for model downloads)
## Installation
### 1. Clone Repository
```bash
git clone <repository-url>
cd mcp-sentiment
```
### 2. Install Dependencies
```bash
python3 -m pip install -r requirements.txt
```
### 3. Download TextBlob Corpora (Optional)
```bash
python3 -c "import nltk; nltk.download('punkt'); nltk.download('brown')"
```
## Quick Start
### Run Tests
```bash
python3 app.py --mode test
```
### Launch Web Interface with MCP Server
```bash
python3 app.py --mode gradio --mcp-server
```
Enables both the web interface and MCP server functionality in a single Gradio app.
### Launch Web Interface Only
```bash
python3 app.py --mode gradio
```
Visit `http://localhost:7860` to access the web interface.
### Start MCP Server
```bash
python3 app.py --mode mcp
```
### Combined Mode (Default)
```bash
python3 app.py --mode combined
```
Runs both MCP server and Gradio interface simultaneously.
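One plausible way combined mode can run both services on a single event loop is `asyncio.gather`; the entry-point names below are placeholders standing in for the real server loop and UI launch, not the project's actual functions:

```python
import asyncio

async def run_mcp_server():
    # Placeholder for the real MCP server loop (reads JSON-RPC requests).
    await asyncio.sleep(0.01)
    return "mcp ready"

async def run_gradio_ui():
    # Placeholder for interface.launch(); the real call blocks, so it
    # would typically be pushed to a thread rather than awaited directly.
    await asyncio.sleep(0.01)
    return "gradio ready"

async def main():
    # Combined mode: both services share one event loop.
    return await asyncio.gather(run_mcp_server(), run_gradio_ui())

print(asyncio.run(main()))  # ['mcp ready', 'gradio ready']
```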
## Usage Examples
### Command Line Options
```bash
# Different execution modes
python3 app.py --mode mcp # MCP server only
python3 app.py --mode gradio # Web interface only
python3 app.py --mode combined # Both services (default)
python3 app.py --mode test # Run functionality tests
# Gradio customization
python3 app.py --mode gradio --port 8080 # Custom port
python3 app.py --mode gradio --share # Enable public sharing
python3 app.py --mode gradio --debug # Debug mode
python3 app.py --mode gradio --mcp-server # Enable MCP server in Gradio
# Logging control
python3 app.py --log-level DEBUG # Detailed logging
```
### Python API Usage
```python
import asyncio
from src import get_analyzer, analyze_sentiment
async def example():
    # Simple analysis
    result = await analyze_sentiment("I love this product!", backend="auto")
    print(f"Sentiment: {result['label']} (confidence: {result['confidence']:.2%})")

    # Advanced usage
    analyzer = await get_analyzer("transformers")
    result = await analyzer.analyze("This is amazing!")
    print(f"Result: {result.to_dict()}")

    # Batch processing
    texts = ["Great product!", "Terrible service", "It's okay"]
    results = await analyzer.analyze_batch(texts)
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value}")

asyncio.run(example())
```
### MCP Tool Integration
The server provides these MCP tools:
#### `analyze_sentiment`
```json
{
  "name": "analyze_sentiment",
  "arguments": {
    "text": "I love this new feature!",
    "backend": "auto"
  }
}
```
#### `analyze_sentiment_batch`
```json
{
  "name": "analyze_sentiment_batch",
  "arguments": {
    "texts": ["Great!", "Terrible!", "Okay."],
    "backend": "transformers"
  }
}
```
#### `health_check`
```json
{
  "name": "health_check",
  "arguments": {}
}
```
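On the wire, each of the tool calls above travels inside a JSON-RPC 2.0 envelope; the MCP specification names the method `tools/call`. A small illustrative helper (not part of this project's API) for building such a request:

```python
import json

def make_tool_call(request_id, name, arguments):
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

request = make_tool_call(
    1, "analyze_sentiment",
    {"text": "I love this new feature!", "backend": "auto"},
)
print(request)
```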
## Gradio MCP Server Integration
The Gradio web interface can also serve as an MCP server, providing API endpoints that correspond to each sentiment analysis function. This allows the same interface to serve both human users through the web UI and AI models through the MCP protocol.
### Enable MCP Server in Gradio
#### Method 1: Command Line Flag
```bash
python3 app.py --mode gradio --mcp-server
```
#### Method 2: Environment Variable
```bash
export GRADIO_MCP_SERVER=true
python3 app.py --mode gradio
```
#### Method 3: Programmatic (Python API)
```python
from src import create_gradio_interface
interface = create_gradio_interface()
interface.launch(mcp_server=True)
```
### Available MCP Endpoints
When the MCP server is enabled, the Gradio interface automatically exposes these API endpoints:
- **POST /api/analyze_sentiment** - Single text analysis
- **POST /api/analyze_sentiment_batch** - Batch text analysis
- **POST /api/health_check** - Server health status
### MCP API Usage Examples
```bash
# Single sentiment analysis
curl -X POST http://localhost:7860/api/analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this product!", "backend": "auto"}'

# Batch analysis
curl -X POST http://localhost:7860/api/analyze_sentiment_batch \
  -H "Content-Type: application/json" \
  -d '{"texts": ["Great!", "Terrible!", "Okay."], "backend": "transformers"}'

# Health check
curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'
```
## Architecture
### Project Structure
```
mcp-sentiment/
├── src/
│   ├── __init__.py            # Package initialization
│   ├── sentiment_analyzer.py  # Core sentiment analysis engine
│   ├── mcp_server.py          # JSON-RPC 2.0 MCP server
│   ├── tools.py               # MCP tool definitions
│   └── gradio_interface.py    # Web UI implementation
├── requirements.txt           # Python dependencies
├── app.py                     # Application entry point
└── README.md                  # This file
```
### Component Overview
| Component | Purpose | Technology |
|-----------|---------|------------|
| **MCP Server** | AI model communication | JSON-RPC 2.0, asyncio |
| **Gradio UI** | Human web interface | Gradio framework |
| **Sentiment Engine** | Text analysis core | TextBlob/Transformers |
| **Tool Registry** | MCP tool definitions | Python decorators |
## Configuration
### Backend Selection
- **`auto`**: Automatically selects best available backend
- **`textblob`**: Fast, rule-based sentiment analysis
- **`transformers`**: AI-powered analysis with higher accuracy
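The `auto` fallback can be sketched as a simple preference search over the installed backends; the ordering below (Transformers first, then TextBlob) is an assumption based on the accuracy/speed trade-off described above, not the project's confirmed logic:

```python
def select_backend(requested, available):
    """Resolve 'auto' to a concrete backend name."""
    if requested != "auto":
        if requested not in available:
            raise ValueError(f"backend {requested!r} is not installed")
        return requested
    # Assumed preference order: transformers (accuracy) over textblob (speed).
    for candidate in ("transformers", "textblob"):
        if candidate in available:
            return candidate
    raise RuntimeError("No sentiment analysis backend available")

print(select_backend("auto", {"textblob"}))  # textblob (transformers missing)
```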
### Environment Variables
```bash
# Optional configuration
export MCP_LOG_LEVEL=INFO
export GRADIO_SERVER_PORT=7860
export GRADIO_MCP_SERVER=true # Enable MCP server in Gradio interface
export TRANSFORMERS_CACHE=/path/to/cache
```
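A sketch of how these variables might be read at startup; the helper and its parsing rules are illustrative (the defaults match the values shown above):

```python
import os

def load_config():
    """Read the optional environment variables above, with defaults."""
    return {
        "log_level": os.environ.get("MCP_LOG_LEVEL", "INFO"),
        "port": int(os.environ.get("GRADIO_SERVER_PORT", "7860")),
        "mcp_server": os.environ.get("GRADIO_MCP_SERVER", "false").lower()
        in ("1", "true", "yes"),
    }

os.environ["GRADIO_MCP_SERVER"] = "true"
print(load_config())
```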
## Web Interface Features
### Main Analysis Tab
- Real-time sentiment analysis
- Confidence visualization with gauge charts
- Backend selection (TextBlob/Transformers/Auto)
- Quick example buttons for testing
### Batch Analysis Tab
- Process multiple texts simultaneously
- Results table with sentiment and confidence
- Summary visualization with pie charts
- Export capabilities
### History Tab
- Track all analysis results
- Statistical summaries
- Sentiment distribution charts
- Clear/refresh functionality
### Settings & Info Tab
- Backend status and configuration
- Model information and availability
- Usage tips and documentation
- System health monitoring
## Testing
### Run All Tests
```bash
python3 app.py --mode test
```
### Manual Testing
```bash
# Test individual components
python3 -m src.sentiment_analyzer
python3 -m src.tools
python3 -m src.mcp_server
```
### Performance Testing
```bash
# Benchmark different backends
python3 -c "
import asyncio
from src import get_analyzer
import time

async def benchmark():
    analyzer = await get_analyzer('textblob')
    start = time.time()
    await analyzer.analyze('Test message')
    print(f'TextBlob: {time.time() - start:.3f}s')

asyncio.run(benchmark())
"
```
## Troubleshooting
### Common Issues
#### "No sentiment analysis backend available"
```bash
pip install textblob
python3 -c "import nltk; nltk.download('punkt')"
```
#### "Failed to load transformer model"
```bash
pip install torch transformers
# Ensure sufficient RAM (4GB+)
```
#### "Gradio not available"
```bash
pip install gradio plotly pandas
```
#### Port already in use
```bash
python3 app.py --mode gradio --port 8080
```
### Debug Mode
```bash
python3 app.py --mode combined --debug --log-level DEBUG
```
## Security Considerations
- Input validation and sanitization
- Rate limiting for API endpoints
- No persistent data storage by default
- Configurable access controls
- Error message sanitization
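Input validation might look like the following sketch; the length limit is an illustrative value, not the project's actual setting:

```python
MAX_TEXT_LENGTH = 10_000  # illustrative limit, not the project's actual value

def validate_text(text):
    """Basic input validation before analysis: type, emptiness, length."""
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    text = text.strip()
    if not text:
        raise ValueError("text must not be empty")
    if len(text) > MAX_TEXT_LENGTH:
        raise ValueError(f"text exceeds {MAX_TEXT_LENGTH} characters")
    return text

print(validate_text("  I love this product!  "))  # I love this product!
```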
## Performance Optimization
### Model Caching
- Transformers models cached after first load
- Shared analyzer instances across requests
- Memory-efficient batch processing
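The shared-instance caching described above can be sketched with `functools.lru_cache`; the `Analyzer` stand-in class is illustrative, not the project's real wrapper:

```python
from functools import lru_cache

class Analyzer:
    """Stand-in for a heavyweight model wrapper (illustrative)."""
    def __init__(self, backend):
        self.backend = backend  # real code would load the model here

@lru_cache(maxsize=None)
def get_cached_analyzer(backend):
    # First call per backend pays the load cost; later calls reuse it.
    return Analyzer(backend)

a = get_cached_analyzer("textblob")
b = get_cached_analyzer("textblob")
print(a is b)  # True: the same instance is shared across requests
```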
### Async Processing
- Non-blocking request handling
- Concurrent batch analysis
- Thread pool for CPU-bound operations
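Offloading CPU-bound model calls to a thread pool keeps the event loop free to serve other requests; a sketch using `run_in_executor` (the scoring function is a stand-in, not the real model call):

```python
import asyncio

def cpu_bound_score(text):
    # Stand-in for a synchronous, CPU-bound model call.
    return min(len(text) / 100.0, 1.0)

async def analyze_without_blocking(texts):
    # Run each synchronous call in the default thread pool so the
    # event loop stays responsive meanwhile.
    loop = asyncio.get_running_loop()
    return await asyncio.gather(
        *(loop.run_in_executor(None, cpu_bound_score, t) for t in texts)
    )

scores = asyncio.run(analyze_without_blocking(["Great!", "Terrible service"]))
print(scores)  # [0.06, 0.16]
```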
### Resource Management
- Automatic cleanup on shutdown
- Memory monitoring and limits
- Configurable worker threads
## Monitoring & Metrics
### Built-in Metrics
- Request count and success rate
- Average response times
- Error rates by type
- Backend usage statistics
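One way such metrics could be tracked in memory; the `Metrics` class below is an illustrative shape, not the project's actual implementation:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Metrics:
    """In-memory request metrics (illustrative, not the project's API)."""
    requests: int = 0
    errors: int = 0
    total_latency: float = 0.0
    backend_usage: Counter = field(default_factory=Counter)

    def record(self, backend, latency, ok):
        self.requests += 1
        self.total_latency += latency
        self.backend_usage[backend] += 1
        if not ok:
            self.errors += 1

    @property
    def success_rate(self):
        return 1.0 if self.requests == 0 else 1 - self.errors / self.requests

m = Metrics()
m.record("textblob", 0.004, ok=True)
m.record("transformers", 0.210, ok=False)
print(m.success_rate, dict(m.backend_usage))
```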
### Health Checks
```bash
curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'
```
## Contributing
1. Fork the repository
2. Create feature branch (`git checkout -b feature/amazing-feature`)
3. Commit changes (`git commit -m 'Add amazing feature'`)
4. Push to branch (`git push origin feature/amazing-feature`)
5. Open Pull Request
### Development Setup
```bash
pip install -r requirements.txt
pip install pytest black flake8 mypy
```
### Code Quality
```bash
black src/ # Format code
flake8 src/ # Lint code
mypy src/ # Type checking
pytest tests/ # Run tests
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- [Hugging Face Transformers](https://huggingface.co/transformers/) for state-of-the-art NLP models
- [TextBlob](https://textblob.readthedocs.io/) for simple sentiment analysis
- [Gradio](https://gradio.app/) for the amazing web interface framework
- [Model Context Protocol](https://modelcontextprotocol.io/) for the integration standard
## Support
- **Issues**: [GitHub Issues](https://github.com/your-repo/issues)
- **Discussions**: [GitHub Discussions](https://github.com/your-repo/discussions)
- **Documentation**: [Wiki](https://github.com/your-repo/wiki)
---
**Made with ❤️ for the AI community**