---
title: Sentiment Analysis Server
emoji: π
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.33.2
app_file: app.py
pinned: false
---
# MCP Sentiment Analysis Server

A complete Model Context Protocol (MCP) server implementation for sentiment analysis with an integrated Gradio web interface. This project provides both AI model integration through the JSON-RPC 2.0 protocol and a human-friendly web interface for sentiment analysis using TextBlob and Transformers.
## Features

### Core Capabilities
- Dual Backend Support: TextBlob (fast) and Transformers (accurate)
- MCP Protocol Compliance: Full JSON-RPC 2.0 implementation
- Modern Web Interface: Responsive Gradio UI with real-time analysis
- Batch Processing: Analyze multiple texts simultaneously
- Analysis History: Track and visualize analysis results
- Confidence Scoring: Detailed confidence metrics for all predictions
### Technical Features
- Async Architecture: Non-blocking request processing
- Auto Backend Selection: Intelligent fallback between analysis engines
- Comprehensive Error Handling: Graceful failure management
- Performance Optimization: Model caching and uvloop integration
- Type Safety: Full type annotations throughout codebase
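The "auto backend selection" behavior can be sketched roughly as follows. This is an illustrative stand-in, not the project's actual code: the helper name `pick_backend` and the fallback order are assumptions.

```python
# Illustrative sketch of the "auto" backend fallback described above.
import importlib.util

def pick_backend(preferred: str = "auto") -> str:
    """Return a usable backend name, preferring transformers over textblob."""
    if preferred != "auto":
        return preferred  # explicit choice is honored as-is
    for candidate in ("transformers", "textblob"):
        # find_spec checks importability without paying the import cost
        if importlib.util.find_spec(candidate) is not None:
            return candidate
    raise RuntimeError("No sentiment analysis backend available")
```

An explicit backend request bypasses the probe entirely, so `pick_backend("textblob")` returns `"textblob"` even when Transformers is installed.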
## Requirements
- Python 3.8+
- 4GB+ RAM (for Transformers models)
- Internet connection (for model downloads)
## Installation

### 1. Clone Repository

```bash
git clone <repository-url>
cd mcp-sentiment
```

### 2. Install Dependencies

```bash
python3 -m pip install -r requirements.txt
```

### 3. Download TextBlob Corpora (Optional)

```bash
python3 -c "import nltk; nltk.download('punkt'); nltk.download('brown')"
```
## Quick Start

### Run Tests

```bash
python3 app.py --mode test
```
### Launch Web Interface with MCP Server

```bash
python3 app.py --mode gradio --mcp-server
```

Enables both the web interface and MCP server functionality in a single Gradio app. Visit http://localhost:7860 to access the web interface.

### Start MCP Server

```bash
python3 app.py --mode mcp
```
### Combined Mode (Default)

```bash
python3 app.py --mode combined
```

Runs both the MCP server and the Gradio interface simultaneously.
## Usage Examples

### Command Line Options

```bash
# Different execution modes
python3 app.py --mode mcp       # MCP server only
python3 app.py --mode gradio    # Web interface only
python3 app.py --mode combined  # Both services (default)
python3 app.py --mode test      # Run functionality tests

# Gradio customization
python3 app.py --mode gradio --port 8080   # Custom port
python3 app.py --mode gradio --share       # Enable public sharing
python3 app.py --mode gradio --debug       # Debug mode
python3 app.py --mode gradio --mcp-server  # Enable MCP server in Gradio

# Logging control
python3 app.py --log-level DEBUG  # Detailed logging
```
### Python API Usage

```python
import asyncio
from src import get_analyzer, analyze_sentiment

async def example():
    # Simple analysis
    result = await analyze_sentiment("I love this product!", backend="auto")
    print(f"Sentiment: {result['label']} (confidence: {result['confidence']:.2%})")

    # Advanced usage
    analyzer = await get_analyzer("transformers")
    result = await analyzer.analyze("This is amazing!")
    print(f"Result: {result.to_dict()}")

    # Batch processing
    texts = ["Great product!", "Terrible service", "It's okay"]
    results = await analyzer.analyze_batch(texts)
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value}")

asyncio.run(example())
```
### MCP Tool Integration

The server provides these MCP tools:

#### analyze_sentiment

```json
{
  "name": "analyze_sentiment",
  "arguments": {
    "text": "I love this new feature!",
    "backend": "auto"
  }
}
```

#### analyze_sentiment_batch

```json
{
  "name": "analyze_sentiment_batch",
  "arguments": {
    "texts": ["Great!", "Terrible!", "Okay."],
    "backend": "transformers"
  }
}
```

#### health_check

```json
{
  "name": "health_check",
  "arguments": {}
}
```
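On the wire, each of these tool payloads is wrapped in a standard JSON-RPC 2.0 envelope. A call to the first tool might look like this (the `tools/call` method name follows the MCP convention; the `id` value is arbitrary and chosen by the client):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_sentiment",
    "arguments": {
      "text": "I love this new feature!",
      "backend": "auto"
    }
  }
}
```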
## Gradio MCP Server Integration

The Gradio web interface can also serve as an MCP server, providing API endpoints that correspond to each sentiment analysis function. This allows the same interface to serve both human users through the web UI and AI models through the MCP protocol.

### Enable MCP Server in Gradio

**Method 1: Command Line Flag**

```bash
python3 app.py --mode gradio --mcp-server
```

**Method 2: Environment Variable**

```bash
export GRADIO_MCP_SERVER=true
python3 app.py --mode gradio
```

**Method 3: Programmatic (Python API)**

```python
from src import create_gradio_interface

interface = create_gradio_interface()
interface.launch(mcp_server=True)
```
### Available MCP Endpoints

When the MCP server is enabled, the Gradio interface automatically exposes these API endpoints:

- `POST /api/analyze_sentiment` - Single text analysis
- `POST /api/analyze_sentiment_batch` - Batch text analysis
- `POST /api/health_check` - Server health status
### MCP API Usage Examples

```bash
# Single sentiment analysis
curl -X POST http://localhost:7860/api/analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this product!", "backend": "auto"}'

# Batch analysis
curl -X POST http://localhost:7860/api/analyze_sentiment_batch \
  -H "Content-Type: application/json" \
  -d '{"texts": ["Great!", "Terrible!", "Okay."], "backend": "transformers"}'

# Health check
curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'
```
## Architecture

### Project Structure

```
mcp-sentiment/
├── src/
│   ├── __init__.py            # Package initialization
│   ├── sentiment_analyzer.py  # Core sentiment analysis engine
│   ├── mcp_server.py          # JSON-RPC 2.0 MCP server
│   ├── tools.py               # MCP tool definitions
│   └── gradio_interface.py    # Web UI implementation
├── requirements.txt           # Python dependencies
├── app.py                     # Application entry point
└── README.md                  # This file
```
### Component Overview

| Component | Purpose | Technology |
|---|---|---|
| MCP Server | AI model communication | JSON-RPC 2.0, asyncio |
| Gradio UI | Human web interface | Gradio framework |
| Sentiment Engine | Text analysis core | TextBlob/Transformers |
| Tool Registry | MCP tool definitions | Python decorators |
## Configuration

### Backend Selection

- `auto`: Automatically selects the best available backend
- `textblob`: Fast, rule-based sentiment analysis
- `transformers`: AI-powered analysis with higher accuracy
### Environment Variables

```bash
# Optional configuration
export MCP_LOG_LEVEL=INFO
export GRADIO_SERVER_PORT=7860
export GRADIO_MCP_SERVER=true  # Enable MCP server in Gradio interface
export TRANSFORMERS_CACHE=/path/to/cache
```
## Web Interface Features

### Main Analysis Tab
- Real-time sentiment analysis
- Confidence visualization with gauge charts
- Backend selection (TextBlob/Transformers/Auto)
- Quick example buttons for testing
### Batch Analysis Tab
- Process multiple texts simultaneously
- Results table with sentiment and confidence
- Summary visualization with pie charts
- Export capabilities
### History Tab
- Track all analysis results
- Statistical summaries
- Sentiment distribution charts
- Clear/refresh functionality
### Settings & Info Tab
- Backend status and configuration
- Model information and availability
- Usage tips and documentation
- System health monitoring
## Testing

### Run All Tests

```bash
python app.py --mode test
```

### Manual Testing

```bash
# Test individual components
python -m src.sentiment_analyzer
python -m src.tools
python -m src.mcp_server
```
### Performance Testing

```bash
# Benchmark different backends
python -c "
import asyncio
from src import get_analyzer
import time

async def benchmark():
    analyzer = await get_analyzer('textblob')
    start = time.time()
    await analyzer.analyze('Test message')
    print(f'TextBlob: {time.time() - start:.3f}s')

asyncio.run(benchmark())
"
```
## Troubleshooting

### Common Issues

**"No sentiment analysis backend available"**

```bash
pip install textblob
python -c "import nltk; nltk.download('punkt')"
```

**"Failed to load transformer model"**

```bash
pip install torch transformers
# Ensure sufficient RAM (4GB+)
```

**"Gradio not available"**

```bash
pip install gradio plotly pandas
```

**Port already in use**

```bash
python app.py --mode gradio --port 8080
```

### Debug Mode

```bash
python app.py --mode combined --debug --log-level DEBUG
```
## Security Considerations
- Input validation and sanitization
- Rate limiting for API endpoints
- No persistent data storage by default
- Configurable access controls
- Error message sanitization
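The input-validation idea above can be sketched as a small guard that runs before any analysis. This is a hypothetical helper: the name `validate_text`, the length limit, and the error messages are illustrative, not the project's actual values.

```python
# Hypothetical pre-analysis validation in the spirit of the checks listed above.
MAX_TEXT_LENGTH = 10_000  # illustrative limit, not the project's real setting

def validate_text(text) -> str:
    """Reject non-string, empty, or oversized inputs before they reach a backend."""
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("text must not be empty")
    if len(cleaned) > MAX_TEXT_LENGTH:
        raise ValueError(f"text exceeds {MAX_TEXT_LENGTH} characters")
    return cleaned
```

Rejecting bad input early keeps error messages generic (no model internals leak) and avoids wasting inference time on unusable requests.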
## Performance Optimization

### Model Caching
- Transformers models cached after first load
- Shared analyzer instances across requests
- Memory-efficient batch processing
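The shared-instance caching described above amounts to a per-backend registry that is populated once. The sketch below uses stand-in names (`DummyAnalyzer`, `get_cached_analyzer`) rather than the project's actual classes:

```python
# A rough sketch of the analyzer caching described above: one shared instance
# per backend, created lazily on first use.
import asyncio

_analyzer_cache = {}

class DummyAnalyzer:
    """Stand-in for a real backend analyzer (loading a model is expensive)."""
    def __init__(self, backend):
        self.backend = backend

async def get_cached_analyzer(backend: str) -> DummyAnalyzer:
    # First call pays the construction cost; later calls reuse the instance.
    # (Real code would guard this check with an asyncio.Lock to avoid races.)
    if backend not in _analyzer_cache:
        _analyzer_cache[backend] = DummyAnalyzer(backend)
    return _analyzer_cache[backend]

async def main():
    first = await get_cached_analyzer("textblob")
    second = await get_cached_analyzer("textblob")
    print(first is second)  # True: the same cached instance is reused

asyncio.run(main())
```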
### Async Processing
- Non-blocking request handling
- Concurrent batch analysis
- Thread pool for CPU-bound operations
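Offloading CPU-bound work to a thread pool, as the last point describes, typically looks like the sketch below. The function names and the toy scoring function are assumptions for illustration only:

```python
# Sketch of running CPU-bound analysis in a thread pool so the event loop
# stays free to accept new requests while inference runs.
import asyncio
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)

def _score_text(text: str) -> float:
    # Placeholder for CPU-bound model inference.
    return len(text) / 100.0

async def analyze_off_loop(text: str) -> float:
    loop = asyncio.get_running_loop()
    # run_in_executor hands the work to a pool thread and yields control
    # back to the event loop until the result is ready.
    return await loop.run_in_executor(_executor, _score_text, text)

async def main():
    # Several texts are scored concurrently on worker threads.
    scores = await asyncio.gather(
        *(analyze_off_loop(t) for t in ["good", "bad stuff"])
    )
    print(scores)

asyncio.run(main())
```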
### Resource Management
- Automatic cleanup on shutdown
- Memory monitoring and limits
- Configurable worker threads
## Monitoring & Metrics

### Built-in Metrics
- Request count and success rate
- Average response times
- Error rates by type
- Backend usage statistics
### Health Checks

```bash
curl -X POST http://localhost:7860/health \
  -H "Content-Type: application/json" \
  -d '{"method": "health_check", "params": {}}'
```
## Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request
### Development Setup

```bash
pip install -r requirements.txt
pip install pytest black flake8 mypy
```
### Code Quality

```bash
black src/    # Format code
flake8 src/   # Lint code
mypy src/     # Type checking
pytest tests/ # Run tests
```
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- Hugging Face Transformers for state-of-the-art NLP models
- TextBlob for simple sentiment analysis
- Gradio for the amazing web interface framework
- Model Context Protocol for the integration standard
## Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: Wiki
Made with ❤️ for the AI community