---
title: Sentiment Analysis Server
emoji: 🎭
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.33.2
app_file: app.py
pinned: false
---

MCP Sentiment Analysis Server

A complete Model Context Protocol (MCP) server implementation for sentiment analysis with an integrated Gradio web interface. This project provides both AI model integration through the JSON-RPC 2.0 protocol and a human-friendly web interface for sentiment analysis using TextBlob and Transformers.

🚀 Features

Core Capabilities

  • Dual Backend Support: TextBlob (fast) and Transformers (accurate)
  • MCP Protocol Compliance: Full JSON-RPC 2.0 implementation
  • Modern Web Interface: Responsive Gradio UI with real-time analysis
  • Batch Processing: Analyze multiple texts simultaneously
  • Analysis History: Track and visualize analysis results
  • Confidence Scoring: Detailed confidence metrics for all predictions

Technical Features

  • Async Architecture: Non-blocking request processing
  • Auto Backend Selection: Intelligent fallback between analysis engines
  • Comprehensive Error Handling: Graceful failure management
  • Performance Optimization: Model caching and uvloop integration
  • Type Safety: Full type annotations throughout codebase

📋 Requirements

  • Python 3.8+
  • 4GB+ RAM (for Transformers models)
  • Internet connection (for model downloads)

πŸ› οΈ Installation

1. Clone Repository

git clone <repository-url>
cd mcp-sentiment

2. Install Dependencies

python3 -m pip install -r requirements.txt

3. Download TextBlob Corpora (Optional)

python3 -c "import nltk; nltk.download('punkt'); nltk.download('brown')"

🎯 Quick Start

Run Tests

python3 app.py --mode test

Launch Web Interface with MCP Server

python3 app.py --mode gradio --mcp-server

Enables both the web interface and the MCP server in a single Gradio app. Visit http://localhost:7860 to access the web interface.

Start MCP Server

python3 app.py --mode mcp

Combined Mode (Default)

python3 app.py --mode combined

Runs both MCP server and Gradio interface simultaneously.

📖 Usage Examples

Command Line Options

# Different execution modes

python3 app.py --mode mcp                    # MCP server only
python3 app.py --mode gradio                 # Web interface only
python3 app.py --mode combined               # Both services (default)
python3 app.py --mode test                   # Run functionality tests

# Gradio customization
python3 app.py --mode gradio --port 8080     # Custom port
python3 app.py --mode gradio --share         # Enable public sharing
python3 app.py --mode gradio --debug         # Debug mode
python3 app.py --mode gradio --mcp-server    # Enable MCP server in Gradio

# Logging control
python3 app.py --log-level DEBUG             # Detailed logging

Python API Usage

import asyncio
from src import get_analyzer, analyze_sentiment

async def example():
    # Simple analysis
    result = await analyze_sentiment("I love this product!", backend="auto")
    print(f"Sentiment: {result['label']} (confidence: {result['confidence']:.2%})")
    
    # Advanced usage
    analyzer = await get_analyzer("transformers")
    result = await analyzer.analyze("This is amazing!")
    print(f"Result: {result.to_dict()}")
    
    # Batch processing
    texts = ["Great product!", "Terrible service", "It's okay"]
    results = await analyzer.analyze_batch(texts)
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value}")

asyncio.run(example())

MCP Tool Integration

The server provides these MCP tools:

analyze_sentiment

{
  "name": "analyze_sentiment",
  "arguments": {
    "text": "I love this new feature!",
    "backend": "auto"
  }
}

analyze_sentiment_batch

{
  "name": "analyze_sentiment_batch", 
  "arguments": {
    "texts": ["Great!", "Terrible!", "Okay."],
    "backend": "transformers"
  }
}

health_check

{
  "name": "health_check",
  "arguments": {}
}
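On the wire, each tool call above is wrapped in a JSON-RPC 2.0 envelope. A minimal sketch of building one in Python (the `tools/call` method name follows the MCP specification; the tool names match those listed above):

```python
import json

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 request string for an MCP tool call."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Example: wrap the analyze_sentiment call shown above
request = make_tool_call(
    1, "analyze_sentiment",
    {"text": "I love this new feature!", "backend": "auto"},
)
```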

🔗 Gradio MCP Server Integration

The Gradio web interface can also serve as an MCP server, providing API endpoints that correspond to each sentiment analysis function. This allows the same interface to serve both human users through the web UI and AI models through the MCP protocol.

Enable MCP Server in Gradio

Method 1: Command Line Flag

python3 app.py --mode gradio --mcp-server

Method 2: Environment Variable

export GRADIO_MCP_SERVER=true
python3 app.py --mode gradio

Method 3: Programmatic (Python API)

from src import create_gradio_interface

interface = create_gradio_interface()
interface.launch(mcp_server=True)

Available MCP Endpoints

When the MCP server is enabled, the Gradio interface automatically exposes these API endpoints:

  • POST /api/analyze_sentiment - Single text analysis
  • POST /api/analyze_sentiment_batch - Batch text analysis
  • POST /api/health_check - Server health status

MCP API Usage Examples

# Single sentiment analysis
curl -X POST http://localhost:7860/api/analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this product!", "backend": "auto"}'

# Batch analysis
curl -X POST http://localhost:7860/api/analyze_sentiment_batch \
  -H "Content-Type: application/json" \
  -d '{"texts": ["Great!", "Terrible!", "Okay."], "backend": "transformers"}'

# Health check
curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'
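The same endpoints can be called from Python using only the standard library. A sketch, assuming the server is running on the default port (the response shape is whatever the endpoint returns; `analyze` is a hypothetical helper, not part of the project API):

```python
import json
import urllib.request

BASE_URL = "http://localhost:7860"  # default Gradio port

def build_request(endpoint: str, payload: dict) -> urllib.request.Request:
    """Prepare a POST request for one of the MCP API endpoints."""
    return urllib.request.Request(
        f"{BASE_URL}/api/{endpoint}",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def analyze(text: str, backend: str = "auto") -> dict:
    """Send a single-text analysis request and return the parsed JSON reply."""
    req = build_request("analyze_sentiment", {"text": text, "backend": backend})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# result = analyze("I love this product!")  # requires a running server
```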

πŸ—οΈ Architecture

Project Structure

mcp-sentiment/
├── src/
│   ├── __init__.py              # Package initialization
│   ├── sentiment_analyzer.py    # Core sentiment analysis engine
│   ├── mcp_server.py            # JSON-RPC 2.0 MCP server
│   ├── tools.py                 # MCP tool definitions
│   └── gradio_interface.py      # Web UI implementation
├── requirements.txt             # Python dependencies
├── app.py                       # Application entry point
└── README.md                    # This file

Component Overview

| Component        | Purpose                | Technology            |
|------------------|------------------------|-----------------------|
| MCP Server       | AI model communication | JSON-RPC 2.0, asyncio |
| Gradio UI        | Human web interface    | Gradio framework      |
| Sentiment Engine | Text analysis core     | TextBlob/Transformers |
| Tool Registry    | MCP tool definitions   | Python decorators     |

🔧 Configuration

Backend Selection

  • auto: Automatically selects best available backend
  • textblob: Fast, rule-based sentiment analysis
  • transformers: AI-powered analysis with higher accuracy
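A minimal sketch of how the auto fallback could be structured (illustrative only; the actual selection logic lives in src/sentiment_analyzer.py):

```python
def select_backend(requested: str = "auto") -> str:
    """Resolve 'auto' to the best available backend, preferring transformers."""
    if requested != "auto":
        return requested
    try:
        import transformers  # noqa: F401  # heavier, higher accuracy
        return "transformers"
    except ImportError:
        pass
    try:
        import textblob  # noqa: F401  # lightweight, rule-based fallback
        return "textblob"
    except ImportError:
        raise RuntimeError("No sentiment analysis backend available")
```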

Environment Variables

# Optional configuration
export MCP_LOG_LEVEL=INFO
export GRADIO_SERVER_PORT=7860
export GRADIO_MCP_SERVER=true              # Enable MCP server in Gradio interface
export TRANSFORMERS_CACHE=/path/to/cache

📊 Web Interface Features

Main Analysis Tab

  • Real-time sentiment analysis
  • Confidence visualization with gauge charts
  • Backend selection (TextBlob/Transformers/Auto)
  • Quick example buttons for testing

Batch Analysis Tab

  • Process multiple texts simultaneously
  • Results table with sentiment and confidence
  • Summary visualization with pie charts
  • Export capabilities

History Tab

  • Track all analysis results
  • Statistical summaries
  • Sentiment distribution charts
  • Clear/refresh functionality

Settings & Info Tab

  • Backend status and configuration
  • Model information and availability
  • Usage tips and documentation
  • System health monitoring

🧪 Testing

Run All Tests

python app.py --mode test

Manual Testing

# Test individual components
python -m src.sentiment_analyzer
python -m src.tools
python -m src.mcp_server

Performance Testing

# Benchmark different backends
python -c "
import asyncio
from src import get_analyzer
import time

async def benchmark():
    analyzer = await get_analyzer('textblob')
    start = time.time()
    await analyzer.analyze('Test message')
    print(f'TextBlob: {time.time() - start:.3f}s')

asyncio.run(benchmark())
"

🚨 Troubleshooting

Common Issues

"No sentiment analysis backend available"

pip install textblob
python -c "import nltk; nltk.download('punkt')"

"Failed to load transformer model"

pip install torch transformers
# Ensure sufficient RAM (4GB+)

"Gradio not available"

pip install gradio plotly pandas

Port already in use

python app.py --mode gradio --port 8080

Debug Mode

python app.py --mode combined --debug --log-level DEBUG

🔒 Security Considerations

  • Input validation and sanitization
  • Rate limiting for API endpoints
  • No persistent data storage by default
  • Configurable access controls
  • Error message sanitization
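Input validation can be as simple as rejecting empty or oversized text before it reaches the model. A hypothetical sketch (the length limit is illustrative, not a documented default):

```python
MAX_TEXT_LENGTH = 10_000  # illustrative limit

def validate_input(text: str) -> str:
    """Reject empty or oversized input before it reaches a backend."""
    if not isinstance(text, str) or not text.strip():
        raise ValueError("text must be a non-empty string")
    if len(text) > MAX_TEXT_LENGTH:
        raise ValueError(f"text exceeds {MAX_TEXT_LENGTH} characters")
    return text.strip()
```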

🚀 Performance Optimization

Model Caching

  • Transformers models cached after first load
  • Shared analyzer instances across requests
  • Memory-efficient batch processing
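The caching pattern can be sketched as a module-level registry keyed by backend name, with a lock so concurrent first requests trigger only one model load (a simplified illustration of the pattern, not the project's actual code):

```python
import asyncio

_analyzers: dict = {}   # backend name -> shared analyzer instance
_lock = asyncio.Lock()  # guard concurrent first-time loads

async def get_cached_analyzer(backend: str, factory):
    """Return a shared analyzer, creating it once on first request."""
    async with _lock:
        if backend not in _analyzers:
            # The expensive model load happens exactly once per backend
            _analyzers[backend] = factory(backend)
        return _analyzers[backend]
```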

Async Processing

  • Non-blocking request handling
  • Concurrent batch analysis
  • Thread pool for CPU-bound operations
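CPU-bound inference should not block the event loop, so blocking calls are offloaded to a thread pool. A sketch using `asyncio.to_thread` (here `analyze_sync` is a stand-in for a real synchronous model call):

```python
import asyncio

def analyze_sync(text: str) -> str:
    """Placeholder for a CPU-bound model call (e.g., a transformers pipeline)."""
    return "positive" if "love" in text.lower() else "neutral"

async def analyze_async(texts):
    """Run several blocking analyses concurrently on the default thread pool."""
    return await asyncio.gather(
        *(asyncio.to_thread(analyze_sync, t) for t in texts)
    )

# labels = asyncio.run(analyze_async(["I love it", "It works"]))
```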

Resource Management

  • Automatic cleanup on shutdown
  • Memory monitoring and limits
  • Configurable worker threads

📈 Monitoring & Metrics

Built-in Metrics

  • Request count and success rate
  • Average response times
  • Error rates by type
  • Backend usage statistics

Health Checks

curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'

🤝 Contributing

  1. Fork the repository
  2. Create feature branch (git checkout -b feature/amazing-feature)
  3. Commit changes (git commit -m 'Add amazing feature')
  4. Push to branch (git push origin feature/amazing-feature)
  5. Open Pull Request

Development Setup

pip install -r requirements.txt
pip install pytest black flake8 mypy

Code Quality

black src/                    # Format code
flake8 src/                   # Lint code
mypy src/                     # Type checking
pytest tests/                 # Run tests

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgments

📞 Support


Made with ❤️ for the AI community