Denys Kanunnikov committed
Commit 776e7c0 · 0 Parent(s)

update logic
README.md ADDED
@@ -0,0 +1,431 @@
---
title: Sentiment Analysis Server
emoji: 🎭
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 5.33.2
app_file: app.py
pinned: false
---

# MCP Sentiment Analysis Server

A complete **Model Context Protocol (MCP) Server** implementation for sentiment analysis with an integrated **Gradio Web Interface**. This project provides both AI model integration through the JSON-RPC 2.0 protocol and a human-friendly web interface for sentiment analysis using TextBlob and Transformers.

## 🚀 Features

### Core Capabilities
- **Dual Backend Support**: TextBlob (fast) and Transformers (accurate)
- **MCP Protocol Compliance**: Full JSON-RPC 2.0 implementation
- **Modern Web Interface**: Responsive Gradio UI with real-time analysis
- **Batch Processing**: Analyze multiple texts simultaneously
- **Analysis History**: Track and visualize analysis results
- **Confidence Scoring**: Detailed confidence metrics for all predictions

### Technical Features
- **Async Architecture**: Non-blocking request processing
- **Auto Backend Selection**: Intelligent fallback between analysis engines
- **Comprehensive Error Handling**: Graceful failure management
- **Performance Optimization**: Model caching and uvloop integration
- **Type Safety**: Full type annotations throughout the codebase

## 📋 Requirements

- Python 3.8+
- 4GB+ RAM (for Transformers models)
- Internet connection (for model downloads)

## 🛠️ Installation

### 1. Clone Repository
```bash
git clone <repository-url>
cd mcp-sentiment
```

### 2. Install Dependencies
```bash
python3 -m pip install -r requirements.txt
```

### 3. Download TextBlob Corpora (Optional)
```bash
python3 -c "import nltk; nltk.download('punkt'); nltk.download('brown')"
```

## 🎯 Quick Start

### Run Tests
```bash
python3 app.py --mode test
```

### Launch Web Interface
```bash
python3 app.py --mode gradio
```
Visit `http://localhost:7860` to access the web interface.

### Launch Web Interface with MCP Server
```bash
python3 app.py --mode gradio --mcp-server
```
Enables both the web interface and MCP server functionality in a single Gradio app.

### Start MCP Server
```bash
python3 app.py --mode mcp
```

### Combined Mode (Default)
```bash
python3 app.py --mode combined
```
Runs both the MCP server and the Gradio interface simultaneously.

## 📖 Usage Examples

### Command Line Options

```bash
# Different execution modes
python3 app.py --mode mcp                   # MCP server only
python3 app.py --mode gradio                # Web interface only
python3 app.py --mode combined              # Both services (default)
python3 app.py --mode test                  # Run functionality tests

# Gradio customization
python3 app.py --mode gradio --port 8080    # Custom port
python3 app.py --mode gradio --share        # Enable public sharing
python3 app.py --mode gradio --debug        # Debug mode
python3 app.py --mode gradio --mcp-server   # Enable MCP server in Gradio

# Logging control
python3 app.py --log-level DEBUG            # Detailed logging
```

### Python API Usage

```python
import asyncio
from src import get_analyzer, analyze_sentiment

async def example():
    # Simple analysis
    result = await analyze_sentiment("I love this product!", backend="auto")
    print(f"Sentiment: {result['label']} (confidence: {result['confidence']:.2%})")

    # Advanced usage
    analyzer = await get_analyzer("transformers")
    result = await analyzer.analyze("This is amazing!")
    print(f"Result: {result.to_dict()}")

    # Batch processing
    texts = ["Great product!", "Terrible service", "It's okay"]
    results = await analyzer.analyze_batch(texts)
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value}")

asyncio.run(example())
```

### MCP Tool Integration

The server provides these MCP tools:

#### `analyze_sentiment`
```json
{
  "name": "analyze_sentiment",
  "arguments": {
    "text": "I love this new feature!",
    "backend": "auto"
  }
}
```

#### `analyze_sentiment_batch`
```json
{
  "name": "analyze_sentiment_batch",
  "arguments": {
    "texts": ["Great!", "Terrible!", "Okay."],
    "backend": "transformers"
  }
}
```

#### `health_check`
```json
{
  "name": "health_check",
  "arguments": {}
}
```

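On the wire, each of these tool invocations travels inside a standard JSON-RPC 2.0 `tools/call` envelope. A minimal sketch of driving the server in-process, using the `create_server()` factory and `process_request()` coroutine defined in `src/mcp_server.py` (the exact response shape may vary):

```python
import asyncio
import json

from src import create_server

async def demo():
    # Wrap the analyze_sentiment tool call in a JSON-RPC 2.0 envelope
    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "tools/call",
        "params": {
            "name": "analyze_sentiment",
            "arguments": {"text": "I love this new feature!", "backend": "auto"},
        },
    }

    server = await create_server()
    response = await server.process_request(json.dumps(request))
    print(json.dumps(json.loads(response), indent=2))

asyncio.run(demo())
```
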
## 🔗 Gradio MCP Server Integration

The Gradio web interface can also serve as an MCP server, providing API endpoints that correspond to each sentiment analysis function. This allows the same interface to serve both human users through the web UI and AI models through the MCP protocol.

### Enable MCP Server in Gradio

#### Method 1: Command Line Flag
```bash
python3 app.py --mode gradio --mcp-server
```

#### Method 2: Environment Variable
```bash
export GRADIO_MCP_SERVER=true
python3 app.py --mode gradio
```

#### Method 3: Programmatic (Python API)
```python
from src import create_gradio_interface

interface = create_gradio_interface()
interface.launch(mcp_server=True)
```

### Available MCP Endpoints

When the MCP server is enabled, the Gradio interface automatically exposes these API endpoints:

- **POST /api/analyze_sentiment** - Single text analysis
- **POST /api/analyze_sentiment_batch** - Batch text analysis
- **POST /api/health_check** - Server health status

### MCP API Usage Examples

```bash
# Single sentiment analysis
curl -X POST http://localhost:7860/api/analyze_sentiment \
  -H "Content-Type: application/json" \
  -d '{"text": "I love this product!", "backend": "auto"}'

# Batch analysis
curl -X POST http://localhost:7860/api/analyze_sentiment_batch \
  -H "Content-Type: application/json" \
  -d '{"texts": ["Great!", "Terrible!", "Okay."], "backend": "transformers"}'

# Health check
curl -X POST http://localhost:7860/api/health_check \
  -H "Content-Type: application/json" \
  -d '{}'
```

## 🏗️ Architecture

### Project Structure
```
mcp-sentiment/
├── src/
│   ├── __init__.py              # Package initialization
│   ├── sentiment_analyzer.py    # Core sentiment analysis engine
│   ├── mcp_server.py            # JSON-RPC 2.0 MCP server
│   ├── tools.py                 # MCP tool definitions
│   └── gradio_interface.py      # Web UI implementation
├── requirements.txt             # Python dependencies
├── app.py                       # Application entry point
└── README.md                    # This file
```

### Component Overview

| Component | Purpose | Technology |
|-----------|---------|------------|
| **MCP Server** | AI model communication | JSON-RPC 2.0, asyncio |
| **Gradio UI** | Human web interface | Gradio framework |
| **Sentiment Engine** | Text analysis core | TextBlob/Transformers |
| **Tool Registry** | MCP tool definitions | Python decorators |

## 🔧 Configuration

### Backend Selection
- **`auto`**: Automatically selects the best available backend (see the sketch below)
- **`textblob`**: Fast, rule-based sentiment analysis
- **`transformers`**: AI-powered analysis with higher accuracy

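You can inspect which backend `auto` resolved to at runtime. A small sketch using `get_analyzer()` and the `get_info()` method (as used in `src/gradio_interface.py`); it assumes `auto` falls back to TextBlob when Transformers is not installed:

```python
import asyncio

from src import get_analyzer

async def show_backend():
    analyzer = await get_analyzer("auto")
    info = analyzer.get_info()
    # Expected to report "transformers" when available, otherwise "textblob"
    print(f"Selected backend: {info['backend']}")

asyncio.run(show_backend())
```
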
### Environment Variables
```bash
# Optional configuration
export MCP_LOG_LEVEL=INFO
export GRADIO_SERVER_PORT=7860
export GRADIO_MCP_SERVER=true             # Enable MCP server in Gradio interface
export TRANSFORMERS_CACHE=/path/to/cache
```

## 📊 Web Interface Features

### Main Analysis Tab
- Real-time sentiment analysis
- Confidence visualization with gauge charts
- Backend selection (TextBlob/Transformers/Auto)
- Quick example buttons for testing

### Batch Analysis Tab
- Process multiple texts simultaneously
- Results table with sentiment and confidence
- Summary visualization with pie charts
- Export capabilities

### History Tab
- Track all analysis results
- Statistical summaries
- Sentiment distribution charts
- Clear/refresh functionality

### Settings & Info Tab
- Backend status and configuration
- Model information and availability
- Usage tips and documentation
- System health monitoring

## 🧪 Testing

### Run All Tests
```bash
python app.py --mode test
```

### Manual Testing
```bash
# Test individual components
python -m src.sentiment_analyzer
python -m src.tools
python -m src.mcp_server
```

### Performance Testing
```bash
# Benchmark different backends
python -c "
import asyncio
from src import get_analyzer
import time

async def benchmark():
    analyzer = await get_analyzer('textblob')
    start = time.time()
    await analyzer.analyze('Test message')
    print(f'TextBlob: {time.time() - start:.3f}s')

asyncio.run(benchmark())
"
```

## 🚨 Troubleshooting

### Common Issues

#### "No sentiment analysis backend available"
```bash
pip install textblob
python -c "import nltk; nltk.download('punkt')"
```

#### "Failed to load transformer model"
```bash
pip install torch transformers
# Ensure sufficient RAM (4GB+)
```

#### "Gradio not available"
```bash
pip install gradio plotly pandas
```

#### Port already in use
```bash
python app.py --mode gradio --port 8080
```

### Debug Mode
```bash
python app.py --mode combined --debug --log-level DEBUG
```

## 🔒 Security Considerations

- Input validation and sanitization
- Rate limiting for API endpoints
- No persistent data storage by default
- Configurable access controls
- Error message sanitization

## 🚀 Performance Optimization

### Model Caching
- Transformers models cached after first load
- Shared analyzer instances across requests
- Memory-efficient batch processing

### Async Processing
- Non-blocking request handling
- Concurrent batch analysis (see the sketch below)
- Thread pool for CPU-bound operations

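A sketch of what concurrent analysis looks like from the caller's side, assuming `analyze()` is safe to invoke concurrently (`analyze_batch()` in `src/sentiment_analyzer.py` wraps a similar pattern):

```python
import asyncio

from src import get_analyzer

async def concurrent_demo():
    analyzer = await get_analyzer("textblob")
    texts = ["Great product!", "Terrible service", "It's okay"]

    # Fan the analyses out concurrently instead of awaiting them one by one
    results = await asyncio.gather(*(analyzer.analyze(t) for t in texts))

    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value} ({result.confidence:.2f})")

asyncio.run(concurrent_demo())
```
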
### Resource Management
- Automatic cleanup on shutdown
- Memory monitoring and limits
- Configurable worker threads

## 📈 Monitoring & Metrics

### Built-in Metrics
- Request count and success rate
- Average response times
- Error rates by type
- Backend usage statistics (see the query sketch below)

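The request and error counters live on the `SentimentMCPServer` instance (`request_count`, `error_count`) and can be queried over JSON-RPC through the registered `server/stats` method. A sketch; the exact response fields are not shown in this commit, so the raw response is simply printed:

```python
import asyncio
import json

from src import create_server

async def stats():
    server = await create_server()
    request = {"jsonrpc": "2.0", "id": 1, "method": "server/stats", "params": {}}
    response = await server.process_request(json.dumps(request))
    print(response)

asyncio.run(stats())
```
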
383
+ ### Health Checks
384
+ ```bash
385
+ curl -X POST http://localhost:7860/health \
386
+ -H "Content-Type: application/json" \
387
+ -d '{"method": "health_check", "params": {}}'
388
+ ```
389
+
390
+ ## 🤝 Contributing
391
+
392
+ 1. Fork the repository
393
+ 2. Create feature branch (`git checkout -b feature/amazing-feature`)
394
+ 3. Commit changes (`git commit -m 'Add amazing feature'`)
395
+ 4. Push to branch (`git push origin feature/amazing-feature`)
396
+ 5. Open Pull Request
397
+
398
+ ### Development Setup
399
+ ```bash
400
+ pip install -r requirements.txt
401
+ pip install pytest black flake8 mypy
402
+ ```
403
+
404
+ ### Code Quality
405
+ ```bash
406
+ black src/ # Format code
407
+ flake8 src/ # Lint code
408
+ mypy src/ # Type checking
409
+ pytest tests/ # Run tests
410
+ ```
411
+
412
+ ## 📄 License
413
+
414
+ This project is licensed under the MIT License - see the LICENSE file for details.
415
+
416
+ ## 🙏 Acknowledgments
417
+
418
+ - [Hugging Face Transformers](https://huggingface.co/transformers/) for state-of-the-art NLP models
419
+ - [TextBlob](https://textblob.readthedocs.io/) for simple sentiment analysis
420
+ - [Gradio](https://gradio.app/) for the amazing web interface framework
421
+ - [Model Context Protocol](https://modelcontextprotocol.io/) for the integration standard
422
+
423
+ ## 📞 Support
424
+
425
+ - **Issues**: [GitHub Issues](https://github.com/your-repo/issues)
426
+ - **Discussions**: [GitHub Discussions](https://github.com/your-repo/discussions)
427
+ - **Documentation**: [Wiki](https://github.com/your-repo/wiki)
428
+
429
+ ---
430
+
431
+ **Made with ❤️ for the AI community**
app.py ADDED
@@ -0,0 +1,368 @@
#!/usr/bin/env python3
"""
Main entry point for MCP Sentiment Analysis Server.

This script provides multiple modes of operation:
1. MCP Server mode - JSON-RPC 2.0 server for AI model integration
2. Gradio Interface mode - Web UI for human interaction
3. Combined mode - Both MCP server and Gradio interface
4. Test mode - Run basic functionality tests

Usage:
    python app.py --mode mcp       # Run MCP server only
    python app.py --mode gradio    # Run Gradio interface only
    python app.py --mode combined  # Run both (default)
    python app.py --mode test      # Run tests
"""

import asyncio
import argparse
import logging
import sys
import signal
import threading
from typing import Optional
from concurrent.futures import ThreadPoolExecutor

try:
    import uvloop
    UVLOOP_AVAILABLE = True
except ImportError:
    UVLOOP_AVAILABLE = False

from src import (
    create_server,
    MCPServerRunner,
    create_gradio_interface,
    get_analyzer
)


class ApplicationRunner:
    """
    Main application runner that manages different execution modes.

    Supports running MCP server, Gradio interface, or both simultaneously
    with proper resource management and graceful shutdown.
    """

    def __init__(self):
        """Initialize application runner."""
        self.logger = logging.getLogger(__name__)
        self.mcp_server = None
        self.gradio_interface = None
        self.running = False
        self.executor = ThreadPoolExecutor(max_workers=2)

        # Setup signal handlers
        signal.signal(signal.SIGINT, self._signal_handler)
        signal.signal(signal.SIGTERM, self._signal_handler)

    def _signal_handler(self, signum, frame):
        """Handle shutdown signals."""
        self.logger.info(f"Received signal {signum}, shutting down...")
        self.running = False

    async def run_mcp_server(self) -> None:
        """Run MCP server only."""
        self.logger.info("Starting MCP server mode")

        try:
            # Create and run MCP server
            self.mcp_server = await create_server()
            runner = MCPServerRunner(self.mcp_server)

            self.running = True
            await runner.run()

        except Exception as e:
            self.logger.error(f"MCP server failed: {e}")
            raise
        finally:
            if self.mcp_server:
                await self.mcp_server.stop()

    def run_gradio_interface(self, **kwargs) -> None:
        """Run Gradio interface only."""
        self.logger.info("Starting Gradio interface mode")

        try:
            # Create and launch Gradio interface
            self.gradio_interface = create_gradio_interface()

            # Default launch parameters
            launch_params = {
                "server_name": "0.0.0.0",
                "server_port": 7860,
                "share": False,
                "debug": False,
                "show_error": True,
                "quiet": False
            }
            launch_params.update(kwargs)

            self.running = True
            self.gradio_interface.launch(**launch_params)

        except Exception as e:
            self.logger.error(f"Gradio interface failed: {e}")
            raise

    async def run_combined(self, **gradio_kwargs) -> None:
        """Run both MCP server and Gradio interface."""
        self.logger.info("Starting combined mode (MCP server + Gradio interface)")

        try:
            # Create MCP server
            self.mcp_server = await create_server()

            # Create Gradio interface
            self.gradio_interface = create_gradio_interface()

            # Default Gradio launch parameters
            launch_params = {
                "server_name": "0.0.0.0",
                "server_port": 7860,
                "share": False,
                "debug": False,
                "show_error": True,
                "quiet": False
            }
            launch_params.update(gradio_kwargs)

            self.running = True

            # Run Gradio in thread pool
            gradio_future = self.executor.submit(
                self.gradio_interface.launch, **launch_params
            )

            # Run MCP server in main thread
            runner = MCPServerRunner(self.mcp_server)

            # Start both services
            self.logger.info("Both services starting...")

            # Wait for either to complete or fail
            try:
                await runner.run()
            except Exception as e:
                self.logger.error(f"MCP server error: {e}")
                raise
            finally:
                # Cleanup
                if gradio_future:
                    gradio_future.cancel()

        except Exception as e:
            self.logger.error(f"Combined mode failed: {e}")
            raise
        finally:
            if self.mcp_server:
                await self.mcp_server.stop()

    async def run_tests(self) -> bool:
        """Run basic functionality tests."""
        self.logger.info("Running functionality tests...")

        try:
            # Test 1: Sentiment analyzer initialization
            self.logger.info("Test 1: Initializing sentiment analyzer...")
            analyzer = await get_analyzer("textblob")
            self.logger.info(f"✓ Analyzer initialized with backend: {analyzer.backend}")

            # Test 2: Basic sentiment analysis
            self.logger.info("Test 2: Basic sentiment analysis...")
            test_texts = [
                "I love this product!",
                "This is terrible.",
                "It's okay, nothing special."
            ]

            for text in test_texts:
                result = await analyzer.analyze(text)
                self.logger.info(f"✓ '{text}' -> {result.label.value} ({result.confidence:.2f})")

            # Test 3: Batch analysis
            self.logger.info("Test 3: Batch analysis...")
            batch_results = await analyzer.analyze_batch(test_texts)
            self.logger.info(f"✓ Batch analysis completed: {len(batch_results)} results")

            # Test 4: MCP tools
            self.logger.info("Test 4: MCP tools...")
            from src.tools import get_tools
            tools = get_tools()
            available_tools = tools.get_tools()
            self.logger.info(f"✓ {len(available_tools)} MCP tools available")

            # Test 5: Tool execution
            self.logger.info("Test 5: Tool execution...")
            result = await tools.call_tool("analyze_sentiment", {
                "text": "This is a test message",
                "backend": "textblob"
            })
            self.logger.info(f"✓ Tool execution successful: {result.get('success', False)}")

            # Test 6: Health check
            self.logger.info("Test 6: Health check...")
            health_result = await tools.call_tool("health_check", {})
            self.logger.info(f"✓ Health check: {health_result.get('status', 'unknown')}")

            # Cleanup
            await analyzer.cleanup()

            self.logger.info("🎉 All tests passed!")
            return True

        except Exception as e:
            self.logger.error(f"❌ Test failed: {e}")
            return False


def setup_logging(level: str = "INFO") -> None:
    """
    Setup logging configuration.

    Args:
        level: Logging level
    """
    logging.basicConfig(
        level=getattr(logging, level.upper()),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=[
            logging.StreamHandler(sys.stderr)
        ]
    )


def parse_arguments() -> argparse.Namespace:
    """
    Parse command line arguments.

    Returns:
        Parsed arguments
    """
    parser = argparse.ArgumentParser(
        description="MCP Sentiment Analysis Server",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python app.py --mode mcp                  # MCP server only
  python app.py --mode gradio               # Gradio interface only
  python app.py --mode combined             # Both services
  python app.py --mode test                 # Run tests
  python app.py --mode gradio --port 8080   # Custom port
  python app.py --mode gradio --share       # Public sharing
  python app.py --mode gradio --mcp-server  # Gradio with MCP server

Environment Variables:
  GRADIO_MCP_SERVER=true                    # Enable MCP server in Gradio
"""
    )

    parser.add_argument(
        "--mode",
        choices=["mcp", "gradio", "combined", "test"],
        default="combined",
        help="Execution mode (default: combined)"
    )

    parser.add_argument(
        "--log-level",
        choices=["DEBUG", "INFO", "WARNING", "ERROR"],
        default="INFO",
        help="Logging level (default: INFO)"
    )

    # Gradio-specific options
    parser.add_argument(
        "--port",
        type=int,
        default=7860,
        help="Gradio server port (default: 7860)"
    )

    parser.add_argument(
        "--host",
        default="0.0.0.0",
        help="Gradio server host (default: 0.0.0.0)"
    )

    parser.add_argument(
        "--share",
        action="store_true",
        help="Enable Gradio public sharing"
    )

    parser.add_argument(
        "--debug",
        action="store_true",
        help="Enable debug mode"
    )

    parser.add_argument(
        "--mcp-server",
        action="store_true",
        help="Enable MCP server functionality in Gradio interface"
    )

    return parser.parse_args()


async def main() -> None:
    """Main application entry point."""
    # Parse arguments
    args = parse_arguments()

    # Setup logging
    setup_logging(args.log_level)
    logger = logging.getLogger(__name__)

    # Use uvloop if available for better performance
    if UVLOOP_AVAILABLE and args.mode in ["mcp", "combined"]:
        asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
        logger.info("Using uvloop for better performance")

    # Create application runner
    runner = ApplicationRunner()

    try:
        if args.mode == "mcp":
            await runner.run_mcp_server()

        elif args.mode == "gradio":
            # Gradio runs in sync mode
            gradio_kwargs = {
                "server_name": args.host,
                "server_port": args.port,
                "share": args.share,
                "debug": args.debug,
                "mcp_server": args.mcp_server
            }
            runner.run_gradio_interface(**gradio_kwargs)

        elif args.mode == "combined":
            gradio_kwargs = {
                "server_name": args.host,
                "server_port": args.port,
                "share": args.share,
                "debug": args.debug,
                "mcp_server": args.mcp_server
            }
            await runner.run_combined(**gradio_kwargs)

        elif args.mode == "test":
            success = await runner.run_tests()
            sys.exit(0 if success else 1)

    except KeyboardInterrupt:
        logger.info("Application interrupted by user")
    except Exception as e:
        logger.error(f"Application error: {e}")
        sys.exit(1)
    finally:
        logger.info("Application shutdown complete")


if __name__ == "__main__":
    asyncio.run(main())
requirements.txt ADDED
@@ -0,0 +1,13 @@
gradio>=4.0.0
textblob>=0.17.1
transformers>=4.30.0
torch>=2.0.0
mcp>=1.0.0
aiohttp>=3.8.0
pydantic>=2.0.0
typing-extensions>=4.0.0
plotly>=5.0.0
pandas>=2.0.0
pytest>=7.0.0
uvloop>=0.17.0
src/__init__.py ADDED
@@ -0,0 +1,60 @@
"""
MCP Sentiment Analysis Server Package.

This package provides a complete Model Context Protocol server implementation
for sentiment analysis with Gradio web interface integration.
"""

from .sentiment_analyzer import (
    SentimentAnalyzer,
    SentimentResult,
    SentimentLabel,
    get_analyzer,
    analyze_sentiment
)

from .tools import (
    MCPTools,
    get_tools,
    list_tools,
    call_tool
)

from .mcp_server import (
    SentimentMCPServer,
    MCPServerRunner,
    create_server
)

from .gradio_interface import (
    GradioInterface,
    create_gradio_interface
)

__version__ = "1.0.0"
__author__ = "MCP Sentiment Analysis Team"
__description__ = "Model Context Protocol server for sentiment analysis with Gradio interface"

__all__ = [
    # Sentiment Analysis
    "SentimentAnalyzer",
    "SentimentResult",
    "SentimentLabel",
    "get_analyzer",
    "analyze_sentiment",

    # MCP Tools
    "MCPTools",
    "get_tools",
    "list_tools",
    "call_tool",

    # MCP Server
    "SentimentMCPServer",
    "MCPServerRunner",
    "create_server",

    # Gradio Interface
    "GradioInterface",
    "create_gradio_interface",
]
src/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (1.11 kB)
src/__pycache__/gradio_interface.cpython-312.pyc ADDED
Binary file (33.6 kB)
src/__pycache__/mcp_server.cpython-312.pyc ADDED
Binary file (22.8 kB)
src/__pycache__/sentiment_analyzer.cpython-312.pyc ADDED
Binary file (16.4 kB)
src/__pycache__/tools.cpython-312.pyc ADDED
Binary file (15.1 kB)
src/gradio_interface.py ADDED
@@ -0,0 +1,576 @@
"""
Gradio web interface for sentiment analysis.

This module provides a modern, responsive web interface using Gradio
for human interaction with the sentiment analysis system, including
real-time analysis, confidence visualization, and history tracking.
"""

import asyncio
import logging
import json
import os
from typing import Dict, Any, List, Tuple, Optional
from datetime import datetime
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px

try:
    import gradio as gr
    GRADIO_AVAILABLE = True
except ImportError:
    GRADIO_AVAILABLE = False
    logging.error("Gradio not available. Install with: pip install gradio")

from .sentiment_analyzer import get_analyzer, SentimentResult, SentimentLabel
from .tools import list_tools


class SentimentHistory:
    """Manages sentiment analysis history."""

    def __init__(self, max_entries: int = 100):
        self.max_entries = max_entries
        self.entries: List[Dict[str, Any]] = []
        self.logger = logging.getLogger(__name__)

    def add_entry(self, text: str, result: SentimentResult, backend: str) -> None:
        entry = {
            "timestamp": datetime.now().isoformat(),
            "text": text[:100] + "..." if len(text) > 100 else text,
            "full_text": text,
            "label": result.label.value,
            "confidence": result.confidence,
            "backend": backend,
            "raw_scores": result.raw_scores
        }

        self.entries.append(entry)

        if len(self.entries) > self.max_entries:
            self.entries = self.entries[-self.max_entries:]

    def get_recent_entries(self, count: int = 10) -> List[Dict[str, Any]]:
        return self.entries[-count:] if self.entries else []

    def get_statistics(self) -> Dict[str, Any]:
        if not self.entries:
            return {
                "total_analyses": 0,
                "label_distribution": {},
                "average_confidence": 0.0,
                "backend_usage": {}
            }

        labels = [entry["label"] for entry in self.entries]
        confidences = [entry["confidence"] for entry in self.entries]
        backends = [entry["backend"] for entry in self.entries]

        label_counts = {
            "positive": labels.count("positive"),
            "negative": labels.count("negative"),
            "neutral": labels.count("neutral")
        }

        backend_counts = {}
        for backend in backends:
            backend_counts[backend] = backend_counts.get(backend, 0) + 1

        return {
            "total_analyses": len(self.entries),
            "label_distribution": label_counts,
            "average_confidence": sum(confidences) / len(confidences),
            "backend_usage": backend_counts
        }


class GradioInterface:
    """Gradio web interface for sentiment analysis."""

    def __init__(self, title: str = "Sentiment Analysis Server",
                 description: str = "Analyze text sentiment using TextBlob or Transformers"):
        self.title = title
        self.description = description
        self.logger = logging.getLogger(__name__)
        self.history = SentimentHistory()
        self.interface = None
        self._setup_interface()

    def _setup_interface(self) -> None:
        if not GRADIO_AVAILABLE:
            raise RuntimeError("Gradio not available")

        with gr.Blocks(
            theme=gr.themes.Soft(),
            title=self.title
        ) as interface:

            gr.Markdown(f"# {self.title}")
            gr.Markdown(f"*{self.description}*")

            with gr.Tabs():
                with gr.TabItem("Sentiment Analysis"):
                    with gr.Row():
                        with gr.Column(scale=2):
                            text_input = gr.Textbox(
                                label="Text to Analyze",
                                placeholder="Enter text here to analyze its sentiment...",
                                lines=4
                            )

                            with gr.Row():
                                backend_choice = gr.Dropdown(
                                    choices=["auto", "textblob", "transformers"],
                                    value="auto",
                                    label="Analysis Backend"
                                )

                                analyze_btn = gr.Button(
                                    "Analyze Sentiment",
                                    variant="primary"
                                )

                        with gr.Column(scale=1):
                            result_display = gr.HTML(
                                value="<p>Enter text and click 'Analyze Sentiment' to see results.</p>"
                            )

                            confidence_plot = gr.Plot(visible=False)

                    gr.Markdown("### Quick Examples")
                    with gr.Row():
                        pos_btn = gr.Button("😊 Positive", size="sm")
                        neu_btn = gr.Button("😐 Neutral", size="sm")
                        neg_btn = gr.Button("😞 Negative", size="sm")
                        mix_btn = gr.Button("📝 Mixed", size="sm")

                with gr.TabItem("Batch Analysis"):
                    with gr.Row():
                        with gr.Column():
                            batch_input = gr.Textbox(
                                label="Texts to Analyze (one per line)",
                                placeholder="Enter multiple texts, one per line...",
                                lines=8
                            )

                            with gr.Row():
                                batch_backend = gr.Dropdown(
                                    choices=["auto", "textblob", "transformers"],
                                    value="auto",
                                    label="Analysis Backend"
                                )

                                batch_analyze_btn = gr.Button(
                                    "Analyze Batch",
                                    variant="primary"
                                )

                        with gr.Column():
                            batch_results = gr.DataFrame(
                                label="Batch Results",
                                headers=["Text", "Sentiment", "Confidence"]
                            )

                            batch_summary_plot = gr.Plot(visible=False)

                with gr.TabItem("Analysis History"):
                    with gr.Row():
                        refresh_history_btn = gr.Button("Refresh History", variant="secondary")
                        clear_history_btn = gr.Button("Clear History", variant="stop")

                    with gr.Row():
                        with gr.Column(scale=2):
                            history_table = gr.DataFrame(
                                label="Recent Analyses",
                                headers=["Time", "Text", "Sentiment", "Confidence", "Backend"]
                            )

                        with gr.Column(scale=1):
                            stats_display = gr.HTML(value="<p>No analyses yet.</p>")
                            history_plot = gr.Plot(visible=False)

                with gr.TabItem("Settings & Info"):
                    with gr.Row():
                        with gr.Column():
                            gr.Markdown("### Backend Information")
                            backend_info = gr.HTML(value="<p>Loading backend information...</p>")
                            refresh_info_btn = gr.Button("Refresh Info", variant="secondary")

                        with gr.Column():
                            gr.Markdown("### Usage Tips")
                            gr.Markdown("""
                            - **Auto**: Automatically selects the best available backend
                            - **TextBlob**: Fast, simple sentiment analysis
                            - **Transformers**: More accurate, AI-powered analysis
                            - **Batch Analysis**: Process multiple texts at once
                            - **History**: Track your analysis results over time
                            """)

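            # Gradio event callbacks are plain synchronous functions, so each
            # wrapper below bridges into the async analyzer API via asyncio.run(),
            # which spins up a short-lived event loop per call.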
            # Event handlers
            def analyze_sentiment(text: str, backend: str) -> Tuple[str, gr.Plot]:
                return asyncio.run(self._analyze_sentiment_async(text, backend))

            def analyze_batch(texts: str, backend: str) -> Tuple[pd.DataFrame, gr.Plot]:
                return asyncio.run(self._analyze_batch_async(texts, backend))

            def refresh_history() -> Tuple[pd.DataFrame, str, gr.Plot]:
                return self._get_history_data()

            def clear_history() -> Tuple[pd.DataFrame, str, gr.Plot]:
                self.history.entries.clear()
                return self._get_history_data()

            def get_backend_info() -> str:
                return asyncio.run(self._get_backend_info_async())

            def get_mcp_schema() -> str:
                """Get MCP tools schema as JSON."""
                return asyncio.run(self._get_mcp_schema_async())

            # Example texts
            examples = [
                "I absolutely love this new feature! It's incredible and makes everything so much easier.",
                "The weather is okay today, nothing particularly special about it.",
                "This is terrible and frustrating. I hate how complicated this has become.",
                "The movie had great visuals but the plot was disappointing. Mixed feelings overall."
            ]

            # Wire up events
            analyze_btn.click(
                analyze_sentiment,
                inputs=[text_input, backend_choice],
                outputs=[result_display, confidence_plot]
            )

            batch_analyze_btn.click(
                analyze_batch,
                inputs=[batch_input, batch_backend],
                outputs=[batch_results, batch_summary_plot]
            )

            refresh_history_btn.click(
                refresh_history,
                outputs=[history_table, stats_display, history_plot]
            )

            clear_history_btn.click(
                clear_history,
                outputs=[history_table, stats_display, history_plot]
            )

            refresh_info_btn.click(
                get_backend_info,
                outputs=[backend_info]
            )

            # Example buttons
            pos_btn.click(lambda: examples[0], outputs=[text_input])
            neu_btn.click(lambda: examples[1], outputs=[text_input])
            neg_btn.click(lambda: examples[2], outputs=[text_input])
            mix_btn.click(lambda: examples[3], outputs=[text_input])

            # Load initial data
            interface.load(get_backend_info, outputs=[backend_info])
            interface.load(refresh_history, outputs=[history_table, stats_display, history_plot])

        self.interface = interface

    async def _analyze_sentiment_async(self, text: str, backend: str) -> Tuple[str, gr.Plot]:
        try:
            if not text.strip():
                return "<p>Please enter some text to analyze.</p>", gr.Plot(visible=False)

            analyzer = await get_analyzer(backend)
            result = await analyzer.analyze(text)

            self.history.add_entry(text, result, analyzer.backend)

            sentiment_class = f"sentiment-{result.label.value}"
            confidence_class = (
                "confidence-high" if result.confidence > 0.7
                else "confidence-medium" if result.confidence > 0.4
                else "confidence-low"
            )

            html_result = f"""
            <div style="padding: 1rem; border-radius: 0.5rem; background: #f8fafc; border-left: 4px solid #3b82f6;">
                <h3>Analysis Result</h3>
                <p><strong>Sentiment:</strong> <span style="color: {'#22c55e' if result.label.value == 'positive' else '#ef4444' if result.label.value == 'negative' else '#6b7280'}; font-weight: bold;">{result.label.value.title()}</span></p>
                <p><strong>Confidence:</strong> <span style="color: {'#059669' if result.confidence > 0.7 else '#d97706' if result.confidence > 0.4 else '#dc2626'};">{result.confidence:.2%}</span></p>
                <p><strong>Backend:</strong> {analyzer.backend}</p>
                <p><strong>Text Length:</strong> {len(text)} characters</p>
            </div>
            """

            plot = self._create_confidence_plot(result)
            return html_result, plot

        except Exception as e:
            self.logger.error(f"Analysis failed: {e}")
            error_html = f"""
            <div style="padding: 1rem; border-radius: 0.5rem; background: #fef2f2; border-left: 4px solid #ef4444;">
                <h3>Analysis Error</h3>
                <p><strong>Error:</strong> {str(e)}</p>
                <p>Please try again or check your input.</p>
            </div>
            """
            return error_html, gr.Plot(visible=False)

    async def _analyze_batch_async(self, texts: str, backend: str) -> Tuple[pd.DataFrame, gr.Plot]:
        try:
            if not texts.strip():
                return pd.DataFrame(), gr.Plot(visible=False)

            text_list = [t.strip() for t in texts.split('\n') if t.strip()]

            if not text_list:
                return pd.DataFrame(), gr.Plot(visible=False)

            analyzer = await get_analyzer(backend)
            results = await analyzer.analyze_batch(text_list)

            data = []
            for text, result in zip(text_list, results):
                self.history.add_entry(text, result, analyzer.backend)

                data.append({
                    "Text": text[:50] + "..." if len(text) > 50 else text,
                    "Sentiment": result.label.value.title(),
                    "Confidence": f"{result.confidence:.2%}"
                })

            df = pd.DataFrame(data)
            plot = self._create_batch_summary_plot(results)

            return df, plot

        except Exception as e:
            self.logger.error(f"Batch analysis failed: {e}")
            return pd.DataFrame([{"Error": str(e)}]), gr.Plot(visible=False)

    def _create_confidence_plot(self, result: SentimentResult) -> gr.Plot:
        try:
            fig = go.Figure(go.Indicator(
                mode="gauge+number",
                value=result.confidence * 100,
                domain={'x': [0, 1], 'y': [0, 1]},
                title={'text': f"Confidence - {result.label.value.title()}"},
                gauge={
                    'axis': {'range': [None, 100]},
                    'bar': {'color': "darkblue"},
                    'steps': [
                        {'range': [0, 40], 'color': "lightgray"},
                        {'range': [40, 70], 'color': "yellow"},
                        {'range': [70, 100], 'color': "green"}
                    ]
                }
            ))

            fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20))
            return gr.Plot(value=fig, visible=True)

        except Exception as e:
            self.logger.error(f"Failed to create confidence plot: {e}")
            return gr.Plot(visible=False)

    def _create_batch_summary_plot(self, results: List[SentimentResult]) -> gr.Plot:
        try:
            labels = [result.label.value for result in results]
            label_counts = {
                "Positive": labels.count("positive"),
                "Negative": labels.count("negative"),
                "Neutral": labels.count("neutral")
            }

            fig = px.pie(
                values=list(label_counts.values()),
                names=list(label_counts.keys()),
                title="Sentiment Distribution",
                color_discrete_map={
                    "Positive": "#22c55e",
                    "Negative": "#ef4444",
                    "Neutral": "#6b7280"
                }
            )

            fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20))
            return gr.Plot(value=fig, visible=True)

        except Exception as e:
            self.logger.error(f"Failed to create batch summary plot: {e}")
            return gr.Plot(visible=False)

    def _get_history_data(self) -> Tuple[pd.DataFrame, str, gr.Plot]:
        try:
            entries = self.history.get_recent_entries(20)

            if not entries:
                empty_df = pd.DataFrame(columns=["Time", "Text", "Sentiment", "Confidence", "Backend"])
                return empty_df, "<p>No analyses yet.</p>", gr.Plot(visible=False)

            data = []
            for entry in reversed(entries):
                data.append({
                    "Time": entry["timestamp"][:19].replace("T", " "),
                    "Text": entry["text"],
                    "Sentiment": entry["label"].title(),
                    "Confidence": f"{entry['confidence']:.2%}",
                    "Backend": entry["backend"]
                })

            df = pd.DataFrame(data)
            stats = self.history.get_statistics()

            stats_html = f"""
            <div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 1rem; border-radius: 0.5rem;">
                <h3>📊 Analysis Statistics</h3>
                <p><strong>Total Analyses:</strong> {stats['total_analyses']}</p>
                <p><strong>Average Confidence:</strong> {stats['average_confidence']:.2%}</p>
                <h4>Sentiment Distribution:</h4>
                <ul>
                    <li>Positive: {stats['label_distribution'].get('positive', 0)}</li>
                    <li>Negative: {stats['label_distribution'].get('negative', 0)}</li>
                    <li>Neutral: {stats['label_distribution'].get('neutral', 0)}</li>
                </ul>
            </div>
            """

            plot = self._create_history_plot(stats) if stats['total_analyses'] > 0 else gr.Plot(visible=False)
            return df, stats_html, plot

        except Exception as e:
            self.logger.error(f"Failed to get history data: {e}")
            error_df = pd.DataFrame([{"Error": str(e)}])
            return error_df, f"<p>Error loading history: {e}</p>", gr.Plot(visible=False)

    def _create_history_plot(self, stats: Dict[str, Any]) -> gr.Plot:
        try:
            labels = list(stats['label_distribution'].keys())
            values = list(stats['label_distribution'].values())

            fig = px.bar(
                x=[label.title() for label in labels],
                y=values,
                title="Historical Sentiment Distribution",
                color=labels,
                color_discrete_map={
                    "positive": "#22c55e",
                    "negative": "#ef4444",
                    "neutral": "#6b7280"
                }
            )

            fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20), showlegend=False)
            return gr.Plot(value=fig, visible=True)

        except Exception as e:
            self.logger.error(f"Failed to create history plot: {e}")
            return gr.Plot(visible=False)

    async def _get_backend_info_async(self) -> str:
        try:
            analyzer = await get_analyzer("auto")
            info = analyzer.get_info()

            html = f"""
            <div style="padding: 1rem; border-radius: 0.5rem; background: #f0f9ff; border-left: 4px solid #0ea5e9;">
                <h3>🔧 Backend Information</h3>
                <p><strong>Current Backend:</strong> {info['backend']}</p>
                <p><strong>Model Loaded:</strong> {'Yes' if info['model_loaded'] else 'No'}</p>
                <p><strong>TextBlob Available:</strong> {'Yes' if info['textblob_available'] else 'No'}</p>
                <p><strong>Transformers Available:</strong> {'Yes' if info['transformers_available'] else 'No'}</p>
                <p><strong>CUDA Available:</strong> {'Yes' if info.get('cuda_available', False) else 'No'}</p>
                {f"<p><strong>Model Name:</strong> {info['model_name']}</p>" if info.get('model_name') else ""}
            </div>
            """
            return html

        except Exception as e:
            self.logger.error(f"Failed to get backend info: {e}")
            return f"""
            <div style="padding: 1rem; border-radius: 0.5rem; background: #fef2f2; border-left: 4px solid #ef4444;">
                <h3>❌ Backend Error</h3>
                <p>Failed to load backend information: {str(e)}</p>
            </div>
            """

    async def _get_mcp_schema_async(self) -> str:
        """Get MCP tools schema as formatted JSON."""
        try:
            tools = await list_tools()
            schema = {
                "mcp_version": "2024-11-05",
                "server_info": {
                    "name": "sentiment-analyzer",
                    "version": "1.0.0",
                    "description": "Sentiment analysis server using TextBlob and Transformers"
                },
                "tools": tools,
                "total_tools": len(tools)
            }
            return json.dumps(schema, indent=2)

        except Exception as e:
            self.logger.error(f"Failed to get MCP schema: {e}")
            return json.dumps({
                "error": str(e),
                "error_type": type(e).__name__
            }, indent=2)

    def launch(self, **kwargs) -> None:
        if not self.interface:
            raise RuntimeError("Interface not initialized")

        # Check for MCP server mode from environment variable or parameter
        mcp_server_enabled = (
            kwargs.get("mcp_server", False) or
            os.getenv("GRADIO_MCP_SERVER", "").lower() in ("true", "1", "yes", "on")
        )

        launch_params = {
            "server_name": "0.0.0.0",
            "server_port": 7860,
            "share": False,
            "debug": False,
            "show_error": True,
            "quiet": False
        }

        # Add MCP server parameter if enabled
        if mcp_server_enabled:
            launch_params["mcp_server"] = True
            self.logger.info("MCP server functionality enabled for Gradio interface")

        launch_params.update(kwargs)

        self.logger.info(f"Launching Gradio interface on {launch_params['server_name']}:{launch_params['server_port']}")
        if mcp_server_enabled:
            self.logger.info("Gradio interface will also serve as MCP server with API endpoints")

        try:
            self.interface.launch(**launch_params)
        except Exception as e:
            self.logger.error(f"Failed to launch interface: {e}")
            raise


def create_gradio_interface(**kwargs) -> GradioInterface:
    if not GRADIO_AVAILABLE:
        raise RuntimeError("Gradio not available. Install with: pip install gradio")

    return GradioInterface(**kwargs)


async def main() -> None:
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    )

    interface = create_gradio_interface()
    interface.launch(debug=True)


if __name__ == "__main__":
    asyncio.run(main())
src/mcp_server.py ADDED
@@ -0,0 +1,572 @@
1
+ """
2
+ MCP Server implementation for sentiment analysis.
3
+
4
+ This module implements a Model Context Protocol server that provides
5
+ sentiment analysis capabilities through JSON-RPC 2.0 protocol with
6
+ async request handling and comprehensive error management.
7
+ """
8
+
9
+ import asyncio
10
+ import json
11
+ import logging
12
+ import sys
13
+ import traceback
14
+ from typing import Dict, Any, List, Optional, Union
15
+ from dataclasses import dataclass
16
+ from enum import Enum
17
+ import uuid
18
+ from datetime import datetime
19
+
20
+ try:
21
+ import uvloop
22
+ UVLOOP_AVAILABLE = True
23
+ except ImportError:
24
+ UVLOOP_AVAILABLE = False
25
+
26
+ from .tools import get_tools, list_tools, call_tool
27
+ from .sentiment_analyzer import get_analyzer
28
+
29
+
30
+ class MCPMessageType(Enum):
31
+ """MCP message types."""
32
+ REQUEST = "request"
33
+ RESPONSE = "response"
34
+ NOTIFICATION = "notification"
35
+
36
+
37
+ @dataclass
38
+ class MCPRequest:
39
+ """MCP request message structure."""
40
+ jsonrpc: str
41
+ method: str
42
+ params: Optional[Dict[str, Any]] = None
43
+ id: Optional[Union[str, int]] = None
44
+
45
+
46
+ @dataclass
47
+ class MCPResponse:
48
+ """MCP response message structure."""
49
+ jsonrpc: str
50
+ id: Optional[Union[str, int]]
51
+ result: Optional[Dict[str, Any]] = None
52
+ error: Optional[Dict[str, Any]] = None
53
+
54
+
55
+ @dataclass
56
+ class MCPError:
57
+ """MCP error structure."""
58
+ code: int
59
+ message: str
60
+ data: Optional[Dict[str, Any]] = None
61
+
62
+
63
+ class MCPErrorCodes:
64
+ """Standard JSON-RPC 2.0 error codes."""
65
+ PARSE_ERROR = -32700
66
+ INVALID_REQUEST = -32600
67
+ METHOD_NOT_FOUND = -32601
68
+ INVALID_PARAMS = -32602
69
+ INTERNAL_ERROR = -32603
70
+
71
+ # Custom error codes
72
+ TOOL_ERROR = -32000
73
+ ANALYZER_ERROR = -32001
74
+ VALIDATION_ERROR = -32002
75
+
76
+
77
+ class SentimentMCPServer:
78
+ """
79
+ Model Context Protocol server for sentiment analysis.
80
+
81
+ Implements JSON-RPC 2.0 protocol with async request handling,
82
+ tool registration, and comprehensive error management.
83
+ """
84
+
85
+ def __init__(self, name: str = "sentiment-analyzer", version: str = "1.0.0"):
86
+ """
87
+ Initialize MCP server.
88
+
89
+ Args:
90
+ name: Server name
91
+ version: Server version
92
+ """
93
+ self.name = name
94
+ self.version = version
95
+ self.logger = logging.getLogger(__name__)
96
+
97
+ # Server state
98
+ self.running = False
99
+ self.request_count = 0
100
+ self.error_count = 0
101
+ self.start_time = None
102
+
103
+ # Request handlers
104
+ self._handlers = {
105
+ "initialize": self._handle_initialize,
106
+ "tools/list": self._handle_list_tools,
107
+ "tools/call": self._handle_call_tool,
108
+ "ping": self._handle_ping,
109
+ "server/info": self._handle_server_info,
110
+ "server/stats": self._handle_server_stats
111
+ }
112
+
113
+ self.logger.info(f"Initialized MCP server '{name}' v{version}")
114
+
115
+ async def start(self) -> None:
116
+ """Start the MCP server."""
117
+ self.running = True
118
+ self.start_time = datetime.now()
119
+ self.logger.info(f"MCP server '{self.name}' started")
120
+
121
+ # Pre-load analyzer to improve first request performance
122
+ try:
123
+ await get_analyzer("auto")
124
+ self.logger.info("Sentiment analyzer pre-loaded successfully")
125
+ except Exception as e:
126
+ self.logger.warning(f"Failed to pre-load analyzer: {e}")
127
+
128
+ async def stop(self) -> None:
129
+ """Stop the MCP server."""
130
+ self.running = False
131
+ self.logger.info(f"MCP server '{self.name}' stopped")
132
+
133
+ # Cleanup resources
134
+ try:
135
+ analyzer = await get_analyzer("auto")
136
+ await analyzer.cleanup()
137
+ except Exception as e:
138
+ self.logger.error(f"Error during cleanup: {e}")
139
+
140
+ def _create_error_response(self, request_id: Optional[Union[str, int]],
141
+ code: int, message: str,
142
+ data: Optional[Dict[str, Any]] = None) -> MCPResponse:
143
+ """
144
+ Create error response.
145
+
146
+ Args:
147
+ request_id: Request ID
148
+ code: Error code
149
+ message: Error message
150
+ data: Additional error data
151
+
152
+ Returns:
153
+ MCPResponse with error
154
+ """
155
+ error = {
156
+ "code": code,
157
+ "message": message
158
+ }
159
+ if data:
160
+ error["data"] = data
161
+
162
+ return MCPResponse(
163
+ jsonrpc="2.0",
164
+ id=request_id,
165
+ error=error
166
+ )
167
+
168
+ def _create_success_response(self, request_id: Optional[Union[str, int]],
169
+ result: Dict[str, Any]) -> MCPResponse:
170
+ """
171
+ Create success response.
172
+
173
+ Args:
174
+ request_id: Request ID
175
+ result: Response result
176
+
177
+ Returns:
178
+ MCPResponse with result
179
+ """
180
+ return MCPResponse(
181
+ jsonrpc="2.0",
182
+ id=request_id,
183
+ result=result
184
+ )
185
+
186
+ def _parse_request(self, message: str) -> MCPRequest:
187
+ """
188
+ Parse JSON-RPC request message.
189
+
190
+ Args:
191
+ message: JSON message string
192
+
193
+ Returns:
194
+ Parsed MCPRequest
195
+
196
+ Raises:
197
+ ValueError: If parsing fails
198
+ """
199
+ try:
200
+ data = json.loads(message)
201
+ except json.JSONDecodeError as e:
202
+ raise ValueError(f"Invalid JSON: {e}")
203
+
204
+ # Validate required fields
205
+ if not isinstance(data, dict):
206
+ raise ValueError("Request must be a JSON object")
207
+
208
+ if data.get("jsonrpc") != "2.0":
209
+ raise ValueError("Invalid JSON-RPC version")
210
+
211
+ if "method" not in data:
212
+ raise ValueError("Missing 'method' field")
213
+
214
+ return MCPRequest(
215
+ jsonrpc=data["jsonrpc"],
216
+ method=data["method"],
217
+ params=data.get("params"),
218
+ id=data.get("id")
219
+ )
220
+
221
+ async def process_request(self, message: str) -> str:
222
+ """
223
+ Process incoming JSON-RPC request.
224
+
225
+ Args:
226
+ message: JSON-RPC request message
227
+
228
+ Returns:
229
+ JSON-RPC response message
230
+ """
231
+ request_id = None
232
+
233
+ try:
234
+ # Parse request
235
+ try:
236
+ request = self._parse_request(message)
237
+ request_id = request.id
238
+ except ValueError as e:
239
+ response = self._create_error_response(
240
+ None, MCPErrorCodes.PARSE_ERROR, str(e)
241
+ )
242
+ return json.dumps(response.__dict__)
243
+
244
+ # Update stats
245
+ self.request_count += 1
246
+
247
+ # Log request
248
+ self.logger.debug(f"Processing request: {request.method} (ID: {request_id})")
249
+
250
+ # Handle request
251
+ if request.method in self._handlers:
252
+ handler = self._handlers[request.method]
253
+ result = await handler(request.params or {})
254
+ response = self._create_success_response(request_id, result)
255
+ else:
256
+ response = self._create_error_response(
257
+ request_id,
258
+ MCPErrorCodes.METHOD_NOT_FOUND,
259
+ f"Method '{request.method}' not found"
260
+ )
261
+
262
+ except Exception as e:
263
+ self.error_count += 1
264
+ self.logger.error(f"Request processing failed: {e}")
265
+ self.logger.debug(traceback.format_exc())
266
+
267
+ response = self._create_error_response(
268
+ request_id,
269
+ MCPErrorCodes.INTERNAL_ERROR,
270
+ "Internal server error",
271
+ {"error": str(e), "type": type(e).__name__}
272
+ )
273
+
274
+ # Convert response to JSON
275
+ response_dict = response.__dict__
276
+ # Remove None values
277
+ response_dict = {k: v for k, v in response_dict.items() if v is not None}
278
+
279
+ return json.dumps(response_dict)
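
For reference, one full round trip through `process_request` looks like this (a `ping` exchange; the `id` and timestamp values are illustrative):

```json
{"jsonrpc": "2.0", "id": 1, "method": "ping"}
```

```json
{"jsonrpc": "2.0", "id": 1, "result": {"pong": true, "timestamp": "2024-01-01T12:00:00", "server": "sentiment-analyzer", "version": "1.0.0"}}
```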
280
+
281
+ async def _handle_initialize(self, params: Dict[str, Any]) -> Dict[str, Any]:
282
+ """
283
+ Handle initialize request.
284
+
285
+ Args:
286
+ params: Initialize parameters
287
+
288
+ Returns:
289
+ Server capabilities
290
+ """
291
+ client_info = params.get("clientInfo", {})
292
+ self.logger.info(f"Client connected: {client_info}")
293
+
294
+ return {
295
+ "protocolVersion": "2024-11-05",
296
+ "capabilities": {
297
+ "tools": {
298
+ "listChanged": False
299
+ },
300
+ "logging": {},
301
+ "prompts": {
302
+ "listChanged": False
303
+ },
304
+ "resources": {
305
+ "subscribe": False,
306
+ "listChanged": False
307
+ }
308
+ },
309
+ "serverInfo": {
310
+ "name": self.name,
311
+ "version": self.version,
312
+ "description": "Sentiment analysis server using TextBlob and Transformers"
313
+ }
314
+ }
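
A session typically opens with an `initialize` request carrying `clientInfo`; the handler above answers with the capabilities and server info shown. An illustrative request:

```json
{"jsonrpc": "2.0", "id": 0, "method": "initialize", "params": {"clientInfo": {"name": "example-client", "version": "0.1.0"}}}
```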
315
+
316
+ async def _handle_list_tools(self, params: Dict[str, Any]) -> Dict[str, Any]:
317
+ """
318
+ Handle tools/list request.
319
+
320
+ Args:
321
+ params: List tools parameters
322
+
323
+ Returns:
324
+ Available tools
325
+ """
326
+ try:
327
+ tools = await list_tools()
328
+ return {"tools": tools}
329
+ except Exception as e:
330
+ raise RuntimeError(f"Failed to list tools: {e}")
331
+
332
+ async def _handle_call_tool(self, params: Dict[str, Any]) -> Dict[str, Any]:
333
+ """
334
+ Handle tools/call request.
335
+
336
+ Args:
337
+ params: Tool call parameters
338
+
339
+ Returns:
340
+ Tool execution result
341
+ """
342
+ try:
343
+ name = params.get("name")
344
+ arguments = params.get("arguments", {})
345
+
346
+ if not name:
347
+ raise ValueError("Tool name is required")
348
+
349
+ result = await call_tool(name, arguments)
350
+
351
+ return {
352
+ "content": [
353
+ {
354
+ "type": "text",
355
+ "text": json.dumps(result, indent=2)
356
+ }
357
+ ],
358
+ "isError": not result.get("success", True)
359
+ }
360
+
361
+ except Exception as e:
362
+ self.logger.error(f"Tool call failed: {e}")
363
+ return {
364
+ "content": [
365
+ {
366
+ "type": "text",
367
+ "text": json.dumps({
368
+ "success": False,
369
+ "error": str(e),
370
+ "error_type": type(e).__name__
371
+ }, indent=2)
372
+ }
373
+ ],
374
+ "isError": True
375
+ }
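
An illustrative `tools/call` request that this handler accepts (the tool result comes back JSON-encoded inside `content[0].text`):

```json
{"jsonrpc": "2.0", "id": 2, "method": "tools/call", "params": {"name": "analyze_sentiment", "arguments": {"text": "I love this!", "backend": "textblob"}}}
```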
376
+
377
+ async def _handle_ping(self, params: Dict[str, Any]) -> Dict[str, Any]:
378
+ """
379
+ Handle ping request.
380
+
381
+ Args:
382
+ params: Ping parameters
383
+
384
+ Returns:
385
+ Pong response
386
+ """
387
+ return {
388
+ "pong": True,
389
+ "timestamp": datetime.now().isoformat(),
390
+ "server": self.name,
391
+ "version": self.version
392
+ }
393
+
394
+ async def _handle_server_info(self, params: Dict[str, Any]) -> Dict[str, Any]:
395
+ """
396
+ Handle server/info request.
397
+
398
+ Args:
399
+ params: Server info parameters
400
+
401
+ Returns:
402
+ Server information
403
+ """
404
+ try:
405
+ analyzer = await get_analyzer("auto")
406
+ analyzer_info = analyzer.get_info()
407
+ except Exception as e:
408
+ analyzer_info = {"error": str(e)}
409
+
410
+ return {
411
+ "server": {
412
+ "name": self.name,
413
+ "version": self.version,
414
+ "running": self.running,
415
+ "start_time": self.start_time.isoformat() if self.start_time else None
416
+ },
417
+ "analyzer": analyzer_info,
418
+ "capabilities": {
419
+ "sentiment_analysis": True,
420
+ "batch_processing": True,
421
+ "multiple_backends": True,
422
+ "async_processing": True
423
+ }
424
+ }
425
+
426
+ async def _handle_server_stats(self, params: Dict[str, Any]) -> Dict[str, Any]:
427
+ """
428
+ Handle server/stats request.
429
+
430
+ Args:
431
+ params: Server stats parameters
432
+
433
+ Returns:
434
+ Server statistics
435
+ """
436
+ uptime = None
437
+ if self.start_time:
438
+ uptime = (datetime.now() - self.start_time).total_seconds()
439
+
440
+ return {
441
+ "requests_processed": self.request_count,
442
+ "errors_encountered": self.error_count,
443
+ "success_rate": (
444
+ (self.request_count - self.error_count) / self.request_count
445
+ if self.request_count > 0 else 0
446
+ ),
447
+ "uptime_seconds": uptime,
448
+ "running": self.running
449
+ }
450
+
451
+
452
+ class MCPServerRunner:
453
+ """
454
+ Runner for MCP server with stdio communication.
455
+
456
+ Handles stdin/stdout communication for MCP protocol.
457
+ """
458
+
459
+ def __init__(self, server: SentimentMCPServer):
460
+ """
461
+ Initialize server runner.
462
+
463
+ Args:
464
+ server: MCP server instance
465
+ """
466
+ self.server = server
467
+ self.logger = logging.getLogger(__name__)
468
+
469
+ async def run(self) -> None:
470
+ """Run the MCP server with stdio communication."""
471
+ self.logger.info("Starting MCP server with stdio communication")
472
+
473
+ # Start server
474
+ await self.server.start()
475
+
476
+ try:
477
+ # Set up event loop optimization
478
+ if UVLOOP_AVAILABLE:
479
+ self.logger.info("Using uvloop for better performance")
480
+
481
+             # Handle stdin/stdout communication
+             loop = asyncio.get_running_loop()
+             reader = asyncio.StreamReader()
+             protocol = asyncio.StreamReaderProtocol(reader)
+             await loop.connect_read_pipe(lambda: protocol, sys.stdin)
+ 
+             writer_transport, writer_protocol = await loop.connect_write_pipe(
+                 asyncio.streams.FlowControlMixin, sys.stdout
+             )
+             writer = asyncio.StreamWriter(writer_transport, writer_protocol, reader, loop)
492
+
493
+ self.logger.info("MCP server ready for requests")
494
+
495
+ # Process requests
496
+ while self.server.running:
497
+ try:
498
+ # Read request
499
+ line = await reader.readline()
500
+ if not line:
501
+ break
502
+
503
+ message = line.decode().strip()
504
+ if not message:
505
+ continue
506
+
507
+ # Process request
508
+ response = await self.server.process_request(message)
509
+
510
+ # Send response
511
+ writer.write((response + '\n').encode())
512
+ await writer.drain()
513
+
514
+ except asyncio.CancelledError:
515
+ break
516
+ except Exception as e:
517
+ self.logger.error(f"Communication error: {e}")
518
+ break
519
+
520
+ finally:
521
+ await self.server.stop()
522
+ self.logger.info("MCP server stopped")
523
+
524
+
525
+ async def create_server(name: str = "sentiment-analyzer",
526
+ version: str = "1.0.0") -> SentimentMCPServer:
527
+ """
528
+ Create and configure MCP server.
529
+
530
+ Args:
531
+ name: Server name
532
+ version: Server version
533
+
534
+ Returns:
535
+ Configured MCP server
536
+ """
537
+ # Configure logging
538
+ logging.basicConfig(
539
+ level=logging.INFO,
540
+ format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
541
+ handlers=[
542
+ logging.StreamHandler(sys.stderr) # Use stderr to avoid interfering with stdio protocol
543
+ ]
544
+ )
545
+
546
+ # Create server
547
+ server = SentimentMCPServer(name, version)
548
+
549
+ return server
550
+
551
+
552
+ async def main() -> None:
553
+ """Main entry point for MCP server."""
554
+ # Use uvloop if available for better performance
555
+ if UVLOOP_AVAILABLE:
556
+ asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
557
+
558
+ # Create and run server
559
+ server = await create_server()
560
+ runner = MCPServerRunner(server)
561
+
562
+ try:
563
+ await runner.run()
564
+ except KeyboardInterrupt:
565
+ logging.info("Server interrupted by user")
566
+ except Exception as e:
567
+ logging.error(f"Server error: {e}")
568
+ sys.exit(1)
569
+
570
+
571
+ if __name__ == "__main__":
572
+ asyncio.run(main())
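
A minimal client sketch for this stdio protocol; it assumes the server is launched with `python3 app.py --mode mcp`, and the helper name is illustrative:

```python
import asyncio
import json

async def ping_server() -> None:
    # Spawn the server and exchange one JSON-RPC message over stdin/stdout
    proc = await asyncio.create_subprocess_exec(
        "python3", "app.py", "--mode", "mcp",
        stdin=asyncio.subprocess.PIPE,
        stdout=asyncio.subprocess.PIPE,
    )
    request = {"jsonrpc": "2.0", "id": 1, "method": "ping"}
    proc.stdin.write((json.dumps(request) + "\n").encode())
    await proc.stdin.drain()
    print(json.loads(await proc.stdout.readline()))
    proc.terminate()
    await proc.wait()

asyncio.run(ping_server())
```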
src/sentiment_analyzer.py ADDED
@@ -0,0 +1,383 @@
1
+ """
2
+ Core sentiment analysis engine for MCP server.
3
+
4
+ This module provides sentiment analysis functionality using both TextBlob
5
+ for simplicity and Transformers for accuracy, with confidence scoring
6
+ and comprehensive error handling.
7
+ """
8
+
9
+ import logging
10
+ from typing import Dict, Any, List, Optional  # List: Python 3.8-compatible generics below
11
+ from enum import Enum
12
+ import asyncio
13
+ from concurrent.futures import ThreadPoolExecutor
14
+
15
+ try:
16
+ from textblob import TextBlob
17
+ TEXTBLOB_AVAILABLE = True
18
+ except ImportError:
19
+ TEXTBLOB_AVAILABLE = False
20
+ logging.warning("TextBlob not available. Install with: pip install textblob")
21
+
22
+ try:
23
+ from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
24
+ import torch
25
+ TRANSFORMERS_AVAILABLE = True
26
+ except ImportError:
27
+ TRANSFORMERS_AVAILABLE = False
28
+ logging.warning("Transformers not available. Install with: pip install transformers torch")
29
+
30
+
31
+ class SentimentLabel(Enum):
32
+ """Sentiment classification labels."""
33
+ POSITIVE = "positive"
34
+ NEGATIVE = "negative"
35
+ NEUTRAL = "neutral"
36
+
37
+
38
+ class SentimentResult:
39
+ """Container for sentiment analysis results."""
40
+
41
+ def __init__(self, label: SentimentLabel, confidence: float, raw_scores: Optional[Dict[str, float]] = None):
42
+ self.label = label
43
+ self.confidence = confidence
44
+ self.raw_scores = raw_scores or {}
45
+
46
+ def to_dict(self) -> Dict[str, Any]:
47
+ """Convert result to dictionary format."""
48
+ return {
49
+ "label": self.label.value,
50
+ "confidence": round(self.confidence, 4),
51
+ "raw_scores": self.raw_scores
52
+ }
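
For a clearly positive TextBlob analysis, `to_dict()` yields something like (values illustrative):

```json
{"label": "positive", "confidence": 0.625, "raw_scores": {"polarity": 0.625, "subjectivity": 0.6}}
```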
53
+
54
+
55
+ class SentimentAnalyzer:
56
+ """
57
+ Advanced sentiment analysis engine supporting multiple backends.
58
+
59
+ Supports both TextBlob (simple) and Transformers (accurate) for sentiment analysis
60
+ with confidence scoring and async processing capabilities.
61
+ """
62
+
63
+ def __init__(self, backend: str = "auto", model_name: str = "cardiffnlp/twitter-roberta-base-sentiment-latest"):
64
+ """
65
+ Initialize sentiment analyzer.
66
+
67
+ Args:
68
+ backend: Analysis backend ("textblob", "transformers", or "auto")
69
+ model_name: Hugging Face model name for transformers backend
70
+ """
71
+ self.backend = backend
72
+ self.model_name = model_name
73
+ self.logger = logging.getLogger(__name__)
74
+ self.executor = ThreadPoolExecutor(max_workers=2)
75
+
76
+ # Model caching
77
+ self._transformer_pipeline = None
78
+ self._model_loaded = False
79
+
80
+ # Initialize backend
81
+ self._initialize_backend()
82
+
83
+ def _initialize_backend(self) -> None:
84
+ """Initialize the selected backend."""
85
+ if self.backend == "auto":
86
+ if TRANSFORMERS_AVAILABLE:
87
+ self.backend = "transformers"
88
+ self.logger.info("Auto-selected Transformers backend")
89
+ elif TEXTBLOB_AVAILABLE:
90
+ self.backend = "textblob"
91
+ self.logger.info("Auto-selected TextBlob backend")
92
+ else:
93
+ raise RuntimeError("No sentiment analysis backend available. Install textblob or transformers.")
94
+
95
+ if self.backend == "transformers" and not TRANSFORMERS_AVAILABLE:
96
+ raise RuntimeError("Transformers backend requested but not available")
97
+
98
+ if self.backend == "textblob" and not TEXTBLOB_AVAILABLE:
99
+ raise RuntimeError("TextBlob backend requested but not available")
100
+
101
+ async def _load_transformer_model(self) -> None:
102
+ """Load transformer model asynchronously."""
103
+ if self._model_loaded:
104
+ return
105
+
106
+ try:
107
+ self.logger.info(f"Loading transformer model: {self.model_name}")
108
+
109
+ # Load model in thread pool to avoid blocking
110
+ loop = asyncio.get_event_loop()
111
+ self._transformer_pipeline = await loop.run_in_executor(
112
+ self.executor,
113
+ lambda: pipeline(
114
+ "sentiment-analysis",
115
+ model=self.model_name,
116
+ tokenizer=self.model_name,
117
+ device=0 if torch.cuda.is_available() else -1,
118
+ return_all_scores=True
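+ # NOTE: return_all_scores is deprecated in recent transformers releases; top_k=None is the modern equivalent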
119
+ )
120
+ )
121
+
122
+ self._model_loaded = True
123
+ self.logger.info("Transformer model loaded successfully")
124
+
125
+ except Exception as e:
126
+ self.logger.error(f"Failed to load transformer model: {e}")
127
+ raise RuntimeError(f"Model loading failed: {e}")
128
+
129
+ def _validate_input(self, text: str) -> str:
130
+ """
131
+ Validate and sanitize input text.
132
+
133
+ Args:
134
+ text: Input text to validate
135
+
136
+ Returns:
137
+ Sanitized text
138
+
139
+ Raises:
140
+ ValueError: If text is invalid
141
+ """
142
+ if not isinstance(text, str):
143
+ raise ValueError("Input must be a string")
144
+
145
+ text = text.strip()
146
+
147
+ if not text:
148
+ raise ValueError("Input text cannot be empty")
149
+
150
+ if len(text) > 10000: # Reasonable limit
151
+ raise ValueError("Input text too long (max 10,000 characters)")
152
+
153
+ # Basic sanitization
154
+ text = text.replace('\x00', '') # Remove null bytes
155
+
156
+ return text
157
+
158
+ def _analyze_with_textblob(self, text: str) -> SentimentResult:
159
+ """
160
+ Analyze sentiment using TextBlob.
161
+
162
+ Args:
163
+ text: Text to analyze
164
+
165
+ Returns:
166
+ SentimentResult with classification and confidence
167
+ """
168
+ try:
169
+ blob = TextBlob(text)
170
+ polarity = blob.sentiment.polarity
171
+
172
+ # Convert polarity to label and confidence
173
+ if polarity > 0.1:
174
+ label = SentimentLabel.POSITIVE
175
+ confidence = min(polarity, 1.0)
176
+ elif polarity < -0.1:
177
+ label = SentimentLabel.NEGATIVE
178
+ confidence = min(abs(polarity), 1.0)
179
+ else:
180
+ label = SentimentLabel.NEUTRAL
181
+ confidence = 1.0 - abs(polarity)
182
+
183
+ raw_scores = {
184
+ "polarity": polarity,
185
+ "subjectivity": blob.sentiment.subjectivity
186
+ }
187
+
188
+ return SentimentResult(label, confidence, raw_scores)
189
+
190
+ except Exception as e:
191
+ self.logger.error(f"TextBlob analysis failed: {e}")
192
+ raise RuntimeError(f"Sentiment analysis failed: {e}")
193
+
194
+ async def _analyze_with_transformers(self, text: str) -> SentimentResult:
195
+ """
196
+ Analyze sentiment using Transformers.
197
+
198
+ Args:
199
+ text: Text to analyze
200
+
201
+ Returns:
202
+ SentimentResult with classification and confidence
203
+ """
204
+ try:
205
+ await self._load_transformer_model()
206
+
207
+ # Run inference in thread pool
208
+ loop = asyncio.get_event_loop()
209
+ results = await loop.run_in_executor(
210
+ self.executor,
211
+ lambda: self._transformer_pipeline(text)
212
+ )
213
+
214
+ # Process results
215
+ scores = {result['label'].lower(): result['score'] for result in results[0]}
216
+
217
+ # Map model labels to our labels
218
+ label_mapping = {
219
+ 'positive': SentimentLabel.POSITIVE,
220
+ 'negative': SentimentLabel.NEGATIVE,
221
+ 'neutral': SentimentLabel.NEUTRAL,
222
+ 'label_0': SentimentLabel.NEGATIVE, # Some models use numeric labels
223
+ 'label_1': SentimentLabel.NEUTRAL,
224
+ 'label_2': SentimentLabel.POSITIVE
225
+ }
226
+
227
+ # Find best match
228
+ best_score = 0
229
+ best_label = SentimentLabel.NEUTRAL
230
+
231
+ for model_label, score in scores.items():
232
+ if model_label in label_mapping and score > best_score:
233
+ best_score = score
234
+ best_label = label_mapping[model_label]
235
+
236
+ return SentimentResult(best_label, best_score, scores)
237
+
238
+ except Exception as e:
239
+ self.logger.error(f"Transformers analysis failed: {e}")
240
+ raise RuntimeError(f"Sentiment analysis failed: {e}")
241
+
242
+ async def analyze(self, text: str) -> SentimentResult:
243
+ """
244
+ Analyze sentiment of input text.
245
+
246
+ Args:
247
+ text: Text to analyze
248
+
249
+ Returns:
250
+ SentimentResult with label, confidence, and raw scores
251
+
252
+ Raises:
253
+ ValueError: If input is invalid
254
+ RuntimeError: If analysis fails
255
+ """
256
+ # Validate input
257
+ text = self._validate_input(text)
258
+
259
+ try:
260
+ if self.backend == "transformers":
261
+ return await self._analyze_with_transformers(text)
262
+ elif self.backend == "textblob":
263
+ # Run TextBlob in thread pool since it's CPU-bound
264
+ loop = asyncio.get_event_loop()
265
+ return await loop.run_in_executor(
266
+ self.executor,
267
+ self._analyze_with_textblob,
268
+ text
269
+ )
270
+ else:
271
+ raise RuntimeError(f"Unknown backend: {self.backend}")
272
+
273
+ except Exception as e:
274
+ self.logger.error(f"Sentiment analysis failed for text: {text[:100]}... Error: {e}")
275
+ raise
276
+
277
+ async def analyze_batch(self, texts: List[str]) -> List[SentimentResult]:  # typing.List keeps the README's Python 3.8 floor
278
+ """
279
+ Analyze sentiment for multiple texts concurrently.
280
+
281
+ Args:
282
+ texts: List of texts to analyze
283
+
284
+ Returns:
285
+ List of SentimentResult objects
286
+ """
287
+ if not texts:
288
+ return []
289
+
290
+ # Analyze all texts concurrently
291
+ tasks = [self.analyze(text) for text in texts]
292
+ results = await asyncio.gather(*tasks, return_exceptions=True)
293
+
294
+ # Handle exceptions
295
+ processed_results = []
296
+ for i, result in enumerate(results):
297
+ if isinstance(result, Exception):
298
+ self.logger.error(f"Failed to analyze text {i}: {result}")
299
+ # Return neutral result for failed analysis
300
+ processed_results.append(
301
+ SentimentResult(SentimentLabel.NEUTRAL, 0.0, {"error": str(result)})
302
+ )
303
+ else:
304
+ processed_results.append(result)
305
+
306
+ return processed_results
307
+
308
+ def get_info(self) -> Dict[str, Any]:
309
+ """Get information about the analyzer configuration."""
310
+ return {
311
+ "backend": self.backend,
312
+ "model_name": self.model_name if self.backend == "transformers" else None,
313
+ "model_loaded": self._model_loaded,
314
+ "textblob_available": TEXTBLOB_AVAILABLE,
315
+ "transformers_available": TRANSFORMERS_AVAILABLE,
316
+ "cuda_available": torch.cuda.is_available() if TRANSFORMERS_AVAILABLE else False
317
+ }
318
+
319
+     async def cleanup(self) -> None:
+         """Clean up resources."""
+         # shutdown(wait=True) blocks, so run it in the loop's default
+         # executor rather than stalling other coroutines
+         await asyncio.get_running_loop().run_in_executor(
+             None, lambda: self.executor.shutdown(wait=True)
+         )
+         self.logger.info("Sentiment analyzer cleaned up")
323
+
324
+
325
+ # Global analyzer instance for reuse
326
+ _global_analyzer: Optional[SentimentAnalyzer] = None
327
+
328
+
329
+ async def get_analyzer(backend: str = "auto") -> SentimentAnalyzer:
330
+ """
331
+ Get or create the global sentiment analyzer instance. Note that the backend
+ is fixed on first use: later calls return the cached instance and ignore
+ the ``backend`` argument.
332
+
333
+ Args:
334
+ backend: Analysis backend to use
335
+
336
+ Returns:
337
+ SentimentAnalyzer instance
338
+ """
339
+ global _global_analyzer
340
+
341
+ if _global_analyzer is None:
342
+ _global_analyzer = SentimentAnalyzer(backend=backend)
343
+
344
+ return _global_analyzer
345
+
346
+
347
+ async def analyze_sentiment(text: str, backend: str = "auto") -> Dict[str, Any]:
348
+ """
349
+ Convenience function for sentiment analysis.
350
+
351
+ Args:
352
+ text: Text to analyze
353
+ backend: Analysis backend to use
354
+
355
+ Returns:
356
+ Dictionary with sentiment analysis results
357
+ """
358
+ analyzer = await get_analyzer(backend)
359
+ result = await analyzer.analyze(text)
360
+ return result.to_dict()
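
Typical usage from application code (a sketch; the `src.sentiment_analyzer` import path assumes the package layout added in this commit):

```python
import asyncio
from src.sentiment_analyzer import analyze_sentiment

async def demo() -> None:
    # The first call builds the global analyzer; later calls reuse it
    result = await analyze_sentiment("What a wonderful day!", backend="textblob")
    print(result)  # {'label': 'positive', 'confidence': ..., 'raw_scores': {...}}

asyncio.run(demo())
```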
361
+
362
+
363
+ if __name__ == "__main__":
364
+ # Example usage
365
+ async def main():
366
+ analyzer = SentimentAnalyzer(backend="textblob")
367
+
368
+ test_texts = [
369
+ "I love this product! It's amazing!",
370
+ "This is terrible and I hate it.",
371
+ "It's okay, nothing special.",
372
+ "The weather is nice today."
373
+ ]
374
+
375
+ for text in test_texts:
376
+ result = await analyzer.analyze(text)
377
+ print(f"Text: {text}")
378
+ print(f"Result: {result.to_dict()}")
379
+ print("-" * 50)
380
+
381
+ await analyzer.cleanup()
382
+
383
+ asyncio.run(main())
src/tools.py ADDED
@@ -0,0 +1,421 @@
1
+ """
2
+ MCP tool definitions for sentiment analysis server.
3
+
4
+ This module defines the tools available through the Model Context Protocol,
5
+ including sentiment analysis, batch processing, and analyzer information tools.
6
+ """
7
+
8
+ import logging
9
+ from typing import Dict, Any, List, Optional
10
+ from pydantic import BaseModel, Field
11
+ import asyncio
+ import time
12
+
13
+ from .sentiment_analyzer import get_analyzer
14
+
15
+
16
+ class SentimentAnalysisInput(BaseModel):
17
+ """Input schema for sentiment analysis tool."""
18
+ text: str = Field(..., description="Text to analyze for sentiment", min_length=1, max_length=10000)
19
+ backend: Optional[str] = Field("auto", description="Analysis backend: 'textblob', 'transformers', or 'auto'")
20
+
21
+
22
+ class BatchSentimentAnalysisInput(BaseModel):
23
+ """Input schema for batch sentiment analysis tool."""
24
+ texts: List[str] = Field(..., description="List of texts to analyze for sentiment", min_items=1, max_items=100)
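+ # NOTE: min_items/max_items are Pydantic v1 names; v2 renamed them to min_length/max_length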
25
+ backend: Optional[str] = Field("auto", description="Analysis backend: 'textblob', 'transformers', or 'auto'")
26
+
27
+
28
+ class AnalyzerInfoInput(BaseModel):
29
+ """Input schema for analyzer info tool."""
30
+ backend: Optional[str] = Field("auto", description="Backend to get info for")
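
These models reject bad input before any analyzer work happens. A minimal sketch of what a validation failure looks like:

```python
from pydantic import ValidationError

try:
    SentimentAnalysisInput(text="")  # violates min_length=1
except ValidationError as exc:
    print(exc)  # reports that 'text' is too short
```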
31
+
32
+
33
+ class MCPTools:
34
+ """
35
+ MCP tool registry and handlers for sentiment analysis.
36
+
37
+ This class manages the registration and execution of tools available
38
+ through the Model Context Protocol interface.
39
+ """
40
+
41
+ def __init__(self):
42
+ self.logger = logging.getLogger(__name__)
43
+ self._tools = {}
44
+ self._register_tools()
45
+
46
+ def _register_tools(self) -> None:
47
+ """Register all available MCP tools."""
48
+ self._tools = {
49
+ "analyze_sentiment": {
50
+ "name": "analyze_sentiment",
51
+ "description": "Analyze the sentiment of a given text",
52
+ "inputSchema": {
53
+ "type": "object",
54
+ "properties": {
55
+ "text": {
56
+ "type": "string",
57
+ "description": "Text to analyze for sentiment",
58
+ "minLength": 1,
59
+ "maxLength": 10000
60
+ },
61
+ "backend": {
62
+ "type": "string",
63
+ "description": "Analysis backend: 'textblob', 'transformers', or 'auto'",
64
+ "enum": ["textblob", "transformers", "auto"],
65
+ "default": "auto"
66
+ }
67
+ },
68
+ "required": ["text"]
69
+ },
70
+ "handler": self._handle_analyze_sentiment
71
+ },
72
+
73
+ "analyze_sentiment_batch": {
74
+ "name": "analyze_sentiment_batch",
75
+ "description": "Analyze sentiment for multiple texts in batch",
76
+ "inputSchema": {
77
+ "type": "object",
78
+ "properties": {
79
+ "texts": {
80
+ "type": "array",
81
+ "description": "List of texts to analyze for sentiment",
82
+ "items": {
83
+ "type": "string",
84
+ "minLength": 1,
85
+ "maxLength": 10000
86
+ },
87
+ "minItems": 1,
88
+ "maxItems": 100
89
+ },
90
+ "backend": {
91
+ "type": "string",
92
+ "description": "Analysis backend: 'textblob', 'transformers', or 'auto'",
93
+ "enum": ["textblob", "transformers", "auto"],
94
+ "default": "auto"
95
+ }
96
+ },
97
+ "required": ["texts"]
98
+ },
99
+ "handler": self._handle_analyze_sentiment_batch
100
+ },
101
+
102
+ "get_analyzer_info": {
103
+ "name": "get_analyzer_info",
104
+ "description": "Get information about the sentiment analyzer configuration",
105
+ "inputSchema": {
106
+ "type": "object",
107
+ "properties": {
108
+ "backend": {
109
+ "type": "string",
110
+ "description": "Backend to get info for",
111
+ "enum": ["textblob", "transformers", "auto"],
112
+ "default": "auto"
113
+ }
114
+ },
115
+ "required": []
116
+ },
117
+ "handler": self._handle_get_analyzer_info
118
+ },
119
+
120
+ "health_check": {
121
+ "name": "health_check",
122
+ "description": "Check the health status of the sentiment analysis service",
123
+ "inputSchema": {
124
+ "type": "object",
125
+ "properties": {},
126
+ "required": []
127
+ },
128
+ "handler": self._handle_health_check
129
+ }
130
+ }
131
+
132
+ self.logger.info(f"Registered {len(self._tools)} MCP tools")
133
+
134
+ def get_tools(self) -> List[Dict[str, Any]]:
135
+ """
136
+ Get list of available tools for MCP protocol.
137
+
138
+ Returns:
139
+ List of tool definitions
140
+ """
141
+ return [
142
+ {
143
+ "name": tool["name"],
144
+ "description": tool["description"],
145
+ "inputSchema": tool["inputSchema"]
146
+ }
147
+ for tool in self._tools.values()
148
+ ]
149
+
150
+ async def call_tool(self, name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
151
+ """
152
+ Call a registered tool with given arguments.
153
+
154
+ Args:
155
+ name: Tool name
156
+ arguments: Tool arguments
157
+
158
+ Returns:
159
+ Tool execution result
160
+
161
+ Raises:
162
+ ValueError: If tool not found or arguments invalid
163
+ RuntimeError: If tool execution fails
164
+ """
165
+ if name not in self._tools:
166
+ raise ValueError(f"Tool '{name}' not found. Available tools: {list(self._tools.keys())}")
167
+
168
+ tool = self._tools[name]
169
+ handler = tool["handler"]
170
+
171
+ try:
172
+ self.logger.info(f"Calling tool '{name}' with arguments: {arguments}")
173
+ result = await handler(arguments)
174
+ self.logger.info(f"Tool '{name}' completed successfully")
175
+ return result
176
+
177
+ except Exception as e:
178
+ self.logger.error(f"Tool '{name}' failed: {e}")
179
+ raise RuntimeError(f"Tool execution failed: {e}")
180
+
181
+ async def _handle_analyze_sentiment(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
182
+ """
183
+ Handle sentiment analysis tool call.
184
+
185
+ Args:
186
+ arguments: Tool arguments containing text and optional backend
187
+
188
+ Returns:
189
+ Sentiment analysis result
190
+ """
191
+ try:
192
+ # Validate input
193
+ input_data = SentimentAnalysisInput(**arguments)
194
+
195
+ # Get analyzer and perform analysis
196
+ analyzer = await get_analyzer(input_data.backend)
197
+ result = await analyzer.analyze(input_data.text)
198
+
199
+ return {
200
+ "success": True,
201
+ "result": result.to_dict(),
202
+ "metadata": {
203
+ "backend": analyzer.backend,
204
+ "text_length": len(input_data.text),
205
+ "model_info": analyzer.get_info()
206
+ }
207
+ }
208
+
209
+ except Exception as e:
210
+ return {
211
+ "success": False,
212
+ "error": str(e),
213
+ "error_type": type(e).__name__
214
+ }
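
On success the handler returns a payload shaped like this (scores illustrative, `model_info` abridged):

```json
{
  "success": true,
  "result": {"label": "positive", "confidence": 0.8516, "raw_scores": {"positive": 0.8516, "neutral": 0.11, "negative": 0.0384}},
  "metadata": {"backend": "transformers", "text_length": 23, "model_info": {"backend": "transformers", "model_loaded": true}}
}
```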
215
+
216
+ async def _handle_analyze_sentiment_batch(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
217
+ """
218
+ Handle batch sentiment analysis tool call.
219
+
220
+ Args:
221
+ arguments: Tool arguments containing texts and optional backend
222
+
223
+ Returns:
224
+ Batch sentiment analysis results
225
+ """
226
+ try:
227
+ # Validate input
228
+ input_data = BatchSentimentAnalysisInput(**arguments)
229
+
230
+ # Get analyzer and perform batch analysis
231
+ analyzer = await get_analyzer(input_data.backend)
232
+ results = await analyzer.analyze_batch(input_data.texts)
233
+
234
+ # Convert results to dictionaries
235
+ result_dicts = [result.to_dict() for result in results]
236
+
237
+ # Calculate summary statistics
238
+ labels = [result.label.value for result in results]
239
+ label_counts = {
240
+ "positive": labels.count("positive"),
241
+ "negative": labels.count("negative"),
242
+ "neutral": labels.count("neutral")
243
+ }
244
+
245
+ avg_confidence = sum(result.confidence for result in results) / len(results)
246
+
247
+ return {
248
+ "success": True,
249
+ "results": result_dicts,
250
+ "summary": {
251
+ "total_texts": len(input_data.texts),
252
+ "label_distribution": label_counts,
253
+ "average_confidence": round(avg_confidence, 4)
254
+ },
255
+ "metadata": {
256
+ "backend": analyzer.backend,
257
+ "model_info": analyzer.get_info()
258
+ }
259
+ }
260
+
261
+ except Exception as e:
262
+ return {
263
+ "success": False,
264
+ "error": str(e),
265
+ "error_type": type(e).__name__
266
+ }
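
The batch handler adds a distribution summary that is easy to scan, e.g. (abridged):

```json
{
  "success": true,
  "results": ["..."],
  "summary": {"total_texts": 3, "label_distribution": {"positive": 1, "negative": 1, "neutral": 1}, "average_confidence": 0.52}
}
```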
267
+
268
+ async def _handle_get_analyzer_info(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
269
+ """
270
+ Handle analyzer info tool call.
271
+
272
+ Args:
273
+ arguments: Tool arguments containing optional backend
274
+
275
+ Returns:
276
+ Analyzer configuration information
277
+ """
278
+ try:
279
+ # Validate input
280
+ input_data = AnalyzerInfoInput(**arguments)
281
+
282
+ # Get analyzer info
283
+ analyzer = await get_analyzer(input_data.backend)
284
+ info = analyzer.get_info()
285
+
286
+ return {
287
+ "success": True,
288
+ "info": info,
289
+ "available_backends": ["textblob", "transformers", "auto"],
290
+ "recommended_backend": "transformers" if info.get("transformers_available") else "textblob"
291
+ }
292
+
293
+ except Exception as e:
294
+ return {
295
+ "success": False,
296
+ "error": str(e),
297
+ "error_type": type(e).__name__
298
+ }
299
+
300
+ async def _handle_health_check(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
301
+ """
302
+ Handle health check tool call.
303
+
304
+ Args:
305
+ arguments: Tool arguments (empty for health check)
306
+
307
+ Returns:
308
+ Health status information
309
+ """
310
+ try:
311
+ # Test basic functionality
312
+ test_text = "This is a test message for health check."
313
+ analyzer = await get_analyzer("auto")
314
+ result = await analyzer.analyze(test_text)
315
+
316
+ return {
317
+ "success": True,
318
+ "status": "healthy",
319
+ "test_result": result.to_dict(),
320
+ "analyzer_info": analyzer.get_info(),
321
+ "timestamp": asyncio.get_event_loop().time()
322
+ }
323
+
324
+ except Exception as e:
325
+ return {
326
+ "success": False,
327
+ "status": "unhealthy",
328
+ "error": str(e),
329
+ "error_type": type(e).__name__,
330
+ "timestamp": asyncio.get_event_loop().time()
331
+ }
332
+
333
+
334
+ # Global tools instance
335
+ _global_tools: Optional[MCPTools] = None
336
+
337
+
338
+ def get_tools() -> MCPTools:
339
+ """
340
+ Get or create global MCP tools instance.
341
+
342
+ Returns:
343
+ MCPTools instance
344
+ """
345
+ global _global_tools
346
+
347
+ if _global_tools is None:
348
+ _global_tools = MCPTools()
349
+
350
+ return _global_tools
351
+
352
+
353
+ async def list_tools() -> List[Dict[str, Any]]:
354
+ """
355
+ Get list of available MCP tools.
356
+
357
+ Returns:
358
+ List of tool definitions
359
+ """
360
+ tools = get_tools()
361
+ return tools.get_tools()
362
+
363
+
364
+ async def call_tool(name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
365
+ """
366
+ Call an MCP tool with given arguments.
367
+
368
+ Args:
369
+ name: Tool name
370
+ arguments: Tool arguments
371
+
372
+ Returns:
373
+ Tool execution result
374
+ """
375
+ tools = get_tools()
376
+ return await tools.call_tool(name, arguments)
377
+
378
+
379
+ if __name__ == "__main__":
380
+ # Example usage (run as a module, e.g. python3 -m src.tools, so the relative import above resolves)
381
+ async def main():
382
+ tools = get_tools()
383
+
384
+ # List available tools
385
+ available_tools = tools.get_tools()
386
+ print("Available tools:")
387
+ for tool in available_tools:
388
+ print(f"- {tool['name']}: {tool['description']}")
389
+
390
+ print("\n" + "="*50 + "\n")
391
+
392
+ # Test sentiment analysis tool
393
+ result = await tools.call_tool("analyze_sentiment", {
394
+ "text": "I love this new feature! It's absolutely amazing!",
395
+ "backend": "textblob"
396
+ })
397
+ print("Sentiment analysis result:")
398
+ print(result)
399
+
400
+ print("\n" + "="*50 + "\n")
401
+
402
+ # Test batch analysis
403
+ batch_result = await tools.call_tool("analyze_sentiment_batch", {
404
+ "texts": [
405
+ "This is great!",
406
+ "I hate this.",
407
+ "It's okay, I guess."
408
+ ],
409
+ "backend": "textblob"
410
+ })
411
+ print("Batch analysis result:")
412
+ print(batch_result)
413
+
414
+ print("\n" + "="*50 + "\n")
415
+
416
+ # Test health check
417
+ health_result = await tools.call_tool("health_check", {})
418
+ print("Health check result:")
419
+ print(health_result)
420
+
421
+ asyncio.run(main())