Denys Kanunnikov committed
Commit a0f810f · 0 parent(s)

initial commit
README.md ADDED
@@ -0,0 +1,360 @@
# MCP Sentiment Analysis Server

A complete **Model Context Protocol (MCP) Server** implementation for sentiment analysis with an integrated **Gradio web interface**. The project provides AI-model integration through the JSON-RPC 2.0 protocol as well as a human-friendly web UI, with sentiment analysis powered by TextBlob and Transformers.

## 🚀 Features

### Core Capabilities
- **Dual Backend Support**: TextBlob (fast) and Transformers (accurate)
- **MCP Protocol Compliance**: Full JSON-RPC 2.0 implementation
- **Modern Web Interface**: Responsive Gradio UI with real-time analysis
- **Batch Processing**: Analyze multiple texts simultaneously
- **Analysis History**: Track and visualize analysis results
- **Confidence Scoring**: Detailed confidence metrics for all predictions

### Technical Features
- **Async Architecture**: Non-blocking request processing
- **Auto Backend Selection**: Intelligent fallback between analysis engines
- **Comprehensive Error Handling**: Graceful failure management
- **Performance Optimization**: Model caching and uvloop integration
- **Type Safety**: Full type annotations throughout the codebase

## 📋 Requirements

- Python 3.8+
- 4 GB+ RAM (for Transformers models)
- Internet connection (for model downloads)

## 🛠️ Installation

### 1. Clone the Repository
```bash
git clone <repository-url>
cd mcp-sentiment
```

### 2. Install Dependencies
```bash
pip install -r requirements.txt
```

### 3. Download TextBlob Corpora (Optional)
```bash
python -c "import nltk; nltk.download('punkt'); nltk.download('brown')"
```

## 🎯 Quick Start

### Run Tests
```bash
python main.py --mode test
```

### Launch the Web Interface
```bash
python main.py --mode gradio
```
Visit `http://localhost:7860` to access the web interface.

### Start the MCP Server
```bash
python main.py --mode mcp
```

### Combined Mode (Default)
```bash
python main.py --mode combined
```
Runs both the MCP server and the Gradio interface simultaneously.

## 📖 Usage Examples

### Command Line Options

```bash
# Different execution modes
python main.py --mode mcp       # MCP server only
python main.py --mode gradio    # Web interface only
python main.py --mode combined  # Both services (default)
python main.py --mode test      # Run functionality tests

# Gradio customization
python main.py --mode gradio --port 8080  # Custom port
python main.py --mode gradio --share      # Enable public sharing
python main.py --mode gradio --debug      # Debug mode

# Logging control
python main.py --log-level DEBUG  # Detailed logging
```

### Python API Usage

```python
import asyncio
from src import get_analyzer, analyze_sentiment

async def example():
    # Simple analysis
    result = await analyze_sentiment("I love this product!", backend="auto")
    print(f"Sentiment: {result['label']} (confidence: {result['confidence']:.2%})")

    # Advanced usage
    analyzer = await get_analyzer("transformers")
    result = await analyzer.analyze("This is amazing!")
    print(f"Result: {result.to_dict()}")

    # Batch processing
    texts = ["Great product!", "Terrible service", "It's okay"]
    results = await analyzer.analyze_batch(texts)
    for text, result in zip(texts, results):
        print(f"'{text}' -> {result.label.value}")

asyncio.run(example())
```

### MCP Tool Integration

The server provides these MCP tools:

#### `analyze_sentiment`
```json
{
  "name": "analyze_sentiment",
  "arguments": {
    "text": "I love this new feature!",
    "backend": "auto"
  }
}
```

#### `analyze_sentiment_batch`
```json
{
  "name": "analyze_sentiment_batch",
  "arguments": {
    "texts": ["Great!", "Terrible!", "Okay."],
    "backend": "transformers"
  }
}
```

#### `health_check`
```json
{
  "name": "health_check",
  "arguments": {}
}
```
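
On the wire, each of these tool calls travels inside a JSON-RPC 2.0 request envelope. A minimal sketch of building one in Python (the `tools/call` method name follows the MCP convention; this helper is illustrative and not part of this codebase):

```python
import json

def make_tool_call_request(request_id: int, name: str, arguments: dict) -> str:
    """Wrap an MCP tool invocation in a JSON-RPC 2.0 request envelope."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

payload = make_tool_call_request(
    1, "analyze_sentiment",
    {"text": "I love this new feature!", "backend": "auto"},
)
```

The server's response carries the same `id`, so clients can match concurrent requests to their results.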

## 🏗️ Architecture

### Project Structure
```
mcp-sentiment/
├── src/
│   ├── __init__.py            # Package initialization
│   ├── sentiment_analyzer.py  # Core sentiment analysis engine
│   ├── mcp_server.py          # JSON-RPC 2.0 MCP server
│   ├── tools.py               # MCP tool definitions
│   └── gradio_interface.py    # Web UI implementation
├── requirements.txt           # Python dependencies
├── main.py                    # Application entry point
└── README.md                  # This file
```

### Component Overview

| Component | Purpose | Technology |
|-----------|---------|------------|
| **MCP Server** | AI model communication | JSON-RPC 2.0, asyncio |
| **Gradio UI** | Human web interface | Gradio framework |
| **Sentiment Engine** | Text analysis core | TextBlob/Transformers |
| **Tool Registry** | MCP tool definitions | Python decorators |

## 🔧 Configuration

### Backend Selection
- **`auto`**: Automatically selects the best available backend
- **`textblob`**: Fast, rule-based sentiment analysis
- **`transformers`**: AI-powered analysis with higher accuracy

### Environment Variables
```bash
# Optional configuration
export MCP_LOG_LEVEL=INFO
export GRADIO_SERVER_PORT=7860
export TRANSFORMERS_CACHE=/path/to/cache
```
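
These variables might be consumed with the standard library's `os.getenv`, falling back to the documented defaults. A sketch (the actual configuration-loading code is not shown in this commit):

```python
import os

def load_config() -> dict:
    """Read optional environment configuration with the documented defaults."""
    return {
        "log_level": os.getenv("MCP_LOG_LEVEL", "INFO"),
        "gradio_port": int(os.getenv("GRADIO_SERVER_PORT", "7860")),
        # None lets the Transformers library pick its own cache directory
        "transformers_cache": os.getenv("TRANSFORMERS_CACHE"),
    }

config = load_config()
```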

## 📊 Web Interface Features

### Main Analysis Tab
- Real-time sentiment analysis
- Confidence visualization with gauge charts
- Backend selection (TextBlob/Transformers/Auto)
- Quick example buttons for testing

### Batch Analysis Tab
- Process multiple texts simultaneously
- Results table with sentiment and confidence
- Summary visualization with pie charts
- Export capabilities

### History Tab
- Track all analysis results
- Statistical summaries
- Sentiment distribution charts
- Clear/refresh functionality

### Settings & Info Tab
- Backend status and configuration
- Model information and availability
- Usage tips and documentation
- System health monitoring

## 🧪 Testing

### Run All Tests
```bash
python main.py --mode test
```

### Manual Testing
```bash
# Test individual components
python -m src.sentiment_analyzer
python -m src.tools
python -m src.mcp_server
```

### Performance Testing
```bash
# Benchmark different backends
python -c "
import asyncio
from src import get_analyzer
import time

async def benchmark():
    analyzer = await get_analyzer('textblob')
    start = time.time()
    await analyzer.analyze('Test message')
    print(f'TextBlob: {time.time() - start:.3f}s')

asyncio.run(benchmark())
"
```

## 🚨 Troubleshooting

### Common Issues

#### "No sentiment analysis backend available"
```bash
pip install textblob
python -c "import nltk; nltk.download('punkt')"
```

#### "Failed to load transformer model"
```bash
pip install torch transformers
# Ensure sufficient RAM (4 GB+)
```

#### "Gradio not available"
```bash
pip install gradio plotly pandas
```

#### Port already in use
```bash
python main.py --mode gradio --port 8080
```

### Debug Mode
```bash
python main.py --mode combined --debug --log-level DEBUG
```

## 🔒 Security Considerations

- Input validation and sanitization
- Rate limiting for API endpoints
- No persistent data storage by default
- Configurable access controls
- Error message sanitization
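
A minimal sketch of the kind of input validation such a server needs before handing text to a model (the limit and messages here are illustrative, not the project's actual values):

```python
MAX_TEXT_LENGTH = 10_000  # illustrative limit, not the project's actual value

def validate_text(text: object) -> str:
    """Reject non-string, empty, or oversized inputs before analysis."""
    if not isinstance(text, str):
        raise TypeError("text must be a string")
    cleaned = text.strip()
    if not cleaned:
        raise ValueError("text must not be empty")
    if len(cleaned) > MAX_TEXT_LENGTH:
        raise ValueError(f"text exceeds {MAX_TEXT_LENGTH} characters")
    return cleaned
```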

## 🚀 Performance Optimization

### Model Caching
- Transformers models cached after first load
- Shared analyzer instances across requests
- Memory-efficient batch processing
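
Shared instances are typically kept behind a small async-safe cache so each backend's model is loaded at most once. A sketch of the pattern (names are assumptions; the real implementation lives in `src/sentiment_analyzer.py` and is not shown in this excerpt):

```python
import asyncio

_analyzers = {}   # backend name -> analyzer instance
_lock = None      # created lazily, inside the running event loop

async def get_cached_analyzer(backend, factory):
    """Create each backend's analyzer at most once and reuse it across requests."""
    global _lock
    if _lock is None:
        _lock = asyncio.Lock()
    async with _lock:
        if backend not in _analyzers:
            # factory is an async callable performing the expensive model load
            _analyzers[backend] = await factory(backend)
        return _analyzers[backend]
```

The lock prevents two concurrent requests from loading the same model twice; the dict lookup makes every later call a cheap cache hit.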

### Async Processing
- Non-blocking request handling
- Concurrent batch analysis
- Thread pool for CPU-bound operations

### Resource Management
- Automatic cleanup on shutdown
- Memory monitoring and limits
- Configurable worker threads

## 📈 Monitoring & Metrics

### Built-in Metrics
- Request count and success rate
- Average response times
- Error rates by type
- Backend usage statistics
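
A sketch of how such counters can be tracked in-process (field names here are illustrative, not the project's actual metrics API):

```python
from dataclasses import dataclass, field

@dataclass
class Metrics:
    """In-process counters for the metrics listed above."""
    requests: int = 0
    errors: int = 0
    total_latency: float = 0.0
    backend_usage: dict = field(default_factory=dict)

    def record(self, backend: str, latency: float, ok: bool) -> None:
        """Record one completed request."""
        self.requests += 1
        self.total_latency += latency
        if not ok:
            self.errors += 1
        self.backend_usage[backend] = self.backend_usage.get(backend, 0) + 1

    @property
    def success_rate(self) -> float:
        return 1.0 if self.requests == 0 else 1 - self.errors / self.requests

    @property
    def avg_latency(self) -> float:
        return self.total_latency / self.requests if self.requests else 0.0
```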

### Health Checks
```bash
curl -X POST http://localhost:7860/health \
  -H "Content-Type: application/json" \
  -d '{"method": "health_check", "params": {}}'
```

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

### Development Setup
```bash
pip install -r requirements.txt
pip install pytest black flake8 mypy
```

### Code Quality
```bash
black src/     # Format code
flake8 src/    # Lint code
mypy src/      # Type checking
pytest tests/  # Run tests
```

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

- [Hugging Face Transformers](https://huggingface.co/transformers/) for state-of-the-art NLP models
- [TextBlob](https://textblob.readthedocs.io/) for simple sentiment analysis
- [Gradio](https://gradio.app/) for the web interface framework
- [Model Context Protocol](https://modelcontextprotocol.io/) for the integration standard

## 📞 Support

- **Issues**: [GitHub Issues](https://github.com/your-repo/issues)
- **Discussions**: [GitHub Discussions](https://github.com/your-repo/discussions)
- **Documentation**: [Wiki](https://github.com/your-repo/wiki)

---

**Made with ❤️ for the AI community**
main.py ADDED
@@ -0,0 +1,356 @@
#!/usr/bin/env python3
"""
Main entry point for the MCP Sentiment Analysis Server.

This script provides multiple modes of operation:
1. MCP Server mode - JSON-RPC 2.0 server for AI model integration
2. Gradio Interface mode - Web UI for human interaction
3. Combined mode - Both MCP server and Gradio interface
4. Test mode - Run basic functionality tests

Usage:
    python main.py --mode mcp       # Run MCP server only
    python main.py --mode gradio    # Run Gradio interface only
    python main.py --mode combined  # Run both (default)
    python main.py --mode test      # Run tests
"""

import asyncio
import argparse
import logging
import sys
import signal
from concurrent.futures import ThreadPoolExecutor

try:
    import uvloop
    UVLOOP_AVAILABLE = True
except ImportError:
    UVLOOP_AVAILABLE = False

from src import (
    create_server,
    MCPServerRunner,
    create_gradio_interface,
    get_analyzer
)


class ApplicationRunner:
    """
    Main application runner that manages different execution modes.

    Supports running the MCP server, the Gradio interface, or both
    simultaneously, with proper resource management and graceful shutdown.
    """

    def __init__(self):
        """Initialize the application runner."""
        self.logger = logging.getLogger(__name__)
        self.mcp_server = None
        self.gradio_interface = None
        self.running = False
        self.executor = ThreadPoolExecutor(max_workers=2)

        # Set up signal handlers
        signal.signal(signal.SIGINT, self._signal_handler)
        signal.signal(signal.SIGTERM, self._signal_handler)

    def _signal_handler(self, signum, frame):
        """Handle shutdown signals."""
        self.logger.info(f"Received signal {signum}, shutting down...")
        self.running = False

    async def run_mcp_server(self) -> None:
        """Run the MCP server only."""
        self.logger.info("Starting MCP server mode")

        try:
            # Create and run the MCP server
            self.mcp_server = await create_server()
            runner = MCPServerRunner(self.mcp_server)

            self.running = True
            await runner.run()

        except Exception as e:
            self.logger.error(f"MCP server failed: {e}")
            raise
        finally:
            if self.mcp_server:
                await self.mcp_server.stop()

    def run_gradio_interface(self, **kwargs) -> None:
        """Run the Gradio interface only."""
        self.logger.info("Starting Gradio interface mode")

        try:
            # Create and launch the Gradio interface
            self.gradio_interface = create_gradio_interface()

            # Default launch parameters
            launch_params = {
                "server_name": "0.0.0.0",
                "server_port": 7860,
                "share": False,
                "debug": False,
                "show_error": True,
                "quiet": False
            }
            launch_params.update(kwargs)

            self.running = True
            self.gradio_interface.launch(**launch_params)

        except Exception as e:
            self.logger.error(f"Gradio interface failed: {e}")
            raise

    async def run_combined(self, **gradio_kwargs) -> None:
        """Run both the MCP server and the Gradio interface."""
        self.logger.info("Starting combined mode (MCP server + Gradio interface)")

        try:
            # Create the MCP server
            self.mcp_server = await create_server()

            # Create the Gradio interface
            self.gradio_interface = create_gradio_interface()

            # Default Gradio launch parameters
            launch_params = {
                "server_name": "0.0.0.0",
                "server_port": 7860,
                "share": False,
                "debug": False,
                "show_error": True,
                "quiet": False
            }
            launch_params.update(gradio_kwargs)

            self.running = True

            # Run Gradio in the thread pool
            gradio_future = self.executor.submit(
                self.gradio_interface.launch, **launch_params
            )

            # Run the MCP server in the main thread
            runner = MCPServerRunner(self.mcp_server)

            # Start both services
            self.logger.info("Both services starting...")

            # Wait for either to complete or fail
            try:
                await runner.run()
            except Exception as e:
                self.logger.error(f"MCP server error: {e}")
                raise
            finally:
                # Cleanup; cancel() is best-effort and cannot stop a
                # launch() call that is already running in the pool
                if gradio_future:
                    gradio_future.cancel()

        except Exception as e:
            self.logger.error(f"Combined mode failed: {e}")
            raise
        finally:
            if self.mcp_server:
                await self.mcp_server.stop()

    async def run_tests(self) -> bool:
        """Run basic functionality tests."""
        self.logger.info("Running functionality tests...")

        try:
            # Test 1: Sentiment analyzer initialization
            self.logger.info("Test 1: Initializing sentiment analyzer...")
            analyzer = await get_analyzer("textblob")
            self.logger.info(f"✓ Analyzer initialized with backend: {analyzer.backend}")

            # Test 2: Basic sentiment analysis
            self.logger.info("Test 2: Basic sentiment analysis...")
            test_texts = [
                "I love this product!",
                "This is terrible.",
                "It's okay, nothing special."
            ]

            for text in test_texts:
                result = await analyzer.analyze(text)
                self.logger.info(f"✓ '{text}' -> {result.label.value} ({result.confidence:.2f})")

            # Test 3: Batch analysis
            self.logger.info("Test 3: Batch analysis...")
            batch_results = await analyzer.analyze_batch(test_texts)
            self.logger.info(f"✓ Batch analysis completed: {len(batch_results)} results")

            # Test 4: MCP tools
            self.logger.info("Test 4: MCP tools...")
            from src.tools import get_tools
            tools = get_tools()
            available_tools = tools.get_tools()
            self.logger.info(f"✓ {len(available_tools)} MCP tools available")

            # Test 5: Tool execution
            self.logger.info("Test 5: Tool execution...")
            result = await tools.call_tool("analyze_sentiment", {
                "text": "This is a test message",
                "backend": "textblob"
            })
            self.logger.info(f"✓ Tool execution successful: {result.get('success', False)}")

            # Test 6: Health check
            self.logger.info("Test 6: Health check...")
            health_result = await tools.call_tool("health_check", {})
            self.logger.info(f"✓ Health check: {health_result.get('status', 'unknown')}")

            # Cleanup
            await analyzer.cleanup()

            self.logger.info("🎉 All tests passed!")
            return True

        except Exception as e:
            self.logger.error(f"❌ Test failed: {e}")
            return False


def setup_logging(level: str = "INFO") -> None:
    """
    Set up logging configuration.

    Args:
        level: Logging level name (e.g. "INFO", "DEBUG")
    """
    logging.basicConfig(
        level=getattr(logging, level.upper()),
        format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        handlers=[
            logging.StreamHandler(sys.stderr)
        ]
    )


def parse_arguments() -> argparse.Namespace:
    """
    Parse command line arguments.

    Returns:
        Parsed arguments
    """
    parser = argparse.ArgumentParser(
        description="MCP Sentiment Analysis Server",
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  python main.py --mode mcp                # MCP server only
  python main.py --mode gradio             # Gradio interface only
  python main.py --mode combined           # Both services
  python main.py --mode test               # Run tests
  python main.py --mode gradio --port 8080 # Custom port
  python main.py --mode gradio --share     # Public sharing
"""
    )

    parser.add_argument(
        "--mode",
        choices=["mcp", "gradio", "combined", "test"],
        default="combined",
        help="Execution mode (default: combined)"
    )

    parser.add_argument(
        "--log-level",
        choices=["DEBUG", "INFO", "WARNING", "ERROR"],
        default="INFO",
        help="Logging level (default: INFO)"
    )

    # Gradio-specific options
    parser.add_argument(
        "--port",
        type=int,
        default=7860,
        help="Gradio server port (default: 7860)"
    )

    parser.add_argument(
        "--host",
        default="0.0.0.0",
        help="Gradio server host (default: 0.0.0.0)"
    )

    parser.add_argument(
        "--share",
        action="store_true",
        help="Enable Gradio public sharing"
    )

    parser.add_argument(
        "--debug",
        action="store_true",
        help="Enable debug mode"
    )

    return parser.parse_args()


async def main() -> None:
    """Main application entry point."""
    # Parse arguments
    args = parse_arguments()

    # Set up logging
    setup_logging(args.log_level)
    logger = logging.getLogger(__name__)

    if UVLOOP_AVAILABLE and args.mode in ["mcp", "combined"]:
        # The uvloop policy is installed in the __main__ block below, before
        # asyncio.run() creates the loop; setting it here would be too late
        # to affect the already-running loop.
        logger.info("Using uvloop for better performance")

    # Create the application runner
    runner = ApplicationRunner()

    try:
        if args.mode == "mcp":
            await runner.run_mcp_server()

        elif args.mode == "gradio":
            # Gradio runs in sync mode
            gradio_kwargs = {
                "server_name": args.host,
                "server_port": args.port,
                "share": args.share,
                "debug": args.debug
            }
            runner.run_gradio_interface(**gradio_kwargs)

        elif args.mode == "combined":
            gradio_kwargs = {
                "server_name": args.host,
                "server_port": args.port,
                "share": args.share,
                "debug": args.debug
            }
            await runner.run_combined(**gradio_kwargs)

        elif args.mode == "test":
            success = await runner.run_tests()
            sys.exit(0 if success else 1)

    except KeyboardInterrupt:
        logger.info("Application interrupted by user")
    except Exception as e:
        logger.error(f"Application error: {e}")
        sys.exit(1)
    finally:
        logger.info("Application shutdown complete")


if __name__ == "__main__":
    # Install the uvloop policy before asyncio.run() creates the event loop;
    # doing this inside main() would not change the loop already running.
    if UVLOOP_AVAILABLE:
        asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
    asyncio.run(main())
requirements.txt ADDED
@@ -0,0 +1,13 @@
gradio>=4.0.0
textblob>=0.17.1
transformers>=4.30.0
torch>=2.0.0
mcp>=1.0.0
# Note: asyncio is part of the Python standard library; the PyPI "asyncio"
# package is an obsolete backport and must not be installed.
aiohttp>=3.8.0
pydantic>=2.0.0
typing-extensions>=4.0.0
plotly>=5.0.0
pandas>=2.0.0
pytest>=7.0.0
uvloop>=0.17.0; sys_platform != "win32"
src/__init__.py ADDED
@@ -0,0 +1,60 @@
"""
MCP Sentiment Analysis Server Package.

This package provides a complete Model Context Protocol server implementation
for sentiment analysis with Gradio web interface integration.
"""

from .sentiment_analyzer import (
    SentimentAnalyzer,
    SentimentResult,
    SentimentLabel,
    get_analyzer,
    analyze_sentiment
)

from .tools import (
    MCPTools,
    get_tools,
    list_tools,
    call_tool
)

from .mcp_server import (
    SentimentMCPServer,
    MCPServerRunner,
    create_server
)

from .gradio_interface import (
    GradioInterface,
    create_gradio_interface
)

__version__ = "1.0.0"
__author__ = "MCP Sentiment Analysis Team"
__description__ = "Model Context Protocol server for sentiment analysis with Gradio interface"

__all__ = [
    # Sentiment Analysis
    "SentimentAnalyzer",
    "SentimentResult",
    "SentimentLabel",
    "get_analyzer",
    "analyze_sentiment",

    # MCP Tools
    "MCPTools",
    "get_tools",
    "list_tools",
    "call_tool",

    # MCP Server
    "SentimentMCPServer",
    "MCPServerRunner",
    "create_server",

    # Gradio Interface
    "GradioInterface",
    "create_gradio_interface",
]
src/__pycache__/__init__.cpython-312.pyc ADDED
Binary file (1.11 kB)

src/__pycache__/gradio_interface.cpython-312.pyc ADDED
Binary file (31.5 kB)

src/__pycache__/mcp_server.cpython-312.pyc ADDED
Binary file (22.8 kB)

src/__pycache__/sentiment_analyzer.cpython-312.pyc ADDED
Binary file (16.4 kB)

src/__pycache__/tools.cpython-312.pyc ADDED
Binary file (15.1 kB)

src/gradio_interface.py ADDED
@@ -0,0 +1,534 @@
"""
Gradio web interface for sentiment analysis.

This module provides a modern, responsive web interface using Gradio
for human interaction with the sentiment analysis system, including
real-time analysis, confidence visualization, and history tracking.
"""

import asyncio
import logging
import json
from typing import Dict, Any, List, Tuple, Optional
from datetime import datetime
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px

try:
    import gradio as gr
    GRADIO_AVAILABLE = True
except ImportError:
    GRADIO_AVAILABLE = False
    logging.error("Gradio not available. Install with: pip install gradio")

from .sentiment_analyzer import get_analyzer, SentimentResult, SentimentLabel


class SentimentHistory:
    """Manages sentiment analysis history."""

    def __init__(self, max_entries: int = 100):
        self.max_entries = max_entries
        self.entries: List[Dict[str, Any]] = []
        self.logger = logging.getLogger(__name__)

    def add_entry(self, text: str, result: SentimentResult, backend: str) -> None:
        entry = {
            "timestamp": datetime.now().isoformat(),
            "text": text[:100] + "..." if len(text) > 100 else text,
            "full_text": text,
            "label": result.label.value,
            "confidence": result.confidence,
            "backend": backend,
            "raw_scores": result.raw_scores
        }

        self.entries.append(entry)

        if len(self.entries) > self.max_entries:
            self.entries = self.entries[-self.max_entries:]

    def get_recent_entries(self, count: int = 10) -> List[Dict[str, Any]]:
        return self.entries[-count:] if self.entries else []

    def get_statistics(self) -> Dict[str, Any]:
        if not self.entries:
            return {
                "total_analyses": 0,
                "label_distribution": {},
                "average_confidence": 0.0,
                "backend_usage": {}
            }

        labels = [entry["label"] for entry in self.entries]
        confidences = [entry["confidence"] for entry in self.entries]
        backends = [entry["backend"] for entry in self.entries]

        label_counts = {
            "positive": labels.count("positive"),
            "negative": labels.count("negative"),
            "neutral": labels.count("neutral")
        }

        backend_counts = {}
        for backend in backends:
            backend_counts[backend] = backend_counts.get(backend, 0) + 1

        return {
            "total_analyses": len(self.entries),
            "label_distribution": label_counts,
            "average_confidence": sum(confidences) / len(confidences),
            "backend_usage": backend_counts
        }


class GradioInterface:
    """Gradio web interface for sentiment analysis."""

    def __init__(self, title: str = "Sentiment Analysis Server",
                 description: str = "Analyze text sentiment using TextBlob or Transformers"):
        self.title = title
        self.description = description
        self.logger = logging.getLogger(__name__)
        self.history = SentimentHistory()
        self.interface = None
        self._setup_interface()

    def _setup_interface(self) -> None:
        if not GRADIO_AVAILABLE:
            raise RuntimeError("Gradio not available")

        with gr.Blocks(
            theme=gr.themes.Soft(),
            title=self.title
        ) as interface:

            gr.Markdown(f"# {self.title}")
            gr.Markdown(f"*{self.description}*")

            with gr.Tabs():
                with gr.TabItem("Sentiment Analysis"):
                    with gr.Row():
                        with gr.Column(scale=2):
                            text_input = gr.Textbox(
                                label="Text to Analyze",
                                placeholder="Enter text here to analyze its sentiment...",
                                lines=4
                            )

                            with gr.Row():
                                backend_choice = gr.Dropdown(
                                    choices=["auto", "textblob", "transformers"],
                                    value="auto",
                                    label="Analysis Backend"
                                )

                                analyze_btn = gr.Button(
                                    "Analyze Sentiment",
                                    variant="primary"
                                )

                        with gr.Column(scale=1):
                            result_display = gr.HTML(
                                value="<p>Enter text and click 'Analyze Sentiment' to see results.</p>"
                            )

                            confidence_plot = gr.Plot(visible=False)

                    gr.Markdown("### Quick Examples")
                    with gr.Row():
                        pos_btn = gr.Button("😊 Positive", size="sm")
                        neu_btn = gr.Button("😐 Neutral", size="sm")
                        neg_btn = gr.Button("😞 Negative", size="sm")
                        mix_btn = gr.Button("📝 Mixed", size="sm")

                with gr.TabItem("Batch Analysis"):
                    with gr.Row():
                        with gr.Column():
                            batch_input = gr.Textbox(
                                label="Texts to Analyze (one per line)",
                                placeholder="Enter multiple texts, one per line...",
                                lines=8
                            )

                            with gr.Row():
                                batch_backend = gr.Dropdown(
                                    choices=["auto", "textblob", "transformers"],
                                    value="auto",
                                    label="Analysis Backend"
                                )

                                batch_analyze_btn = gr.Button(
                                    "Analyze Batch",
                                    variant="primary"
                                )

                        with gr.Column():
                            batch_results = gr.DataFrame(
                                label="Batch Results",
                                headers=["Text", "Sentiment", "Confidence"]
                            )

                            batch_summary_plot = gr.Plot(visible=False)

                with gr.TabItem("Analysis History"):
                    with gr.Row():
                        refresh_history_btn = gr.Button("Refresh History", variant="secondary")
                        clear_history_btn = gr.Button("Clear History", variant="stop")

                    with gr.Row():
                        with gr.Column(scale=2):
                            history_table = gr.DataFrame(
                                label="Recent Analyses",
                                headers=["Time", "Text", "Sentiment", "Confidence", "Backend"]
                            )

                        with gr.Column(scale=1):
                            stats_display = gr.HTML(value="<p>No analyses yet.</p>")
                            history_plot = gr.Plot(visible=False)

                with gr.TabItem("Settings & Info"):
                    with gr.Row():
                        with gr.Column():
                            gr.Markdown("### Backend Information")
                            backend_info = gr.HTML(value="<p>Loading backend information...</p>")
                            refresh_info_btn = gr.Button("Refresh Info", variant="secondary")

                        with gr.Column():
                            gr.Markdown("### Usage Tips")
                            gr.Markdown("""
                            - **Auto**: Automatically selects the best available backend
                            - **TextBlob**: Fast, simple sentiment analysis
                            - **Transformers**: More accurate, AI-powered analysis
                            - **Batch Analysis**: Process multiple texts at once
                            - **History**: Track your analysis results over time
                            """)

            # Event handlers
            def analyze_sentiment(text: str, backend: str) -> Tuple[str, gr.Plot]:
                return asyncio.run(self._analyze_sentiment_async(text, backend))

            def analyze_batch(texts: str, backend: str) -> Tuple[pd.DataFrame, gr.Plot]:
                return asyncio.run(self._analyze_batch_async(texts, backend))

            def refresh_history() -> Tuple[pd.DataFrame, str, gr.Plot]:
                return self._get_history_data()

            def clear_history() -> Tuple[pd.DataFrame, str, gr.Plot]:
                self.history.entries.clear()
                return self._get_history_data()

            def get_backend_info() -> str:
                return asyncio.run(self._get_backend_info_async())

            # Example texts
            examples = [
                "I absolutely love this new feature! It's incredible and makes everything so much easier.",
                "The weather is okay today, nothing particularly special about it.",
                "This is terrible and frustrating. I hate how complicated this has become.",
                "The movie had great visuals but the plot was disappointing. Mixed feelings overall."
            ]

            # Wire up events
            analyze_btn.click(
                analyze_sentiment,
                inputs=[text_input, backend_choice],
                outputs=[result_display, confidence_plot]
            )

            batch_analyze_btn.click(
                analyze_batch,
                inputs=[batch_input, batch_backend],
                outputs=[batch_results, batch_summary_plot]
            )

            refresh_history_btn.click(
                refresh_history,
                outputs=[history_table, stats_display, history_plot]
            )

            clear_history_btn.click(
                clear_history,
                outputs=[history_table, stats_display, history_plot]
            )

            refresh_info_btn.click(
                get_backend_info,
                outputs=[backend_info]
259
+ )
260
+
261
+ # Example buttons
262
+ pos_btn.click(lambda: examples[0], outputs=[text_input])
263
+ neu_btn.click(lambda: examples[1], outputs=[text_input])
264
+ neg_btn.click(lambda: examples[2], outputs=[text_input])
265
+ mix_btn.click(lambda: examples[3], outputs=[text_input])
266
+
267
+ # Load initial data
268
+ interface.load(get_backend_info, outputs=[backend_info])
269
+ interface.load(refresh_history, outputs=[history_table, stats_display, history_plot])
270
+
271
+ self.interface = interface
272
+
273
+ async def _analyze_sentiment_async(self, text: str, backend: str) -> Tuple[str, gr.Plot]:
274
+ try:
275
+ if not text.strip():
276
+ return "<p>Please enter some text to analyze.</p>", gr.Plot(visible=False)
277
+
278
+ analyzer = await get_analyzer(backend)
279
+ result = await analyzer.analyze(text)
280
+
281
+ self.history.add_entry(text, result, analyzer.backend)
282
+
283
+ sentiment_class = f"sentiment-{result.label.value}"
284
+ confidence_class = (
285
+ "confidence-high" if result.confidence > 0.7
286
+ else "confidence-medium" if result.confidence > 0.4
287
+ else "confidence-low"
288
+ )
289
+
290
+ html_result = f"""
291
+ <div style="padding: 1rem; border-radius: 0.5rem; background: #f8fafc; border-left: 4px solid #3b82f6;">
292
+ <h3>Analysis Result</h3>
293
+ <p><strong>Sentiment:</strong> <span style="color: {'#22c55e' if result.label.value == 'positive' else '#ef4444' if result.label.value == 'negative' else '#6b7280'}; font-weight: bold;">{result.label.value.title()}</span></p>
294
+ <p><strong>Confidence:</strong> <span style="color: {'#059669' if result.confidence > 0.7 else '#d97706' if result.confidence > 0.4 else '#dc2626'};">{result.confidence:.2%}</span></p>
295
+ <p><strong>Backend:</strong> {analyzer.backend}</p>
296
+ <p><strong>Text Length:</strong> {len(text)} characters</p>
297
+ </div>
298
+ """
299
+
300
+ plot = self._create_confidence_plot(result)
301
+ return html_result, plot
302
+
303
+ except Exception as e:
304
+ self.logger.error(f"Analysis failed: {e}")
305
+ error_html = f"""
306
+ <div style="padding: 1rem; border-radius: 0.5rem; background: #fef2f2; border-left: 4px solid #ef4444;">
307
+ <h3>Analysis Error</h3>
308
+ <p><strong>Error:</strong> {str(e)}</p>
309
+ <p>Please try again or check your input.</p>
310
+ </div>
311
+ """
312
+ return error_html, gr.Plot(visible=False)
313
+
314
+ async def _analyze_batch_async(self, texts: str, backend: str) -> Tuple[pd.DataFrame, gr.Plot]:
315
+ try:
316
+ if not texts.strip():
317
+ return pd.DataFrame(), gr.Plot(visible=False)
318
+
319
+ text_list = [t.strip() for t in texts.split('\n') if t.strip()]
320
+
321
+ if not text_list:
322
+ return pd.DataFrame(), gr.Plot(visible=False)
323
+
324
+ analyzer = await get_analyzer(backend)
325
+ results = await analyzer.analyze_batch(text_list)
326
+
327
+ data = []
328
+ for text, result in zip(text_list, results):
329
+ self.history.add_entry(text, result, analyzer.backend)
330
+
331
+ data.append({
332
+ "Text": text[:50] + "..." if len(text) > 50 else text,
333
+ "Sentiment": result.label.value.title(),
334
+ "Confidence": f"{result.confidence:.2%}"
335
+ })
336
+
337
+ df = pd.DataFrame(data)
338
+ plot = self._create_batch_summary_plot(results)
339
+
340
+ return df, plot
341
+
342
+ except Exception as e:
343
+ self.logger.error(f"Batch analysis failed: {e}")
344
+ return pd.DataFrame([{"Error": str(e)}]), gr.Plot(visible=False)
345
+
346
+ def _create_confidence_plot(self, result: SentimentResult) -> gr.Plot:
347
+ try:
348
+ fig = go.Figure(go.Indicator(
349
+ mode="gauge+number",
350
+ value=result.confidence * 100,
351
+ domain={'x': [0, 1], 'y': [0, 1]},
352
+ title={'text': f"Confidence - {result.label.value.title()}"},
353
+ gauge={
354
+ 'axis': {'range': [None, 100]},
355
+ 'bar': {'color': "darkblue"},
356
+ 'steps': [
357
+ {'range': [0, 40], 'color': "lightgray"},
358
+ {'range': [40, 70], 'color': "yellow"},
359
+ {'range': [70, 100], 'color': "green"}
360
+ ]
361
+ }
362
+ ))
363
+
364
+ fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20))
365
+ return gr.Plot(value=fig, visible=True)
366
+
367
+ except Exception as e:
368
+ self.logger.error(f"Failed to create confidence plot: {e}")
369
+ return gr.Plot(visible=False)
370
+
371
+ def _create_batch_summary_plot(self, results: List[SentimentResult]) -> gr.Plot:
372
+ try:
373
+ labels = [result.label.value for result in results]
374
+ label_counts = {
375
+ "Positive": labels.count("positive"),
376
+ "Negative": labels.count("negative"),
377
+ "Neutral": labels.count("neutral")
378
+ }
379
+
380
+ fig = px.pie(
381
+ values=list(label_counts.values()),
382
+ names=list(label_counts.keys()),
+ color=list(label_counts.keys()),  # px.pie applies color_discrete_map only when 'color' is set
383
+ title="Sentiment Distribution",
384
+ color_discrete_map={
385
+ "Positive": "#22c55e",
386
+ "Negative": "#ef4444",
387
+ "Neutral": "#6b7280"
388
+ }
389
+ )
390
+
391
+ fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20))
392
+ return gr.Plot(value=fig, visible=True)
393
+
394
+ except Exception as e:
395
+ self.logger.error(f"Failed to create batch summary plot: {e}")
396
+ return gr.Plot(visible=False)
397
+
398
+ def _get_history_data(self) -> Tuple[pd.DataFrame, str, gr.Plot]:
399
+ try:
400
+ entries = self.history.get_recent_entries(20)
401
+
402
+ if not entries:
403
+ empty_df = pd.DataFrame(columns=["Time", "Text", "Sentiment", "Confidence", "Backend"])
404
+ return empty_df, "<p>No analyses yet.</p>", gr.Plot(visible=False)
405
+
406
+ data = []
407
+ for entry in reversed(entries):
408
+ data.append({
409
+ "Time": entry["timestamp"][:19].replace("T", " "),
410
+ "Text": entry["text"],
411
+ "Sentiment": entry["label"].title(),
412
+ "Confidence": f"{entry['confidence']:.2%}",
413
+ "Backend": entry["backend"]
414
+ })
415
+
416
+ df = pd.DataFrame(data)
417
+ stats = self.history.get_statistics()
418
+
419
+ stats_html = f"""
420
+ <div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); color: white; padding: 1rem; border-radius: 0.5rem;">
421
+ <h3>📊 Analysis Statistics</h3>
422
+ <p><strong>Total Analyses:</strong> {stats['total_analyses']}</p>
423
+ <p><strong>Average Confidence:</strong> {stats['average_confidence']:.2%}</p>
424
+ <h4>Sentiment Distribution:</h4>
425
+ <ul>
426
+ <li>Positive: {stats['label_distribution'].get('positive', 0)}</li>
427
+ <li>Negative: {stats['label_distribution'].get('negative', 0)}</li>
428
+ <li>Neutral: {stats['label_distribution'].get('neutral', 0)}</li>
429
+ </ul>
430
+ </div>
431
+ """
432
+
433
+ plot = self._create_history_plot(stats) if stats['total_analyses'] > 0 else gr.Plot(visible=False)
434
+ return df, stats_html, plot
435
+
436
+ except Exception as e:
437
+ self.logger.error(f"Failed to get history data: {e}")
438
+ error_df = pd.DataFrame([{"Error": str(e)}])
439
+ return error_df, f"<p>Error loading history: {e}</p>", gr.Plot(visible=False)
440
+
441
+ def _create_history_plot(self, stats: Dict[str, Any]) -> gr.Plot:
442
+ try:
443
+ labels = list(stats['label_distribution'].keys())
444
+ values = list(stats['label_distribution'].values())
445
+
446
+ fig = px.bar(
447
+ x=[label.title() for label in labels],
448
+ y=values,
449
+ title="Historical Sentiment Distribution",
450
+ color=labels,
451
+ color_discrete_map={
452
+ "positive": "#22c55e",
453
+ "negative": "#ef4444",
454
+ "neutral": "#6b7280"
455
+ }
456
+ )
457
+
458
+ fig.update_layout(height=300, margin=dict(l=20, r=20, t=40, b=20), showlegend=False)
459
+ return gr.Plot(value=fig, visible=True)
460
+
461
+ except Exception as e:
462
+ self.logger.error(f"Failed to create history plot: {e}")
463
+ return gr.Plot(visible=False)
464
+
465
+ async def _get_backend_info_async(self) -> str:
466
+ try:
467
+ analyzer = await get_analyzer("auto")
468
+ info = analyzer.get_info()
469
+
470
+ html = f"""
471
+ <div style="padding: 1rem; border-radius: 0.5rem; background: #f0f9ff; border-left: 4px solid #0ea5e9;">
472
+ <h3>🔧 Backend Information</h3>
473
+ <p><strong>Current Backend:</strong> {info['backend']}</p>
474
+ <p><strong>Model Loaded:</strong> {'Yes' if info['model_loaded'] else 'No'}</p>
475
+ <p><strong>TextBlob Available:</strong> {'Yes' if info['textblob_available'] else 'No'}</p>
476
+ <p><strong>Transformers Available:</strong> {'Yes' if info['transformers_available'] else 'No'}</p>
477
+ <p><strong>CUDA Available:</strong> {'Yes' if info.get('cuda_available', False) else 'No'}</p>
478
+ {f"<p><strong>Model Name:</strong> {info['model_name']}</p>" if info.get('model_name') else ""}
479
+ </div>
480
+ """
481
+ return html
482
+
483
+ except Exception as e:
484
+ self.logger.error(f"Failed to get backend info: {e}")
485
+ return f"""
486
+ <div style="padding: 1rem; border-radius: 0.5rem; background: #fef2f2; border-left: 4px solid #ef4444;">
487
+ <h3>❌ Backend Error</h3>
488
+ <p>Failed to load backend information: {str(e)}</p>
489
+ </div>
490
+ """
491
+
492
+ def launch(self, **kwargs) -> None:
493
+ if not self.interface:
494
+ raise RuntimeError("Interface not initialized")
495
+
496
+ launch_params = {
497
+ "server_name": "0.0.0.0",
498
+ "server_port": 7860,
499
+ "share": False,
500
+ "debug": False,
501
+ "show_error": True,
502
+ "quiet": False
503
+ }
504
+
505
+ launch_params.update(kwargs)
506
+
507
+ self.logger.info(f"Launching Gradio interface on {launch_params['server_name']}:{launch_params['server_port']}")
508
+
509
+ try:
510
+ self.interface.launch(**launch_params)
511
+ except Exception as e:
512
+ self.logger.error(f"Failed to launch interface: {e}")
513
+ raise
514
+
515
+
516
+ def create_gradio_interface(**kwargs) -> GradioInterface:
517
+ if not GRADIO_AVAILABLE:
518
+ raise RuntimeError("Gradio not available. Install with: pip install gradio")
519
+
520
+ return GradioInterface(**kwargs)
521
+
522
+
523
+ async def main() -> None:
524
+ logging.basicConfig(
525
+ level=logging.INFO,
526
+ format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
527
+ )
528
+
529
+ interface = create_gradio_interface()
530
+ interface.launch(debug=True)
531
+
532
+
533
+ if __name__ == "__main__":
534
+ asyncio.run(main())
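The event handlers above bridge Gradio's synchronous callbacks to the async analyzer with `asyncio.run()`. A minimal, self-contained sketch of that pattern (the `analyze_async` body here is a stand-in for the real analyzer call, not the project's implementation):

```python
import asyncio

async def analyze_async(text: str) -> dict:
    # Stand-in for the analyzer call; a real backend would await model
    # inference here instead of this toy keyword check.
    await asyncio.sleep(0)
    polarity = 1.0 if "love" in text else -1.0 if "hate" in text else 0.0
    label = "positive" if polarity > 0 else "negative" if polarity < 0 else "neutral"
    return {"label": label, "confidence": abs(polarity) or 0.5}

def analyze(text: str) -> dict:
    # Gradio event handlers are plain functions; asyncio.run() gives each
    # call a fresh event loop, which is safe because handlers run outside
    # any already-running loop.
    return asyncio.run(analyze_async(text))
```

This is the same shape as `analyze_sentiment` / `analyze_batch` above: a thin sync wrapper per async method, wired to a button via `.click(...)`.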
src/mcp_server.py ADDED
@@ -0,0 +1,572 @@
1
+ """
2
+ MCP Server implementation for sentiment analysis.
3
+
4
+ This module implements a Model Context Protocol server that provides
5
+ sentiment analysis capabilities through JSON-RPC 2.0 protocol with
6
+ async request handling and comprehensive error management.
7
+ """
8
+
9
+ import asyncio
10
+ import json
11
+ import logging
12
+ import sys
13
+ import traceback
14
+ from typing import Dict, Any, List, Optional, Union
15
+ from dataclasses import dataclass
16
+ from enum import Enum
17
+ import uuid
18
+ from datetime import datetime
19
+
20
+ try:
21
+ import uvloop
22
+ UVLOOP_AVAILABLE = True
23
+ except ImportError:
24
+ UVLOOP_AVAILABLE = False
25
+
26
+ from .tools import get_tools, list_tools, call_tool
27
+ from .sentiment_analyzer import get_analyzer
28
+
29
+
30
+ class MCPMessageType(Enum):
31
+ """MCP message types."""
32
+ REQUEST = "request"
33
+ RESPONSE = "response"
34
+ NOTIFICATION = "notification"
35
+
36
+
37
+ @dataclass
38
+ class MCPRequest:
39
+ """MCP request message structure."""
40
+ jsonrpc: str
41
+ method: str
42
+ params: Optional[Dict[str, Any]] = None
43
+ id: Optional[Union[str, int]] = None
44
+
45
+
46
+ @dataclass
47
+ class MCPResponse:
48
+ """MCP response message structure."""
49
+ jsonrpc: str
50
+ id: Optional[Union[str, int]]
51
+ result: Optional[Dict[str, Any]] = None
52
+ error: Optional[Dict[str, Any]] = None
53
+
54
+
55
+ @dataclass
56
+ class MCPError:
57
+ """MCP error structure."""
58
+ code: int
59
+ message: str
60
+ data: Optional[Dict[str, Any]] = None
61
+
62
+
63
+ class MCPErrorCodes:
64
+ """Standard JSON-RPC 2.0 error codes."""
65
+ PARSE_ERROR = -32700
66
+ INVALID_REQUEST = -32600
67
+ METHOD_NOT_FOUND = -32601
68
+ INVALID_PARAMS = -32602
69
+ INTERNAL_ERROR = -32603
70
+
71
+ # Custom error codes
72
+ TOOL_ERROR = -32000
73
+ ANALYZER_ERROR = -32001
74
+ VALIDATION_ERROR = -32002
75
+
76
+
77
+ class SentimentMCPServer:
78
+ """
79
+ Model Context Protocol server for sentiment analysis.
80
+
81
+ Implements JSON-RPC 2.0 protocol with async request handling,
82
+ tool registration, and comprehensive error management.
83
+ """
84
+
85
+ def __init__(self, name: str = "sentiment-analyzer", version: str = "1.0.0"):
86
+ """
87
+ Initialize MCP server.
88
+
89
+ Args:
90
+ name: Server name
91
+ version: Server version
92
+ """
93
+ self.name = name
94
+ self.version = version
95
+ self.logger = logging.getLogger(__name__)
96
+
97
+ # Server state
98
+ self.running = False
99
+ self.request_count = 0
100
+ self.error_count = 0
101
+ self.start_time = None
102
+
103
+ # Request handlers
104
+ self._handlers = {
105
+ "initialize": self._handle_initialize,
106
+ "tools/list": self._handle_list_tools,
107
+ "tools/call": self._handle_call_tool,
108
+ "ping": self._handle_ping,
109
+ "server/info": self._handle_server_info,
110
+ "server/stats": self._handle_server_stats
111
+ }
112
+
113
+ self.logger.info(f"Initialized MCP server '{name}' v{version}")
114
+
115
+ async def start(self) -> None:
116
+ """Start the MCP server."""
117
+ self.running = True
118
+ self.start_time = datetime.now()
119
+ self.logger.info(f"MCP server '{self.name}' started")
120
+
121
+ # Pre-load analyzer to improve first request performance
122
+ try:
123
+ await get_analyzer("auto")
124
+ self.logger.info("Sentiment analyzer pre-loaded successfully")
125
+ except Exception as e:
126
+ self.logger.warning(f"Failed to pre-load analyzer: {e}")
127
+
128
+ async def stop(self) -> None:
129
+ """Stop the MCP server."""
130
+ self.running = False
131
+ self.logger.info(f"MCP server '{self.name}' stopped")
132
+
133
+ # Cleanup resources
134
+ try:
135
+ analyzer = await get_analyzer("auto")
136
+ await analyzer.cleanup()
137
+ except Exception as e:
138
+ self.logger.error(f"Error during cleanup: {e}")
139
+
140
+ def _create_error_response(self, request_id: Optional[Union[str, int]],
141
+ code: int, message: str,
142
+ data: Optional[Dict[str, Any]] = None) -> MCPResponse:
143
+ """
144
+ Create error response.
145
+
146
+ Args:
147
+ request_id: Request ID
148
+ code: Error code
149
+ message: Error message
150
+ data: Additional error data
151
+
152
+ Returns:
153
+ MCPResponse with error
154
+ """
155
+ error = {
156
+ "code": code,
157
+ "message": message
158
+ }
159
+ if data:
160
+ error["data"] = data
161
+
162
+ return MCPResponse(
163
+ jsonrpc="2.0",
164
+ id=request_id,
165
+ error=error
166
+ )
167
+
168
+ def _create_success_response(self, request_id: Optional[Union[str, int]],
169
+ result: Dict[str, Any]) -> MCPResponse:
170
+ """
171
+ Create success response.
172
+
173
+ Args:
174
+ request_id: Request ID
175
+ result: Response result
176
+
177
+ Returns:
178
+ MCPResponse with result
179
+ """
180
+ return MCPResponse(
181
+ jsonrpc="2.0",
182
+ id=request_id,
183
+ result=result
184
+ )
185
+
186
+ def _parse_request(self, message: str) -> MCPRequest:
187
+ """
188
+ Parse JSON-RPC request message.
189
+
190
+ Args:
191
+ message: JSON message string
192
+
193
+ Returns:
194
+ Parsed MCPRequest
195
+
196
+ Raises:
197
+ ValueError: If parsing fails
198
+ """
199
+ try:
200
+ data = json.loads(message)
201
+ except json.JSONDecodeError as e:
202
+ raise ValueError(f"Invalid JSON: {e}")
203
+
204
+ # Validate required fields
205
+ if not isinstance(data, dict):
206
+ raise ValueError("Request must be a JSON object")
207
+
208
+ if data.get("jsonrpc") != "2.0":
209
+ raise ValueError("Invalid JSON-RPC version")
210
+
211
+ if "method" not in data:
212
+ raise ValueError("Missing 'method' field")
213
+
214
+ return MCPRequest(
215
+ jsonrpc=data["jsonrpc"],
216
+ method=data["method"],
217
+ params=data.get("params"),
218
+ id=data.get("id")
219
+ )
220
+
221
+ async def process_request(self, message: str) -> str:
222
+ """
223
+ Process incoming JSON-RPC request.
224
+
225
+ Args:
226
+ message: JSON-RPC request message
227
+
228
+ Returns:
229
+ JSON-RPC response message
230
+ """
231
+ request_id = None
232
+
233
+ try:
234
+ # Parse request
235
+ try:
236
+ request = self._parse_request(message)
237
+ request_id = request.id
238
+ except ValueError as e:
239
+ response = self._create_error_response(
240
+ None, MCPErrorCodes.PARSE_ERROR, str(e)
241
+ )
242
+ return json.dumps(response.__dict__)
243
+
244
+ # Update stats
245
+ self.request_count += 1
246
+
247
+ # Log request
248
+ self.logger.debug(f"Processing request: {request.method} (ID: {request_id})")
249
+
250
+ # Handle request
251
+ if request.method in self._handlers:
252
+ handler = self._handlers[request.method]
253
+ result = await handler(request.params or {})
254
+ response = self._create_success_response(request_id, result)
255
+ else:
256
+ response = self._create_error_response(
257
+ request_id,
258
+ MCPErrorCodes.METHOD_NOT_FOUND,
259
+ f"Method '{request.method}' not found"
260
+ )
261
+
262
+ except Exception as e:
263
+ self.error_count += 1
264
+ self.logger.error(f"Request processing failed: {e}")
265
+ self.logger.debug(traceback.format_exc())
266
+
267
+ response = self._create_error_response(
268
+ request_id,
269
+ MCPErrorCodes.INTERNAL_ERROR,
270
+ "Internal server error",
271
+ {"error": str(e), "type": type(e).__name__}
272
+ )
273
+
274
+ # Convert response to JSON
275
+ response_dict = response.__dict__
276
+ # Remove None values
277
+ response_dict = {k: v for k, v in response_dict.items() if v is not None}
278
+
279
+ return json.dumps(response_dict)
280
+
281
+ async def _handle_initialize(self, params: Dict[str, Any]) -> Dict[str, Any]:
282
+ """
283
+ Handle initialize request.
284
+
285
+ Args:
286
+ params: Initialize parameters
287
+
288
+ Returns:
289
+ Server capabilities
290
+ """
291
+ client_info = params.get("clientInfo", {})
292
+ self.logger.info(f"Client connected: {client_info}")
293
+
294
+ return {
295
+ "protocolVersion": "2024-11-05",
296
+ "capabilities": {
297
+ "tools": {
298
+ "listChanged": False
299
+ },
300
+ "logging": {},
301
+ "prompts": {
302
+ "listChanged": False
303
+ },
304
+ "resources": {
305
+ "subscribe": False,
306
+ "listChanged": False
307
+ }
308
+ },
309
+ "serverInfo": {
310
+ "name": self.name,
311
+ "version": self.version,
312
+ "description": "Sentiment analysis server using TextBlob and Transformers"
313
+ }
314
+ }
315
+
316
+ async def _handle_list_tools(self, params: Dict[str, Any]) -> Dict[str, Any]:
317
+ """
318
+ Handle tools/list request.
319
+
320
+ Args:
321
+ params: List tools parameters
322
+
323
+ Returns:
324
+ Available tools
325
+ """
326
+ try:
327
+ tools = await list_tools()
328
+ return {"tools": tools}
329
+ except Exception as e:
330
+ raise RuntimeError(f"Failed to list tools: {e}")
331
+
332
+ async def _handle_call_tool(self, params: Dict[str, Any]) -> Dict[str, Any]:
333
+ """
334
+ Handle tools/call request.
335
+
336
+ Args:
337
+ params: Tool call parameters
338
+
339
+ Returns:
340
+ Tool execution result
341
+ """
342
+ try:
343
+ name = params.get("name")
344
+ arguments = params.get("arguments", {})
345
+
346
+ if not name:
347
+ raise ValueError("Tool name is required")
348
+
349
+ result = await call_tool(name, arguments)
350
+
351
+ return {
352
+ "content": [
353
+ {
354
+ "type": "text",
355
+ "text": json.dumps(result, indent=2)
356
+ }
357
+ ],
358
+ "isError": not result.get("success", True)
359
+ }
360
+
361
+ except Exception as e:
362
+ self.logger.error(f"Tool call failed: {e}")
363
+ return {
364
+ "content": [
365
+ {
366
+ "type": "text",
367
+ "text": json.dumps({
368
+ "success": False,
369
+ "error": str(e),
370
+ "error_type": type(e).__name__
371
+ }, indent=2)
372
+ }
373
+ ],
374
+ "isError": True
375
+ }
376
+
377
+ async def _handle_ping(self, params: Dict[str, Any]) -> Dict[str, Any]:
378
+ """
379
+ Handle ping request.
380
+
381
+ Args:
382
+ params: Ping parameters
383
+
384
+ Returns:
385
+ Pong response
386
+ """
387
+ return {
388
+ "pong": True,
389
+ "timestamp": datetime.now().isoformat(),
390
+ "server": self.name,
391
+ "version": self.version
392
+ }
393
+
394
+ async def _handle_server_info(self, params: Dict[str, Any]) -> Dict[str, Any]:
395
+ """
396
+ Handle server/info request.
397
+
398
+ Args:
399
+ params: Server info parameters
400
+
401
+ Returns:
402
+ Server information
403
+ """
404
+ try:
405
+ analyzer = await get_analyzer("auto")
406
+ analyzer_info = analyzer.get_info()
407
+ except Exception as e:
408
+ analyzer_info = {"error": str(e)}
409
+
410
+ return {
411
+ "server": {
412
+ "name": self.name,
413
+ "version": self.version,
414
+ "running": self.running,
415
+ "start_time": self.start_time.isoformat() if self.start_time else None
416
+ },
417
+ "analyzer": analyzer_info,
418
+ "capabilities": {
419
+ "sentiment_analysis": True,
420
+ "batch_processing": True,
421
+ "multiple_backends": True,
422
+ "async_processing": True
423
+ }
424
+ }
425
+
426
+ async def _handle_server_stats(self, params: Dict[str, Any]) -> Dict[str, Any]:
427
+ """
428
+ Handle server/stats request.
429
+
430
+ Args:
431
+ params: Server stats parameters
432
+
433
+ Returns:
434
+ Server statistics
435
+ """
436
+ uptime = None
437
+ if self.start_time:
438
+ uptime = (datetime.now() - self.start_time).total_seconds()
439
+
440
+ return {
441
+ "requests_processed": self.request_count,
442
+ "errors_encountered": self.error_count,
443
+ "success_rate": (
444
+ (self.request_count - self.error_count) / self.request_count
445
+ if self.request_count > 0 else 0
446
+ ),
447
+ "uptime_seconds": uptime,
448
+ "running": self.running
449
+ }
450
+
451
+
452
+ class MCPServerRunner:
453
+ """
454
+ Runner for MCP server with stdio communication.
455
+
456
+ Handles stdin/stdout communication for MCP protocol.
457
+ """
458
+
459
+ def __init__(self, server: SentimentMCPServer):
460
+ """
461
+ Initialize server runner.
462
+
463
+ Args:
464
+ server: MCP server instance
465
+ """
466
+ self.server = server
467
+ self.logger = logging.getLogger(__name__)
468
+
469
+ async def run(self) -> None:
470
+ """Run the MCP server with stdio communication."""
471
+ self.logger.info("Starting MCP server with stdio communication")
472
+
473
+ # Start server
474
+ await self.server.start()
475
+
476
+ try:
477
+ # Set up event loop optimization
478
+ if UVLOOP_AVAILABLE:
479
+ self.logger.info("Using uvloop for better performance")
480
+
481
+ # Handle stdin/stdout communication
482
+ reader = asyncio.StreamReader()
483
+ protocol = asyncio.StreamReaderProtocol(reader)
484
+ await asyncio.get_running_loop().connect_read_pipe(
485
+ lambda: protocol, sys.stdin
486
+ )
487
+
488
+ writer_transport, writer_protocol = await asyncio.get_running_loop().connect_write_pipe(
489
+ asyncio.streams.FlowControlMixin, sys.stdout
490
+ )
491
+ writer = asyncio.StreamWriter(writer_transport, writer_protocol, reader, asyncio.get_running_loop())
492
+
493
+ self.logger.info("MCP server ready for requests")
494
+
495
+ # Process requests
496
+ while self.server.running:
497
+ try:
498
+ # Read request
499
+ line = await reader.readline()
500
+ if not line:
501
+ break
502
+
503
+ message = line.decode().strip()
504
+ if not message:
505
+ continue
506
+
507
+ # Process request
508
+ response = await self.server.process_request(message)
509
+
510
+ # Send response
511
+ writer.write((response + '\n').encode())
512
+ await writer.drain()
513
+
514
+ except asyncio.CancelledError:
515
+ break
516
+ except Exception as e:
517
+ self.logger.error(f"Communication error: {e}")
518
+ break
519
+
520
+ finally:
521
+ await self.server.stop()
522
+ self.logger.info("MCP server stopped")
523
+
524
+
525
+ async def create_server(name: str = "sentiment-analyzer",
526
+ version: str = "1.0.0") -> SentimentMCPServer:
527
+ """
528
+ Create and configure MCP server.
529
+
530
+ Args:
531
+ name: Server name
532
+ version: Server version
533
+
534
+ Returns:
535
+ Configured MCP server
536
+ """
537
+ # Configure logging
538
+ logging.basicConfig(
539
+ level=logging.INFO,
540
+ format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
541
+ handlers=[
542
+ logging.StreamHandler(sys.stderr) # Use stderr to avoid interfering with stdio protocol
543
+ ]
544
+ )
545
+
546
+ # Create server
547
+ server = SentimentMCPServer(name, version)
548
+
549
+ return server
550
+
551
+
552
+ async def main() -> None:
553
+ """Main entry point for MCP server."""
554
+ # Use uvloop if available for better performance
555
+ if UVLOOP_AVAILABLE:
556
+ asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
557
+
558
+ # Create and run server
559
+ server = await create_server()
560
+ runner = MCPServerRunner(server)
561
+
562
+ try:
563
+ await runner.run()
564
+ except KeyboardInterrupt:
565
+ logging.info("Server interrupted by user")
566
+ except Exception as e:
567
+ logging.error(f"Server error: {e}")
568
+ sys.exit(1)
569
+
570
+
571
+ if __name__ == "__main__":
572
+ asyncio.run(main())
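A client talks to this server with newline-delimited JSON-RPC 2.0 messages over stdio. A sketch of building one request envelope of the shape `_parse_request` validates (`"jsonrpc"` must equal `"2.0"` and `"method"` must be present); the tool name `analyze_sentiment` is an assumed example, since the actual tool registry lives in `src/tools.py`:

```python
import json

def make_request(method: str, params=None, request_id=1) -> str:
    # Minimal JSON-RPC 2.0 request envelope; "id" marks this as a request
    # rather than a notification.
    msg = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        msg["params"] = params
    return json.dumps(msg)

# One request per line, newline-delimited, as MCPServerRunner reads from stdin.
line = make_request(
    "tools/call",
    {"name": "analyze_sentiment", "arguments": {"text": "great!"}},
)
```

The server's reply is a single JSON line with either a `result` or an `error` object carrying one of the `MCPErrorCodes` values.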
src/sentiment_analyzer.py ADDED
@@ -0,0 +1,383 @@
1
+ """
2
+ Core sentiment analysis engine for MCP server.
3
+
4
+ This module provides sentiment analysis functionality using both TextBlob
5
+ for simplicity and Transformers for accuracy, with confidence scoring
6
+ and comprehensive error handling.
7
+ """
8
+
9
+ import logging
10
+ from typing import Dict, Any, Optional, Tuple
11
+ from enum import Enum
12
+ import asyncio
13
+ from concurrent.futures import ThreadPoolExecutor
14
+
15
+ try:
16
+ from textblob import TextBlob
17
+ TEXTBLOB_AVAILABLE = True
18
+ except ImportError:
19
+ TEXTBLOB_AVAILABLE = False
20
+ logging.warning("TextBlob not available. Install with: pip install textblob")
21
+
22
+ try:
23
+ from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
24
+ import torch
25
+ TRANSFORMERS_AVAILABLE = True
26
+ except ImportError:
27
+ TRANSFORMERS_AVAILABLE = False
28
+ logging.warning("Transformers not available. Install with: pip install transformers torch")
29
+
30
+
31
+ class SentimentLabel(Enum):
32
+ """Sentiment classification labels."""
33
+ POSITIVE = "positive"
34
+ NEGATIVE = "negative"
35
+ NEUTRAL = "neutral"
36
+
37
+
38
+ class SentimentResult:
39
+ """Container for sentiment analysis results."""
40
+
41
+ def __init__(self, label: SentimentLabel, confidence: float, raw_scores: Optional[Dict[str, float]] = None):
42
+ self.label = label
43
+ self.confidence = confidence
44
+ self.raw_scores = raw_scores or {}
45
+
46
+ def to_dict(self) -> Dict[str, Any]:
47
+ """Convert result to dictionary format."""
48
+ return {
49
+ "label": self.label.value,
50
+ "confidence": round(self.confidence, 4),
51
+ "raw_scores": self.raw_scores
52
+ }
53
+
54
+
55
+ class SentimentAnalyzer:
56
+ """
57
+ Advanced sentiment analysis engine supporting multiple backends.
58
+
59
+ Supports both TextBlob (simple) and Transformers (accurate) for sentiment analysis
60
+ with confidence scoring and async processing capabilities.
61
+ """
62
+
63
+ def __init__(self, backend: str = "auto", model_name: str = "cardiffnlp/twitter-roberta-base-sentiment-latest"):
64
+ """
65
+ Initialize sentiment analyzer.
66
+
67
+ Args:
68
+ backend: Analysis backend ("textblob", "transformers", or "auto")
69
+ model_name: Hugging Face model name for transformers backend
70
+ """
71
+ self.backend = backend
72
+ self.model_name = model_name
73
+ self.logger = logging.getLogger(__name__)
74
+ self.executor = ThreadPoolExecutor(max_workers=2)
75
+
76
+ # Model caching
77
+ self._transformer_pipeline = None
78
+ self._model_loaded = False
79
+
80
+ # Initialize backend
81
+ self._initialize_backend()
82
+
83
+ def _initialize_backend(self) -> None:
84
+ """Initialize the selected backend."""
85
+ if self.backend == "auto":
86
+ if TRANSFORMERS_AVAILABLE:
87
+ self.backend = "transformers"
88
+ self.logger.info("Auto-selected Transformers backend")
89
+ elif TEXTBLOB_AVAILABLE:
90
+ self.backend = "textblob"
91
+ self.logger.info("Auto-selected TextBlob backend")
92
+ else:
93
+ raise RuntimeError("No sentiment analysis backend available. Install textblob or transformers.")
94
+
95
+ if self.backend == "transformers" and not TRANSFORMERS_AVAILABLE:
96
+ raise RuntimeError("Transformers backend requested but not available")
97
+
98
+ if self.backend == "textblob" and not TEXTBLOB_AVAILABLE:
99
+ raise RuntimeError("TextBlob backend requested but not available")
100
+
101
+ async def _load_transformer_model(self) -> None:
102
+ """Load transformer model asynchronously."""
103
+ if self._model_loaded:
104
+ return
105
+
106
+ try:
107
+ self.logger.info(f"Loading transformer model: {self.model_name}")
108
+
109
+ # Load model in thread pool to avoid blocking
110
+ loop = asyncio.get_running_loop()
111
+ self._transformer_pipeline = await loop.run_in_executor(
112
+ self.executor,
113
+ lambda: pipeline(
114
+ "sentiment-analysis",
115
+ model=self.model_name,
116
+ tokenizer=self.model_name,
117
+ device=0 if torch.cuda.is_available() else -1,
118
+ return_all_scores=True
119
+ )
120
+ )
121
+
122
+ self._model_loaded = True
123
+ self.logger.info("Transformer model loaded successfully")
124
+
125
+ except Exception as e:
126
+ self.logger.error(f"Failed to load transformer model: {e}")
127
+ raise RuntimeError(f"Model loading failed: {e}") from e
128
+
129
+ def _validate_input(self, text: str) -> str:
130
+ """
131
+ Validate and sanitize input text.
132
+
133
+ Args:
134
+ text: Input text to validate
135
+
136
+ Returns:
137
+ Sanitized text
138
+
139
+ Raises:
140
+ ValueError: If text is invalid
141
+ """
142
+ if not isinstance(text, str):
143
+ raise ValueError("Input must be a string")
144
+
145
+ text = text.strip()
146
+
147
+ if not text:
148
+ raise ValueError("Input text cannot be empty")
149
+
150
+ if len(text) > 10000: # Reasonable limit
151
+ raise ValueError("Input text too long (max 10,000 characters)")
152
+
153
+ # Basic sanitization
154
+ text = text.replace('\x00', '') # Remove null bytes
155
+
156
+ return text
157
+
158
+ def _analyze_with_textblob(self, text: str) -> SentimentResult:
159
+ """
160
+ Analyze sentiment using TextBlob.
161
+
162
+ Args:
163
+ text: Text to analyze
164
+
165
+ Returns:
166
+ SentimentResult with classification and confidence
167
+ """
168
+ try:
169
+ blob = TextBlob(text)
170
+ polarity = blob.sentiment.polarity
171
+
172
+ # Convert polarity to label and confidence
173
+ if polarity > 0.1:
174
+ label = SentimentLabel.POSITIVE
175
+ confidence = min(polarity, 1.0)
176
+ elif polarity < -0.1:
177
+ label = SentimentLabel.NEGATIVE
178
+ confidence = min(abs(polarity), 1.0)
179
+ else:
180
+ label = SentimentLabel.NEUTRAL
181
+ confidence = 1.0 - abs(polarity)
182
+
183
+ raw_scores = {
184
+ "polarity": polarity,
185
+ "subjectivity": blob.sentiment.subjectivity
186
+ }
187
+
188
+ return SentimentResult(label, confidence, raw_scores)
189
+
190
+ except Exception as e:
191
+ self.logger.error(f"TextBlob analysis failed: {e}")
192
+ raise RuntimeError(f"Sentiment analysis failed: {e}") from e
193
+
194
+ async def _analyze_with_transformers(self, text: str) -> SentimentResult:
195
+ """
196
+ Analyze sentiment using Transformers.
197
+
198
+ Args:
199
+ text: Text to analyze
200
+
201
+ Returns:
202
+ SentimentResult with classification and confidence
203
+ """
204
+ try:
205
+ await self._load_transformer_model()
206
+
207
+ # Run inference in thread pool
208
+ loop = asyncio.get_running_loop()
209
+ results = await loop.run_in_executor(
210
+ self.executor,
211
+ lambda: self._transformer_pipeline(text)
212
+ )
213
+
214
+ # Process results
215
+ scores = {result['label'].lower(): result['score'] for result in results[0]}
216
+
217
+ # Map model labels to our labels
218
+ label_mapping = {
219
+ 'positive': SentimentLabel.POSITIVE,
220
+ 'negative': SentimentLabel.NEGATIVE,
221
+ 'neutral': SentimentLabel.NEUTRAL,
222
+ 'label_0': SentimentLabel.NEGATIVE, # Some models use numeric labels
223
+ 'label_1': SentimentLabel.NEUTRAL,
224
+ 'label_2': SentimentLabel.POSITIVE
225
+ }
226
+
227
+ # Find best match
228
+ best_score = 0
229
+ best_label = SentimentLabel.NEUTRAL
230
+
231
+ for model_label, score in scores.items():
232
+ if model_label in label_mapping and score > best_score:
233
+ best_score = score
234
+ best_label = label_mapping[model_label]
235
+
236
+ return SentimentResult(best_label, best_score, scores)
237
+
238
+ except Exception as e:
239
+ self.logger.error(f"Transformers analysis failed: {e}")
240
+ raise RuntimeError(f"Sentiment analysis failed: {e}") from e
241
+
242
+ async def analyze(self, text: str) -> SentimentResult:
243
+ """
244
+ Analyze sentiment of input text.
245
+
246
+ Args:
247
+ text: Text to analyze
248
+
249
+ Returns:
250
+ SentimentResult with label, confidence, and raw scores
251
+
252
+ Raises:
253
+ ValueError: If input is invalid
254
+ RuntimeError: If analysis fails
255
+ """
256
+ # Validate input
257
+ text = self._validate_input(text)
258
+
259
+ try:
260
+ if self.backend == "transformers":
261
+ return await self._analyze_with_transformers(text)
262
+ elif self.backend == "textblob":
263
+ # Run TextBlob in thread pool since it's CPU-bound
264
+ loop = asyncio.get_running_loop()
265
+ return await loop.run_in_executor(
266
+ self.executor,
267
+ self._analyze_with_textblob,
268
+ text
269
+ )
270
+ else:
271
+ raise RuntimeError(f"Unknown backend: {self.backend}")
272
+
273
+ except Exception as e:
274
+ self.logger.error(f"Sentiment analysis failed for text: {text[:100]}... Error: {e}")
275
+ raise
276
+
277
+ async def analyze_batch(self, texts: list[str]) -> list[SentimentResult]:
278
+ """
279
+ Analyze sentiment for multiple texts concurrently.
280
+
281
+ Args:
282
+ texts: List of texts to analyze
283
+
284
+ Returns:
285
+ List of SentimentResult objects
286
+ """
287
+ if not texts:
288
+ return []
289
+
290
+ # Analyze all texts concurrently
291
+ tasks = [self.analyze(text) for text in texts]
292
+ results = await asyncio.gather(*tasks, return_exceptions=True)
293
+
294
+ # Handle exceptions
295
+ processed_results = []
296
+ for i, result in enumerate(results):
297
+ if isinstance(result, Exception):
298
+ self.logger.error(f"Failed to analyze text {i}: {result}")
299
+ # Return neutral result for failed analysis
300
+ processed_results.append(
301
+ SentimentResult(SentimentLabel.NEUTRAL, 0.0, {"error": str(result)})
302
+ )
303
+ else:
304
+ processed_results.append(result)
305
+
306
+ return processed_results
307
+
308
+ def get_info(self) -> Dict[str, Any]:
309
+ """Get information about the analyzer configuration."""
310
+ return {
311
+ "backend": self.backend,
312
+ "model_name": self.model_name if self.backend == "transformers" else None,
313
+ "model_loaded": self._model_loaded,
314
+ "textblob_available": TEXTBLOB_AVAILABLE,
315
+ "transformers_available": TRANSFORMERS_AVAILABLE,
316
+ "cuda_available": torch.cuda.is_available() if TRANSFORMERS_AVAILABLE else False
317
+ }
318
+
319
+ async def cleanup(self) -> None:
320
+ """Clean up resources."""
321
+ self.executor.shutdown(wait=True)
322
+ self.logger.info("Sentiment analyzer cleaned up")
323
+
324
+
325
+ # Global analyzer instance for reuse
326
+ _global_analyzer: Optional[SentimentAnalyzer] = None
327
+
328
+
329
+ async def get_analyzer(backend: str = "auto") -> SentimentAnalyzer:
330
+ """
331
+ Get or create global sentiment analyzer instance.
332
+
333
+ Args:
334
+ backend: Analysis backend to use
335
+
336
+ Returns:
337
+ SentimentAnalyzer instance
338
+ """
339
+ global _global_analyzer
340
+
341
+ # Note: the backend argument is honored only on the first call; later
+ # calls reuse the existing instance regardless of the requested backend.
+ if _global_analyzer is None:
342
+ _global_analyzer = SentimentAnalyzer(backend=backend)
343
+
344
+ return _global_analyzer
345
+
346
+
347
+ async def analyze_sentiment(text: str, backend: str = "auto") -> Dict[str, Any]:
348
+ """
349
+ Convenience function for sentiment analysis.
350
+
351
+ Args:
352
+ text: Text to analyze
353
+ backend: Analysis backend to use
354
+
355
+ Returns:
356
+ Dictionary with sentiment analysis results
357
+ """
358
+ analyzer = await get_analyzer(backend)
359
+ result = await analyzer.analyze(text)
360
+ return result.to_dict()
361
+
362
+
363
+ if __name__ == "__main__":
364
+ # Example usage
365
+ async def main():
366
+ analyzer = SentimentAnalyzer(backend="textblob")
367
+
368
+ test_texts = [
369
+ "I love this product! It's amazing!",
370
+ "This is terrible and I hate it.",
371
+ "It's okay, nothing special.",
372
+ "The weather is nice today."
373
+ ]
374
+
375
+ for text in test_texts:
376
+ result = await analyzer.analyze(text)
377
+ print(f"Text: {text}")
378
+ print(f"Result: {result.to_dict()}")
379
+ print("-" * 50)
380
+
381
+ await analyzer.cleanup()
382
+
383
+ asyncio.run(main())
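The polarity thresholds used in `_analyze_with_textblob` (±0.1 cutoffs, confidence derived from the polarity magnitude) can be sketched in isolation. The `classify` helper below is hypothetical — it mirrors the mapping logic above without the TextBlob dependency, and is not part of the module's API:

```python
def classify(polarity: float) -> tuple:
    """Map a polarity score in [-1.0, 1.0] to a (label, confidence) pair."""
    if polarity > 0.1:
        return "positive", min(polarity, 1.0)
    if polarity < -0.1:
        return "negative", min(abs(polarity), 1.0)
    # Weak polarity is treated as neutral; confidence shrinks as the
    # score approaches either cutoff.
    return "neutral", 1.0 - abs(polarity)

print(classify(0.5))    # ('positive', 0.5)
print(classify(-0.25))  # ('negative', 0.25)
print(classify(0.0))    # ('neutral', 1.0)
```

One consequence of this scheme: scores just inside the cutoffs (e.g. polarity 0.09) come out neutral with high confidence, so the neutral band is deliberately narrow.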
src/tools.py ADDED
@@ -0,0 +1,421 @@
1
+ """
2
+ MCP tool definitions for sentiment analysis server.
3
+
4
+ This module defines the tools available through the Model Context Protocol,
5
+ including sentiment analysis, batch processing, and analyzer information tools.
6
+ """
7
+
8
+ import logging
9
+ from typing import Dict, Any, List, Optional
10
+ from pydantic import BaseModel, Field
11
+ import asyncio
12
+
13
+ from .sentiment_analyzer import get_analyzer, SentimentAnalyzer
14
+
15
+
16
+ class SentimentAnalysisInput(BaseModel):
17
+ """Input schema for sentiment analysis tool."""
18
+ text: str = Field(..., description="Text to analyze for sentiment", min_length=1, max_length=10000)
19
+ backend: Optional[str] = Field("auto", description="Analysis backend: 'textblob', 'transformers', or 'auto'")
20
+
21
+
22
+ class BatchSentimentAnalysisInput(BaseModel):
23
+ """Input schema for batch sentiment analysis tool."""
24
+ texts: List[str] = Field(..., description="List of texts to analyze for sentiment", min_items=1, max_items=100)
25
+ backend: Optional[str] = Field("auto", description="Analysis backend: 'textblob', 'transformers', or 'auto'")
26
+
27
+
28
+ class AnalyzerInfoInput(BaseModel):
29
+ """Input schema for analyzer info tool."""
30
+ backend: Optional[str] = Field("auto", description="Backend to get info for")
31
+
32
+
33
+ class MCPTools:
34
+ """
35
+ MCP tool registry and handlers for sentiment analysis.
36
+
37
+ This class manages the registration and execution of tools available
38
+ through the Model Context Protocol interface.
39
+ """
40
+
41
+ def __init__(self):
42
+ self.logger = logging.getLogger(__name__)
43
+ self._tools = {}
44
+ self._register_tools()
45
+
46
+ def _register_tools(self) -> None:
47
+ """Register all available MCP tools."""
48
+ self._tools = {
49
+ "analyze_sentiment": {
50
+ "name": "analyze_sentiment",
51
+ "description": "Analyze the sentiment of a given text",
52
+ "inputSchema": {
53
+ "type": "object",
54
+ "properties": {
55
+ "text": {
56
+ "type": "string",
57
+ "description": "Text to analyze for sentiment",
58
+ "minLength": 1,
59
+ "maxLength": 10000
60
+ },
61
+ "backend": {
62
+ "type": "string",
63
+ "description": "Analysis backend: 'textblob', 'transformers', or 'auto'",
64
+ "enum": ["textblob", "transformers", "auto"],
65
+ "default": "auto"
66
+ }
67
+ },
68
+ "required": ["text"]
69
+ },
70
+ "handler": self._handle_analyze_sentiment
71
+ },
72
+
73
+ "analyze_sentiment_batch": {
74
+ "name": "analyze_sentiment_batch",
75
+ "description": "Analyze sentiment for multiple texts in batch",
76
+ "inputSchema": {
77
+ "type": "object",
78
+ "properties": {
79
+ "texts": {
80
+ "type": "array",
81
+ "description": "List of texts to analyze for sentiment",
82
+ "items": {
83
+ "type": "string",
84
+ "minLength": 1,
85
+ "maxLength": 10000
86
+ },
87
+ "minItems": 1,
88
+ "maxItems": 100
89
+ },
90
+ "backend": {
91
+ "type": "string",
92
+ "description": "Analysis backend: 'textblob', 'transformers', or 'auto'",
93
+ "enum": ["textblob", "transformers", "auto"],
94
+ "default": "auto"
95
+ }
96
+ },
97
+ "required": ["texts"]
98
+ },
99
+ "handler": self._handle_analyze_sentiment_batch
100
+ },
101
+
102
+ "get_analyzer_info": {
103
+ "name": "get_analyzer_info",
104
+ "description": "Get information about the sentiment analyzer configuration",
105
+ "inputSchema": {
106
+ "type": "object",
107
+ "properties": {
108
+ "backend": {
109
+ "type": "string",
110
+ "description": "Backend to get info for",
111
+ "enum": ["textblob", "transformers", "auto"],
112
+ "default": "auto"
113
+ }
114
+ },
115
+ "required": []
116
+ },
117
+ "handler": self._handle_get_analyzer_info
118
+ },
119
+
120
+ "health_check": {
121
+ "name": "health_check",
122
+ "description": "Check the health status of the sentiment analysis service",
123
+ "inputSchema": {
124
+ "type": "object",
125
+ "properties": {},
126
+ "required": []
127
+ },
128
+ "handler": self._handle_health_check
129
+ }
130
+ }
131
+
132
+ self.logger.info(f"Registered {len(self._tools)} MCP tools")
133
+
134
+ def get_tools(self) -> List[Dict[str, Any]]:
135
+ """
136
+ Get list of available tools for MCP protocol.
137
+
138
+ Returns:
139
+ List of tool definitions
140
+ """
141
+ return [
142
+ {
143
+ "name": tool["name"],
144
+ "description": tool["description"],
145
+ "inputSchema": tool["inputSchema"]
146
+ }
147
+ for tool in self._tools.values()
148
+ ]
149
+
150
+ async def call_tool(self, name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
151
+ """
152
+ Call a registered tool with given arguments.
153
+
154
+ Args:
155
+ name: Tool name
156
+ arguments: Tool arguments
157
+
158
+ Returns:
159
+ Tool execution result
160
+
161
+ Raises:
162
+ ValueError: If tool not found or arguments invalid
163
+ RuntimeError: If tool execution fails
164
+ """
165
+ if name not in self._tools:
166
+ raise ValueError(f"Tool '{name}' not found. Available tools: {list(self._tools.keys())}")
167
+
168
+ tool = self._tools[name]
169
+ handler = tool["handler"]
170
+
171
+ try:
172
+ self.logger.info(f"Calling tool '{name}' with arguments: {arguments}")
173
+ result = await handler(arguments)
174
+ self.logger.info(f"Tool '{name}' completed successfully")
175
+ return result
176
+
177
+ except Exception as e:
178
+ self.logger.error(f"Tool '{name}' failed: {e}")
179
+ raise RuntimeError(f"Tool execution failed: {e}") from e
180
+
181
+ async def _handle_analyze_sentiment(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
182
+ """
183
+ Handle sentiment analysis tool call.
184
+
185
+ Args:
186
+ arguments: Tool arguments containing text and optional backend
187
+
188
+ Returns:
189
+ Sentiment analysis result
190
+ """
191
+ try:
192
+ # Validate input
193
+ input_data = SentimentAnalysisInput(**arguments)
194
+
195
+ # Get analyzer and perform analysis
196
+ analyzer = await get_analyzer(input_data.backend)
197
+ result = await analyzer.analyze(input_data.text)
198
+
199
+ return {
200
+ "success": True,
201
+ "result": result.to_dict(),
202
+ "metadata": {
203
+ "backend": analyzer.backend,
204
+ "text_length": len(input_data.text),
205
+ "model_info": analyzer.get_info()
206
+ }
207
+ }
208
+
209
+ except Exception as e:
210
+ return {
211
+ "success": False,
212
+ "error": str(e),
213
+ "error_type": type(e).__name__
214
+ }
215
+
216
+ async def _handle_analyze_sentiment_batch(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
217
+ """
218
+ Handle batch sentiment analysis tool call.
219
+
220
+ Args:
221
+ arguments: Tool arguments containing texts and optional backend
222
+
223
+ Returns:
224
+ Batch sentiment analysis results
225
+ """
226
+ try:
227
+ # Validate input
228
+ input_data = BatchSentimentAnalysisInput(**arguments)
229
+
230
+ # Get analyzer and perform batch analysis
231
+ analyzer = await get_analyzer(input_data.backend)
232
+ results = await analyzer.analyze_batch(input_data.texts)
233
+
234
+ # Convert results to dictionaries
235
+ result_dicts = [result.to_dict() for result in results]
236
+
237
+ # Calculate summary statistics
238
+ labels = [result.label.value for result in results]
239
+ label_counts = {
240
+ "positive": labels.count("positive"),
241
+ "negative": labels.count("negative"),
242
+ "neutral": labels.count("neutral")
243
+ }
244
+
245
+ avg_confidence = sum(result.confidence for result in results) / len(results)
246
+
247
+ return {
248
+ "success": True,
249
+ "results": result_dicts,
250
+ "summary": {
251
+ "total_texts": len(input_data.texts),
252
+ "label_distribution": label_counts,
253
+ "average_confidence": round(avg_confidence, 4)
254
+ },
255
+ "metadata": {
256
+ "backend": analyzer.backend,
257
+ "model_info": analyzer.get_info()
258
+ }
259
+ }
260
+
261
+ except Exception as e:
262
+ return {
263
+ "success": False,
264
+ "error": str(e),
265
+ "error_type": type(e).__name__
266
+ }
267
+
268
+ async def _handle_get_analyzer_info(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
269
+ """
270
+ Handle analyzer info tool call.
271
+
272
+ Args:
273
+ arguments: Tool arguments containing optional backend
274
+
275
+ Returns:
276
+ Analyzer configuration information
277
+ """
278
+ try:
279
+ # Validate input
280
+ input_data = AnalyzerInfoInput(**arguments)
281
+
282
+ # Get analyzer info
283
+ analyzer = await get_analyzer(input_data.backend)
284
+ info = analyzer.get_info()
285
+
286
+ return {
287
+ "success": True,
288
+ "info": info,
289
+ "available_backends": ["textblob", "transformers", "auto"],
290
+ "recommended_backend": "transformers" if info.get("transformers_available") else "textblob"
291
+ }
292
+
293
+ except Exception as e:
294
+ return {
295
+ "success": False,
296
+ "error": str(e),
297
+ "error_type": type(e).__name__
298
+ }
299
+
300
+ async def _handle_health_check(self, arguments: Dict[str, Any]) -> Dict[str, Any]:
301
+ """
302
+ Handle health check tool call.
303
+
304
+ Args:
305
+ arguments: Tool arguments (empty for health check)
306
+
307
+ Returns:
308
+ Health status information
309
+ """
310
+ try:
311
+ # Test basic functionality
312
+ test_text = "This is a test message for health check."
313
+ analyzer = await get_analyzer("auto")
314
+ result = await analyzer.analyze(test_text)
315
+
316
+ return {
317
+ "success": True,
318
+ "status": "healthy",
319
+ "test_result": result.to_dict(),
320
+ "analyzer_info": analyzer.get_info(),
321
+ "timestamp": asyncio.get_running_loop().time()
322
+ }
323
+
324
+ except Exception as e:
325
+ return {
326
+ "success": False,
327
+ "status": "unhealthy",
328
+ "error": str(e),
329
+ "error_type": type(e).__name__,
330
+ "timestamp": asyncio.get_running_loop().time()
331
+ }
332
+
333
+
334
+ # Global tools instance
335
+ _global_tools: Optional[MCPTools] = None
336
+
337
+
338
+ def get_tools() -> MCPTools:
339
+ """
340
+ Get or create global MCP tools instance.
341
+
342
+ Returns:
343
+ MCPTools instance
344
+ """
345
+ global _global_tools
346
+
347
+ if _global_tools is None:
348
+ _global_tools = MCPTools()
349
+
350
+ return _global_tools
351
+
352
+
353
+ async def list_tools() -> List[Dict[str, Any]]:
354
+ """
355
+ Get list of available MCP tools.
356
+
357
+ Returns:
358
+ List of tool definitions
359
+ """
360
+ tools = get_tools()
361
+ return tools.get_tools()
362
+
363
+
364
+ async def call_tool(name: str, arguments: Dict[str, Any]) -> Dict[str, Any]:
365
+ """
366
+ Call an MCP tool with given arguments.
367
+
368
+ Args:
369
+ name: Tool name
370
+ arguments: Tool arguments
371
+
372
+ Returns:
373
+ Tool execution result
374
+ """
375
+ tools = get_tools()
376
+ return await tools.call_tool(name, arguments)
377
+
378
+
379
+ if __name__ == "__main__":
380
+ # Example usage
381
+ async def main():
382
+ tools = get_tools()
383
+
384
+ # List available tools
385
+ available_tools = tools.get_tools()
386
+ print("Available tools:")
387
+ for tool in available_tools:
388
+ print(f"- {tool['name']}: {tool['description']}")
389
+
390
+ print("\n" + "="*50 + "\n")
391
+
392
+ # Test sentiment analysis tool
393
+ result = await tools.call_tool("analyze_sentiment", {
394
+ "text": "I love this new feature! It's absolutely amazing!",
395
+ "backend": "textblob"
396
+ })
397
+ print("Sentiment analysis result:")
398
+ print(result)
399
+
400
+ print("\n" + "="*50 + "\n")
401
+
402
+ # Test batch analysis
403
+ batch_result = await tools.call_tool("analyze_sentiment_batch", {
404
+ "texts": [
405
+ "This is great!",
406
+ "I hate this.",
407
+ "It's okay, I guess."
408
+ ],
409
+ "backend": "textblob"
410
+ })
411
+ print("Batch analysis result:")
412
+ print(batch_result)
413
+
414
+ print("\n" + "="*50 + "\n")
415
+
416
+ # Test health check
417
+ health_result = await tools.call_tool("health_check", {})
418
+ print("Health check result:")
419
+ print(health_result)
420
+
421
+ asyncio.run(main())
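Over the wire, an MCP client invokes these tools through JSON-RPC 2.0 `tools/call` requests. The sketch below shows the envelope shapes only — the label and confidence values in the response are illustrative placeholders, not real model output; the server would route `params["name"]` through `MCPTools.call_tool` and wrap the handler's dict in the `result` member:

```python
import json

# JSON-RPC 2.0 request an MCP client would send for analyze_sentiment.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_sentiment",
        "arguments": {"text": "I love this new feature!", "backend": "textblob"},
    },
}

# Corresponding response: the handler's return value becomes "result".
# label/confidence here are placeholders for whatever the backend produces.
response = {
    "jsonrpc": "2.0",
    "id": request["id"],
    "result": {
        "success": True,
        "result": {"label": "positive", "confidence": 0.9, "raw_scores": {}},
    },
}

wire = json.dumps(request)
print(json.loads(wire)["params"]["name"])
```

Matching the response `id` to the request `id` is what lets a client correlate concurrent in-flight calls over a single connection.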