femtowin committed
Commit b55e829 · 1 Parent(s): cdba494
MCP_INTEGRATION_README.md ADDED
@@ -0,0 +1,346 @@
# MCP Integration with Minion Brain (Standalone Version)

This is a **standalone** MCP (Model Context Protocol) integration for Minion's `brain.step` that does **not** depend on the `huggingface_hub` package. It provides a clean, lightweight approach to tool integration.

## Features

- **🔗 MCP Server Support**: Connect to stdio, SSE, and HTTP MCP servers
- **🛠 Tool Adaptation**: Automatically converts MCP tools to a brain.step-compatible format
- **🧠 Brain Integration**: Native tool support in the minion brain.step function
- **🖥 Gradio UI**: Web interface for configuring and testing MCP tools
- **⚙️ Environment Configuration**: Easy setup via environment variables
- **🔒 Independence**: No dependency on huggingface_hub; completely self-contained

## Architecture

### Core Components

1. **BrainTool**: Adapter class that wraps MCP tools for brain.step compatibility
2. **MCPBrainClient**: Main client for managing MCP server connections
3. **Local Tools**: Built-in local tools (calculator, final answer, etc.)

### Tool Format Conversion

MCP tools are automatically converted to the format expected by brain.step:

```python
# MCP Tool (from server)
{
    "name": "calculator",
    "description": "Perform basic arithmetic operations",
    "inputSchema": {
        "type": "object",
        "properties": {
            "expression": {"type": "string"}
        }
    }
}

# Converted to BrainTool (for brain.step)
BrainTool(
    name="calculator",
    description="Perform basic arithmetic operations",
    parameters={...},
    session=mcp_session
)
```
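
A `BrainTool` also exposes a ChatCompletion-style function spec via `to_function_spec()` (implemented in `mcp_integration.py`); a minimal sketch of the shape it returns:

```python
# Shape of the spec emitted by BrainTool.to_function_spec()
spec = brain_tool.to_function_spec()
# {"type": "function",
#  "function": {"name": "calculator",
#               "description": "Perform basic arithmetic operations",
#               "parameters": {...}}}
```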

## Installation

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

**Note**: This version does **not** require `huggingface_hub`, so there are fewer dependencies to install.

### 2. Environment Configuration

Create a `.env` file:

```bash
# LLM Configuration
GPT_4O_API_TYPE=azure
GPT_4O_API_KEY=your_api_key_here
GPT_4O_BASE_URL=https://your-endpoint.openai.azure.com/
GPT_4O_API_VERSION=2024-06-01
GPT_4O_MODEL=gpt-4o

# MCP Server Configuration (optional)
MCP_SSE_URL=http://localhost:8080/sse
MCP_STDIO_COMMAND=python example_mcp_server.py
```

### 3. Quick Test

```bash
# Test local tools and MCP integration
python simple_mcp_test.py
```

## Usage

### Basic Usage

```python
from mcp_integration import MCPBrainClient, create_final_answer_tool, create_calculator_tool

async def example_usage():
    # Create local tools
    local_tools = [
        create_calculator_tool(),
        create_final_answer_tool()
    ]

    # Optionally add MCP tools. Keep the client open while the tools are in
    # use, because each BrainTool calls back into its MCP session.
    async with MCPBrainClient() as mcp_client:
        mcp_tools = []
        try:
            await mcp_client.add_mcp_server("sse", url="http://localhost:8080/sse")
            mcp_tools = mcp_client.get_tools_for_brain()
        except Exception:
            pass  # It's okay if there's no MCP server

        # Combine all tools
        all_tools = local_tools + mcp_tools

        # Use in brain.step
        obs, score, *_ = await brain.step(
            query="Calculate 234*568",
            route="raw",
            check=False,
            tools=all_tools
        )

    print(f"Result: {obs}")
```

### Using Only Local Tools

If you don't need to connect to external MCP servers:

```python
from mcp_integration import create_calculator_tool, create_final_answer_tool

# Create local tools
tools = [
    create_calculator_tool(),
    create_final_answer_tool()
]

# Use directly in brain.step
obs, score, *_ = await brain.step(
    query="Calculate 234*568",
    tools=tools
)
```

### Connecting to Different MCP Server Types

#### SSE Server
```python
await mcp_client.add_mcp_server(
    "sse",
    url="http://localhost:8080/sse",
    headers={"Authorization": "Bearer token"},
    timeout=30.0
)
```

#### Stdio Server
```python
await mcp_client.add_mcp_server(
    "stdio",
    command="python",
    args=["example_mcp_server.py"],
    cwd="/path/to/server"
)
```

#### HTTP Server
```python
await mcp_client.add_mcp_server(
    "http",
    url="http://localhost:8080/http",
    timeout=timedelta(seconds=30)
)
```

### Creating Custom Local Tools

```python
def create_custom_tool():
    class CustomSession:
        async def call_tool(self, name: str, args: Dict[str, Any]) -> Dict[str, Any]:
            # Your tool logic
            result = f"Processed: {args}"
            return {
                "content": [{"type": "text", "text": result}]
            }

    return BrainTool(
        name="custom_tool",
        description="A custom tool",
        parameters={
            "type": "object",
            "properties": {
                "input": {"type": "string"}
            }
        },
        session=CustomSession()
    )
```
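
A quick usage sketch for the custom tool above; the returned string is whatever `format_mcp_result` produces from the session's response:

```python
# Hedged usage sketch: BrainTool.__call__ forwards kwargs to session.call_tool
tool = create_custom_tool()
result = await tool(input="hello")
print(result)  # formatted by format_mcp_result
```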

## Built-in Tools

### 1. Calculator Tool
```python
calculator_tool = create_calculator_tool()
result = await calculator_tool(expression="234 * 568")
# Output: "Calculation result: 234 * 568 = 132912"
```

### 2. Final Answer Tool
```python
final_tool = create_final_answer_tool()
result = await final_tool(answer="Calculation completed, result is 132912")
# Output: "Calculation completed, result is 132912"
```

### 3. Filesystem Tool (MCP)
```python
from mcp_integration import add_filesystem_tool

async with MCPBrainClient() as mcp_client:
    # Add the filesystem tool and specify the paths it is allowed to access
    await add_filesystem_tool(mcp_client, workspace_paths=[
        "/Users/femtozheng/workspace",
        "/Users/femtozheng/python-project/minion-agent"
    ])

    # Get the tools and use them
    tools = mcp_client.get_tools_for_brain()
    # Filesystem tools typically include: read_file, write_file, list_directory, etc.
```

**Filesystem Tool Features**:
- 📖 **read_file**: Read file contents (see the sketch after this list)
- ✏️ **write_file**: Write to files
- 📂 **list_directory**: List directory contents
- 🔍 **search_files**: Search for files
- 🔒 **Security restriction**: Only pre-configured paths can be accessed
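
A minimal sketch of invoking a discovered filesystem tool directly; `read_file` and its `path` parameter are assumptions about the filesystem server's schema, so check the discovered tool names and parameters first:

```python
# Hedged sketch: call a discovered filesystem tool directly
read_tool = next(t for t in tools if t.name == "read_file")
content = await read_tool(path="/Users/femtozheng/workspace/README.md")
print(content)
```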

## Run the Complete Application

```bash
python app_with_mcp.py
```

Then open `http://localhost:7860` in your browser and enable the "MCP Tools" option.

## Integration Patterns

### 1. Tool Discovery
The system automatically discovers and registers tools from connected MCP servers:

```python
tools = mcp_client.get_tools_for_brain()
print([tool.name for tool in tools])
```

### 2. Error Handling
Tools include built-in error handling:

```python
result = await tool(invalid_param="value")
# Returns: "Error: <error_description>"
```

### 3. Tool Execution
Tools execute asynchronously and return formatted results:

```python
result = await tool(expression="2+2")
# The result is automatically formatted for brain.step use
```

## Advantages

### Compared to the huggingface_hub-dependent version:

1. **Lighter**: Drops a large dependency package
2. **Simpler**: Focuses on core MCP integration functionality
3. **More flexible**: Not constrained by huggingface_hub versions
4. **Faster**: Less import and initialization time
5. **More independent**: Runs in any environment

## Troubleshooting

### Common Issues

1. **MCP Server Connection Failed** (see the probe sketch after this list)
   - Check the server URL and port
   - Verify the server is running
   - Check network connectivity

2. **Tool Not Found**
   - Verify the MCP server exposes tools
   - Check the tool name spelling
   - Ensure server initialization completed

3. **Import Errors**
   - Install all required dependencies
   - Check Python version compatibility
   - Verify the mcp package installation
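
For connection failures, a small probe can confirm a server is reachable before wiring it into the app; a sketch using this repo's `MCPBrainClient` (the URL is an example):

```python
import asyncio
from mcp_integration import MCPBrainClient

async def probe_sse(url: str) -> bool:
    """Try to connect and list tools; report the failure reason if any."""
    try:
        async with MCPBrainClient() as client:
            await client.add_mcp_server("sse", url=url)
            print("Reachable; tools:", [t.name for t in client.get_tools_for_brain()])
            return True
    except Exception as e:
        print(f"Connection failed: {e}")
        return False

asyncio.run(probe_sse("http://localhost:8080/sse"))
```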

### Debug Mode

Enable debug logging to troubleshoot issues:

```python
import logging
logging.basicConfig(level=logging.DEBUG)
```

## Advanced Configuration

### Tool Filtering

Filter tools by name or type:

```python
# Keep only specific tools
filtered_tools = [
    tool for tool in tools
    if tool.name in ["calculator", "final_answer"]
]
```

### Tool Prioritization

Organize tools by priority (note that the ordered list must contain the tool objects themselves, not just their names):

```python
# High-priority tools first
priority_names = ["final_answer", "calculator"]
priority_tools = [t for t in tools if t.name in priority_names]
other_tools = [t for t in tools if t.name not in priority_names]
ordered_tools = priority_tools + other_tools
```

## Example Projects

Check `example_mcp_server.py` to learn how to create MCP servers, or run `simple_mcp_test.py` for complete integration examples.

## Contributing

To extend the MCP integration:

1. Implement new tool adapters in `BrainTool`
2. Add server type support in `MCPBrainClient`
3. Enhance error handling and logging
4. Add new tool creation utilities

## License

MIT License - see LICENSE file for details.
README.md CHANGED
@@ -5,7 +5,7 @@ colorFrom: yellow
 colorTo: purple
 sdk: gradio
 sdk_version: 5.32.0
-app_file: app1.py
+app_file: app_with_mcp.py
 pinned: false
 license: mit
 short_description: minion running in space
app_with_mcp.py ADDED
@@ -0,0 +1,461 @@
import gradio as gr
import asyncio
import os
from typing import Dict, Any, List, Optional
from dotenv import load_dotenv

from minion import config
from minion.main import LocalPythonEnv
from minion.main.rpyc_python_env import RpycPythonEnv
from minion.main.brain import Brain
from minion.providers import create_llm_provider

# Import our MCP integration
from mcp_integration import MCPBrainClient, create_final_answer_tool, BrainTool, add_filesystem_tool

# Load .env file
load_dotenv()

class LLMConfig:
    def __init__(self, api_type: str, api_key: str, base_url: str, api_version: str,
                 model: str, temperature: float = 0.7, max_tokens: int = 4000,
                 vision_enabled: bool = False):
        self.api_type = api_type
        self.api_key = api_key
        self.base_url = base_url
        self.api_version = api_version
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.vision_enabled = vision_enabled

def get_preset_configs():
    """Get preset configurations"""
    presets = {
        "gpt-4o": LLMConfig(
            api_type=os.getenv("GPT_4O_API_TYPE", "azure"),
            api_key=os.getenv("GPT_4O_API_KEY", ""),
            base_url=os.getenv("GPT_4O_BASE_URL", ""),
            api_version=os.getenv("GPT_4O_API_VERSION", "2024-06-01"),
            model=os.getenv("GPT_4O_MODEL", "gpt-4o"),
            temperature=float(os.getenv("GPT_4O_TEMPERATURE", "0")),
            max_tokens=int(os.getenv("GPT_4O_MAX_TOKENS", "4000"))
        ),
        "gpt-4o-mini": LLMConfig(
            api_type=os.getenv("GPT_4O_MINI_API_TYPE", "azure"),
            api_key=os.getenv("GPT_4O_MINI_API_KEY", ""),
            base_url=os.getenv("GPT_4O_MINI_BASE_URL", ""),
            api_version=os.getenv("GPT_4O_MINI_API_VERSION", "2024-06-01"),
            model=os.getenv("GPT_4O_MINI_MODEL", "gpt-4o-mini"),
            temperature=float(os.getenv("GPT_4O_MINI_TEMPERATURE", "0.1")),
            max_tokens=int(os.getenv("GPT_4O_MINI_MAX_TOKENS", "4000"))
        ),
        "gpt-4.1": LLMConfig(
            api_type=os.getenv("GPT_41_API_TYPE", "azure"),
            api_key=os.getenv("GPT_41_API_KEY", ""),
            base_url=os.getenv("GPT_41_BASE_URL", ""),
            api_version=os.getenv("GPT_41_API_VERSION", "2025-03-01-preview"),
            model=os.getenv("GPT_41_MODEL", "gpt-4.1"),
            temperature=float(os.getenv("GPT_41_TEMPERATURE", "0.7")),
            max_tokens=int(os.getenv("GPT_41_MAX_TOKENS", "4000"))
        ),
        "o4-mini": LLMConfig(
            api_type=os.getenv("O4_MINI_API_TYPE", "azure"),
            api_key=os.getenv("O4_MINI_API_KEY", ""),
            base_url=os.getenv("O4_MINI_BASE_URL", ""),
            api_version=os.getenv("O4_MINI_API_VERSION", "2025-03-01-preview"),
            model=os.getenv("O4_MINI_MODEL", "o4-mini"),
            temperature=float(os.getenv("O4_MINI_TEMPERATURE", "0.7")),
            max_tokens=int(os.getenv("O4_MINI_MAX_TOKENS", "4000"))
        )
    }
    return presets

def get_default_config():
    """Get default configuration"""
    return LLMConfig(
        api_type=os.getenv("DEFAULT_API_TYPE", "azure"),
        api_key=os.getenv("DEFAULT_API_KEY", ""),
        base_url=os.getenv("DEFAULT_BASE_URL", ""),
        api_version=os.getenv("DEFAULT_API_VERSION", "2024-06-01"),
        model=os.getenv("DEFAULT_MODEL", "gpt-4o"),
        temperature=float(os.getenv("DEFAULT_TEMPERATURE", "0.7")),
        max_tokens=int(os.getenv("DEFAULT_MAX_TOKENS", "4000"))
    )

def get_available_routes():
    """Get available route options for the current minion system"""
    return [
        "",        # Auto route selection (empty for automatic)
        "raw",     # Raw LLM output without processing
        "native",  # Native minion processing
        "cot",     # Chain of Thought reasoning
        "dcot",    # Dynamic Chain of Thought
        "plan",    # Planning-based approach
        "python"   # Python code execution
    ]

def create_custom_llm_config(api_type: str, api_key: str, base_url: str,
                             api_version: str, model: str, temperature: float,
                             max_tokens: int) -> Dict[str, Any]:
    """Create custom LLM configuration"""
    return {
        'api_type': api_type,
        'api_key': api_key,
        'base_url': base_url,
        'api_version': api_version,
        'model': model,
        'temperature': temperature,
        'max_tokens': max_tokens,
        'vision_enabled': False
    }

def build_brain_with_config(llm_config_dict: Dict[str, Any]):
    """Build brain with the specified configuration"""
    # Create a config object similar to LLMConfig
    class Config:
        def __init__(self, config_dict):
            for key, value in config_dict.items():
                setattr(self, key, value)

    config_obj = Config(llm_config_dict)
    llm = create_llm_provider(config_obj)
    python_env = LocalPythonEnv(verbose=False)
    brain = Brain(
        python_env=python_env,
        llm=llm,
    )
    return brain

# Global MCP client instance
mcp_client: Optional[MCPBrainClient] = None

async def setup_mcp_tools():
    """Set up MCP tools and connections"""
    global mcp_client

    if mcp_client is None:
        mcp_client = MCPBrainClient()
        await mcp_client.__aenter__()

        # Add filesystem tool (always try to add this)
        try:
            await add_filesystem_tool(mcp_client)
            print("✓ Added filesystem tool")
        except Exception as e:
            print(f"⚠ Failed to add filesystem tool: {e}")

        # Add MCP servers from environment variables
        # Example: SSE server
        sse_url = os.getenv("MCP_SSE_URL")
        if sse_url:
            try:
                await mcp_client.add_mcp_server("sse", url=sse_url)
                print(f"✓ Added SSE server: {sse_url}")
            except Exception as e:
                print(f"⚠ Failed to add SSE server: {e}")

        # Example: Stdio server
        stdio_command = os.getenv("MCP_STDIO_COMMAND")
        if stdio_command:
            try:
                await mcp_client.add_mcp_server("stdio", command=stdio_command)
                print(f"✓ Added stdio server: {stdio_command}")
            except Exception as e:
                print(f"⚠ Failed to add stdio server: {e}")

    return mcp_client

async def get_available_tools() -> List[BrainTool]:
    """Get all available tools, including MCP tools and the final answer tool"""
    try:
        mcp_client = await setup_mcp_tools()
        mcp_tools = mcp_client.get_tools_for_brain()
    except Exception as e:
        print(f"Warning: Failed to setup MCP tools: {e}")
        mcp_tools = []

    # Always add the final answer tool
    final_answer_tool = create_final_answer_tool()

    return mcp_tools + [final_answer_tool]

# Get preset configurations and default configuration
preset_configs = get_preset_configs()
default_config = get_default_config()
available_routes = get_available_routes()

async def minion_respond_async(query: str, preset_model: str, api_type: str,
                               api_key: str, base_url: str, api_version: str,
                               model: str, temperature: float, max_tokens: int,
                               route: str, query_type: str, check_enabled: bool,
                               use_tools: bool):
    """Respond to a query using the specified configuration, with optional MCP tools"""

    # If a preset model is selected, use the preset configuration
    if preset_model != "Custom":
        config_obj = preset_configs.get(preset_model, default_config)
        llm_config_dict = {
            'api_type': config_obj.api_type,
            'api_key': config_obj.api_key,
            'base_url': config_obj.base_url,
            'api_version': config_obj.api_version,
            'model': config_obj.model,
            'temperature': config_obj.temperature,
            'max_tokens': config_obj.max_tokens,
            'vision_enabled': config_obj.vision_enabled
        }
    else:
        # Use custom configuration
        llm_config_dict = create_custom_llm_config(
            api_type, api_key, base_url, api_version, model, temperature, max_tokens
        )

    brain = build_brain_with_config(llm_config_dict)
    # Handle empty route selection for auto route
    route_param = route if route else None

    # Build kwargs for brain.step
    kwargs = {'query': query, 'route': route_param, 'check': check_enabled}

    # Add query_type to kwargs if route is python
    if route == "python" and query_type:
        kwargs['query_type'] = query_type

    # Add tools if enabled
    if use_tools:
        try:
            tools = await get_available_tools()
            kwargs['tools'] = tools
            print(f"Using {len(tools)} tools: {[tool.name for tool in tools]}")
        except Exception as e:
            print(f"Warning: Failed to get tools: {e}")

    obs, score, *_ = await brain.step(**kwargs)
    return obs

def minion_respond(query: str, preset_model: str, api_type: str, api_key: str,
                   base_url: str, api_version: str, model: str, temperature: float,
                   max_tokens: int, route: str, query_type: str, check_enabled: bool,
                   use_tools: bool):
    """Gradio sync interface; automatically schedules the async call"""
    return asyncio.run(minion_respond_async(
        query, preset_model, api_type, api_key, base_url,
        api_version, model, temperature, max_tokens, route, query_type, check_enabled,
        use_tools
    ))

def update_fields(preset_model: str):
    """Update other fields when a preset model is selected"""
    if preset_model == "Custom":
        # Return default values and let the user configure them
        return (
            default_config.api_type,
            "",  # Don't display the API key
            default_config.base_url,
            default_config.api_version,
            default_config.model,
            default_config.temperature,
            default_config.max_tokens
        )
    else:
        config_obj = preset_configs.get(preset_model, default_config)
        # Ensure API type is one of the valid choices
        api_type = config_obj.api_type if config_obj.api_type in ["azure", "openai", "groq", "ollama", "anthropic", "gemini"] else "azure"
        return (
            api_type,
            "***hidden***",  # Hide API key display
            config_obj.base_url,
            config_obj.api_version,
            config_obj.model,
            config_obj.temperature,
            config_obj.max_tokens
        )

def update_query_type_visibility(route: str):
    """Show the query_type dropdown only when route is python"""
    return gr.update(visible=(route == "python"))

async def get_tool_status():
    """Get the status of available tools"""
    try:
        tools = await get_available_tools()
        return f"Available tools: {', '.join([tool.name for tool in tools])}"
    except Exception as e:
        return f"Error getting tools: {str(e)}"

def check_tools():
    """Sync wrapper for the tool status check"""
    return asyncio.run(get_tool_status())

# Create Gradio interface
with gr.Blocks(title="Minion Brain Chat with MCP Tools") as demo:
    gr.Markdown("# Minion Brain Chat with MCP Tools\nIntelligent Q&A powered by Minion1 Brain with MCP tool support")

    with gr.Row():
        with gr.Column(scale=2):
            query_input = gr.Textbox(
                label="Enter your question",
                placeholder="Please enter your question...",
                lines=3
            )
            submit_btn = gr.Button("Submit", variant="primary")

            # Tool status
            with gr.Row():
                tool_status_btn = gr.Button("Check Available Tools", size="sm")
                tool_status_output = gr.Textbox(
                    label="Tool Status",
                    lines=2,
                    interactive=False
                )

            # Move Answer section to the left column, closer to the question input
            output = gr.Textbox(
                label="Answer",
                lines=10,
                show_copy_button=True
            )

        with gr.Column(scale=1):
            # Tool settings
            use_tools_checkbox = gr.Checkbox(
                label="Enable MCP Tools",
                value=True,
                info="Use Model Context Protocol tools"
            )

            # Move route selection to the front
            route_dropdown = gr.Dropdown(
                label="Route",
                choices=available_routes,
                value="",
                info="empty: auto select, raw: direct LLM, native: standard, cot: chain of thought, dcot: dynamic cot, plan: planning, python: code execution"
            )

            # Add query_type option, visible only when route="python"
            query_type_dropdown = gr.Dropdown(
                label="Query Type",
                choices=["calculate", "code_solution", "generate"],
                value="calculate",
                visible=False,
                info="Type of query for python route"
            )

            # Add check option
            check_checkbox = gr.Checkbox(
                label="Enable Check",
                value=False,
                info="Enable output verification and validation"
            )

            preset_dropdown = gr.Dropdown(
                label="Preset Model",
                choices=["Custom"] + list(preset_configs.keys()),
                value="gpt-4o",
                info="Select preset configuration or custom"
            )

            api_type_input = gr.Dropdown(
                label="API Type",
                choices=["azure", "openai", "groq", "ollama", "anthropic", "gemini"],
                value=default_config.api_type,
                info="Select API provider type"
            )

            api_key_input = gr.Textbox(
                label="API Key",
                value="***hidden***",
                type="password",
                info="Your API key"
            )

            base_url_input = gr.Textbox(
                label="Base URL",
                value=default_config.base_url,
                info="API base URL"
            )

            api_version_input = gr.Textbox(
                label="API Version",
                value=default_config.api_version,
                info="API version (required for Azure)"
            )

            model_input = gr.Textbox(
                label="Model",
                value=default_config.model,
                info="Model name"
            )

            temperature_input = gr.Slider(
                label="Temperature",
                minimum=0.0,
                maximum=2.0,
                value=default_config.temperature,
                step=0.1,
                info="Control output randomness"
            )

            max_tokens_input = gr.Slider(
                label="Max Tokens",
                minimum=100,
                maximum=8000,
                value=default_config.max_tokens,
                step=100,
                info="Maximum number of tokens to generate"
            )

    # Update other fields when the preset model changes
    preset_dropdown.change(
        fn=update_fields,
        inputs=[preset_dropdown],
        outputs=[api_type_input, api_key_input, base_url_input,
                 api_version_input, model_input, temperature_input, max_tokens_input]
    )

    # Update query_type visibility when the route changes
    route_dropdown.change(
        fn=update_query_type_visibility,
        inputs=[route_dropdown],
        outputs=[query_type_dropdown]
    )

    # Tool status check
    tool_status_btn.click(
        fn=check_tools,
        outputs=[tool_status_output]
    )

    # Submit button event
    submit_btn.click(
        fn=minion_respond,
        inputs=[query_input, preset_dropdown, api_type_input, api_key_input,
                base_url_input, api_version_input, model_input, temperature_input,
                max_tokens_input, route_dropdown, query_type_dropdown, check_checkbox,
                use_tools_checkbox],
        outputs=[output]
    )

    # Enter key submit
    query_input.submit(
        fn=minion_respond,
        inputs=[query_input, preset_dropdown, api_type_input, api_key_input,
                base_url_input, api_version_input, model_input, temperature_input,
                max_tokens_input, route_dropdown, query_type_dropdown, check_checkbox,
                use_tools_checkbox],
        outputs=[output]
    )

# Cleanup function
async def cleanup_on_exit():
    """Clean up the MCP client on exit"""
    global mcp_client
    if mcp_client:
        await mcp_client.cleanup()

if __name__ == "__main__":
    try:
        demo.launch(mcp_server=True)
    finally:
        asyncio.run(cleanup_on_exit())
app1.py → app_without_mcp.py RENAMED
@@ -132,7 +132,7 @@ available_routes = get_available_routes()
 async def minion_respond_async(query: str, preset_model: str, api_type: str,
                                api_key: str, base_url: str, api_version: str,
                                model: str, temperature: float, max_tokens: int,
-                               route: str, check_enabled: bool):
+                               route: str, query_type: str, check_enabled: bool):
     """Respond to query using specified configuration"""
 
     # If a preset model is selected, use preset configuration
@@ -157,16 +157,22 @@ async def minion_respond_async(query: str, preset_model: str, api_type: str,
     brain = build_brain_with_config(llm_config_dict)
     # Handle empty route selection for auto route
     route_param = route if route else None
-    obs, score, *_ = await brain.step(query=query, route=route_param, check=check_enabled)
+
+    # Add query_type to kwargs if route is python
+    kwargs = {'query': query, 'route': route_param, 'check': check_enabled}
+    if route == "python" and query_type:
+        kwargs['query_type'] = query_type
+
+    obs, score, *_ = await brain.step(**kwargs)
     return obs
 
 def minion_respond(query: str, preset_model: str, api_type: str, api_key: str,
                    base_url: str, api_version: str, model: str, temperature: float,
-                   max_tokens: int, route: str, check_enabled: bool):
+                   max_tokens: int, route: str, query_type: str, check_enabled: bool):
     """Gradio sync interface, automatically schedules async"""
     return asyncio.run(minion_respond_async(
         query, preset_model, api_type, api_key, base_url,
-        api_version, model, temperature, max_tokens, route, check_enabled
+        api_version, model, temperature, max_tokens, route, query_type, check_enabled
     ))
 
 def update_fields(preset_model: str):
@@ -184,8 +190,10 @@ def update_fields(preset_model: str):
         )
     else:
         config_obj = preset_configs.get(preset_model, default_config)
+        # Ensure API type is from valid choices
+        api_type = config_obj.api_type if config_obj.api_type in ["azure", "openai", "groq", "ollama", "anthropic", "gemini"] else "azure"
         return (
-            config_obj.api_type,
+            api_type,
             "***hidden***",  # Hide API key display
             config_obj.base_url,
             config_obj.api_version,
@@ -194,6 +202,10 @@ def update_fields(preset_model: str):
             config_obj.max_tokens
         )
 
+def update_query_type_visibility(route: str):
+    """Show query_type dropdown only when route is python"""
+    return gr.update(visible=(route == "python"))
+
 # Create Gradio interface
 with gr.Blocks(title="Minion Brain Chat") as demo:
     gr.Markdown("# Minion Brain Chat\nIntelligent Q&A powered by Minion1 Brain")
@@ -207,6 +219,13 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
             )
             submit_btn = gr.Button("Submit", variant="primary")
 
+            # Move Answer section to left column, closer to question input
+            output = gr.Textbox(
+                label="Answer",
+                lines=10,
+                show_copy_button=True
+            )
+
         with gr.Column(scale=1):
             # Move route selection to the front
             route_dropdown = gr.Dropdown(
@@ -216,6 +235,15 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
                 info="empty: auto select, raw: direct LLM, native: standard, cot: chain of thought, dcot: dynamic cot, plan: planning, python: code execution"
             )
 
+            # Add query_type option, visible only when route="python"
+            query_type_dropdown = gr.Dropdown(
+                label="Query Type",
+                choices=["calculate", "code_solution", "generate"],
+                value="calculate",
+                visible=False,
+                info="Type of query for python route"
+            )
+
             # Add check option
             check_checkbox = gr.Checkbox(
                 label="Enable Check",
@@ -230,10 +258,11 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
                 info="Select preset configuration or custom"
             )
 
-            api_type_input = gr.Textbox(
+            api_type_input = gr.Dropdown(
                 label="API Type",
+                choices=["azure", "openai", "groq", "ollama", "anthropic", "gemini"],
                 value=default_config.api_type,
-                info="openai, azure, ollama, groq etc."
+                info="Select API provider type"
             )
 
             api_key_input = gr.Textbox(
@@ -279,13 +308,6 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
                 info="Maximum number of tokens to generate"
             )
 
-            # Move Answer section up
-            output = gr.Textbox(
-                label="Answer",
-                lines=10,
-                show_copy_button=True
-            )
-
     # Update other fields when preset model changes
     preset_dropdown.change(
         fn=update_fields,
@@ -294,12 +316,19 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
                 api_version_input, model_input, temperature_input, max_tokens_input]
     )
 
+    # Update query_type visibility when route changes
+    route_dropdown.change(
+        fn=update_query_type_visibility,
+        inputs=[route_dropdown],
+        outputs=[query_type_dropdown]
+    )
+
     # Submit button event
     submit_btn.click(
        fn=minion_respond,
        inputs=[query_input, preset_dropdown, api_type_input, api_key_input,
                base_url_input, api_version_input, model_input, temperature_input,
-               max_tokens_input, route_dropdown, check_checkbox],
+               max_tokens_input, route_dropdown, query_type_dropdown, check_checkbox],
        outputs=[output]
     )
 
@@ -308,7 +337,7 @@ with gr.Blocks(title="Minion Brain Chat") as demo:
        fn=minion_respond,
        inputs=[query_input, preset_dropdown, api_type_input, api_key_input,
                base_url_input, api_version_input, model_input, temperature_input,
-               max_tokens_input, route_dropdown, check_checkbox],
+               max_tokens_input, route_dropdown, query_type_dropdown, check_checkbox],
        outputs=[output]
     )
example_mcp_server.py ADDED
@@ -0,0 +1,195 @@
#!/usr/bin/env python3
"""
Example MCP Server for testing with Minion Brain integration

This server provides simple tools for demonstration purposes:
- calculator: Basic arithmetic operations
- echo: Simple echo/repeat functionality
- timestamp: Get current timestamp
"""

import asyncio
import json
import math
from datetime import datetime
from typing import Any, Dict

import mcp.types as types
from mcp.server import NotificationOptions, Server
from mcp.server.models import InitializationOptions
import mcp.server.stdio


# Create server instance
server = Server("example-tools-server")


@server.list_tools()
async def handle_list_tools() -> list[types.Tool]:
    """List available tools."""
    return [
        types.Tool(
            name="calculator",
            description="Perform basic arithmetic calculations",
            inputSchema={
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4')"
                    }
                },
                "required": ["expression"]
            }
        ),
        types.Tool(
            name="echo",
            description="Echo back the provided text",
            inputSchema={
                "type": "object",
                "properties": {
                    "text": {
                        "type": "string",
                        "description": "Text to echo back"
                    }
                },
                "required": ["text"]
            }
        ),
        types.Tool(
            name="timestamp",
            description="Get current timestamp",
            inputSchema={
                "type": "object",
                "properties": {
                    "format": {
                        "type": "string",
                        "description": "Timestamp format ('iso', 'unix', or 'readable')",
                        "default": "iso"
                    }
                }
            }
        ),
        types.Tool(
            name="math_functions",
            description="Advanced mathematical functions",
            inputSchema={
                "type": "object",
                "properties": {
                    "function": {
                        "type": "string",
                        "description": "Math function to use",
                        "enum": ["sin", "cos", "tan", "log", "sqrt", "factorial"]
                    },
                    "value": {
                        "type": "number",
                        "description": "Input value for the function"
                    }
                },
                "required": ["function", "value"]
            }
        )
    ]


@server.call_tool()
async def handle_call_tool(name: str, arguments: dict) -> list[types.TextContent]:
    """Handle tool execution."""

    if name == "calculator":
        expression = arguments.get("expression", "")
        try:
            # Simple and safe evaluation for basic arithmetic
            # Note: In production, use a proper math parser for security
            allowed_chars = set("0123456789+-*/()., ")
            if not all(c in allowed_chars for c in expression):
                raise ValueError("Invalid characters in expression")

            result = eval(expression)
            return [types.TextContent(
                type="text",
                text=f"Result: {expression} = {result}"
            )]
        except Exception as e:
            return [types.TextContent(
                type="text",
                text=f"Error: Unable to calculate '{expression}': {str(e)}"
            )]

    elif name == "echo":
        text = arguments.get("text", "")
        return [types.TextContent(
            type="text",
            text=f"Echo: {text}"
        )]

    elif name == "timestamp":
        format_type = arguments.get("format", "iso")
        now = datetime.now()

        if format_type == "unix":
            timestamp = str(int(now.timestamp()))
        elif format_type == "readable":
            timestamp = now.strftime("%Y-%m-%d %H:%M:%S")
        else:  # iso format
            timestamp = now.isoformat()

        return [types.TextContent(
            type="text",
            text=f"Current timestamp ({format_type}): {timestamp}"
        )]

    elif name == "math_functions":
        function = arguments.get("function")
        value = arguments.get("value")

        try:
            if function == "sin":
                result = math.sin(math.radians(value))
            elif function == "cos":
                result = math.cos(math.radians(value))
            elif function == "tan":
                result = math.tan(math.radians(value))
            elif function == "log":
                result = math.log(value)
            elif function == "sqrt":
                result = math.sqrt(value)
            elif function == "factorial":
                result = math.factorial(int(value))
            else:
                raise ValueError(f"Unknown function: {function}")

            return [types.TextContent(
                type="text",
                text=f"{function}({value}) = {result}"
            )]
        except Exception as e:
            return [types.TextContent(
                type="text",
                text=f"Error: Unable to calculate {function}({value}): {str(e)}"
            )]

    else:
        raise ValueError(f"Unknown tool: {name}")


async def main():
    """Main function to run the MCP server."""
    # Run the server using stdio transport
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="example-tools-server",
                server_version="1.0.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )


if __name__ == "__main__":
    asyncio.run(main())
filesystem_tool_example.py ADDED
@@ -0,0 +1,217 @@
#!/usr/bin/env python3

import asyncio
import os
import sys
from pathlib import Path

# Add current directory to path for imports
sys.path.append(str(Path(__file__).parent))

from mcp_integration import MCPBrainClient, create_final_answer_tool, add_filesystem_tool


async def demo_filesystem_tool():
    """Demonstrate the basic features of the filesystem tool"""
    print("🔍 Filesystem tool demo")
    print("=" * 50)

    try:
        async with MCPBrainClient() as client:
            print("✓ Initialized MCP client")

            # Add the filesystem tool
            try:
                await add_filesystem_tool(client, workspace_paths=[
                    "/Users/femtozheng/workspace",
                    "/Users/femtozheng/python-project/minion-agent"
                ])
                print("✓ Filesystem tool added successfully")

                # Get all tools
                tools = client.get_tools_for_brain()
                print(f"✓ Total available tools: {len(tools)}")

                # Filter filesystem-related tools
                fs_tools = [t for t in tools if any(keyword in t.name.lower()
                            for keyword in ['file', 'read', 'write', 'list', 'directory'])]

                if fs_tools:
                    print(f"\n📁 Found {len(fs_tools)} filesystem tools:")
                    for i, tool in enumerate(fs_tools, 1):
                        print(f"  {i}. {tool.name}")
                        print(f"     Description: {tool.description}")
                        if hasattr(tool, 'parameters') and tool.parameters:
                            props = tool.parameters.get('properties', {})
                            if props:
                                print(f"     Parameters: {', '.join(props.keys())}")
                        print()

                    # Show how to use the tools in brain.step
                    print("💡 Example usage in brain.step:")
                    print("```python")
                    print("# Add the final answer tool")
                    print("final_tool = create_final_answer_tool()")
                    print("all_tools = fs_tools + [final_tool]")
                    print()
                    print("# Use in brain.step")
                    print("obs, score, *_ = await brain.step(")
                    print("    query='Please read the contents of README.md',")
                    print("    route='raw',")
                    print("    check=False,")
                    print("    tools=all_tools")
                    print(")")
                    print("```")

                    # Simulate brain.step integration
                    print("\n🧠 Simulating brain.step integration:")
                    final_tool = create_final_answer_tool()
                    all_tools = fs_tools + [final_tool]

                    tool_specs = [tool.to_function_spec() for tool in all_tools]
                    print(f"✓ Generated {len(tool_specs)} tool specs")
                    print("✓ Tools are ready for brain.step")

                    # Show the tool spec format
                    if fs_tools:
                        sample_tool = fs_tools[0]
                        print(f"\n📋 Sample tool spec ({sample_tool.name}):")
                        spec = sample_tool.to_function_spec()
                        print(f"  Type: {spec.get('type', 'N/A')}")
                        print(f"  Function name: {spec.get('function', {}).get('name', 'N/A')}")
                        print(f"  Description: {spec.get('function', {}).get('description', 'N/A')}")

                else:
                    print("⚠ No filesystem-related tools found")
                    print("Possible reasons:")
                    print("- @modelcontextprotocol/server-filesystem is not installed correctly")
                    print("- Node.js/npx environment issues")
                    print("- Tool names do not contain the expected keywords")

            except Exception as e:
                print(f"❌ Failed to add filesystem tool: {e}")
                print("\nTroubleshooting:")
                print("1. Make sure Node.js and npx are installed")
                print("2. Run: npx @modelcontextprotocol/server-filesystem --help")
                print("3. Check network connectivity")
                print("4. Make sure the specified paths exist and are accessible")

    except Exception as e:
        print(f"❌ Demo failed: {e}")


async def test_filesystem_paths():
    """Test different filesystem path configurations"""
    print("\n🛠 Testing custom path configurations")
    print("=" * 50)

    # Test different path combinations
    test_paths = [
        ["/Users/femtozheng/workspace"],
        ["/Users/femtozheng/python-project/minion-agent"],
        ["/Users/femtozheng/workspace", "/Users/femtozheng/python-project/minion-agent"],
        [".", str(Path.home() / "Documents")]  # Mix of relative and absolute paths
    ]

    for i, paths in enumerate(test_paths, 1):
        print(f"\n📁 Test configuration {i}: {paths}")
        try:
            # Check whether the paths exist
            existing_paths = []
            for path in paths:
                if Path(path).exists():
                    existing_paths.append(path)
                    print(f"  ✓ Path exists: {path}")
                else:
                    print(f"  ⚠ Path does not exist: {path}")

            if existing_paths:
                async with MCPBrainClient() as client:
                    await add_filesystem_tool(client, workspace_paths=existing_paths)
                    tools = client.get_tools_for_brain()
                    fs_tools = [t for t in tools if any(keyword in t.name.lower()
                                for keyword in ['file', 'read', 'write', 'list'])]
                    print(f"  ✓ Configuration succeeded, found {len(fs_tools)} filesystem tools")
            else:
                print("  ⚠ Skipping test (no valid paths)")

        except Exception as e:
            print(f"  ❌ Configuration failed: {e}")


async def show_integration_example():
    """Show a complete integration example"""
    print("\n🚀 Complete integration example")
    print("=" * 50)

    print("""
Here is how to use the filesystem tool in a real project:

```python
from mcp_integration import MCPBrainClient, add_filesystem_tool, create_final_answer_tool
from minion.main.brain import Brain
from minion.main import LocalPythonEnv
from minion.providers import create_llm_provider

async def use_filesystem_in_brain():
    # 1. Set up the MCP client and filesystem tool
    async with MCPBrainClient() as mcp_client:
        await add_filesystem_tool(mcp_client, workspace_paths=[
            "/Users/femtozheng/workspace",
            "/Users/femtozheng/python-project/minion-agent"
        ])

        # 2. Get all tools
        mcp_tools = mcp_client.get_tools_for_brain()
        final_tool = create_final_answer_tool()
        all_tools = mcp_tools + [final_tool]

        # 3. Create a brain instance
        llm = create_llm_provider(your_config)
        python_env = LocalPythonEnv(verbose=False)
        brain = Brain(python_env=python_env, llm=llm)

        # 4. Use brain.step for file operations
        obs, score, *_ = await brain.step(
            query="Please read the README.md file in the project root and summarize its contents",
            route="raw",
            check=False,
            tools=all_tools
        )

        print(f"Brain response: {obs}")

# Other use cases:
# - "List all Python files in the workspace directory"
# - "Read config.json and parse its configuration"
# - "Create a new document file in the specified directory"
# - "Search for files containing a specific keyword"
```

🎯 Key advantages:
- 🔒 Secure: only pre-configured paths can be accessed
- 🔄 Async: all file operations are asynchronous
- 🧠 Intelligent: the AI can understand file contents and reason about them
- 🛠 Flexible: supports reading, writing, listing, and more
""")


async def main():
    """Run all demos"""
    print("📁 MCP filesystem tool integration demo")
    print("=" * 80)

    await demo_filesystem_tool()
    await test_filesystem_paths()
    await show_integration_example()

    print("\n✅ Demo complete!")
    print("\n📝 Next steps:")
    print("1. Run: python app_with_mcp.py")
    print("2. Enable 'MCP Tools' in the UI")
    print("3. Try file-related queries, e.g.: 'List the files in the current directory'")
    print("4. Or use the filesystem tool directly in code")


if __name__ == "__main__":
    asyncio.run(main())
mcp_integration.py ADDED
@@ -0,0 +1,466 @@
1
+ import json
2
+ import logging
3
+ from contextlib import AsyncExitStack
4
+ from datetime import timedelta
5
+ from pathlib import Path
6
+ from typing import TYPE_CHECKING, Any, Dict, List, Literal, Optional, Union, overload, Callable
7
+
8
+ from typing_extensions import NotRequired, TypeAlias, TypedDict, Unpack
9
+
10
+ if TYPE_CHECKING:
11
+ from mcp import ClientSession
12
+
13
+ logger = logging.getLogger(__name__)
14
+
15
+ # Type alias for tool names
16
+ ToolName: TypeAlias = str
17
+
18
+ ServerType: TypeAlias = Literal["stdio", "sse", "http"]
19
+
20
+
21
+ class StdioServerParameters_T(TypedDict):
22
+ command: str
23
+ args: NotRequired[List[str]]
24
+ env: NotRequired[Dict[str, str]]
25
+ cwd: NotRequired[Union[str, Path, None]]
26
+
27
+
28
+ class SSEServerParameters_T(TypedDict):
29
+ url: str
30
+ headers: NotRequired[Dict[str, Any]]
31
+ timeout: NotRequired[float]
32
+ sse_read_timeout: NotRequired[float]
33
+
34
+
35
+ class StreamableHTTPParameters_T(TypedDict):
36
+ url: str
37
+ headers: NotRequired[dict[str, Any]]
38
+ timeout: NotRequired[timedelta]
39
+ sse_read_timeout: NotRequired[timedelta]
40
+ terminate_on_close: NotRequired[bool]
41
+
42
+
43
+ def format_mcp_result(result: Any) -> str:
44
+ #should we format mcp.types result to some result format handled by our framework?
45
+ return str(result)
46
+ """Format MCP tool result for minion brain.step"""
47
+ # if isinstance(result, dict):
48
+ # # Handle MCP result format
49
+ # if "content" in result:
50
+ # content_items = result["content"]
51
+ # if isinstance(content_items, list):
52
+ # texts = []
53
+ # for item in content_items:
54
+ # if isinstance(item, dict) and item.get("type") == "text":
55
+ # texts.append(item.get("text", ""))
56
+ # return "\n".join(texts)
57
+ # elif isinstance(content_items, str):
58
+ # return content_items
59
+ #
60
+ # # Handle other dict formats
61
+ # if "text" in result:
62
+ # return result["text"]
63
+ #
64
+ # # Fallback to JSON string
65
+ # return json.dumps(result, indent=2)
66
+ #
67
+ # elif isinstance(result, str):
68
+ # return result
69
+ # else:
70
+ # return str(result)
71
+
72
+
73
+ class BrainTool:
74
+ """
75
+ Adapter class to convert MCP tools to brain.step compatible format
76
+ """
77
+ def __init__(self, name: str, description: str, parameters: Dict[str, Any], session: "ClientSession"):
78
+ self.name = name
79
+ self.description = description
80
+ self.parameters = parameters
81
+ self.session = session
82
+
83
+ # Add attributes expected by minion framework
84
+ self.__name__ = name
85
+ self.__doc__ = description
86
+ self.__input_schema__ = parameters
87
+
88
+ async def __call__(self, **kwargs) -> str:
89
+ """Execute the tool with given parameters"""
90
+ try:
91
+ result = await self.session.call_tool(self.name, kwargs)
92
+ return format_mcp_result(result)
93
+ except Exception as e:
94
+ logger.error(f"Error executing tool {self.name}: {e}")
95
+ return f"Error: {str(e)}"
96
+
97
+ def to_function_spec(self) -> Dict[str, Any]:
98
+ """Convert to function specification format for brain.step"""
99
+ return {
100
+ "type": "function",
101
+ "function": {
102
+ "name": self.name,
103
+ "description": self.description,
104
+ "parameters": self.parameters,
105
+ }
106
+ }
107
+
108
+ def to_dict(self) -> Dict[str, Any]:
109
+ """Convert to dictionary format"""
110
+ return {
111
+ "name": self.name,
112
+ "description": self.description,
113
+ "parameters": self.parameters
114
+ }
115
+
116
+
117
+ class MCPBrainClient:
118
+ """
119
+ Client for connecting to MCP servers and providing tools to minion brain.step
120
+ """
121
+
122
+ def __init__(self):
123
+ # Initialize MCP sessions as a dictionary of ClientSession objects
124
+ self.sessions: Dict[ToolName, "ClientSession"] = {}
125
+ self.exit_stack = AsyncExitStack()
126
+ self.available_tools: List[BrainTool] = []
127
+
128
+ async def __aenter__(self):
129
+ """Enter the context manager"""
130
+ await self.exit_stack.__aenter__()
131
+ return self
132
+
133
+ async def __aexit__(self, exc_type, exc_val, exc_tb):
134
+ """Exit the context manager"""
135
+ await self.cleanup()
136
+
137
+ async def cleanup(self):
138
+ """Clean up resources"""
139
+ await self.exit_stack.aclose()
140
+
141
+ @overload
142
+ async def add_mcp_server(self, type: Literal["stdio"], **params: Unpack[StdioServerParameters_T]): ...
143
+
144
+ @overload
145
+ async def add_mcp_server(self, type: Literal["sse"], **params: Unpack[SSEServerParameters_T]): ...
146
+
147
+ @overload
148
+ async def add_mcp_server(self, type: Literal["http"], **params: Unpack[StreamableHTTPParameters_T]): ...
149
+
150
+ async def add_mcp_server(self, type: ServerType, **params: Any):
151
+ """Connect to an MCP server and add its tools to available tools
152
+
153
+ Args:
154
+ type (`str`):
155
+ Type of the server to connect to. Can be one of:
156
+ - "stdio": Standard input/output server (local)
157
+ - "sse": Server-sent events (SSE) server
158
+ - "http": StreamableHTTP server
159
+ **params (`Dict[str, Any]`):
160
+ Server parameters that can be either:
161
+ - For stdio servers:
162
+ - command (str): The command to run the MCP server
163
+ - args (List[str], optional): Arguments for the command
164
+ - env (Dict[str, str], optional): Environment variables for the command
165
+ - cwd (Union[str, Path, None], optional): Working directory for the command
166
+ - For SSE servers:
167
+ - url (str): The URL of the SSE server
168
+ - headers (Dict[str, Any], optional): Headers for the SSE connection
169
+ - timeout (float, optional): Connection timeout
170
+ - sse_read_timeout (float, optional): SSE read timeout
171
+ - For StreamableHTTP servers:
172
+ - url (str): The URL of the StreamableHTTP server
173
+ - headers (Dict[str, Any], optional): Headers for the StreamableHTTP connection
174
+ - timeout (timedelta, optional): Connection timeout
175
+ - sse_read_timeout (timedelta, optional): SSE read timeout
176
+ - terminate_on_close (bool, optional): Whether to terminate on close
177
+ """
178
        from mcp import ClientSession, StdioServerParameters
        from mcp import types as mcp_types

        # Determine the server type and build the matching client parameters
        if type == "stdio":
            # Handle stdio server
            from mcp.client.stdio import stdio_client

            logger.info(f"Connecting to stdio MCP server with command: {params['command']} {params.get('args', [])}")

            client_kwargs = {"command": params["command"]}
            for key in ["args", "env", "cwd"]:
                if params.get(key) is not None:
                    client_kwargs[key] = params[key]
            server_params = StdioServerParameters(**client_kwargs)
            read, write = await self.exit_stack.enter_async_context(stdio_client(server_params))
        elif type == "sse":
            # Handle SSE server
            from mcp.client.sse import sse_client

            logger.info(f"Connecting to SSE MCP server at: {params['url']}")

            client_kwargs = {"url": params["url"]}
            for key in ["headers", "timeout", "sse_read_timeout"]:
                if params.get(key) is not None:
                    client_kwargs[key] = params[key]
            read, write = await self.exit_stack.enter_async_context(sse_client(**client_kwargs))
        elif type == "http":
            # Handle StreamableHTTP server
            from mcp.client.streamable_http import streamablehttp_client

            logger.info(f"Connecting to StreamableHTTP MCP server at: {params['url']}")

            client_kwargs = {"url": params["url"]}
            for key in ["headers", "timeout", "sse_read_timeout", "terminate_on_close"]:
                if params.get(key) is not None:
                    client_kwargs[key] = params[key]
            # streamablehttp_client also yields a session-id callback; it is unused here
            read, write, _ = await self.exit_stack.enter_async_context(streamablehttp_client(**client_kwargs))
        else:
            raise ValueError(f"Unsupported server type: {type}")

        session = await self.exit_stack.enter_async_context(
            ClientSession(
                read_stream=read,
                write_stream=write,
                client_info=mcp_types.Implementation(
                    name="minion.MCPBrainClient",
                    version="1.0.0",
                ),
            )
        )

        logger.debug("Initializing session...")
        await session.initialize()

        # List the tools exposed by the server
        response = await session.list_tools()
        logger.debug("Connected to server with tools: %s", [tool.name for tool in response.tools])

        for tool in response.tools:
            if tool.name in self.sessions:
                logger.warning(f"Tool '{tool.name}' already defined by another server. Skipping.")
                continue

            # Map the tool name to its owning session for later lookup
            self.sessions[tool.name] = session

            # Wrap the MCP tool in a brain.step-compatible BrainTool
            brain_tool = BrainTool(
                name=tool.name,
                description=tool.description,
                parameters=tool.inputSchema,
                session=session
            )

            # Add the tool to the list of available tools
            self.available_tools.append(brain_tool)
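
    # A minimal connection sketch covering all three server types. The URLs and the
    # stdio command below are placeholders, not part of this project's configuration:
    #
    #   async with MCPBrainClient() as client:
    #       await client.add_mcp_server("stdio", command="python", args=["example_mcp_server.py"])
    #       await client.add_mcp_server("sse", url="http://localhost:8080/sse")
    #       await client.add_mcp_server("http", url="http://localhost:8080/mcp")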
    def get_tools_for_brain(self) -> List[BrainTool]:
        """Get the list of tools in the format expected by brain.step"""
        return self.available_tools

    def get_tool_functions(self) -> Dict[str, Callable]:
        """Get a dictionary mapping tool names to callables for direct execution"""
        return {tool.name: tool for tool in self.available_tools}

    def get_tool_specs(self) -> List[Dict[str, Any]]:
        """Get the list of tool specifications in ChatCompletion format"""
        return [tool.to_function_spec() for tool in self.available_tools]

    def get_tools_dict(self) -> List[Dict[str, Any]]:
        """Get the list of tools as plain dictionaries"""
        return [tool.to_dict() for tool in self.available_tools]
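
    # For reference, each entry from get_tool_specs() is expected to follow the
    # standard ChatCompletion "tools" shape (to_function_spec is defined on
    # BrainTool; the exact layout shown here is an assumption):
    #
    #   {
    #       "type": "function",
    #       "function": {
    #           "name": "calculator",
    #           "description": "Perform basic arithmetic calculations",
    #           "parameters": {"type": "object", "properties": {...}, "required": [...]},
    #       },
    #   }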


# Helper function to create a final answer tool (example implementation)
def create_final_answer_tool() -> BrainTool:
    """
    Create a final answer tool that can be used with brain.step.
    This is an example of how to create a local tool without an MCP server.
    """
    class FinalAnswerSession:
        async def call_tool(self, name: str, args: Dict[str, Any]) -> Dict[str, Any]:
            return {
                "content": [
                    {
                        "type": "text",
                        "text": args.get("answer", "No answer provided")
                    }
                ]
            }

    session = FinalAnswerSession()

    tool = BrainTool(
        name="final_answer",
        description="Provide the final answer to the user's question",
        parameters={
            "type": "object",
            "properties": {
                "answer": {
                    "type": "string",
                    "description": "The final answer to provide to the user"
                }
            },
            "required": ["answer"]
        },
        session=session
    )

    return tool

def create_calculator_tool() -> BrainTool:
    """
    Create a local calculator tool for basic arithmetic
    """
    class CalculatorSession:
        async def call_tool(self, name: str, args: Dict[str, Any]) -> Dict[str, Any]:
            expression = args.get("expression", "")
            try:
                # Reject anything beyond basic arithmetic characters before eval,
                # so arbitrary code cannot be evaluated
                allowed_chars = set("0123456789+-*/()., ")
                if not all(c in allowed_chars for c in expression):
                    raise ValueError("Invalid characters in expression")

                result = eval(expression)
                return {
                    "content": [
                        {
                            "type": "text",
                            "text": f"Calculation result: {expression} = {result}"
                        }
                    ]
                }
            except Exception as e:
                return {
                    "content": [
                        {
                            "type": "text",
                            "text": f"Error: Unable to calculate '{expression}': {str(e)}"
                        }
                    ]
                }

    session = CalculatorSession()

    tool = BrainTool(
        name="calculator",
        description="Perform basic arithmetic calculations",
        parameters={
            "type": "object",
            "properties": {
                "expression": {
                    "type": "string",
                    "description": "Mathematical expression to evaluate (e.g., '2 + 3 * 4')"
                }
            },
            "required": ["expression"]
        },
        session=session
    )

    return tool

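# A quick smoke-test sketch for the calculator tool. It assumes BrainTool keeps the
# wrapped session reachable as `tool.session` (matching the constructor argument);
# the printed result follows directly from the implementation above:
#
#   import asyncio
#   calc = create_calculator_tool()
#   out = asyncio.run(calc.session.call_tool("calculator", {"expression": "2 + 3 * 4"}))
#   print(out["content"][0]["text"])  # Calculation result: 2 + 3 * 4 = 14
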
async def add_filesystem_tool(mcp_client: MCPBrainClient, workspace_paths: Optional[List[str]] = None) -> None:
    """
    Add the filesystem MCP tool to the client

    Args:
        mcp_client: The MCP client to add the tool to
        workspace_paths: List of paths to allow access to. Defaults to common workspace paths.
    """
    if workspace_paths is None:
        workspace_paths = [
            "/Users/femtozheng/workspace",
            "/Users/femtozheng/python-project/minion-agent"
        ]

    try:
        await mcp_client.add_mcp_server(
            "stdio",
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem"] + workspace_paths
        )
        logger.info(f"✓ Added filesystem tool with paths: {workspace_paths}")
    except Exception as e:
        logger.error(f"Failed to add filesystem tool: {e}")
        raise

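# Usage sketch (requires Node.js, since the filesystem server is fetched via npx;
# the workspace path is a placeholder):
#
#   async with MCPBrainClient() as client:
#       await add_filesystem_tool(client, ["/tmp/my-workspace"])
#       print([t.name for t in client.get_tools_for_brain()])
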
def create_filesystem_tool_factory(workspace_paths: Optional[List[str]] = None):
    """
    Create a factory function for the filesystem tool

    Args:
        workspace_paths: List of paths to allow access to

    Returns:
        Async function that adds the filesystem tool to an MCP client
    """
    if workspace_paths is None:
        workspace_paths = [
            "/Users/femtozheng/workspace",
            "/Users/femtozheng/python-project/minion-agent"
        ]

    async def add_to_client(mcp_client: MCPBrainClient):
        return await add_filesystem_tool(mcp_client, workspace_paths)

    return add_to_client

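# The factory defers the actual connection until a client exists, so one configuration
# can be reused across several clients (path is a placeholder):
#
#   add_fs = create_filesystem_tool_factory(["/tmp/my-workspace"])
#   async with MCPBrainClient() as client:
#       await add_fs(client)
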
class MCPToolConfig:
    """Configuration presets for different MCP tools"""

    FILESYSTEM_DEFAULT = {
        "type": "stdio",
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem"],
        "workspace_paths": [
            "/Users/femtozheng/workspace",
            "/Users/femtozheng/python-project/minion-agent"
        ]
    }

    @staticmethod
    def get_filesystem_config(workspace_paths: Optional[List[str]] = None) -> Dict[str, Any]:
        """Get the filesystem tool configuration, appending the allowed paths to the args"""
        config = MCPToolConfig.FILESYSTEM_DEFAULT.copy()
        if workspace_paths:
            config["workspace_paths"] = workspace_paths
            config["args"] = ["-y", "@modelcontextprotocol/server-filesystem"] + workspace_paths
        else:
            config["args"] = ["-y", "@modelcontextprotocol/server-filesystem"] + config["workspace_paths"]
        return config

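# For the defaults above, get_filesystem_config() returns:
#
#   {
#       "type": "stdio",
#       "command": "npx",
#       "args": ["-y", "@modelcontextprotocol/server-filesystem",
#                "/Users/femtozheng/workspace",
#                "/Users/femtozheng/python-project/minion-agent"],
#       "workspace_paths": ["/Users/femtozheng/workspace",
#                           "/Users/femtozheng/python-project/minion-agent"],
#   }
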
# Example usage:
"""
# Initialize MCP client
async def example_usage():
    async with MCPBrainClient() as mcp_client:
        # Add MCP servers
        await mcp_client.add_mcp_server("sse", url="http://localhost:8080/sse")

        # Get tools for brain.step
        mcp_tools = mcp_client.get_tools_for_brain()

        # Add the final answer tool
        final_answer_tool = create_final_answer_tool()
        all_tools = mcp_tools + [final_answer_tool]

        # Use with brain.step
        from minion.main.brain import Brain
        from minion.main import LocalPythonEnv
        from minion.providers import create_llm_provider

        # Create a brain instance (you'll need to configure this)
        # brain = Brain(...)

        # obs, score, *_ = await brain.step(
        #     query="what is 234*568?",
        #     route="raw",
        #     check=False,
        #     tools=all_tools
        # )
"""
requirements.txt CHANGED
@@ -1,4 +1,4 @@
  gradio[mcp]==5.32.0
- huggingface_hub>=0.28.1
  minionx>=0.1.2
  python-dotenv>=1.0.0
+ mcp>=1.0.0