jackkuo committed
Commit 2de095a · 1 Parent(s): 3b5b659
DOCKER_README.md ADDED
@@ -0,0 +1,133 @@
+ # Dockerized MCP Hub Streamlit Application
+
+ This project is fully Dockerized and now uses an MCP (Model Context Protocol) server architecture.
+
+ ## 🏗️ Project Architecture
+
+ ```
+ .
+ ├── app.py                    # Main Streamlit application
+ ├── requirements.txt          # Main application dependencies
+ ├── Dockerfile                # Main application Docker image
+ ├── docker-compose.yml        # Docker Compose configuration
+ ├── docker-config.json        # Docker environment configuration file
+ ├── docker-run.sh             # Start script
+ ├── docker-stop.sh            # Stop script
+ ├── docker-logs.sh            # Log-viewing script
+ ├── config.json               # MCP server configuration
+ └── python-services/          # MCP server directory
+     ├── service1/             # RequestProcessor MCP server
+     │   ├── mcp_server.py     # MCP server implementation
+     │   └── requirements.txt  # MCP server dependencies
+     ├── service2/             # DataAnalyzer MCP server
+     │   ├── mcp_server.py     # MCP server implementation
+     │   └── requirements.txt  # MCP server dependencies
+     └── service3/             # MathComputer MCP server
+         ├── mcp_server.py     # MCP server implementation
+         └── requirements.txt  # MCP server dependencies
+ ```
+
+ ## 🚀 Quick Start
+
+ ### 1. Start the application
+
+ ```bash
+ # Make the scripts executable
+ chmod +x docker-run.sh docker-stop.sh docker-logs.sh
+
+ # Start the application
+ ./docker-run.sh
+ ```
+
+ ### 2. Access the services
+
+ - **Streamlit application**: http://localhost:8501
+ - **MCP servers**: loaded automatically by the main application; no separate access needed
+
+ ### 3. Stop the services
+
+ ```bash
+ ./docker-stop.sh
+ ```
+
+ ### 4. View logs
+
+ ```bash
+ # View application logs
+ ./docker-logs.sh streamlit
+ ```
+
+ ## 🔧 Architecture Notes
+
+ ### Old architecture (deprecated)
+ - Three independent FastAPI HTTP services
+ - Running separately on ports 8001, 8002, and 8003
+ - Service dependencies had to be managed manually
+
+ ### New architecture (current)
+ - Three MCP servers integrated into the main application
+ - Communication over the stdio transport (see the sketch below)
+ - Automatic tool discovery and registration
+ - Better integration and extensibility
+
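A minimal sketch of what "communication over the stdio transport" means in practice, using the low-level `mcp` client primitives together with `langchain-mcp-adapters` (the service path is illustrative; the sketch assumes both packages are installed):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools


async def main():
    # Spawn the MCP server as a subprocess; stdin/stdout carry the protocol
    server_params = StdioServerParameters(
        command="python",
        args=["./python-services/service1/mcp_server.py"],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()             # MCP handshake
            tools = await load_mcp_tools(session)  # automatic tool discovery
            print([tool.name for tool in tools])


asyncio.run(main())
```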
+ ## 📝 MCP Server Features
+
+ ### RequestProcessor (service1)
+ - **Function**: general request processing and data analysis
+ - **Tools**: request handling, data validation, service info
+
+ ### DataAnalyzer (service2)
+ - **Function**: data analysis and statistical computation
+ - **Tools**: data analysis, statistics, structure analysis
+
+ ### MathComputer (service3)
+ - **Function**: mathematical computation and statistical functions
+ - **Tools**: basic arithmetic, advanced statistics, percentile calculation
+
+ ## 🔄 Configuration
+
+ The MCP servers are configured in `config.json`:
+
+ ```json
+ {
+     "request_processor": {
+         "command": "python",
+         "args": ["./python-services/service1/mcp_server.py"],
+         "transport": "stdio"
+     },
+     "data_analyzer": {
+         "command": "python",
+         "args": ["./python-services/service2/mcp_server.py"],
+         "transport": "stdio"
+     },
+     "math_computer": {
+         "command": "python",
+         "args": ["./python-services/service3/mcp_server.py"],
+         "transport": "stdio"
+     }
+ }
+ ```
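Each `args` entry above points at an `mcp_server.py` built on `fastmcp` (installed in the Dockerfile). A minimal sketch of what such a server can look like — the tool shown is illustrative, not the actual service API:

```python
# python-services/serviceN/mcp_server.py -- minimal fastmcp server sketch
from fastmcp import FastMCP

mcp = FastMCP("MathComputer")


@mcp.tool()
def add(a: float, b: float) -> float:
    """Add two numbers."""
    return a + b


if __name__ == "__main__":
    # stdio matches the "transport": "stdio" entries in config.json above
    mcp.run(transport="stdio")
```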
+
+ ## 💡 Advantages
+
+ 1. **Simpler deployment**: only one main application container to start
+ 2. **Unified management**: all MCP servers are managed by the main application
+ 3. **Automatic discovery**: tools are discovered and registered automatically
+ 4. **Better integration**: works seamlessly with LangChain and similar frameworks
+
+ ## 🐛 Troubleshooting
+
+ If an MCP server fails to load:
+
+ 1. Check that its dependencies are installed correctly
+ 2. Confirm Python version compatibility (3.8+ recommended)
+ 3. Check that the file paths are correct
+ 4. Inspect the main application logs for details
+
+ ## 🔮 Extension Ideas
+
+ More specialized services can be built from the existing MCP server template (see the sketch below):
+ - File-processing services
+ - Database query services
+ - External API integration services
+ - Machine-learning inference services
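As a concrete starting point, a hypothetical file-processing service would follow the same pattern as the existing services — one `fastmcp` server script plus a new entry in `config.json` (all names below are illustrative):

```python
# python-services/service4/mcp_server.py -- hypothetical file-processing service
from pathlib import Path

from fastmcp import FastMCP

mcp = FastMCP("FileProcessor")


@mcp.tool()
def count_lines(path: str) -> int:
    """Return the number of lines in a UTF-8 text file."""
    return len(Path(path).read_text(encoding="utf-8").splitlines())


if __name__ == "__main__":
    mcp.run(transport="stdio")
```

Registering it is then a matter of adding a `"file_processor"` entry with `"command"`, `"args"`, and `"transport": "stdio"` to `config.json`, mirroring the three entries shown earlier.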
Dockerfile ADDED
@@ -0,0 +1,37 @@
+ FROM python:3.12-slim
+
+ WORKDIR /app
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y \
+     build-essential \
+     curl \
+     software-properties-common \
+     git \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Copy requirements first for better caching
+ COPY requirements.txt .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Install the MCP server framework
+ RUN pip install --no-cache-dir fastmcp
+
+ # Copy application code
+ COPY . .
+
+ # Install MCP server dependencies for each service
+ RUN cd python-services/service1 && pip install --no-cache-dir -r requirements.txt
+ RUN cd python-services/service2 && pip install --no-cache-dir -r requirements.txt
+ RUN cd python-services/service3 && pip install --no-cache-dir -r requirements.txt
+
+ # Expose port
+ EXPOSE 8501
+
+ # Health check
+ HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
+
+ # Run the application
+ CMD ["streamlit", "run", "app.py", "--server.address=0.0.0.0", "--server.port=8501"]
MCP-HandsOn-ENG.ipynb DELETED
@@ -1,702 +0,0 @@
- {
-  "cells": [
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "# MCP + LangGraph Hands-On Tutorial\n",
-     "\n",
-     "- Author: [Teddy Notes](https://youtube.com/c/teddynote)\n",
-     "- Lecture: [Fastcampus RAG trick notes](https://fastcampus.co.kr/data_online_teddy)\n",
-     "\n",
-     "**References**\n",
-     "- https://modelcontextprotocol.io/introduction\n",
-     "- https://github.com/langchain-ai/langchain-mcp-adapters"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## Configure\n",
-     "\n",
-     "Refer to the installation instructions below to install `uv`.\n",
-     "\n",
-     "**How to install `uv`**\n",
-     "\n",
-     "```bash\n",
-     "# macOS/Linux\n",
-     "curl -LsSf https://astral.sh/uv/install.sh | sh\n",
-     "\n",
-     "# Windows (PowerShell)\n",
-     "irm https://astral.sh/uv/install.ps1 | iex\n",
-     "```\n",
-     "\n",
-     "Install **dependencies**\n",
-     "\n",
-     "```bash\n",
-     "uv pip install -r requirements.txt\n",
-     "```"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Load the environment variables."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from dotenv import load_dotenv\n",
-     "\n",
-     "load_dotenv(override=True)"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## MultiServerMCPClient"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Run `mcp_server_remote.py` in advance. Open a terminal with the virtual environment activated and run the server.\n",
-     "\n",
-     "> Command\n",
-     "```bash\n",
-     "source .venv/bin/activate\n",
-     "python mcp_server_remote.py\n",
-     "```\n",
-     "\n",
-     "Create and terminate a temporary Session connection using `async with`"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
-     "from langgraph.prebuilt import create_react_agent\n",
-     "from utils import ainvoke_graph, astream_graph\n",
-     "from langchain_anthropic import ChatAnthropic\n",
-     "\n",
-     "model = ChatAnthropic(\n",
-     "    model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
-     ")\n",
-     "\n",
-     "async with MultiServerMCPClient(\n",
-     "    {\n",
-     "        \"weather\": {\n",
-     "            # Must match the server's port (port 8005)\n",
-     "            \"url\": \"http://localhost:8005/sse\",\n",
-     "            \"transport\": \"sse\",\n",
-     "        }\n",
-     "    }\n",
-     ") as client:\n",
-     "    print(client.get_tools())\n",
-     "    agent = create_react_agent(model, client.get_tools())\n",
-     "    answer = await astream_graph(\n",
-     "        agent, {\"messages\": \"What's the weather like in Seoul?\"}\n",
-     "    )"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "You might notice that you can't access the tool because the session is closed."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Now let's change that to accessing the tool while maintaining an Async Session."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "# 1. Create client\n",
-     "client = MultiServerMCPClient(\n",
-     "    {\n",
-     "        \"weather\": {\n",
-     "            \"url\": \"http://localhost:8005/sse\",\n",
-     "            \"transport\": \"sse\",\n",
-     "        }\n",
-     "    }\n",
-     ")\n",
-     "\n",
-     "\n",
-     "# 2. Explicitly initialize connection (this part is necessary)\n",
-     "# Initialize\n",
-     "await client.__aenter__()\n",
-     "\n",
-     "# Now tools are loaded\n",
-     "print(client.get_tools())  # Tools are displayed"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Create an agent with langgraph (`create_react_agent`)."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 5,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "# Create agent\n",
-     "agent = create_react_agent(model, client.get_tools())"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Run the graph to see the results."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## Stdio method\n",
-     "\n",
-     "The Stdio method is intended for use in a local environment.\n",
-     "\n",
-     "- Use standard input/output for communication"
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from mcp import ClientSession, StdioServerParameters\n",
-     "from mcp.client.stdio import stdio_client\n",
-     "from langgraph.prebuilt import create_react_agent\n",
-     "from langchain_mcp_adapters.tools import load_mcp_tools\n",
-     "from langchain_anthropic import ChatAnthropic\n",
-     "\n",
-     "# Initialize Anthropic's Claude model\n",
-     "model = ChatAnthropic(\n",
-     "    model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
-     ")\n",
-     "\n",
-     "# Set up StdIO server parameters\n",
-     "# - command: Path to Python interpreter\n",
-     "# - args: MCP server script to execute\n",
-     "server_params = StdioServerParameters(\n",
-     "    command=\"./.venv/bin/python\",\n",
-     "    args=[\"mcp_server_local.py\"],\n",
-     ")\n",
-     "\n",
-     "# Use StdIO client to communicate with the server\n",
-     "async with stdio_client(server_params) as (read, write):\n",
-     "    # Create client session\n",
-     "    async with ClientSession(read, write) as session:\n",
-     "        # Initialize connection\n",
-     "        await session.initialize()\n",
-     "\n",
-     "        # Load MCP tools\n",
-     "        tools = await load_mcp_tools(session)\n",
-     "        print(tools)\n",
-     "\n",
-     "        # Create agent\n",
-     "        agent = create_react_agent(model, tools)\n",
-     "\n",
-     "        # Stream agent responses\n",
-     "        await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## Use MCP server with RAG deployed\n",
-     "\n",
-     "- File: `mcp_server_rag.py`\n",
-     "\n",
-     "Use the `mcp_server_rag.py` file that we built with langchain in advance.\n",
-     "\n",
-     "It uses stdio communication to get information about the tools, where it gets the `retriever` tool, which is the tool defined in `mcp_server_rag.py`. This file **doesn't** need to be running on the server beforehand."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from mcp import ClientSession, StdioServerParameters\n",
-     "from mcp.client.stdio import stdio_client\n",
-     "from langchain_mcp_adapters.tools import load_mcp_tools\n",
-     "from langgraph.prebuilt import create_react_agent\n",
-     "from langchain_anthropic import ChatAnthropic\n",
-     "from utils import astream_graph\n",
-     "\n",
-     "# Initialize Anthropic's Claude model\n",
-     "model = ChatAnthropic(\n",
-     "    model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
-     ")\n",
-     "\n",
-     "# Set up StdIO server parameters for the RAG server\n",
-     "server_params = StdioServerParameters(\n",
-     "    command=\"./.venv/bin/python\",\n",
-     "    args=[\"./mcp_server_rag.py\"],\n",
-     ")\n",
-     "\n",
-     "# Use StdIO client to communicate with the RAG server\n",
-     "async with stdio_client(server_params) as (read, write):\n",
-     "    # Create client session\n",
-     "    async with ClientSession(read, write) as session:\n",
-     "        # Initialize connection\n",
-     "        await session.initialize()\n",
-     "\n",
-     "        # Load MCP tools (in this case, the retriever tool)\n",
-     "        tools = await load_mcp_tools(session)\n",
-     "\n",
-     "        # Create and run the agent\n",
-     "        agent = create_react_agent(model, tools)\n",
-     "\n",
-     "        # Stream agent responses\n",
-     "        await astream_graph(\n",
-     "            agent,\n",
-     "            {\n",
-     "                \"messages\": \"Search for the name of the generative AI developed by Samsung Electronics\"\n",
-     "            },\n",
-     "        )"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## Use a mix of SSE and Stdio methods\n",
-     "\n",
-     "- File: `mcp_server_rag.py` communicates over Stdio\n",
-     "- `langchain-dev-docs` communicates via SSE\n",
-     "\n",
-     "Use a mix of SSE and Stdio methods."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
-     "from langgraph.prebuilt import create_react_agent\n",
-     "from langchain_anthropic import ChatAnthropic\n",
-     "\n",
-     "# Initialize Anthropic's Claude model\n",
-     "model = ChatAnthropic(\n",
-     "    model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
-     ")\n",
-     "\n",
-     "# 1. Create multi-server MCP client\n",
-     "client = MultiServerMCPClient(\n",
-     "    {\n",
-     "        \"document-retriever\": {\n",
-     "            \"command\": \"./.venv/bin/python\",\n",
-     "            # Update with the absolute path to mcp_server_rag.py file\n",
-     "            \"args\": [\"./mcp_server_rag.py\"],\n",
-     "            # Communicate via stdio (using standard input/output)\n",
-     "            \"transport\": \"stdio\",\n",
-     "        },\n",
-     "        \"langchain-dev-docs\": {\n",
-     "            # Make sure the SSE server is running\n",
-     "            \"url\": \"https://teddynote.io/mcp/langchain/sse\",\n",
-     "            # Communicate via SSE (Server-Sent Events)\n",
-     "            \"transport\": \"sse\",\n",
-     "        },\n",
-     "    }\n",
-     ")\n",
-     "\n",
-     "\n",
-     "# 2. Initialize connection explicitly through async context manager\n",
-     "await client.__aenter__()"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Create an agent using `create_react_agent` in langgraph."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 10,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langgraph.checkpoint.memory import MemorySaver\n",
-     "from langchain_core.runnables import RunnableConfig\n",
-     "\n",
-     "prompt = (\n",
-     "    \"You are a smart agent. \"\n",
-     "    \"Use `retriever` tool to search on AI related documents and answer questions.\"\n",
-     "    \"Use `langchain-dev-docs` tool to search on langchain / langgraph related documents and answer questions.\"\n",
-     "    \"Answer in English.\"\n",
-     ")\n",
-     "agent = create_react_agent(\n",
-     "    model, client.get_tools(), prompt=prompt, checkpointer=MemorySaver()\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Use the `retriever` tool defined in `mcp_server_rag.py` that you built to perform the search."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "config = RunnableConfig(recursion_limit=30, thread_id=1)\n",
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\n",
-     "        \"messages\": \"Use the `retriever` tool to search for the name of the generative AI developed by Samsung Electronics\"\n",
-     "    },\n",
-     "    config=config,\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "This time, we'll use the `langchain-dev-docs` tool to perform the search."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "config = RunnableConfig(recursion_limit=30, thread_id=1)\n",
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\n",
-     "        \"messages\": \"Please tell me about the definition of self-rag by referring to the langchain-dev-docs\"\n",
-     "    },\n",
-     "    config=config,\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Use `MemorySaver` to maintain short-term memory, so multi-turn conversations are possible."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\"messages\": \"Summarize the previous content in bullet points\"},\n",
-     "    config=config,\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## LangChain-integrated tools + MCP tools\n",
-     "\n",
-     "Here we confirm that tools integrated into LangChain can be used in conjunction with existing MCP-only tools."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 15,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langchain_community.tools.tavily_search import TavilySearchResults\n",
-     "\n",
-     "# Initialize the Tavily search tool (news type, news from the last 3 days)\n",
-     "tavily = TavilySearchResults(max_results=3, topic=\"news\", days=3)\n",
-     "\n",
-     "# Use it together with existing MCP tools\n",
-     "tools = client.get_tools() + [tavily]"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Create an agent using `create_react_agent` in langgraph."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 16,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langgraph.checkpoint.memory import MemorySaver\n",
-     "from langchain_core.runnables import RunnableConfig\n",
-     "\n",
-     "prompt = \"You are a smart agent with various tools. Answer questions in English.\"\n",
-     "agent = create_react_agent(model, tools, prompt=prompt, checkpointer=MemorySaver())"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Perform a search using the newly added `tavily` tool."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(\n",
-     "    agent, {\"messages\": \"Tell me about today's news for me\"}, config=config\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "You can see that the `retriever` tool is working smoothly."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\n",
-     "        \"messages\": \"Use the `retriever` tool to search for the name of the generative AI developed by Samsung Electronics\"\n",
-     "    },\n",
-     "    config=config,\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "## Smithery MCP Server\n",
-     "\n",
-     "- Link: https://smithery.ai/\n",
-     "\n",
-     "List of tools used:\n",
-     "\n",
-     "- Sequential Thinking: https://smithery.ai/server/@smithery-ai/server-sequential-thinking\n",
-     "  - MCP server providing tools for dynamic and reflective problem-solving through structured thinking processes\n",
-     "- Desktop Commander: https://smithery.ai/server/@wonderwhy-er/desktop-commander\n",
-     "  - Run terminal commands and manage files with various editing capabilities. Coding, shell and terminal, task automation\n",
-     "\n",
-     "**Note**\n",
-     "\n",
-     "- When importing tools provided by smithery in JSON format, you must set `\"transport\": \"stdio\"` as shown in the example below."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
-     "from langgraph.prebuilt import create_react_agent\n",
-     "from langchain_anthropic import ChatAnthropic\n",
-     "\n",
-     "# Initialize LLM model\n",
-     "model = ChatAnthropic(model=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000)\n",
-     "\n",
-     "# 1. Create client\n",
-     "client = MultiServerMCPClient(\n",
-     "    {\n",
-     "        \"server-sequential-thinking\": {\n",
-     "            \"command\": \"npx\",\n",
-     "            \"args\": [\n",
-     "                \"-y\",\n",
-     "                \"@smithery/cli@latest\",\n",
-     "                \"run\",\n",
-     "                \"@smithery-ai/server-sequential-thinking\",\n",
-     "                \"--key\",\n",
-     "                \"your_smithery_api_key\",\n",
-     "            ],\n",
-     "            \"transport\": \"stdio\",  # Add communication using stdio method\n",
-     "        },\n",
-     "        \"desktop-commander\": {\n",
-     "            \"command\": \"npx\",\n",
-     "            \"args\": [\n",
-     "                \"-y\",\n",
-     "                \"@smithery/cli@latest\",\n",
-     "                \"run\",\n",
-     "                \"@wonderwhy-er/desktop-commander\",\n",
-     "                \"--key\",\n",
-     "                \"your_smithery_api_key\",\n",
-     "            ],\n",
-     "            \"transport\": \"stdio\",  # Add communication using stdio method\n",
-     "        },\n",
-     "        \"document-retriever\": {\n",
-     "            \"command\": \"./.venv/bin/python\",\n",
-     "            # Update with the absolute path to the mcp_server_rag.py file\n",
-     "            \"args\": [\"./mcp_server_rag.py\"],\n",
-     "            # Communication using stdio (standard input/output)\n",
-     "            \"transport\": \"stdio\",\n",
-     "        },\n",
-     "    }\n",
-     ")\n",
-     "\n",
-     "\n",
-     "# 2. Explicitly initialize connection\n",
-     "await client.__aenter__()"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Create an agent using `create_react_agent` in langgraph."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": 23,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "from langgraph.checkpoint.memory import MemorySaver\n",
-     "from langchain_core.runnables import RunnableConfig\n",
-     "\n",
-     "# Set up configuration\n",
-     "config = RunnableConfig(recursion_limit=30, thread_id=3)\n",
-     "\n",
-     "# Create agent\n",
-     "agent = create_react_agent(model, client.get_tools(), checkpointer=MemorySaver())"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "Run terminal commands using the `Desktop Commander` tool."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\n",
-     "        \"messages\": \"Draw the folder structure including the current path as a tree. However, exclude the .venv folder from the output.\"\n",
-     "    },\n",
-     "    config=config,\n",
-     ")"
-    ]
-   },
-   {
-    "cell_type": "markdown",
-    "metadata": {},
-    "source": [
-     "We'll use the `Sequential Thinking` tool to see if we can accomplish a relatively complex task."
-    ]
-   },
-   {
-    "cell_type": "code",
-    "execution_count": null,
-    "metadata": {},
-    "outputs": [],
-    "source": [
-     "await astream_graph(\n",
-     "    agent,\n",
-     "    {\n",
-     "        \"messages\": (\n",
-     "            \"Use the `retriever` tool to search for information about generative AI developed by Samsung Electronics, \"\n",
-     "            \"and then use the `Sequential Thinking` tool to write a report.\"\n",
-     "        )\n",
-     "    },\n",
-     "    config=config,\n",
-     ")"
-    ]
-   }
-  ],
-  "metadata": {
-   "kernelspec": {
-    "display_name": ".venv",
-    "language": "python",
-    "name": "python3"
-   },
-   "language_info": {
-    "codemirror_mode": {
-     "name": "ipython",
-     "version": 3
-    },
-    "file_extension": ".py",
-    "mimetype": "text/x-python",
-    "name": "python",
-    "nbconvert_exporter": "python",
-    "pygments_lexer": "ipython3",
-    "version": "3.12.8"
-   }
-  },
-  "nbformat": 4,
-  "nbformat_minor": 2
- }
README.md CHANGED
@@ -15,227 +15,82 @@ tags:
  - MCP
  ---
 
- # LangGraph Agents + MCP
-
- [![English](https://img.shields.io/badge/Language-English-blue)](README.md) [![Korean](https://img.shields.io/badge/Language-한국어-red)](README_KOR.md)
-
- [![GitHub](https://img.shields.io/badge/GitHub-langgraph--mcp--agents-black?logo=github)](https://github.com/teddylee777/langgraph-mcp-agents)
- [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
- [![Python](https://img.shields.io/badge/Python-≥3.12-blue?logo=python&logoColor=white)](https://www.python.org/)
- [![Version](https://img.shields.io/badge/Version-0.1.0-orange)](https://github.com/teddylee777/langgraph-mcp-agents)
-
- ![project demo](./assets/project-demo.png)
-
- ## Project Overview
-
- ![project architecture](./assets/architecture.png)
-
- `LangChain-MCP-Adapters` is a toolkit provided by **LangChain AI** that enables AI agents to interact with external tools and data sources through the Model Context Protocol (MCP). This project provides a user-friendly interface for deploying ReAct agents that can access various data sources and APIs through MCP tools.
-
- ### Features
-
- - **Streamlit Interface**: A user-friendly web interface for interacting with LangGraph `ReAct Agent` with MCP tools
- - **Tool Management**: Add, remove, and configure MCP tools through the UI (Smithery JSON format supported). This is done dynamically without restarting the application
- - **Streaming Responses**: View agent responses and tool calls in real-time
- - **Conversation History**: Track and manage conversations with the agent
-
- ## MCP Architecture
-
- The Model Context Protocol (MCP) consists of three main components:
-
- 1. **MCP Host**: Programs seeking to access data through MCP, such as Claude Desktop, IDEs, or LangChain/LangGraph.
-
- 2. **MCP Client**: A protocol client that maintains a 1:1 connection with the server, acting as an intermediary between the host and server.
-
- 3. **MCP Server**: A lightweight program that exposes specific functionalities through a standardized model context protocol, serving as the primary data source.
-
- ## Quick Start with Docker
-
- You can easily run this project using Docker without setting up a local Python environment.
-
- ### Requirements (Docker Desktop)
-
- Install Docker Desktop from the link below:
-
- - [Install Docker Desktop](https://www.docker.com/products/docker-desktop/)
-
- ### Run with Docker Compose
-
- 1. Navigate to the `dockers` directory
-
- ```bash
- cd dockers
- ```
-
- 2. Create a `.env` file with your API keys in the project root directory.
-
- ```bash
- cp .env.example .env
  ```
-
- Enter your obtained API keys in the `.env` file.
-
- (Note) Not all API keys are required. Only enter the ones you need.
- - `ANTHROPIC_API_KEY`: If you enter an Anthropic API key, you can use "claude-3-7-sonnet-latest", "claude-3-5-sonnet-latest", "claude-3-haiku-latest" models.
- - `OPENAI_API_KEY`: If you enter an OpenAI API key, you can use "gpt-4o", "gpt-4o-mini" models.
- - `LANGSMITH_API_KEY`: If you enter a LangSmith API key, you can use LangSmith tracing.
-
- ```bash
- ANTHROPIC_API_KEY=your_anthropic_api_key
- OPENAI_API_KEY=your_openai_api_key
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_TRACING=true
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
- LANGSMITH_PROJECT=LangGraph-MCP-Agents
  ```
-
- When using the login feature, set `USE_LOGIN` to `true` and enter `USER_ID` and `USER_PASSWORD`.
-
- ```bash
- USE_LOGIN=true
- USER_ID=admin
- USER_PASSWORD=admin123
- ```
-
- If you don't want to use the login feature, set `USE_LOGIN` to `false`.
-
- ```bash
- USE_LOGIN=false
- ```
-
- 3. Select the Docker Compose file that matches your system architecture.
-
- **AMD64/x86_64 Architecture (Intel/AMD Processors)**
-
  ```bash
- # Run container
- docker compose -f docker-compose.yaml up -d
  ```
-
- **ARM64 Architecture (Apple Silicon M1/M2/M3/M4)**
-
  ```bash
- # Run container
- docker compose -f docker-compose-mac.yaml up -d
  ```
-
- 4. Access the application in your browser at http://localhost:8585
-
- (Note)
- - If you need to modify ports or other settings, edit the docker-compose.yaml file before building.
-
- ## Install Directly from Source Code
-
- 1. Clone this repository
-
- ```bash
- git clone https://github.com/teddynote-lab/langgraph-mcp-agents.git
- cd langgraph-mcp-agents
- ```
-
- 2. Create a virtual environment and install dependencies using uv
-
- ```bash
- uv venv
- uv pip install -r requirements.txt
- source .venv/bin/activate  # For Windows: .venv\Scripts\activate
- ```
-
- 3. Create a `.env` file with your API keys (copy from `.env.example`)
-
- ```bash
- cp .env.example .env
- ```
-
- Enter your obtained API keys in the `.env` file.
-
- (Note) Not all API keys are required. Only enter the ones you need.
- - `ANTHROPIC_API_KEY`: If you enter an Anthropic API key, you can use "claude-3-7-sonnet-latest", "claude-3-5-sonnet-latest", "claude-3-haiku-latest" models.
- - `OPENAI_API_KEY`: If you enter an OpenAI API key, you can use "gpt-4o", "gpt-4o-mini" models.
- - `LANGSMITH_API_KEY`: If you enter a LangSmith API key, you can use LangSmith tracing.
- ```bash
- ANTHROPIC_API_KEY=your_anthropic_api_key
- OPENAI_API_KEY=your_openai_api_key
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_TRACING=true
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
- LANGSMITH_PROJECT=LangGraph-MCP-Agents
- ```
-
- 4. (New) Use the login/logout feature
-
- When using the login feature, set `USE_LOGIN` to `true` and enter `USER_ID` and `USER_PASSWORD`.
-
- ```bash
- USE_LOGIN=true
- USER_ID=admin
- USER_PASSWORD=admin123
- ```
-
- If you don't want to use the login feature, set `USE_LOGIN` to `false`.
-
- ```bash
- USE_LOGIN=false
- ```
-
- ## Usage
-
- 1. Start the Streamlit application.
-
- ```bash
- streamlit run app.py
- ```
-
- 2. The application will run in the browser and display the main interface.
-
- 3. Use the sidebar to add and configure MCP tools
-
- Visit [Smithery](https://smithery.ai/) to find useful MCP servers.
-
- First, select the tool you want to use.
-
- Click the COPY button in the JSON configuration on the right.
-
- ![copy from Smithery](./assets/smithery-copy-json.png)
-
- Paste the copied JSON string in the `Tool JSON` section.
-
- <img src="./assets/add-tools.png" alt="tool json" style="width: auto; height: auto;">
-
- Click the `Add Tool` button to add it to the "Registered Tools List" section.
-
- Finally, click the "Apply" button to apply the changes to initialize the agent with the new tools.
-
- <img src="./assets/apply-tool-configuration.png" alt="tool json" style="width: auto; height: auto;">
-
- 4. Check the agent's status.
-
- ![check status](./assets/check-status.png)
-
- 5. Interact with the ReAct agent that utilizes the configured MCP tools by asking questions in the chat interface.
-
- ![project demo](./assets/project-demo.png)
-
- ## Hands-on Tutorial
-
- For developers who want to learn more deeply about how MCP and LangGraph integration works, we provide a comprehensive Jupyter notebook tutorial:
-
- - Link: [MCP-HandsOn-KOR.ipynb](./MCP-HandsOn-KOR.ipynb)
-
- This hands-on tutorial covers:
-
- 1. **MCP Client Setup** - Learn how to configure and initialize the MultiServerMCPClient to connect to MCP servers
- 2. **Local MCP Server Integration** - Connect to locally running MCP servers via SSE and Stdio methods
- 3. **RAG Integration** - Access retriever tools using MCP for document retrieval capabilities
- 4. **Mixed Transport Methods** - Combine different transport protocols (SSE and Stdio) in a single agent
- 5. **LangChain Tools + MCP** - Integrate native LangChain tools alongside MCP tools
-
- This tutorial provides practical examples with step-by-step explanations that help you understand how to build and integrate MCP tools into LangGraph agents.
-
- ## License
-
- MIT License
-
- ## References
-
- - https://github.com/langchain-ai/langchain-mcp-adapters
  - MCP
  ---
 
+ # LLM-Agent-Chatbot-MCP
+
+ An agent framework based on MCP (Model Context Protocol), providing sophisticated reasoning capabilities and integration with a variety of MCP tools.
+
+ ## 🚀 Features
+
+ - **MCP server integration**: ships with several dedicated MCP servers
+ - **Agent framework**: a LangGraph-based agent system
+ - **Streamlit interface**: a modern web user interface
+ - **Multi-model support**: works with OpenAI, Anthropic, and other LLMs
+ - **Tool management**: automatic tool discovery and registration
+
+ ## 🏗️ Project Structure
+
  ```
+ .
+ ├── app.py                    # Main Streamlit application
+ ├── config.json               # MCP server configuration
+ ├── python-services/          # MCP server directory
+ │   ├── service1/             # RequestProcessor MCP server
+ │   │   ├── mcp_server.py     # MCP server implementation
+ │   │   └── requirements.txt  # Dependencies
+ │   ├── service2/             # DataAnalyzer MCP server
+ │   │   ├── mcp_server.py     # MCP server implementation
+ │   │   └── requirements.txt  # Dependencies
+ │   └── service3/             # MathComputer MCP server
+ │       ├── mcp_server.py     # MCP server implementation
+ │       └── requirements.txt  # Dependencies
+ ├── mcp_server_time.py        # Time-service MCP server
+ └── requirements.txt          # Main application dependencies
  ```
 
+ ## 🔧 MCP Servers
 
+ ### 1. RequestProcessor
+ - **Function**: general request processing and data analysis
+ - **Tools**: request handling, data validation, service info
 
+ ### 2. DataAnalyzer
+ - **Function**: data analysis and statistical computation
+ - **Tools**: data analysis, statistics, structure analysis
 
+ ### 3. MathComputer
+ - **Function**: mathematical computation and statistical functions
+ - **Tools**: basic arithmetic, advanced statistics, percentile calculation
 
+ ### 4. TimeService
+ - **Function**: timezone and time services
+ - **Tools**: multi-timezone time queries
 
+ ## 🚀 Quick Start
 
+ ### Install dependencies
  ```bash
+ pip install -r requirements.txt
+ cd python-services/service1 && pip install -r requirements.txt
+ cd ../service2 && pip install -r requirements.txt
+ cd ../service3 && pip install -r requirements.txt
  ```
 
+ ### Start the application
  ```bash
+ python app.py
  ```
 
+ The application starts at http://localhost:8501 and loads all MCP servers automatically (see the sketch below).
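A minimal sketch of what startup does with `config.json` — assuming the newer `langchain-mcp-adapters` API in which `get_tools()` is awaited directly rather than entering the client as a context manager (this matches the `app.py` change further down), with an illustrative model choice:

```python
import asyncio
import json

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI


async def build_agent():
    # config.json maps server names to command/args/transport entries
    with open("config.json") as f:
        mcp_config = json.load(f)

    client = MultiServerMCPClient(mcp_config)
    tools = await client.get_tools()  # spawns the stdio servers, discovers tools

    model = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)
    return create_react_agent(model, tools)


agent = asyncio.run(build_agent())
```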
 
+ ## 📖 Documentation
 
+ - [MCP server notes](python-services/MCP_README.md)
+ - [Docker deployment guide](DOCKER_README.md)
 
+ ## 🔮 Extensions
 
+ More specialized services can be built from the existing MCP server template, for example:
+ - File-processing services
+ - Database query services
+ - External API integration services
+ - Machine-learning inference services
__pycache__/app.cpython-310.pyc DELETED
Binary file (21.6 kB)
 
__pycache__/app.cpython-312.pyc DELETED
Binary file (38.6 kB)
 
__pycache__/utils.cpython-310.pyc DELETED
Binary file (6.32 kB)
 
__pycache__/utils.cpython-312.pyc CHANGED
Binary files a/__pycache__/utils.cpython-312.pyc and b/__pycache__/utils.cpython-312.pyc differ
 
app.py CHANGED
@@ -4,6 +4,7 @@ import nest_asyncio
  import json
  import os
  import platform
 
  if platform.system() == "Windows":
      asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
@@ -223,12 +224,10 @@ async def cleanup_mcp_client():
      """
      if "mcp_client" in st.session_state and st.session_state.mcp_client is not None:
          try:
-
-             await st.session_state.mcp_client.__aexit__(None, None, None)
              st.session_state.mcp_client = None
          except Exception as e:
              import traceback
-
              # st.warning(f"Error while terminating MCP client: {str(e)}")
              # st.warning(traceback.format_exc())
 
@@ -300,17 +299,31 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
      if not hasattr(callback_func, '_data_counter'):
          callback_func._data_counter = 0
 
-
 
-
 
-
-
      if isinstance(message_content, AIMessageChunk):
          content = message_content.content
 
-
-
          # If content is in list form (mainly occurs in Claude models)
          if isinstance(content, list) and len(content) > 0:
              message_chunk = content[0]
@@ -341,8 +354,6 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
          tool_call_info = message_content.tool_calls[0]
          accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
 
-
-
          with tool_placeholder.expander(
              "🔧 Tool Call Information", expanded=True
          ):
@@ -359,9 +370,7 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
          ):
              tool_call_info = message_content.invalid_tool_calls[0]
              accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
-         with tool_placeholder.expander(
-             "🔧 Tool Call Information (Invalid)", expanded=True
-         ):
              st.markdown("".join(accumulated_tool))
          # Process if tool_call_chunks attribute exists
          elif (
@@ -383,11 +392,7 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
                  f"```json\n{str(tool_call_chunk)}\n```\n"
              )
 
-
-
-             with tool_placeholder.expander(
-                 "🔧 Tool Call Information", expanded=True
-             ):
              st.markdown("".join(accumulated_tool))
          # Process if tool_calls exists in additional_kwargs (supports various model compatibility)
          elif (
@@ -397,11 +402,7 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
              tool_call_info = message_content.additional_kwargs["tool_calls"][0]
              accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
 
-
-
-             with tool_placeholder.expander(
-                 "🔧 Tool Call Information", expanded=True
-             ):
              st.markdown("".join(accumulated_tool))
          # Process if it's a tool message (tool response)
          elif isinstance(message_content, ToolMessage):
@@ -409,7 +410,13 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
              # Just store the tool name for later display
              if not hasattr(callback_func, '_pending_tool_completion'):
                  callback_func._pending_tool_completion = []
-             callback_func._pending_tool_completion.append(message_content.name or "Unknown Tool")
 
              # Convert streaming text to final result
              streaming_text_items = [item for item in accumulated_tool if item.startswith("\n📊 **Streaming Text**:")]
@@ -426,7 +433,17 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
 
              # Handle tool response content
              tool_content = message_content.content
-
 
              # Handle tool response content
              if isinstance(tool_content, str):
@@ -636,23 +653,90 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
                  accumulated_tool.append(
                      "\n```json\n" + str(tool_content) + "\n```\n"
                  )
              else:
                  # Non-string content
                  accumulated_tool.append(
                      "\n```json\n" + str(tool_content) + "\n```\n"
                  )
 
          # Show pending tool completion status after all streaming content
          if hasattr(callback_func, '_pending_tool_completion') and callback_func._pending_tool_completion:
              for tool_name in callback_func._pending_tool_completion:
                  accumulated_tool.append(f"\n✅ **Tool Completed**: {tool_name}\n")
              # Clear the pending list
              callback_func._pending_tool_completion = []
-
-
-
-         return None
-
      return callback_func, accumulated_text, accumulated_tool
 
 
@@ -759,12 +843,131 @@ async def initialize_session(mcp_config=None):
          # Load settings from config.json file
          mcp_config = load_config_from_json()
 
          client = MultiServerMCPClient(mcp_config)
-         await client.__aenter__()
-         tools = client.get_tools()
-         st.session_state.tool_count = len(tools)
          st.session_state.mcp_client = client
 
          # Initialize appropriate model based on selection
          selected_model = st.session_state.selected_model
 
@@ -785,12 +988,50 @@ async def initialize_session(mcp_config=None):
              temperature=0.1,
              max_tokens=OUTPUT_TOKEN_INFO[selected_model]["max_tokens"],
          )
-         agent = create_react_agent(
-             model,
-             tools,
-             checkpointer=MemorySaver(),
-             prompt=SYSTEM_PROMPT,
-         )
          st.session_state.agent = agent
          st.session_state.session_initialized = True
          return True
@@ -962,11 +1203,29 @@ with st.sidebar:
              for tool_name, tool_config in parsed_tool.items():
                  # Check URL field and set transport
                  if "url" in tool_config:
-                     # Set transport to "sse" if URL exists
-                     tool_config["transport"] = "sse"
-                     st.info(
-                         f"URL detected in '{tool_name}' tool, setting transport to 'sse'."
-                     )
 
                  elif "transport" not in tool_config:
                      # Set default "stdio" if URL doesn't exist and transport isn't specified
 
4
  import json
5
  import os
6
  import platform
7
+ import time
8
 
9
  if platform.system() == "Windows":
10
  asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
 
224
  """
225
  if "mcp_client" in st.session_state and st.session_state.mcp_client is not None:
226
  try:
227
+ # New version doesn't use async context managers, just set to None
 
228
  st.session_state.mcp_client = None
229
  except Exception as e:
230
  import traceback
 
231
  # st.warning(f"Error while terminating MCP client: {str(e)}")
232
  # st.warning(traceback.format_exc())
233
 
 
299
  if not hasattr(callback_func, '_data_counter'):
300
  callback_func._data_counter = 0
301
 
302
+ # Initialize tool result tracking
303
+ if not hasattr(callback_func, '_tool_results'):
304
+ callback_func._tool_results = {}
305
 
306
+ # Check if this is a tool result message
307
+ if isinstance(message_content, dict) and 'tool_results' in message_content:
308
+ tool_results = message_content['tool_results']
309
+ for tool_name, result in tool_results.items():
310
+ callback_func._tool_results[tool_name] = result
311
 
312
+ # Check if this is a tool call completion message
313
+ if isinstance(message_content, dict) and 'tool_calls' in message_content:
314
+ tool_calls = message_content['tool_calls']
315
+ for tool_call in tool_calls:
316
+ if isinstance(tool_call, dict) and 'name' in tool_call:
317
+ tool_name = tool_call['name']
318
+ if 'result' in tool_call:
319
+ # Store tool result
320
+ callback_func._tool_results[tool_name] = tool_call['result']
321
+
322
+ # Handle different message types
323
  if isinstance(message_content, AIMessageChunk):
324
+ # Process AIMessageChunk content
325
  content = message_content.content
326
 
 
 
327
  # If content is in list form (mainly occurs in Claude models)
328
  if isinstance(content, list) and len(content) > 0:
329
  message_chunk = content[0]
 
354
  tool_call_info = message_content.tool_calls[0]
355
  accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
356
 
 
 
357
  with tool_placeholder.expander(
358
  "🔧 Tool Call Information", expanded=True
359
  ):
 
370
  ):
371
  tool_call_info = message_content.invalid_tool_calls[0]
372
  accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
373
+ with tool_placeholder.expander("🔧 Tool Call Information (Invalid)", expanded=True):
 
 
374
  st.markdown("".join(accumulated_tool))
375
  # Process if tool_call_chunks attribute exists
376
  elif (
 
392
  f"```json\n{str(tool_call_chunk)}\n```\n"
393
  )
394
 
395
+ with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
 
 
 
 
396
  st.markdown("".join(accumulated_tool))
397
  # Process if tool_calls exists in additional_kwargs (supports various model compatibility)
398
  elif (
 
402
  tool_call_info = message_content.additional_kwargs["tool_calls"][0]
403
  accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
404
 
405
+ with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
 
 
 
 
406
  st.markdown("".join(accumulated_tool))
407
  # Process if it's a tool message (tool response)
408
  elif isinstance(message_content, ToolMessage):
 
410
  # Just store the tool name for later display
411
  if not hasattr(callback_func, '_pending_tool_completion'):
412
  callback_func._pending_tool_completion = []
413
+
414
+ tool_name = message_content.name or "Unknown Tool"
415
+ callback_func._pending_tool_completion.append(tool_name)
416
+
417
+ # Debug: Log tool message received
418
+ accumulated_tool.append(f"\n🔍 **Tool Message Received**: {tool_name}\n")
419
+ accumulated_tool.append(f"📋 **Message Type**: {type(message_content).__name__}\n")
420
 
421
  # Convert streaming text to final result
422
  streaming_text_items = [item for item in accumulated_tool if item.startswith("\n📊 **Streaming Text**:")]
 
433
 
434
  # Handle tool response content
435
  tool_content = message_content.content
436
+
437
+ # Debug: Log tool content
438
+ accumulated_tool.append(f"📄 **Tool Content Type**: {type(tool_content).__name__}\n")
439
+ if isinstance(tool_content, str):
440
+ accumulated_tool.append(f"📏 **Content Length**: {len(tool_content)} characters\n")
441
+ if len(tool_content) > 100:
442
+ accumulated_tool.append(f"📝 **Content Preview**: {tool_content[:100]}...\n")
443
+ else:
444
+ accumulated_tool.append(f"📝 **Content**: {tool_content}\n")
445
+ else:
446
+ accumulated_tool.append(f"📝 **Content**: {str(tool_content)[:200]}...\n")
447
 
448
  # Handle tool response content
449
  if isinstance(tool_content, str):
 
653
  accumulated_tool.append(
654
  "\n```json\n" + str(tool_content) + "\n```\n"
655
  )
656
+
657
+ # Capture tool result for display
658
+ if hasattr(callback_func, '_pending_tool_completion') and callback_func._pending_tool_completion:
659
+ # Get the last completed tool name
660
+ last_tool_name = callback_func._pending_tool_completion[-1] if callback_func._pending_tool_completion else "Unknown Tool"
661
+
662
+ # Store the tool result
663
+ if not hasattr(callback_func, '_tool_results'):
664
+ callback_func._tool_results = {}
665
+ callback_func._tool_results[last_tool_name] = tool_content
666
+
667
+ # Create tool result for display
668
+ callback_func._last_tool_result = {
669
+ 'name': last_tool_name,
670
+ 'output': tool_content
671
+ }
672
  else:
673
  # Non-string content
674
  accumulated_tool.append(
675
  "\n```json\n" + str(tool_content) + "\n```\n"
676
  )
677
+
678
+ # Capture tool result for non-string content too
679
+ if hasattr(callback_func, '_pending_tool_completion') and callback_func._pending_tool_completion:
680
+ last_tool_name = callback_func._pending_tool_completion[-1] if callback_func._pending_tool_completion else "Unknown Tool"
681
+
682
+ if not hasattr(callback_func, '_tool_results'):
683
+ callback_func._tool_results = {}
684
+ callback_func._tool_results[last_tool_name] = tool_content
685
+
686
+ callback_func._last_tool_result = {
687
+ 'name': last_tool_name,
688
+ 'output': tool_content
689
+ }
690
 
691
  # Show pending tool completion status after all streaming content
692
  if hasattr(callback_func, '_pending_tool_completion') and callback_func._pending_tool_completion:
693
  for tool_name in callback_func._pending_tool_completion:
694
  accumulated_tool.append(f"\n✅ **Tool Completed**: {tool_name}\n")
695
+
696
+ # Check if we have a result for this tool
697
+ if hasattr(callback_func, '_tool_results') and tool_name in callback_func._tool_results:
698
+ tool_result = callback_func._tool_results[tool_name]
699
+ callback_func._last_tool_result = {
700
+ 'name': tool_name,
701
+ 'output': tool_result
702
+ }
703
+ accumulated_tool.append(f"📊 **Tool Output Captured**: {len(str(tool_result))} characters\n")
704
+ else:
705
+ accumulated_tool.append(f"⚠️ **No Tool Output Captured** for {tool_name}\n")
706
+ # Try to create a basic result structure
707
+ callback_func._last_tool_result = {
708
+ 'name': tool_name,
709
+ 'output': f"Tool {tool_name} completed but output was not captured"
710
+ }
711
+
712
  # Clear the pending list
713
  callback_func._pending_tool_completion = []
714
+
715
+ # Enhanced tool result display for MCP tools
716
+ if hasattr(callback_func, '_last_tool_result') and callback_func._last_tool_result:
717
+ tool_result = callback_func._last_tool_result
718
+ if isinstance(tool_result, dict):
719
+ # Extract tool name and result
720
+ tool_name = tool_result.get('name', 'Unknown Tool')
721
+ tool_output = tool_result.get('output', tool_result.get('result', tool_result.get('content', str(tool_result))))
722
+
723
+ accumulated_tool.append(f"\n🔧 **Tool Result - {tool_name}**:\n")
724
+ if isinstance(tool_output, str) and tool_output.strip():
725
+ # Format the output nicely
726
+ if len(tool_output) > 200:
727
+ accumulated_tool.append(f"```\n{tool_output[:200]}...\n```\n")
728
+ accumulated_tool.append(f"📏 *Output truncated. Full length: {len(tool_output)} characters*\n")
729
+ else:
730
+ accumulated_tool.append(f"```\n{tool_output}\n```\n")
731
+ else:
732
+ accumulated_tool.append(f"```json\n{tool_output}\n```\n")
733
+ else:
734
+ accumulated_tool.append(f"\n🔧 **Tool Result**:\n```\n{str(tool_result)}\n```\n")
735
+
736
+ # Clear the tool result after displaying
737
+ callback_func._last_tool_result = None
738
+
739
+ # Return the callback function and accumulated lists
740
  return callback_func, accumulated_text, accumulated_tool
741
 
742
 
 
843
  # Load settings from config.json file
844
  mcp_config = load_config_from_json()
845
 
846
+ # Validate MCP configuration before connecting
847
+ st.info("🔍 Validating MCP server configurations...")
848
+ config_errors = []
849
+ for server_name, server_config in mcp_config.items():
850
+ st.write(f"📋 Checking {server_name}...")
851
+
852
+ # Check required fields
853
+ if "transport" not in server_config:
854
+ config_errors.append(f"{server_name}: Missing 'transport' field")
855
+ st.error(f"❌ {server_name}: Missing 'transport' field")
856
+ elif server_config["transport"] not in ["stdio", "sse", "http", "streamable_http", "websocket"]:
857
+ config_errors.append(f"{server_name}: Invalid transport '{server_config['transport']}'")
858
+ st.error(f"❌ {server_name}: Invalid transport '{server_config['transport']}'")
859
+
860
+ if "url" in server_config:
861
+ if "transport" in server_config and server_config["transport"] == "stdio":
862
+ config_errors.append(f"{server_name}: Cannot use 'stdio' transport with URL")
863
+ st.error(f"❌ {server_name}: Cannot use 'stdio' transport with URL")
864
+ elif "command" not in server_config:
865
+ config_errors.append(f"{server_name}: Missing 'command' field for stdio transport")
866
+ st.error(f"❌ {server_name}: Missing 'command' field for stdio transport")
867
+ elif "args" not in server_config:
868
+ config_errors.append(f"{server_name}: Missing 'args' field for stdio transport")
869
+ st.error(f"❌ {server_name}: Missing 'args' field for stdio transport")
870
+
871
+ if config_errors:
872
+ st.error("🚫 Configuration validation failed!")
873
+ st.error("Please fix the following issues:")
874
+ for error in config_errors:
875
+ st.error(f" • {error}")
876
+ return False
877
+
878
+ st.success("✅ MCP configuration validation passed!")
879
+
880
  client = MultiServerMCPClient(mcp_config)
881
+
882
+ # Get tools with error handling for malformed schemas
883
+ try:
884
+ tools = await client.get_tools()
885
+ st.session_state.tool_count = len(tools)
886
+ st.success(f"✅ Successfully loaded {len(tools)} tools from all MCP servers")
887
+ except Exception as e:
888
+ st.error(f"❌ Error loading MCP tools: {str(e)}")
889
+ st.error(f"🔍 Error type: {type(e).__name__}")
890
+ st.error(f"📋 Full error details: {repr(e)}")
891
+ st.warning("🔄 Attempting to load tools individually to identify problematic servers...")
892
+
893
+ # Try to load tools from each server individually
894
+ tools = []
895
+ failed_servers = []
896
+
897
+ for server_name, server_config in mcp_config.items():
898
+ try:
899
+ st.info(f"🔄 Testing connection to {server_name}...")
900
+ st.json(server_config) # Show server configuration
901
+
902
+ # Create a single server client to test
903
+ single_client = MultiServerMCPClient({server_name: server_config})
904
+ server_tools = await single_client.get_tools()
905
+ tools.extend(server_tools)
906
+ st.success(f"✅ Loaded {len(server_tools)} tools from {server_name}")
907
+
908
+ except Exception as server_error:
909
+ error_msg = f"❌ Failed to load tools from {server_name}"
910
+ st.error(error_msg)
911
+ st.error(f" Error: {str(server_error)}")
912
+ st.error(f" Type: {type(server_error).__name__}")
913
+ st.error(f" Details: {repr(server_error)}")
914
+ failed_servers.append(server_name)
915
+ continue
916
+
917
+ # Summary of results
918
+ if failed_servers:
919
+ st.error(f"🚫 Failed servers: {', '.join(failed_servers)}")
920
+ st.error("💡 Check server configurations and ensure servers are running")
921
+
922
+ if not tools:
923
+ st.error("❌ No tools could be loaded from any MCP server. Please check your server configurations.")
924
+ st.error("🔧 Troubleshooting tips:")
925
+ st.error(" 1. Ensure all MCP servers are running")
926
+ st.error(" 2. Check network connectivity and ports")
927
+ st.error(" 3. Verify server configurations in config.json")
928
+ st.error(" 4. Check server logs for errors")
929
+ return False
930
+ else:
931
+ st.success(f"✅ Successfully loaded {len(tools)} tools from working servers")
932
+ st.warning(f"⚠️ Some servers failed: {', '.join(failed_servers)}" if failed_servers else "✅ All servers loaded successfully")
933
+
934
  st.session_state.mcp_client = client
935
 
936
+ # Validate and filter tools to remove malformed schemas
937
+ def validate_tool(tool):
938
+ try:
939
+ # Try to access the tool's schema to validate it
940
+ if hasattr(tool, 'schema'):
941
+ # This will trigger schema validation
942
+ _ = tool.schema
943
+
944
+ # Additional validation: check if tool can be converted to OpenAI format
945
+ # This catches the FileData reference issue
946
+ try:
947
+ from langchain_core.utils.function_calling import convert_to_openai_tool
948
+ _ = convert_to_openai_tool(tool)
949
+ return True
950
+ except Exception as schema_error:
951
+ # Any conversion failure (e.g. the FileData/Reference issue) marks the tool as invalid
952
+ st.warning(f"⚠️ Tool '{getattr(tool, 'name', 'unknown')}' has malformed schema: {str(schema_error)}")
953
+ return False
954
+
955
+ except Exception as e:
956
+ st.warning(f"⚠️ Tool '{getattr(tool, 'name', 'unknown')}' validation failed: {str(e)}")
957
+ return False
958
+
959
+ # Filter out invalid tools
960
+ valid_tools = [tool for tool in tools if validate_tool(tool)]
961
+ if len(valid_tools) < len(tools):
962
+ st.warning(f"⚠️ Filtered out {len(tools) - len(valid_tools)} tools with malformed schemas")
963
+ tools = valid_tools
964
+ st.session_state.tool_count = len(tools)
965
+
966
+ # Ensure we have at least some valid tools
967
+ if not tools:
968
+ st.error("❌ No valid tools could be loaded. Please check your MCP server configurations.")
969
+ return False
970
+
971
  # Initialize appropriate model based on selection
972
  selected_model = st.session_state.selected_model
973
 
 
988
  temperature=0.1,
989
  max_tokens=OUTPUT_TOKEN_INFO[selected_model]["max_tokens"],
990
  )
991
+
992
+ # Create agent with error handling
993
+ try:
994
+ agent = create_react_agent(
995
+ model,
996
+ tools,
997
+ checkpointer=MemorySaver(),
998
+ prompt=SYSTEM_PROMPT,
999
+ )
1000
+ except Exception as agent_error:
1001
+ st.error(f"❌ Failed to create agent: {str(agent_error)}")
1002
+ st.warning("🔄 Attempting to create agent with individual tool validation...")
1003
+
1004
+ # Try to create agent with tools one by one
1005
+ working_tools = []
1006
+ for i, tool in enumerate(tools):
1007
+ try:
1008
+ test_agent = create_react_agent(
1009
+ model,
1010
+ [tool],
1011
+ checkpointer=MemorySaver(),
1012
+ prompt=SYSTEM_PROMPT,
1013
+ )
1014
+ working_tools.append(tool)
1015
+ st.success(f"✅ Tool {i+1} validated successfully")
1016
+ except Exception as tool_error:
1017
+ st.error(f"❌ Tool {i+1} failed validation: {str(tool_error)}")
1018
+ continue
1019
+
1020
+ if not working_tools:
1021
+ st.error("❌ No tools could be used to create the agent. Please check your MCP server configurations.")
1022
+ return False
1023
+
1024
+ # Create agent with only working tools
1025
+ tools = working_tools
1026
+ st.session_state.tool_count = len(tools)
1027
+ agent = create_react_agent(
1028
+ model,
1029
+ tools,
1030
+ checkpointer=MemorySaver(),
1031
+ prompt=SYSTEM_PROMPT,
1032
+ )
1033
+ st.success(f"✅ Agent created successfully with {len(tools)} working tools")
1034
+
1035
  st.session_state.agent = agent
1036
  st.session_state.session_initialized = True
1037
  return True
 
1203
  for tool_name, tool_config in parsed_tool.items():
1204
  # Check URL field and set transport
1205
  if "url" in tool_config:
1206
+ # When a URL is present, default the transport to "streamable_http" (preferred); otherwise honor the configured value
1207
+ if "transport" not in tool_config:
1208
+ tool_config["transport"] = "streamable_http"
1209
+ st.info(
1210
+ f"URL detected in '{tool_name}' tool, setting transport to 'streamable_http' (recommended)."
1211
+ )
1212
+ elif tool_config["transport"] == "sse":
1213
+ st.info(
1214
+ f"'{tool_name}' tool using SSE transport (deprecated but still supported)."
1215
+ )
1216
+ elif tool_config["transport"] == "streamable_http":
1217
+ st.success(
1218
+ f"'{tool_name}' tool using Streamable HTTP transport (recommended)."
1219
+ )
1220
+ elif tool_config["transport"] == "http":
1221
+ st.warning(
1222
+ f"'{tool_name}' tool using HTTP transport (updating to 'streamable_http' for better compatibility)."
1223
+ )
1224
+ tool_config["transport"] = "streamable_http"
1225
+ elif tool_config["transport"] == "websocket":
1226
+ st.info(
1227
+ f"'{tool_name}' tool using WebSocket transport."
1228
+ )
1229
 
1230
  elif "transport" not in tool_config:
1231
  # Set default "stdio" if URL doesn't exist and transport isn't specified
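For reference, the two configuration shapes this logic accepts look like the sketch below; the server names are illustrative. stdio entries must carry "command"/"args", while URL entries should use "streamable_http" (or the deprecated "sse"):

```json
{
  "local_tool": {
    "command": "python",
    "args": ["./my_mcp_server.py"],
    "transport": "stdio"
  },
  "remote_tool": {
    "url": "http://localhost:8000/mcp/",
    "transport": "streamable_http"
  }
}
```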
assets/add-tools.png DELETED

Git LFS Details

  • SHA256: 26e391479f22766ae27a514c0024cc904a2c7f6284ec197a2ffd8818ff4bf891
  • Pointer size: 131 Bytes
  • Size of remote file: 142 kB
assets/apply-tool-configuration.png DELETED

Git LFS Details

  • SHA256: de77834acdeb7ae839cc708f96ee830e4449ef7aef288960f875ef27f5f8a03b
  • Pointer size: 130 Bytes
  • Size of remote file: 84.8 kB
assets/architecture.png DELETED

Git LFS Details

  • SHA256: 2558ee4532c9904d79d749d2d7585cf189c3eb4bca47b2878cc31071408722f7
  • Pointer size: 131 Bytes
  • Size of remote file: 724 kB
assets/check-status.png DELETED

Git LFS Details

  • SHA256: 20ae6b4b0142450ff795857a72f447d4d65442c836071d6a3403b595d18e4e7d
  • Pointer size: 130 Bytes
  • Size of remote file: 90.2 kB
assets/project-demo.png DELETED

Git LFS Details

  • SHA256: 6ee62d0775f50b681d9f2a870f6015ce6fafe78ac36bfeb63143f3d7b7e4b0ba
  • Pointer size: 131 Bytes
  • Size of remote file: 442 kB
assets/smithery-copy-json.png DELETED

Git LFS Details

  • SHA256: 33b454b9db3d2f07e6f37ef2f1762de6b56ce60ee3b8562f3bfc697197cfc54e
  • Pointer size: 131 Bytes
  • Size of remote file: 456 kB
assets/smithery-json.png DELETED

Git LFS Details

  • SHA256: 1ad0c4900faa1e3b38d5e6ee74a1d2a239f33cf2a5d041461aedef067bea6af6
  • Pointer size: 131 Bytes
  • Size of remote file: 150 kB
config.json CHANGED
@@ -6,6 +6,27 @@
6
  ],
7
  "transport": "stdio"
8
  },
9
  "qa": {
10
  "transport": "sse",
11
  "url": "http://10.15.56.148:9230/sse"
@@ -13,5 +34,9 @@
13
  "review_generate": {
14
  "transport": "sse",
15
  "url": "http://10.15.56.148:8000/review"
  }
17
  }
 
6
  ],
7
  "transport": "stdio"
8
  },
9
+ "request_processor": {
10
+ "command": "python",
11
+ "args": [
12
+ "./python-services/service1/mcp_server.py"
13
+ ],
14
+ "transport": "stdio"
15
+ },
16
+ "data_analyzer": {
17
+ "command": "python",
18
+ "args": [
19
+ "./python-services/service2/mcp_server.py"
20
+ ],
21
+ "transport": "stdio"
22
+ },
23
+ "math_computer": {
24
+ "command": "python",
25
+ "args": [
26
+ "./python-services/service3/mcp_server.py"
27
+ ],
28
+ "transport": "stdio"
29
+ },
30
  "qa": {
31
  "transport": "sse",
32
  "url": "http://10.15.56.148:9230/sse"
 
34
  "review_generate": {
35
  "transport": "sse",
36
  "url": "http://10.15.56.148:8000/review"
37
+ },
38
+ "机械问题": {
39
+ "transport": "streamable_http",
40
+ "url": "http://127.0.0.1:7860/gradio_api/mcp/"
41
  }
42
  }
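With this merged configuration, all of the servers above can be loaded through one client. Below is a minimal sketch mirroring how app.py drives MultiServerMCPClient; it assumes config.json sits in the working directory and langchain-mcp-adapters is installed:

```python
import asyncio
import json

from langchain_mcp_adapters.client import MultiServerMCPClient


async def main() -> None:
    # One entry per MCP server: stdio entries spawn a subprocess,
    # sse/streamable_http entries connect to a running endpoint.
    with open("config.json", encoding="utf-8") as f:
        mcp_config = json.load(f)

    client = MultiServerMCPClient(mcp_config)
    tools = await client.get_tools()
    print(f"Loaded {len(tools)} tools: {[t.name for t in tools]}")


asyncio.run(main())
```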
docker-compose.yml ADDED
@@ -0,0 +1,24 @@
1
+ version: '3.8'
2
+
3
+ services:
4
+ # Main Streamlit application
5
+ streamlit-app:
6
+ build: .
7
+ container_name: streamlit-mcp-app
8
+ ports:
9
+ - "8501:8501"
10
+ environment:
11
+ - USE_LOGIN=false
12
+ volumes:
13
+ - ./config.json:/app/config.json:ro
14
+ - ./.env:/app/.env:ro
15
+ networks:
16
+ - mcp-network
17
+ restart: unless-stopped
18
+
19
+ networks:
20
+ mcp-network:
21
+ driver: bridge
22
+
23
+ volumes:
24
+ app-data:
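Assuming Docker Compose v2 is available, a typical way to bring this stack up and follow its logs looks like this:

```bash
# Build the image and start the Streamlit app in the background
docker compose up -d --build

# Tail the application logs (service name from docker-compose.yml)
docker compose logs -f streamlit-app

# The app is then reachable at http://localhost:8501
```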
docker-config.json ADDED
@@ -0,0 +1,33 @@
1
+ {
2
+ "get_current_time": {
3
+ "command": "python",
4
+ "args": [
5
+ "./mcp_server_time.py"
6
+ ],
7
+ "transport": "stdio"
8
+ },
9
+ "python_service_1": {
10
+ "transport": "http",
11
+ "url": "http://python-service-1:8000/process"
12
+ },
13
+ "python_service_2": {
14
+ "transport": "http",
15
+ "url": "http://python-service-2:8000/analyze"
16
+ },
17
+ "python_service_3": {
18
+ "transport": "http",
19
+ "url": "http://python-service-3:8000/compute"
20
+ },
21
+ "qa": {
22
+ "transport": "sse",
23
+ "url": "http://10.15.56.148:9230/sse"
24
+ },
25
+ "review_generate": {
26
+ "transport": "sse",
27
+ "url": "http://10.15.56.148:8000/review"
28
+ },
29
+ "机械问题": {
30
+ "transport": "streamable_http",
31
+ "url": "http://127.0.0.1:7860/gradio_api/mcp/"
32
+ }
33
+ }
dockers/.env.example DELETED
@@ -1,10 +0,0 @@
1
- ANTHROPIC_API_KEY=sk-ant-api03...
2
- OPENAI_API_KEY=sk-proj-o0gulL2J2a...
3
- LANGSMITH_API_KEY=lsv2_sk_ed22...
4
- LANGSMITH_TRACING=true
5
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
6
- LANGSMITH_PROJECT=LangGraph-MCP-Agents
7
-
8
- USE_LOGIN=true
9
- USER_ID=admin
10
- USER_PASSWORD=admin1234
dockers/config.json DELETED
@@ -1,9 +0,0 @@
1
- {
2
- "get_current_time": {
3
- "command": "python",
4
- "args": [
5
- "./mcp_server_time.py"
6
- ],
7
- "transport": "stdio"
8
- }
9
- }
dockers/docker-compose-KOR-mac.yaml DELETED
@@ -1,36 +0,0 @@
1
- services:
2
- app:
3
- build:
4
- context: .
5
- dockerfile: Dockerfile
6
- args:
7
- BUILDPLATFORM: ${BUILDPLATFORM:-linux/arm64}
8
- TARGETPLATFORM: "linux/arm64"
9
- image: teddylee777/langgraph-mcp-agents:KOR-0.2.1
10
- platform: "linux/arm64"
11
- ports:
12
- - "8585:8585"
13
- env_file:
14
- - ./.env
15
- environment:
16
- - PYTHONUNBUFFERED=1
17
- # Mac-specific optimizations
18
- - NODE_OPTIONS=--max_old_space_size=2048
19
- # Delegated file system performance for macOS
20
- - PYTHONMALLOC=malloc
21
- - USE_LOGIN=${USE_LOGIN:-false}
22
- - USER_ID=${USER_ID:-}
23
- - USER_PASSWORD=${USER_PASSWORD:-}
24
- - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
25
- - OPENAI_API_KEY=${OPENAI_API_KEY}
26
- - NODE_OPTIONS=${NODE_OPTIONS:-}
27
- volumes:
28
- - ./data:/app/data:cached
29
- - ./config.json:/app/config.json
30
- restart: unless-stopped
31
- healthcheck:
32
- test: ["CMD", "curl", "--fail", "http://localhost:8585/_stcore/health"]
33
- interval: 30s
34
- timeout: 10s
35
- retries: 3
36
- start_period: 40s
dockers/docker-compose-KOR.yaml DELETED
@@ -1,33 +0,0 @@
1
- services:
2
- app:
3
- build:
4
- context: .
5
- dockerfile: Dockerfile
6
- args:
7
- BUILDPLATFORM: ${BUILDPLATFORM:-linux/amd64}
8
- TARGETPLATFORM: ${TARGETPLATFORM:-linux/amd64}
9
- image: teddylee777/langgraph-mcp-agents:KOR-0.2.1
10
- platform: ${TARGETPLATFORM:-linux/amd64}
11
- ports:
12
- - "8585:8585"
13
- volumes:
14
- - ./.env:/app/.env:ro
15
- - ./data:/app/data:rw
16
- - ./config.json:/app/config.json
17
- env_file:
18
- - ./.env
19
- environment:
20
- - PYTHONUNBUFFERED=1
21
- - USE_LOGIN=${USE_LOGIN:-false}
22
- - USER_ID=${USER_ID:-}
23
- - USER_PASSWORD=${USER_PASSWORD:-}
24
- - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
25
- - OPENAI_API_KEY=${OPENAI_API_KEY}
26
- - NODE_OPTIONS=${NODE_OPTIONS:-}
27
- restart: unless-stopped
28
- healthcheck:
29
- test: ["CMD", "curl", "--fail", "http://localhost:8585/_stcore/health"]
30
- interval: 30s
31
- timeout: 10s
32
- retries: 3
33
- start_period: 40s
dockers/docker-compose-mac.yaml DELETED
@@ -1,36 +0,0 @@
1
- services:
2
- app:
3
- build:
4
- context: .
5
- dockerfile: Dockerfile
6
- args:
7
- BUILDPLATFORM: ${BUILDPLATFORM:-linux/arm64}
8
- TARGETPLATFORM: "linux/arm64"
9
- image: teddylee777/langgraph-mcp-agents:0.2.1
10
- platform: "linux/arm64"
11
- ports:
12
- - "8585:8585"
13
- env_file:
14
- - ./.env
15
- environment:
16
- - PYTHONUNBUFFERED=1
17
- # Mac-specific optimizations
18
- - NODE_OPTIONS=--max_old_space_size=2048
19
- # Delegated file system performance for macOS
20
- - PYTHONMALLOC=malloc
21
- - USE_LOGIN=${USE_LOGIN:-false}
22
- - USER_ID=${USER_ID:-}
23
- - USER_PASSWORD=${USER_PASSWORD:-}
24
- - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
25
- - OPENAI_API_KEY=${OPENAI_API_KEY}
26
- - NODE_OPTIONS=${NODE_OPTIONS:-}
27
- volumes:
28
- - ./data:/app/data:cached
29
- - ./config.json:/app/config.json
30
- restart: unless-stopped
31
- healthcheck:
32
- test: ["CMD", "curl", "--fail", "http://localhost:8585/_stcore/health"]
33
- interval: 30s
34
- timeout: 10s
35
- retries: 3
36
- start_period: 40s
dockers/docker-compose.yaml DELETED
@@ -1,34 +0,0 @@
1
- services:
2
- app:
3
- build:
4
- context: .
5
- dockerfile: Dockerfile
6
- args:
7
- BUILDPLATFORM: ${BUILDPLATFORM:-linux/amd64}
8
- TARGETPLATFORM: ${TARGETPLATFORM:-linux/amd64}
9
- image: teddylee777/langgraph-mcp-agents:0.2.1
10
- platform: ${TARGETPLATFORM:-linux/amd64}
11
- ports:
12
- - "8585:8585"
13
- volumes:
14
- # Optionally, you can remove this volume if you don’t need the file at runtime
15
- - ./.env:/app/.env:ro
16
- - ./data:/app/data:rw
17
- - ./config.json:/app/config.json
18
- env_file:
19
- - ./.env
20
- environment:
21
- - PYTHONUNBUFFERED=1
22
- - USE_LOGIN=${USE_LOGIN:-false}
23
- - USER_ID=${USER_ID:-}
24
- - USER_PASSWORD=${USER_PASSWORD:-}
25
- - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
26
- - OPENAI_API_KEY=${OPENAI_API_KEY}
27
- - NODE_OPTIONS=${NODE_OPTIONS:-}
28
- restart: unless-stopped
29
- healthcheck:
30
- test: ["CMD", "curl", "--fail", "http://localhost:8585/_stcore/health"]
31
- interval: 30s
32
- timeout: 10s
33
- retries: 3
34
- start_period: 40s
mcp_server_local.py DELETED
@@ -1,36 +0,0 @@
1
- from mcp.server.fastmcp import FastMCP
2
-
3
- # Initialize FastMCP server with configuration
4
- mcp = FastMCP(
5
- "Weather", # Name of the MCP server
6
- instructions="You are a weather assistant that can answer questions about the weather in a given location.", # Instructions for the LLM on how to use this tool
7
- host="0.0.0.0", # Host address (0.0.0.0 allows connections from any IP)
8
- port=8005, # Port number for the server
9
- )
10
-
11
-
12
- @mcp.tool()
13
- async def get_weather(location: str) -> str:
14
- """
15
- Get current weather information for the specified location.
16
-
17
- This function simulates a weather service by returning a fixed response.
18
- In a production environment, this would connect to a real weather API.
19
-
20
- Args:
21
- location (str): The name of the location (city, region, etc.) to get weather for
22
-
23
- Returns:
24
- str: A string containing the weather information for the specified location
25
- """
26
- # Return a mock weather response
27
- # In a real implementation, this would call a weather API
28
- return f"It's always Sunny in {location}"
29
-
30
-
31
- if __name__ == "__main__":
32
- # Start the MCP server with stdio transport
33
- # stdio transport allows the server to communicate with clients
34
- # through standard input/output streams, making it suitable for
35
- # local development and testing
36
- mcp.run(transport="stdio")
mcp_server_rag.py DELETED
@@ -1,89 +0,0 @@
1
- from langchain_text_splitters import RecursiveCharacterTextSplitter
2
- from langchain_community.document_loaders import PyMuPDFLoader
3
- from langchain_community.vectorstores import FAISS
4
- from langchain_openai import OpenAIEmbeddings
5
- from mcp.server.fastmcp import FastMCP
6
- from dotenv import load_dotenv
7
- from typing import Any
8
-
9
- # Load environment variables from .env file (contains API keys)
10
- load_dotenv(override=True)
11
-
12
-
13
- def create_retriever() -> Any:
14
- """
15
- Creates and returns a document retriever based on FAISS vector store.
16
-
17
- This function performs the following steps:
18
- 1. Loads a PDF document(place your PDF file in the data folder)
19
- 2. Splits the document into manageable chunks
20
- 3. Creates embeddings for each chunk
21
- 4. Builds a FAISS vector store from the embeddings
22
- 5. Returns a retriever interface to the vector store
23
-
24
- Returns:
25
- Any: A retriever object that can be used to query the document database
26
- """
27
- # Step 1: Load Documents
28
- # PyMuPDFLoader is used to extract text from PDF files
29
- loader = PyMuPDFLoader("data/sample.pdf")
30
- docs = loader.load()
31
-
32
- # Step 2: Split Documents
33
- # Recursive splitter divides documents into chunks with some overlap to maintain context
34
- text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
35
- split_documents = text_splitter.split_documents(docs)
36
-
37
- # Step 3: Create Embeddings
38
- # OpenAI's text-embedding-3-small model is used to convert text chunks into vector embeddings
39
- embeddings = OpenAIEmbeddings(model="text-embedding-3-small")
40
-
41
- # Step 4: Create Vector Database
42
- # FAISS is an efficient similarity search library that stores vector embeddings
43
- # and allows for fast retrieval of similar vectors
44
- vectorstore = FAISS.from_documents(documents=split_documents, embedding=embeddings)
45
-
46
- # Step 5: Create Retriever
47
- # The retriever provides an interface to search the vector database
48
- # and retrieve documents relevant to a query
49
- retriever = vectorstore.as_retriever()
50
- return retriever
51
-
52
-
53
- # Initialize FastMCP server with configuration
54
- mcp = FastMCP(
55
- "Retriever",
56
- instructions="A Retriever that can retrieve information from the database.",
57
- host="0.0.0.0",
58
- port=8005,
59
- )
60
-
61
-
62
- @mcp.tool()
63
- async def retrieve(query: str) -> str:
64
- """
65
- Retrieves information from the document database based on the query.
66
-
67
- This function creates a retriever, queries it with the provided input,
68
- and returns the concatenated content of all retrieved documents.
69
-
70
- Args:
71
- query (str): The search query to find relevant information
72
-
73
- Returns:
74
- str: Concatenated text content from all retrieved documents
75
- """
76
- # Create a new retriever instance for each query
77
- # Note: In production, consider caching the retriever for better performance
78
- retriever = create_retriever()
79
-
80
- # Use the invoke() method to get relevant documents based on the query
81
- retrieved_docs = retriever.invoke(query)
82
-
83
- # Join all document contents with newlines and return as a single string
84
- return "\n".join([doc.page_content for doc in retrieved_docs])
85
-
86
-
87
- if __name__ == "__main__":
88
- # Run the MCP server with stdio transport for integration with MCP clients
89
- mcp.run(transport="stdio")
mcp_server_remote.py DELETED
@@ -1,37 +0,0 @@
1
- from mcp.server.fastmcp import FastMCP
2
-
3
- mcp = FastMCP(
4
- "Weather", # Name of the MCP server
5
- instructions="You are a weather assistant that can answer questions about the weather in a given location.", # Instructions for the LLM on how to use this tool
6
- host="0.0.0.0", # Host address (0.0.0.0 allows connections from any IP)
7
- port=8005, # Port number for the server
8
- )
9
-
10
-
11
- @mcp.tool()
12
- async def get_weather(location: str) -> str:
13
- """
14
- Get current weather information for the specified location.
15
-
16
- This function simulates a weather service by returning a fixed response.
17
- In a production environment, this would connect to a real weather API.
18
-
19
- Args:
20
- location (str): The name of the location (city, region, etc.) to get weather for
21
-
22
- Returns:
23
- str: A string containing the weather information for the specified location
24
- """
25
- # Return a mock weather response
26
- # In a real implementation, this would call a weather API
27
- return f"It's always Sunny in {location}"
28
-
29
-
30
- if __name__ == "__main__":
31
- # Print a message indicating the server is starting
32
- print("mcp remote server is running...")
33
-
34
- # Start the MCP server with SSE transport
35
- # Server-Sent Events (SSE) transport allows the server to communicate with clients
36
- # over HTTP, making it suitable for remote/distributed deployments
37
- mcp.run(transport="sse")
packages.txt DELETED
@@ -1,5 +0,0 @@
1
- curl
2
- gnupg
3
- ca-certificates
4
- nodejs
5
- npm
pyproject.toml DELETED
@@ -1,21 +0,0 @@
1
- [project]
2
- name = "langgraph-mcp-agents"
3
- version = "0.1.0"
4
- description = "LangGraph Agent with MCP Adapters"
5
- readme = "README.md"
6
- requires-python = ">=3.12"
7
- dependencies = [
8
- "nest-asyncio>=1.6.0",
9
- "faiss-cpu>=1.10.0",
10
- "jupyter>=1.1.1",
11
- "langchain-anthropic>=0.3.10",
12
- "langchain-community>=0.3.20",
13
- "langchain-mcp-adapters>=0.0.7",
14
- "langchain-openai>=0.3.11",
15
- "langgraph>=0.3.21",
16
- "mcp[cli]>=1.6.0",
17
- "notebook>=7.3.3",
18
- "pymupdf>=1.25.4",
19
- "python-dotenv>=1.1.0",
20
- "streamlit>=1.44.1",
21
- ]
python-services/MCP_README.md ADDED
@@ -0,0 +1,154 @@
1
+ # MCP Server Guide
2
+
3
+ This project now includes three MCP (Model Context Protocol) servers, which replace the original FastAPI HTTP services.
4
+
5
+ ## 🚀 MCP Server Overview
6
+
7
+ ### 1. RequestProcessor (service1)
8
+ **Function**: General request processing and data analysis
9
+ **Tools**:
10
+ - `process_request`: Process various types of requests and responses
11
+ - `get_service_info`: Get service information
12
+ - `validate_data`: Validate data structures
13
+
14
+ **Usage example**:
15
+ ```python
16
+ # Process a request
17
+ result = await process_request(
18
+ message="analyze this data",
19
+ data={"key": "value"}
20
+ )
21
+
22
+ # Validate data
23
+ validation = await validate_data({"name": "test", "value": 123})
24
+ ```
25
+
26
+ ### 2. DataAnalyzer (service2)
27
+ **Function**: Data analysis and statistical computation
28
+ **Tools**:
29
+ - `analyze_data`: Perform data analysis
30
+ - `get_data_statistics`: Compute statistics for numerical data
31
+ - `get_service_statistics`: Get service statistics
32
+
33
+ **Usage example**:
34
+ ```python
35
+ # Analyze data
36
+ analysis = await analyze_data(
37
+ input_data={"scores": [85, 90, 78, 92, 88]},
38
+ operation="statistics"
39
+ )
40
+
41
+ # Compute statistics
42
+ stats = await get_data_statistics([1, 2, 3, 4, 5])
43
+ ```
44
+
45
+ ### 3. MathComputer (service3)
46
+ **Function**: Mathematical computation and statistical functions
47
+ **Tools**:
48
+ - `compute_operation`: Basic mathematical operations
49
+ - `get_supported_operations`: List the supported operations
50
+ - `advanced_math_operations`: Advanced mathematical operations
51
+
52
+ **Usage example**:
53
+ ```python
54
+ # Basic computation
55
+ result = await compute_operation(
56
+ numbers=[1, 2, 3, 4, 5],
57
+ operation="average"
58
+ )
59
+
60
+ # Advanced operation
61
+ percentile = await advanced_math_operations(
62
+ operation="percentile",
63
+ numbers=[1, 2, 3, 4, 5],
64
+ percentile=75
65
+ )
66
+ ```
67
+
68
+ ## 🔧 Configuration
69
+
70
+ These MCP servers are configured in `config.json`:
71
+
72
+ ```json
73
+ {
74
+ "request_processor": {
75
+ "command": "python",
76
+ "args": ["./python-services/service1/mcp_server.py"],
77
+ "transport": "stdio"
78
+ },
79
+ "data_analyzer": {
80
+ "command": "python",
81
+ "args": ["./python-services/service2/mcp_server.py"],
82
+ "transport": "stdio"
83
+ },
84
+ "math_computer": {
85
+ "command": "python",
86
+ "args": ["./python-services/service3/mcp_server.py"],
87
+ "transport": "stdio"
88
+ }
89
+ }
90
+ ```
91
+
92
+ ## 🚀 How to Start
93
+
94
+ ### Option 1: Start via the main application
95
+ The main Streamlit application starts these MCP servers automatically:
96
+ ```bash
97
+ python app.py
98
+ ```
99
+
100
+ ### Option 2: Start standalone
101
+ ```bash
102
+ # Start RequestProcessor
103
+ python python-services/service1/mcp_server.py
104
+
105
+ # Start DataAnalyzer
106
+ python python-services/service2/mcp_server.py
107
+
108
+ # Start MathComputer
109
+ python python-services/service3/mcp_server.py
110
+ ```
111
+
112
+ ## 📋 Dependencies
113
+
114
+ Each MCP server requires the following dependencies:
115
+ ```
116
+ mcp
117
+ fastmcp
118
+ pydantic
119
+ ```
120
+
121
+ ## 🔄 Migrating from the HTTP Services
122
+
123
+ The original three FastAPI HTTP services have been replaced by MCP servers:
124
+
125
+ | Old HTTP service | New MCP server | Function mapping |
126
+ |------------------|----------------|------------------|
127
+ | service1 (8001) | RequestProcessor | Request processing → general MCP tools |
128
+ | service2 (8002) | DataAnalyzer | Data analysis → data-analysis MCP tools |
129
+ | service3 (8003) | MathComputer | Math computation → math MCP tools |
130
+
131
+ ## 💡 Advantages
132
+
133
+ 1. **Unified protocol**: All services use the MCP protocol
134
+ 2. **Better integration**: Seamless integration with frameworks such as LangChain
135
+ 3. **Tool discovery**: Automatic tool discovery and registration
136
+ 4. **Type safety**: Better parameter type definitions and validation
137
+ 5. **Async support**: Native support for asynchronous operations
138
+
139
+ ## 🐛 Troubleshooting
140
+
141
+ If an MCP server fails to start:
142
+
143
+ 1. Check that the dependencies are installed correctly: `pip install mcp fastmcp`
144
+ 2. Confirm Python version compatibility (3.8+ recommended)
145
+ 3. Check that the file paths are correct
146
+ 4. Check the error logs for details
147
+
148
+ ## 🔮 Extension Ideas
149
+
150
+ More dedicated services can be built from these MCP server templates (see the sketch after this list):
151
+ - File processing service
152
+ - Database query service
153
+ - External API integration service
154
+ - Machine learning inference service
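As a minimal sketch of such an extension, a hypothetical file-processing server can follow the same FastMCP pattern as the three servers above; the server name and tool are illustrative only:

```python
from mcp.server.fastmcp import FastMCP

# Hypothetical example server; not part of this commit
mcp = FastMCP(
    "FileProcessor",
    instructions="You are a file processing assistant.",
)


@mcp.tool()
async def count_lines(path: str) -> str:
    """Count the number of lines in a UTF-8 text file."""
    try:
        with open(path, encoding="utf-8") as f:
            return str(sum(1 for _ in f))
    except Exception as e:
        return f"Error reading file: {e}"


if __name__ == "__main__":
    # stdio transport, matching the existing entries in config.json
    mcp.run(transport="stdio")
```

Registering such a server is then a matter of adding a "command"/"args"/"transport": "stdio" entry to config.json, as with the three servers above.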
python-services/service1/Dockerfile ADDED
@@ -0,0 +1,27 @@
1
+ FROM python:3.11-slim
2
+
3
+ WORKDIR /app
4
+
5
+ # Install system dependencies
6
+ RUN apt-get update && apt-get install -y \
7
+ build-essential \
8
+ curl \
9
+ && rm -rf /var/lib/apt/lists/*
10
+
11
+ # Copy requirements first for better caching
12
+ COPY requirements.txt .
13
+
14
+ # Install Python dependencies
15
+ RUN pip install --no-cache-dir -r requirements.txt
16
+
17
+ # Copy application code
18
+ COPY . .
19
+
20
+ # Expose port
21
+ EXPOSE 8000
22
+
23
+ # Health check
24
+ HEALTHCHECK CMD curl --fail http://localhost:8000/health || exit 1
25
+
26
+ # Run the service
27
+ CMD ["python", "main.py"]
python-services/service1/mcp_server.py ADDED
@@ -0,0 +1,117 @@
1
+ from mcp.server.fastmcp import FastMCP
2
+ from typing import Dict, Any, Optional
3
+ import json
4
+ from datetime import datetime
5
+
6
+ # Initialize FastMCP server
7
+ mcp = FastMCP(
8
+ "RequestProcessor", # Name of the MCP server
9
+ instructions="You are a request processing assistant that can handle various types of requests and perform data processing tasks.",
10
+ host="0.0.0.0",
11
+ port=8001,
12
+ )
13
+
14
+ @mcp.tool()
15
+ async def process_request(message: str, data: Optional[Dict[str, Any]] = None) -> str:
16
+ """
17
+ Process various types of requests and perform data processing.
18
+
19
+ Args:
20
+ message (str): The main message or request description
21
+ data (Dict[str, Any], optional): Additional data to process
22
+
23
+ Returns:
24
+ str: Processing result and response
25
+ """
26
+ try:
27
+ if data is None:
28
+ data = {}
29
+
30
+ # Process the request based on message content
31
+ processed_data = {
32
+ "original_message": message,
33
+ "processed_at": datetime.now().isoformat(),
34
+ "service_version": "1.0.0",
35
+ "data_size": len(str(data)),
36
+ "processing_status": "success"
37
+ }
38
+
39
+ # Add any additional processing logic here
40
+ if "analyze" in message.lower():
41
+ processed_data["analysis_type"] = "text_analysis"
42
+ processed_data["word_count"] = len(message.split())
43
+
44
+ if "validate" in message.lower():
45
+ processed_data["validation_type"] = "data_validation"
46
+ processed_data["is_valid"] = True
47
+
48
+ return json.dumps(processed_data, indent=2, ensure_ascii=False)
49
+
50
+ except Exception as e:
51
+ return f"Error processing request: {str(e)}"
52
+
53
+ @mcp.tool()
54
+ async def get_service_info() -> str:
55
+ """
56
+ Get information about the request processing service.
57
+
58
+ Returns:
59
+ str: Service information
60
+ """
61
+ info = {
62
+ "service_name": "RequestProcessor",
63
+ "version": "1.0.0",
64
+ "status": "running",
65
+ "capabilities": [
66
+ "request_processing",
67
+ "data_analysis",
68
+ "text_validation",
69
+ "general_data_handling"
70
+ ],
71
+ "description": "A versatile request processing service that can handle various types of data and requests"
72
+ }
73
+
74
+ return json.dumps(info, indent=2, ensure_ascii=False)
75
+
76
+ @mcp.tool()
77
+ async def validate_data(data: Dict[str, Any]) -> str:
78
+ """
79
+ Validate the structure and content of provided data.
80
+
81
+ Args:
82
+ data (Dict[str, Any]): Data to validate
83
+
84
+ Returns:
85
+ str: Validation results
86
+ """
87
+ try:
88
+ validation_result = {
89
+ "is_valid": True,
90
+ "validation_timestamp": datetime.now().isoformat(),
91
+ "data_type": type(data).__name__,
92
+ "data_keys": list(data.keys()) if isinstance(data, dict) else [],
93
+ "data_size": len(str(data)),
94
+ "issues": []
95
+ }
96
+
97
+ # Basic validation logic
98
+ if not data:
99
+ validation_result["is_valid"] = False
100
+ validation_result["issues"].append("Data is empty")
101
+
102
+ if isinstance(data, dict):
103
+ for key, value in data.items():
104
+ if value is None:
105
+ validation_result["issues"].append(f"Key '{key}' has null value")
106
+
107
+ if validation_result["issues"]:
108
+ validation_result["is_valid"] = False
109
+
110
+ return json.dumps(validation_result, indent=2, ensure_ascii=False)
111
+
112
+ except Exception as e:
113
+ return f"Error during validation: {str(e)}"
114
+
115
+ if __name__ == "__main__":
116
+ # Start the MCP server with stdio transport
117
+ mcp.run(transport="stdio")
python-services/service1/requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ fastapi
2
+ uvicorn
3
+ mcp
4
+ fastmcp
5
+ pydantic
python-services/service2/Dockerfile ADDED
@@ -0,0 +1,27 @@
1
+ FROM python:3.11-slim
2
+
3
+ WORKDIR /app
4
+
5
+ # Install system dependencies
6
+ RUN apt-get update && apt-get install -y \
7
+ build-essential \
8
+ curl \
9
+ && rm -rf /var/lib/apt/lists/*
10
+
11
+ # Copy requirements first for better caching
12
+ COPY requirements.txt .
13
+
14
+ # Install Python dependencies
15
+ RUN pip install --no-cache-dir -r requirements.txt
16
+
17
+ # Copy application code
18
+ COPY . .
19
+
20
+ # Expose port
21
+ EXPOSE 8000
22
+
23
+ # Health check
24
+ HEALTHCHECK CMD curl --fail http://localhost:8000/health || exit 1
25
+
26
+ # Run the service
27
+ CMD ["python", "main.py"]
python-services/service2/mcp_server.py ADDED
@@ -0,0 +1,148 @@
1
+ from mcp.server.fastmcp import FastMCP
2
+ from typing import Dict, Any, List, Optional
3
+ import json
4
+ from datetime import datetime
5
+ import statistics
6
+
7
+ # Initialize FastMCP server
8
+ mcp = FastMCP(
9
+ "DataAnalyzer", # Name of the MCP server
10
+ instructions="You are a data analysis assistant that can perform various types of data analysis, statistics, and data processing tasks.",
11
+ host="0.0.0.0",
12
+ port=8002,
13
+ )
14
+
15
+ @mcp.tool()
16
+ async def analyze_data(input_data: Dict[str, Any], operation: str = "default") -> str:
17
+ """
18
+ Perform data analysis on the provided input data.
19
+
20
+ Args:
21
+ input_data (Dict[str, Any]): Data to analyze
22
+ operation (str): Type of analysis to perform
23
+
24
+ Returns:
25
+ str: Analysis results
26
+ """
27
+ try:
28
+ analysis_result = {
29
+ "input_size": len(str(input_data)),
30
+ "operation_type": operation,
31
+ "analysis_timestamp": datetime.now().isoformat(),
32
+ "data_summary": f"Processed {len(input_data)} items",
33
+ "analysis_results": {}
34
+ }
35
+
36
+ # Perform different types of analysis based on operation
37
+ if operation == "statistics":
38
+ if isinstance(input_data, dict):
39
+ values = [v for v in input_data.values() if isinstance(v, (int, float))]
40
+ if values:
41
+ analysis_result["analysis_results"] = {
42
+ "count": len(values),
43
+ "mean": statistics.mean(values),
44
+ "median": statistics.median(values),
45
+ "min": min(values),
46
+ "max": max(values),
47
+ "std_dev": statistics.stdev(values) if len(values) > 1 else 0
48
+ }
49
+
50
+ elif operation == "text_analysis":
51
+ text_content = str(input_data)
52
+ analysis_result["analysis_results"] = {
53
+ "character_count": len(text_content),
54
+ "word_count": len(text_content.split()),
55
+ "line_count": len(text_content.splitlines()),
56
+ "unique_words": len(set(text_content.lower().split()))
57
+ }
58
+
59
+ elif operation == "structure_analysis":
60
+ analysis_result["analysis_results"] = {
61
+ "data_type": type(input_data).__name__,
62
+ "is_nested": any(isinstance(v, (dict, list)) for v in input_data.values()) if isinstance(input_data, dict) else False,
63
+ "key_types": {k: type(v).__name__ for k, v in input_data.items()} if isinstance(input_data, dict) else {},
64
+ "depth": _get_nesting_depth(input_data)
65
+ }
66
+
67
+ return json.dumps(analysis_result, indent=2, ensure_ascii=False)
68
+
69
+ except Exception as e:
70
+ return f"Error analyzing data: {str(e)}"
71
+
72
+ @mcp.tool()
73
+ async def get_data_statistics(data: List[float]) -> str:
74
+ """
75
+ Calculate comprehensive statistics for a list of numerical data.
76
+
77
+ Args:
78
+ data (List[float]): List of numerical values
79
+
80
+ Returns:
81
+ str: Statistical analysis results
82
+ """
83
+ try:
84
+ if not data:
85
+ return "Error: No data provided"
86
+
87
+ if not all(isinstance(x, (int, float)) for x in data):
88
+ return "Error: All values must be numerical"
89
+
90
+ stats = {
91
+ "count": len(data),
92
+ "sum": sum(data),
93
+ "mean": statistics.mean(data),
94
+ "median": statistics.median(data),
95
+ "min": min(data),
96
+ "max": max(data),
97
+ "range": max(data) - min(data),
98
+ "variance": statistics.variance(data) if len(data) > 1 else 0,
99
+ "std_dev": statistics.stdev(data) if len(data) > 1 else 0,
100
+ "quartiles": statistics.quantiles(data, n=4) if len(data) > 1 else []
101
+ }
102
+
103
+ return json.dumps(stats, indent=2, ensure_ascii=False)
104
+
105
+ except Exception as e:
106
+ return f"Error calculating statistics: {str(e)}"
107
+
108
+ @mcp.tool()
109
+ async def get_service_statistics() -> str:
110
+ """
111
+ Get statistics and information about the data analysis service.
112
+
113
+ Returns:
114
+ str: Service statistics and information
115
+ """
116
+ stats = {
117
+ "service_name": "DataAnalyzer",
118
+ "version": "1.0.0",
119
+ "status": "running",
120
+ "port": 8002,
121
+ "endpoints": ["analyze_data", "get_data_statistics", "get_service_statistics"],
122
+ "capabilities": [
123
+ "statistical_analysis",
124
+ "text_analysis",
125
+ "structure_analysis",
126
+ "data_processing"
127
+ ],
128
+ "description": "A comprehensive data analysis service providing statistical calculations and data insights"
129
+ }
130
+
131
+ return json.dumps(stats, indent=2, ensure_ascii=False)
132
+
133
+ def _get_nesting_depth(obj, current_depth=0):
134
+ """Helper function to calculate nesting depth of data structures."""
135
+ if isinstance(obj, dict):
136
+ if not obj:
137
+ return current_depth
138
+ return max(_get_nesting_depth(v, current_depth + 1) for v in obj.values())
139
+ elif isinstance(obj, list):
140
+ if not obj:
141
+ return current_depth
142
+ return max(_get_nesting_depth(item, current_depth + 1) for item in obj)
143
+ else:
144
+ return current_depth
145
+
146
+ if __name__ == "__main__":
147
+ # Start the MCP server with stdio transport
148
+ mcp.run(transport="stdio")
python-services/service2/requirements.txt ADDED
@@ -0,0 +1,6 @@
1
+ fastapi
2
+ uvicorn
3
+ mcp
4
+ fastmcp
5
+ pydantic
6
+ # 'statistics' is part of the Python standard library; no pip package needed
python-services/service3/Dockerfile ADDED
@@ -0,0 +1,27 @@
1
+ FROM python:3.11-slim
2
+
3
+ WORKDIR /app
4
+
5
+ # Install system dependencies
6
+ RUN apt-get update && apt-get install -y \
7
+ build-essential \
8
+ curl \
9
+ && rm -rf /var/lib/apt/lists/*
10
+
11
+ # Copy requirements first for better caching
12
+ COPY requirements.txt .
13
+
14
+ # Install Python dependencies
15
+ RUN pip install --no-cache-dir -r requirements.txt
16
+
17
+ # Copy application code
18
+ COPY . .
19
+
20
+ # Expose port
21
+ EXPOSE 8000
22
+
23
+ # Health check
24
+ HEALTHCHECK CMD curl --fail http://localhost:8000/health || exit 1
25
+
26
+ # Run the service
27
+ CMD ["python", "main.py"]
python-services/service3/mcp_server.py ADDED
@@ -0,0 +1,229 @@
1
+ from mcp.server.fastmcp import FastMCP
2
+ from typing import List, Optional
3
+ import json
4
+ from datetime import datetime
5
+ import math
6
+
7
+ # Initialize FastMCP server
8
+ mcp = FastMCP(
9
+ "MathComputer", # Name of the MCP server
10
+ instructions="You are a mathematical computation assistant that can perform various mathematical operations and calculations.",
11
+ host="0.0.0.0",
12
+ port=8003,
13
+ )
14
+
15
+ @mcp.tool()
16
+ async def compute_operation(numbers: List[float], operation: str = "sum") -> str:
17
+ """
18
+ Perform mathematical operations on a list of numbers.
19
+
20
+ Args:
21
+ numbers (List[float]): List of numbers to operate on
22
+ operation (str): Mathematical operation to perform
23
+
24
+ Returns:
25
+ str: Computation results
26
+ """
27
+ try:
28
+ if not numbers:
29
+ return "Error: Numbers list cannot be empty"
30
+
31
+ if not all(isinstance(x, (int, float)) for x in numbers):
32
+ return "Error: All values must be numerical"
33
+
34
+ result = 0
35
+ operation_details = {}
36
+
37
+ if operation == "sum":
38
+ result = sum(numbers)
39
+ operation_details = {
40
+ "method": "sum",
41
+ "count": len(numbers),
42
+ "formula": "sum(numbers)"
43
+ }
44
+ elif operation == "average":
45
+ result = sum(numbers) / len(numbers)
46
+ operation_details = {
47
+ "method": "average",
48
+ "count": len(numbers),
49
+ "formula": "sum(numbers) / len(numbers)"
50
+ }
51
+ elif operation == "max":
52
+ result = max(numbers)
53
+ operation_details = {
54
+ "method": "max",
55
+ "count": len(numbers),
56
+ "formula": "max(numbers)"
57
+ }
58
+ elif operation == "min":
59
+ result = min(numbers)
60
+ operation_details = {
61
+ "method": "min",
62
+ "count": len(numbers),
63
+ "formula": "min(numbers)"
64
+ }
65
+ elif operation == "product":
66
+ result = math.prod(numbers)
67
+ operation_details = {
68
+ "method": "product",
69
+ "count": len(numbers),
70
+ "formula": "math.prod(numbers)"
71
+ }
72
+ elif operation == "geometric_mean":
73
+ if any(x <= 0 for x in numbers):
74
+ return "Error: Geometric mean requires all positive numbers"
75
+ result = math.pow(math.prod(numbers), 1/len(numbers))
76
+ operation_details = {
77
+ "method": "geometric_mean",
78
+ "count": len(numbers),
79
+ "formula": "pow(product, 1/n)"
80
+ }
81
+ elif operation == "harmonic_mean":
82
+ if any(x == 0 for x in numbers):
83
+ return "Error: Harmonic mean requires all non-zero numbers"
84
+ result = len(numbers) / sum(1/x for x in numbers)
85
+ operation_details = {
86
+ "method": "harmonic_mean",
87
+ "count": len(numbers),
88
+ "formula": "n / sum(1/x)"
89
+ }
90
+ else:
91
+ return f"Error: Unsupported operation '{operation}'. Supported operations: sum, average, max, min, product, geometric_mean, harmonic_mean"
92
+
93
+ computation_result = {
94
+ "status": "success",
95
+ "result": result,
96
+ "operation": operation,
97
+ "service_name": "MathComputer",
98
+ "details": operation_details,
99
+ "computation_timestamp": datetime.now().isoformat(),
100
+ "input_numbers": numbers,
101
+ "input_count": len(numbers)
102
+ }
103
+
104
+ return json.dumps(computation_result, indent=2, ensure_ascii=False)
105
+
106
+ except Exception as e:
107
+ return f"Error during computation: {str(e)}"
108
+
109
+ @mcp.tool()
110
+ async def get_supported_operations() -> str:
111
+ """
112
+ Get list of supported mathematical operations.
113
+
114
+ Returns:
115
+ str: Supported operations and descriptions
116
+ """
117
+ operations_info = {
118
+ "service_name": "MathComputer",
119
+ "supported_operations": [
120
+ "sum",
121
+ "average",
122
+ "max",
123
+ "min",
124
+ "product",
125
+ "geometric_mean",
126
+ "harmonic_mean"
127
+ ],
128
+ "descriptions": {
129
+ "sum": "Calculate the sum of all numbers",
130
+ "average": "Calculate the arithmetic mean",
131
+ "max": "Find the maximum value",
132
+ "min": "Find the minimum value",
133
+ "product": "Calculate the product of all numbers",
134
+ "geometric_mean": "Calculate the geometric mean (requires positive numbers)",
135
+ "harmonic_mean": "Calculate the harmonic mean (requires non-zero numbers)"
136
+ },
137
+ "description": "Mathematical computation service with advanced statistical functions"
138
+ }
139
+
140
+ return json.dumps(operations_info, indent=2, ensure_ascii=False)
141
+
142
+ @mcp.tool()
143
+ async def advanced_math_operations(operation: str, numbers: List[float], **kwargs) -> str:
144
+ """
145
+ Perform advanced mathematical operations.
146
+
147
+ Args:
148
+ operation (str): Advanced operation to perform
149
+ numbers (List[float]): List of numbers
150
+ **kwargs: Additional parameters for specific operations
151
+
152
+ Returns:
153
+ str: Advanced computation results
154
+ """
155
+ try:
156
+ if not numbers:
157
+ return "Error: Numbers list cannot be empty"
158
+
159
+ if operation == "percentile":
160
+ percentile = kwargs.get("percentile", 50)
161
+ if not 0 <= percentile <= 100:
162
+ return "Error: Percentile must be between 0 and 100"
163
+
164
+ sorted_numbers = sorted(numbers)
165
+ index = (percentile / 100) * (len(sorted_numbers) - 1)
166
+ if index.is_integer():
167
+ result = sorted_numbers[int(index)]
168
+ else:
169
+ lower = sorted_numbers[int(index)]
170
+ upper = sorted_numbers[int(index) + 1]
171
+ result = lower + (upper - lower) * (index - int(index))
172
+
173
+ operation_details = {
174
+ "method": "percentile",
175
+ "percentile": percentile,
176
+ "count": len(numbers),
177
+ "formula": f"percentile_{percentile}(sorted_numbers)"
178
+ }
179
+
180
+ elif operation == "standard_deviation":
181
+ if len(numbers) < 2:
182
+ return "Error: Standard deviation requires at least 2 numbers"
183
+
184
+ mean = sum(numbers) / len(numbers)
185
+ variance = sum((x - mean) ** 2 for x in numbers) / (len(numbers) - 1)
186
+ result = math.sqrt(variance)
187
+
188
+ operation_details = {
189
+ "method": "standard_deviation",
190
+ "count": len(numbers),
191
+ "formula": "sqrt(sum((x - mean)²) / (n-1))"
192
+ }
193
+
194
+ elif operation == "variance":
195
+ if len(numbers) < 2:
196
+ return "Error: Variance requires at least 2 numbers"
197
+
198
+ mean = sum(numbers) / len(numbers)
199
+ result = sum((x - mean) ** 2 for x in numbers) / (len(numbers) - 1)
200
+
201
+ operation_details = {
202
+ "method": "variance",
203
+ "count": len(numbers),
204
+ "formula": "sum((x - mean)²) / (n-1)"
205
+ }
206
+
207
+ else:
208
+ return f"Error: Unsupported advanced operation '{operation}'. Supported: percentile, standard_deviation, variance"
209
+
210
+ computation_result = {
211
+ "status": "success",
212
+ "result": result,
213
+ "operation": operation,
214
+ "service_name": "MathComputer",
215
+ "details": operation_details,
216
+ "computation_timestamp": datetime.now().isoformat(),
217
+ "input_numbers": numbers,
218
+ "input_count": len(numbers),
219
+ "additional_params": kwargs
220
+ }
221
+
222
+ return json.dumps(computation_result, indent=2, ensure_ascii=False)
223
+
224
+ except Exception as e:
225
+ return f"Error during advanced computation: {str(e)}"
226
+
227
+ if __name__ == "__main__":
228
+ # Start the MCP server with stdio transport
229
+ mcp.run(transport="stdio")
python-services/service3/requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ fastapi
2
+ uvicorn
3
+ mcp
4
+ fastmcp
5
+ pydantic
requirements.txt CHANGED
@@ -6,8 +6,10 @@ langchain-mcp-adapters==0.0.9
6
  langchain-openai>=0.3.11
7
  langgraph>=0.3.21
8
  mcp>=1.6.0
 
9
  notebook>=7.3.3
10
  pymupdf>=1.25.4
11
  python-dotenv>=1.1.0
12
  streamlit>=1.44.1
13
- nest-asyncio>=1.6.0
 
 
6
  langchain-openai>=0.3.11
7
  langgraph>=0.3.21
8
  mcp>=1.6.0
9
+ fastmcp
10
  notebook>=7.3.3
11
  pymupdf>=1.25.4
12
  python-dotenv>=1.1.0
13
  streamlit>=1.44.1
14
+ nest-asyncio>=1.6.0
15
+ fastapi
runtime.txt DELETED
@@ -1 +0,0 @@
1
- python: 3.12
 
 
utils.py CHANGED
@@ -19,20 +19,20 @@ async def astream_graph(
19
  include_subgraphs: bool = False,
20
  ) -> Dict[str, Any]:
21
  """
22
- LangGraph의 실행 결과를 비동기적으로 스트리밍하고 직접 출력하는 함수입니다.
23
 
24
  Args:
25
- graph (CompiledStateGraph): 실행할 컴파일된 LangGraph 객체
26
- inputs (dict): 그래프에 전달할 입력값 딕셔너리
27
- config (Optional[RunnableConfig]): 실행 설정 (선택적)
28
- node_names (List[str], optional): 출력할 노드 이름 목록. 기본값은 리스트
29
- callback (Optional[Callable], optional): 청크 처리를 위한 콜백 함수. 기본값은 None
30
- 콜백 함수는 {"node": str, "content": Any} 형태의 딕셔너리를 인자로 받습니다.
31
- stream_mode (str, optional): 스트리밍 모드 ("messages" 또는 "updates"). 기본값은 "messages"
32
- include_subgraphs (bool, optional): 서브그래프 포함 여부. 기본값은 False
33
 
34
  Returns:
35
- Dict[str, Any]: 최종 결과 (선택적)
36
  """
37
  config = config or {}
38
  final_result = {}
@@ -53,53 +53,53 @@ async def astream_graph(
53
  "metadata": metadata,
54
  }
55
 
56
- # node_names 비어있거나 현재 노드가 node_names에 있는 경우에만 처리
57
  if not node_names or curr_node in node_names:
58
- # 콜백 함수가 있는 경우 실행
59
  if callback:
60
  result = callback({"node": curr_node, "content": chunk_msg})
61
  if hasattr(result, "__await__"):
62
  await result
63
- # 콜백이 없는 경우 기본 출력
64
  else:
65
- # 노드가 변경된 경우에만 구분선 출력
66
  if curr_node != prev_node:
67
  print("\n" + "=" * 50)
68
  print(f"🔄 Node: \033[1;36m{curr_node}\033[0m 🔄")
69
  print("- " * 25)
70
 
71
- # Claude/Anthropic 모델의 토큰 청크 처리 - 항상 텍스트만 추출
72
  if hasattr(chunk_msg, "content"):
73
- # 리스트 형태의 content (Anthropic/Claude 스타일)
74
  if isinstance(chunk_msg.content, list):
75
  for item in chunk_msg.content:
76
  if isinstance(item, dict) and "text" in item:
77
  print(item["text"], end="", flush=True)
78
- # 문자열 형태의 content
79
  elif isinstance(chunk_msg.content, str):
80
  print(chunk_msg.content, end="", flush=True)
81
- # 형태의 chunk_msg 처리
82
  else:
83
  print(chunk_msg, end="", flush=True)
84
 
85
  prev_node = curr_node
86
 
87
  elif stream_mode == "updates":
88
- # 에러 수정: 언패킹 방식 변경
89
- # REACT 에이전트 일부 그래프에서는 단일 딕셔너리만 반환함
90
  async for chunk in graph.astream(
91
  inputs, config, stream_mode=stream_mode, subgraphs=include_subgraphs
92
  ):
93
- # 반환 형식에 따라 처리 방법 분기
94
  if isinstance(chunk, tuple) and len(chunk) == 2:
95
- # 기존 예상 형식: (namespace, chunk_dict)
96
  namespace, node_chunks = chunk
97
  else:
98
- # 단일 딕셔너리만 반환하는 경우 (REACT 에이전트 )
99
- namespace = [] # 네임스페이스 (루트 그래프)
100
- node_chunks = chunk # chunk 자체가 노드 청크 딕셔너리
101
 
102
- # 딕셔너리인지 확인하고 항목 처리
103
  if isinstance(node_chunks, dict):
104
  for node_name, node_chunk in node_chunks.items():
105
  final_result = {
@@ -108,28 +108,28 @@ async def astream_graph(
108
  "namespace": namespace,
109
  }
110
 
111
- # node_names 비어있지 않은 경우에만 필터링
112
  if len(node_names) > 0 and node_name not in node_names:
113
  continue
114
 
115
- # 콜백 함수가 있는 경우 실행
116
  if callback is not None:
117
  result = callback({"node": node_name, "content": node_chunk})
118
  if hasattr(result, "__await__"):
119
  await result
120
- # 콜백이 없는 경우 기본 출력
121
  else:
122
- # 노드가 변경된 경우에만 구분선 출력 (messages 모드와 동일하게)
123
  if node_name != prev_node:
124
  print("\n" + "=" * 50)
125
  print(f"🔄 Node: \033[1;36m{node_name}\033[0m 🔄")
126
  print("- " * 25)
127
 
128
- # 노드의 청크 데이터 출력 - 텍스트 중심으로 처리
129
  if isinstance(node_chunk, dict):
130
  for k, v in node_chunk.items():
131
  if isinstance(v, BaseMessage):
132
- # BaseMessage content 속성이 텍스트나 리스트인 경우를 처리
133
  if hasattr(v, "content"):
134
  if isinstance(v.content, list):
135
  for item in v.content:
@@ -190,16 +190,16 @@ async def astream_graph(
190
  else:
191
  print(node_chunk, end="", flush=True)
192
 
193
- # 구분선을 여기서 출력하지 않음 (messages 모드와 동일하게)
194
 
195
  prev_node = node_name
196
  else:
197
- # 딕셔너리가 아닌 경우 전체 청크 출력
198
  print("\n" + "=" * 50)
199
  print(f"🔄 Raw output 🔄")
200
  print("- " * 25)
201
  print(node_chunks, end="", flush=True)
202
- # 구분선을 여기서 출력하지 않음
203
  final_result = {"content": node_chunks}
204
 
205
  else:
@@ -207,7 +207,7 @@ async def astream_graph(
207
  f"Invalid stream_mode: {stream_mode}. Must be 'messages' or 'updates'."
208
  )
209
 
210
- # 필요에 따라 최종 결과 반환
211
  return final_result
212
 
213
 
@@ -220,19 +220,19 @@ async def ainvoke_graph(
220
  include_subgraphs: bool = True,
221
  ) -> Dict[str, Any]:
222
  """
223
- LangGraph 앱의 실행 결과를 비동기적으로 스트리밍하여 출력하는 함수입니다.
224
 
225
  Args:
226
- graph (CompiledStateGraph): 실행할 컴파일된 LangGraph 객체
227
- inputs (dict): 그래프에 전달할 입력값 딕셔너리
228
- config (Optional[RunnableConfig]): 실행 설정 (선택적)
229
- node_names (List[str], optional): 출력할 노드 이름 목록. 기본값은 리스트
230
- callback (Optional[Callable], optional): 청크 처리를 위한 콜백 함수. 기본값은 None
231
- 콜백 함수는 {"node": str, "content": Any} 형태의 딕셔너리를 인자로 받습니다.
232
- include_subgraphs (bool, optional): 서브그래프 포함 여부. 기본값은 True
233
 
234
  Returns:
235
- Dict[str, Any]: 최종 결과 (마지막 노드의 출력)
236
  """
237
  config = config or {}
238
  final_result = {}
@@ -240,20 +240,20 @@ async def ainvoke_graph(
240
  def format_namespace(namespace):
241
  return namespace[-1].split(":")[0] if len(namespace) > 0 else "root graph"
242
 
243
- # subgraphs 매개변수를 통해 서브그래프의 출력도 포함
244
  async for chunk in graph.astream(
245
  inputs, config, stream_mode="updates", subgraphs=include_subgraphs
246
  ):
247
- # 반환 형식에 따라 처리 방법 분기
248
  if isinstance(chunk, tuple) and len(chunk) == 2:
249
- # 기존 예상 형식: (namespace, chunk_dict)
250
  namespace, node_chunks = chunk
251
  else:
252
- # 단일 딕셔너리만 반환하는 경우 (REACT 에이전트 )
253
- namespace = [] # 네임스페이스 (루트 그래프)
254
- node_chunks = chunk # chunk 자체가 노드 청크 딕셔너리
255
 
256
- # 딕셔너리인지 확인하고 항목 처리
257
  if isinstance(node_chunks, dict):
258
  for node_name, node_chunk in node_chunks.items():
259
  final_result = {
@@ -262,17 +262,17 @@ async def ainvoke_graph(
262
  "namespace": namespace,
263
  }
264
 
265
- # node_names 비어있지 않은 경우에만 필터링
266
  if node_names and node_name not in node_names:
267
  continue
268
 
269
- # 콜백 함수가 있는 경우 실행
270
  if callback is not None:
271
  result = callback({"node": node_name, "content": node_chunk})
272
- # 코루틴인 경우 await
273
  if hasattr(result, "__await__"):
274
  await result
275
- # 콜백이 없는 경우 기본 출력
276
  else:
277
  print("\n" + "=" * 50)
278
  formatted_namespace = format_namespace(namespace)
@@ -284,7 +284,7 @@ async def ainvoke_graph(
284
  )
285
  print("- " * 25)
286
 
287
- # 노드의 청크 데이터 출력
288
  if isinstance(node_chunk, dict):
289
  for k, v in node_chunk.items():
290
  if isinstance(v, BaseMessage):
@@ -310,7 +310,7 @@ async def ainvoke_graph(
310
  print(node_chunk)
311
  print("=" * 50)
312
  else:
313
- # 딕셔너리가 아닌 경우 전체 청크 출력
314
  print("\n" + "=" * 50)
315
  print(f"🔄 Raw output 🔄")
316
  print("- " * 25)
@@ -318,5 +318,5 @@ async def ainvoke_graph(
318
  print("=" * 50)
319
  final_result = {"content": node_chunks}
320
 
321
- # 최종 결과 반환
322
  return final_result
 
19
  include_subgraphs: bool = False,
20
  ) -> Dict[str, Any]:
21
  """
22
+ A function that asynchronously streams and directly outputs the execution results of LangGraph.
23
 
24
  Args:
25
+ graph (CompiledStateGraph): The compiled LangGraph object to execute
26
+ inputs (dict): Input dictionary to pass to the graph
27
+ config (Optional[RunnableConfig]): Execution configuration (optional)
28
+ node_names (List[str], optional): List of node names to output. Default is empty list
29
+ callback (Optional[Callable], optional): Callback function for processing each chunk. Default is None
30
+ The callback function receives a dictionary in the form {"node": str, "content": Any}.
31
+ stream_mode (str, optional): Streaming mode ("messages" or "updates"). Default is "messages"
32
+ include_subgraphs (bool, optional): Whether to include subgraphs. Default is False
33
 
34
  Returns:
35
+ Dict[str, Any]: Final result (optional)
36
  """
37
  config = config or {}
38
  final_result = {}
 
53
  "metadata": metadata,
54
  }
55
 
56
+ # Only process if node_names is empty or current node is in node_names
57
  if not node_names or curr_node in node_names:
58
+ # Execute callback function if it exists
59
  if callback:
60
  result = callback({"node": curr_node, "content": chunk_msg})
61
  if hasattr(result, "__await__"):
62
  await result
63
+ # Default output if no callback
64
  else:
65
+ # Only output separator when node changes
66
  if curr_node != prev_node:
67
  print("\n" + "=" * 50)
68
  print(f"🔄 Node: \033[1;36m{curr_node}\033[0m 🔄")
69
  print("- " * 25)
70
 
71
+ # Handle Claude/Anthropic model token chunks - always extract text only
72
  if hasattr(chunk_msg, "content"):
73
+ # List form content (Anthropic/Claude style)
74
  if isinstance(chunk_msg.content, list):
75
  for item in chunk_msg.content:
76
  if isinstance(item, dict) and "text" in item:
77
  print(item["text"], end="", flush=True)
78
+ # String form content
79
  elif isinstance(chunk_msg.content, str):
80
  print(chunk_msg.content, end="", flush=True)
81
+ # Handle other forms of chunk_msg
82
  else:
83
  print(chunk_msg, end="", flush=True)
84
 
85
  prev_node = curr_node
86
 
87
  elif stream_mode == "updates":
88
+ # Error fix: Change unpacking method
89
+ # Some graphs like REACT agents return only a single dictionary
90
  async for chunk in graph.astream(
91
  inputs, config, stream_mode=stream_mode, subgraphs=include_subgraphs
92
  ):
93
+ # Branch processing method based on return format
94
  if isinstance(chunk, tuple) and len(chunk) == 2:
95
+ # Expected format: (namespace, chunk_dict)
96
  namespace, node_chunks = chunk
97
  else:
98
+ # Case where only single dictionary is returned (REACT agents, etc.)
99
+ namespace = [] # Empty namespace (root graph)
100
+ node_chunks = chunk # chunk itself is the node chunk dictionary
101
 
102
+ # Check if it's a dictionary and process items
103
  if isinstance(node_chunks, dict):
104
  for node_name, node_chunk in node_chunks.items():
105
  final_result = {
 
108
  "namespace": namespace,
109
  }
110
 
111
+ # Only filter if node_names is not empty
112
  if len(node_names) > 0 and node_name not in node_names:
113
  continue
114
 
115
+ # Execute callback function if it exists
116
  if callback is not None:
117
  result = callback({"node": node_name, "content": node_chunk})
118
  if hasattr(result, "__await__"):
119
  await result
120
+ # Default output if no callback
121
  else:
122
+ # Only output separator when node changes (same as messages mode)
123
  if node_name != prev_node:
124
  print("\n" + "=" * 50)
125
  print(f"🔄 Node: \033[1;36m{node_name}\033[0m 🔄")
126
  print("- " * 25)
127
 
128
+ # Output node chunk data - process with text focus
129
  if isinstance(node_chunk, dict):
130
  for k, v in node_chunk.items():
131
  if isinstance(v, BaseMessage):
132
+ # Handle cases where BaseMessage's content attribute is text or list
133
  if hasattr(v, "content"):
134
  if isinstance(v.content, list):
135
  for item in v.content:
 
190
  else:
191
  print(node_chunk, end="", flush=True)
192
 
193
+ # Don't output separator here (same as messages mode)
194
 
195
  prev_node = node_name
196
  else:
197
+ # Output entire chunk if not a dictionary
198
  print("\n" + "=" * 50)
199
  print(f"🔄 Raw output 🔄")
200
  print("- " * 25)
201
  print(node_chunks, end="", flush=True)
202
+ # Don't output separator here
203
  final_result = {"content": node_chunks}
204
 
205
  else:
 
207
  f"Invalid stream_mode: {stream_mode}. Must be 'messages' or 'updates'."
208
  )
209
 
210
+ # Return final result as needed
211
  return final_result
212
 
213
 
 
220
  include_subgraphs: bool = True,
221
  ) -> Dict[str, Any]:
222
  """
223
+ A function that asynchronously streams and outputs the execution results of LangGraph apps.
224
 
225
  Args:
226
+ graph (CompiledStateGraph): The compiled LangGraph object to execute
227
+ inputs (dict): Input dictionary to pass to the graph
228
+ config (Optional[RunnableConfig]): Execution configuration (optional)
229
+ node_names (List[str], optional): List of node names to output. Default is empty list
230
+ callback (Optional[Callable], optional): Callback function for processing each chunk. Default is None
231
+ The callback function receives a dictionary in the form {"node": str, "content": Any}.
232
+ include_subgraphs (bool, optional): Whether to include subgraphs. Default is True
233
 
234
  Returns:
235
+ Dict[str, Any]: Final result (last node's output)
236
  """
237
  config = config or {}
238
  final_result = {}
 
240
  def format_namespace(namespace):
241
  return namespace[-1].split(":")[0] if len(namespace) > 0 else "root graph"
242
 
243
+ # Include subgraph output through subgraphs parameter
244
  async for chunk in graph.astream(
245
  inputs, config, stream_mode="updates", subgraphs=include_subgraphs
246
  ):
247
+ # Branch processing method based on return format
248
  if isinstance(chunk, tuple) and len(chunk) == 2:
249
+ # Expected format: (namespace, chunk_dict)
250
  namespace, node_chunks = chunk
251
  else:
252
+ # Case where only single dictionary is returned (REACT agents, etc.)
253
+ namespace = [] # Empty namespace (root graph)
254
+ node_chunks = chunk # chunk itself is the node chunk dictionary
255
 
256
+ # Check if it's a dictionary and process items
257
  if isinstance(node_chunks, dict):
258
  for node_name, node_chunk in node_chunks.items():
259
  final_result = {
 
262
  "namespace": namespace,
263
  }
264
 
265
+ # Only filter if node_names is not empty
266
  if node_names and node_name not in node_names:
267
  continue
268
 
269
+ # Execute callback function if it exists
270
  if callback is not None:
271
  result = callback({"node": node_name, "content": node_chunk})
272
+ # Await if it's a coroutine
273
  if hasattr(result, "__await__"):
274
  await result
275
+ # Default output if no callback
276
  else:
277
  print("\n" + "=" * 50)
278
  formatted_namespace = format_namespace(namespace)
 
284
  )
285
  print("- " * 25)
286
 
287
+ # Output node chunk data
288
  if isinstance(node_chunk, dict):
289
  for k, v in node_chunk.items():
290
  if isinstance(v, BaseMessage):
 
310
  print(node_chunk)
311
  print("=" * 50)
312
  else:
313
+ # Output entire chunk if not a dictionary
314
  print("\n" + "=" * 50)
315
  print(f"🔄 Raw output 🔄")
316
  print("- " * 25)
 
318
  print("=" * 50)
319
  final_result = {"content": node_chunks}
320
 
321
+ # Return final result
322
  return final_result
uv.lock DELETED
The diff for this file is too large to render.