jackkuo committed on
Commit d7500c0 · 1 Parent(s): c3b9f62
.env ADDED
@@ -0,0 +1,5 @@
+ ANTHROPIC_API_KEY=sk-0nEqu5ChgjT6aFweA12bC37d6f8f485eAd63848e4c57041d
+ ANTHROPIC_BASE_URL=https://openai.sohoyo.io
+
+ # OPENAI_API_KEY=sk-0nEqu5ChgjT6aFweA12bC37d6f8f485eAd63848e4c57041d
+ # OPENAI_BASE_URL=https://openai.sohoyo.io/v1
MCP-HandsOn-KOR.ipynb DELETED
@@ -1,701 +0,0 @@
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# MCP + LangGraph Hands-On Tutorial\n",
- "\n",
- "- Author: [TeddyNote](https://youtube.com/c/teddynote)\n",
- "- Course: [FastCampus RAG Secret Note](https://fastcampus.co.kr/data_online_teddy)\n",
- "\n",
- "**References**\n",
- "- https://modelcontextprotocol.io/introduction\n",
- "- https://github.com/langchain-ai/langchain-mcp-adapters"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Environment Setup\n",
- "\n",
- "Install `uv` by following the instructions below.\n",
- "\n",
- "**Installing uv**\n",
- "\n",
- "```bash\n",
- "# macOS/Linux\n",
- "curl -LsSf https://astral.sh/uv/install.sh | sh\n",
- "\n",
- "# Windows (PowerShell)\n",
- "irm https://astral.sh/uv/install.ps1 | iex\n",
- "```\n",
- "\n",
- "**Installing dependencies**\n",
- "\n",
- "```bash\n",
- "uv pip install -r requirements.txt\n",
- "```"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Load the environment variables."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from dotenv import load_dotenv\n",
- "\n",
- "load_dotenv(override=True)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## MultiServerMCPClient"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Run `mcp_server_remote.py` beforehand. Open a terminal and start the server with the virtual environment activated.\n",
- "\n",
- "> Command\n",
- "```bash\n",
- "source .venv/bin/activate\n",
- "python mcp_server_remote.py\n",
- "```\n",
- "\n",
- "Create a temporary session connection with `async with`, then release it."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
- "from langgraph.prebuilt import create_react_agent\n",
- "from utils import ainvoke_graph, astream_graph\n",
- "from langchain_anthropic import ChatAnthropic\n",
- "\n",
- "model = ChatAnthropic(\n",
- " model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
- ")\n",
- "\n",
- "async with MultiServerMCPClient(\n",
- " {\n",
- " \"weather\": {\n",
- " # Must match the server's port (port 8005)\n",
- " \"url\": \"http://localhost:8005/sse\",\n",
- " \"transport\": \"sse\",\n",
- " }\n",
- " }\n",
- ") as client:\n",
- " print(client.get_tools())\n",
- " agent = create_react_agent(model, client.get_tools())\n",
- " answer = await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In the following case, you can see that the tools are no longer accessible because the session has been closed."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Now let's switch to an approach that keeps the async session open while accessing the tools."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# 1. Create the client\n",
- "client = MultiServerMCPClient(\n",
- " {\n",
- " \"weather\": {\n",
- " \"url\": \"http://localhost:8005/sse\",\n",
- " \"transport\": \"sse\",\n",
- " }\n",
- " }\n",
- ")\n",
- "\n",
- "\n",
- "# 2. Explicitly initialize the connection (this step is required)\n",
- "# Initialize\n",
- "await client.__aenter__()\n",
- "\n",
- "# The tools are now loaded\n",
- "print(client.get_tools())  # The tools are displayed"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Create a LangGraph agent."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 5,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Create the agent\n",
- "agent = create_react_agent(model, client.get_tools())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Run the graph and check the result."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Stdio Transport\n",
- "\n",
- "The stdio transport is intended for use in local environments.\n",
- "\n",
- "- Uses standard input/output for communication\n",
- "\n",
- "Note: update the Python path below!"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from mcp import ClientSession, StdioServerParameters\n",
- "from mcp.client.stdio import stdio_client\n",
- "from langgraph.prebuilt import create_react_agent\n",
- "from langchain_mcp_adapters.tools import load_mcp_tools\n",
- "from langchain_anthropic import ChatAnthropic\n",
- "\n",
- "# Initialize Anthropic's Claude model\n",
- "model = ChatAnthropic(\n",
- " model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
- ")\n",
- "\n",
- "# Configure the stdio server parameters\n",
- "# - command: path to the Python interpreter\n",
- "# - args: the MCP server script to run\n",
- "server_params = StdioServerParameters(\n",
- " command=\"./.venv/bin/python\",\n",
- " args=[\"mcp_server_local.py\"],\n",
- ")\n",
- "\n",
- "# Communicate with the server using the stdio client\n",
- "async with stdio_client(server_params) as (read, write):\n",
- " # Create a client session\n",
- " async with ClientSession(read, write) as session:\n",
- " # Initialize the connection\n",
- " await session.initialize()\n",
- "\n",
- " # Load the MCP tools\n",
- " tools = await load_mcp_tools(session)\n",
- " print(tools)\n",
- "\n",
- " # Create the agent\n",
- " agent = create_react_agent(model, tools)\n",
- "\n",
- " # Stream the agent's response\n",
- " await astream_graph(agent, {\"messages\": \"What's the weather like in Seoul?\"})"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Using an MCP Server with RAG\n",
- "\n",
- "- File: `mcp_server_rag.py`\n",
- "\n",
- "We use the `mcp_server_rag.py` file, built in advance with LangChain.\n",
- "\n",
- "We fetch the tool information over stdio. Here we load the `retriever` tool, which is defined in `mcp_server_rag.py`. This file does **not** need to be running on a server beforehand."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from mcp import ClientSession, StdioServerParameters\n",
- "from mcp.client.stdio import stdio_client\n",
- "from langchain_mcp_adapters.tools import load_mcp_tools\n",
- "from langgraph.prebuilt import create_react_agent\n",
- "from langchain_anthropic import ChatAnthropic\n",
- "from utils import astream_graph\n",
- "\n",
- "# Initialize Anthropic's Claude model\n",
- "model = ChatAnthropic(\n",
- " model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
- ")\n",
- "\n",
- "# Configure stdio server parameters for the RAG server\n",
- "server_params = StdioServerParameters(\n",
- " command=\"./.venv/bin/python\",\n",
- " args=[\"./mcp_server_rag.py\"],\n",
- ")\n",
- "\n",
- "# Communicate with the RAG server using the stdio client\n",
- "async with stdio_client(server_params) as (read, write):\n",
- " # Create a client session\n",
- " async with ClientSession(read, write) as session:\n",
- " # Initialize the connection\n",
- " await session.initialize()\n",
- "\n",
- " # Load the MCP tools (here, the retriever tool)\n",
- " tools = await load_mcp_tools(session)\n",
- "\n",
- " # Create and run the agent\n",
- " agent = create_react_agent(model, tools)\n",
- "\n",
- " # Stream the agent's response\n",
- " await astream_graph(\n",
- " agent, {\"messages\": \"Search for the name of the generative AI developed by Samsung Electronics\"}\n",
- " )"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Mixing SSE and Stdio Transports\n",
- "\n",
- "- File: `mcp_server_rag.py` communicates over stdio\n",
- "- `langchain-dev-docs` communicates over SSE\n",
- "\n",
- "We use the SSE and stdio transports together."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
- "from langgraph.prebuilt import create_react_agent\n",
- "from langchain_anthropic import ChatAnthropic\n",
- "\n",
- "# Initialize Anthropic's Claude model\n",
- "model = ChatAnthropic(\n",
- " model_name=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000\n",
- ")\n",
- "\n",
- "# 1. Create a multi-server MCP client\n",
- "client = MultiServerMCPClient(\n",
- " {\n",
- " \"document-retriever\": {\n",
- " \"command\": \"./.venv/bin/python\",\n",
- " # Update this to the absolute path of the mcp_server_rag.py file\n",
- " \"args\": [\"./mcp_server_rag.py\"],\n",
- " # Communicate over stdio (standard input/output)\n",
- " \"transport\": \"stdio\",\n",
- " },\n",
- " \"langchain-dev-docs\": {\n",
- " # Make sure the SSE server is running\n",
- " \"url\": \"https://teddynote.io/mcp/langchain/sse\",\n",
- " # Communicate via SSE (Server-Sent Events)\n",
- " \"transport\": \"sse\",\n",
- " },\n",
- " }\n",
- ")\n",
- "\n",
- "\n",
- "# 2. Explicitly initialize the connection via the async context manager\n",
- "await client.__aenter__()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Create the agent with LangGraph's `create_react_agent`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langgraph.checkpoint.memory import MemorySaver\n",
- "from langchain_core.runnables import RunnableConfig\n",
- "\n",
- "prompt = (\n",
- " \"You are a smart agent. \"\n",
- " \"Use `retriever` tool to search on AI related documents and answer questions.\"\n",
- " \"Use `langchain-dev-docs` tool to search on langchain / langgraph related documents and answer questions.\"\n",
- " \"Answer in Korean.\"\n",
- ")\n",
- "agent = create_react_agent(\n",
- " model, client.get_tools(), prompt=prompt, checkpointer=MemorySaver()\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Perform a search using the `retriever` tool defined in the `mcp_server_rag.py` file we built."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "config = RunnableConfig(recursion_limit=30, thread_id=1)\n",
- "await astream_graph(\n",
- " agent,\n",
- " {\n",
- " \"messages\": \"Use the `retriever` tool to search for the name of the generative AI developed by Samsung Electronics\"\n",
- " },\n",
- " config=config,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This time, perform a search using the `langchain-dev-docs` tool."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "config = RunnableConfig(recursion_limit=30, thread_id=1)\n",
- "await astream_graph(\n",
- " agent,\n",
- " {\"messages\": \"Refer to langgraph-dev-docs and tell me about the definition of self-rag\"},\n",
- " config=config,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "We use `MemorySaver` to keep short-term memory, so multi-turn conversations are also possible."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(\n",
- " agent, {\"messages\": \"Summarize the previous conversation as bullet points\"}, config=config\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## LangChain-Integrated Tools + MCP Tools\n",
- "\n",
- "Here we test whether tools integrated into LangChain can be used alongside the existing MCP-only tools."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 14,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_community.tools.tavily_search import TavilySearchResults\n",
- "\n",
- "# Initialize the Tavily search tool (news topic, news from the last 3 days)\n",
- "tavily = TavilySearchResults(max_results=3, topic=\"news\", days=3)\n",
- "\n",
- "# Use it together with the existing MCP tools.\n",
- "tools = client.get_tools() + [tavily]"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Create the agent with LangGraph's `create_react_agent`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langgraph.checkpoint.memory import MemorySaver\n",
- "from langchain_core.runnables import RunnableConfig\n",
- "\n",
- "# Set the recursion limit and thread ID\n",
- "config = RunnableConfig(recursion_limit=30, thread_id=2)\n",
- "\n",
- "# Set the prompt\n",
- "prompt = \"You are a smart agent with various tools. Answer questions in Korean.\"\n",
- "\n",
- "# Create the agent\n",
- "agent = create_react_agent(model, tools, prompt=prompt, checkpointer=MemorySaver())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Perform a search using the newly added `tavily` tool."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(agent, {\"messages\": \"Find today's news\"}, config=config)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "You can confirm that the `retriever` tool works smoothly."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(\n",
- " agent,\n",
- " {\n",
- " \"messages\": \"Use the `retriever` tool to search for the name of the generative AI developed by Samsung Electronics\"\n",
- " },\n",
- " config=config,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## MCP Servers Provided by Smithery\n",
- "\n",
- "- Link: https://smithery.ai/"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The tools used are listed below.\n",
- "\n",
- "- Sequential Thinking: https://smithery.ai/server/@smithery-ai/server-sequential-thinking\n",
- " - An MCP server that provides tools for dynamic, reflective problem solving through a structured thinking process\n",
- "- Desktop Commander: https://smithery.ai/server/@wonderwhy-er/desktop-commander\n",
- " - Run terminal commands and manage files with various editing capabilities. Coding, shell and terminal, task automation\n",
- "\n",
- "**Note**\n",
- "\n",
- "- When importing a tool provided by Smithery in JSON format, you must set `\"transport\": \"stdio\"` as in the example below."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langchain_mcp_adapters.client import MultiServerMCPClient\n",
- "from langgraph.prebuilt import create_react_agent\n",
- "from langchain_anthropic import ChatAnthropic\n",
- "\n",
- "# Initialize the LLM\n",
- "model = ChatAnthropic(model=\"claude-3-7-sonnet-latest\", temperature=0, max_tokens=20000)\n",
- "\n",
- "# 1. Create the client\n",
- "client = MultiServerMCPClient(\n",
- " {\n",
- " \"server-sequential-thinking\": {\n",
- " \"command\": \"npx\",\n",
- " \"args\": [\n",
- " \"-y\",\n",
- " \"@smithery/cli@latest\",\n",
- " \"run\",\n",
- " \"@smithery-ai/server-sequential-thinking\",\n",
- " \"--key\",\n",
- " \"89a4780a-53b7-4b7b-92e9-a29815f2669b\",\n",
- " ],\n",
- " \"transport\": \"stdio\",  # Add stdio transport\n",
- " },\n",
- " \"desktop-commander\": {\n",
- " \"command\": \"npx\",\n",
- " \"args\": [\n",
- " \"-y\",\n",
- " \"@smithery/cli@latest\",\n",
- " \"run\",\n",
- " \"@wonderwhy-er/desktop-commander\",\n",
- " \"--key\",\n",
- " \"89a4780a-53b7-4b7b-92e9-a29815f2669b\",\n",
- " ],\n",
- " \"transport\": \"stdio\",  # Add stdio transport\n",
- " },\n",
- " \"document-retriever\": {\n",
- " \"command\": \"./.venv/bin/python\",\n",
- " # Update this to the absolute path of the mcp_server_rag.py file\n",
- " \"args\": [\"./mcp_server_rag.py\"],\n",
- " # Communicate over stdio (standard input/output)\n",
- " \"transport\": \"stdio\",\n",
- " },\n",
- " }\n",
- ")\n",
- "\n",
- "\n",
- "# 2. Explicitly initialize the connection\n",
- "await client.__aenter__()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Create the agent with LangGraph's `create_react_agent`."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {},
- "outputs": [],
- "source": [
- "from langgraph.checkpoint.memory import MemorySaver\n",
- "from langchain_core.runnables import RunnableConfig\n",
- "\n",
- "config = RunnableConfig(recursion_limit=30, thread_id=3)\n",
- "agent = create_react_agent(model, client.get_tools(), checkpointer=MemorySaver())"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Run terminal commands using the `Desktop Commander` tool."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(\n",
- " agent,\n",
- " {\n",
- " \"messages\": \"Draw the subfolder structure including the current path as a tree, but exclude the .venv folder.\"\n",
- " },\n",
- " config=config,\n",
- ")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This time, check whether the `Sequential Thinking` tool can handle a relatively complex task."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "await astream_graph(\n",
- " agent,\n",
- " {\n",
- " \"messages\": (\n",
- " \"Use the `retriever` tool to search for content about the generative AI developed by Samsung Electronics, \"\n",
- " \"and use the `Sequential Thinking` tool to write a report.\"\n",
- " )\n",
- " },\n",
- " config=config,\n",
- ")"
- ]
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": ".venv",
- "language": "python",
- "name": "python3"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.12.8"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }
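The deleted notebook above keeps an MCP session alive across cells by calling `await client.__aenter__()` directly, with no matching `__aexit__`, so the connection leaks if a later cell raises. A minimal stdlib-only sketch of the safer pattern using `contextlib.AsyncExitStack`; `FakeMCPClient` is a hypothetical stand-in for `MultiServerMCPClient`, not the adapter's real class:

```python
import asyncio
from contextlib import AsyncExitStack


class FakeMCPClient:
    """Hypothetical stand-in: opens a connection on __aenter__, closes on __aexit__."""

    def __init__(self):
        self.open = False

    async def __aenter__(self):
        self.open = True  # pretend to connect
        return self

    async def __aexit__(self, exc_type, exc, tb):
        self.open = False  # guaranteed cleanup

    def get_tools(self):
        return ["retriever"]


async def main():
    # The exit stack keeps the session open for as long as needed,
    # but still guarantees __aexit__ runs, even on errors.
    async with AsyncExitStack() as stack:
        client = await stack.enter_async_context(FakeMCPClient())
        tools = client.get_tools()
        return tools, client


tools, client = asyncio.run(main())
print(tools, client.open)  # the connection is closed once the stack exits
```

The same `enter_async_context` call would wrap the real client; the long-lived-session behavior of the notebook is preserved while cleanup becomes automatic.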
README_KOR.md DELETED
@@ -1,232 +0,0 @@
- # LangGraph Agents + MCP
-
- [![English](https://img.shields.io/badge/Language-English-blue)](README.md) [![Korean](https://img.shields.io/badge/Language-한국어-red)](README_KOR.md)
-
- [![GitHub](https://img.shields.io/badge/GitHub-langgraph--mcp--agents-black?logo=github)](https://github.com/teddylee777/langgraph-mcp-agents)
- [![License](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)
- [![Python](https://img.shields.io/badge/Python-≥3.12-blue?logo=python&logoColor=white)](https://www.python.org/)
- [![Version](https://img.shields.io/badge/Version-0.1.0-orange)](https://github.com/teddylee777/langgraph-mcp-agents)
-
- ![project demo](./assets/project-demo.png)
-
- ## Project Overview
-
- ![project architecture](./assets/architecture.png)
-
- `LangChain-MCP-Adapters` is a toolkit provided by **LangChain AI** that enables AI agents to interact with external tools and data sources through the Model Context Protocol (MCP). This project provides a user-friendly interface for deploying ReAct agents that can access various data sources and APIs through MCP tools.
-
- ### Features
-
- - **Streamlit Interface**: A user-friendly web interface for interacting with a LangGraph `ReAct Agent` equipped with MCP tools
- - **Tool Management**: Add, remove, and configure MCP tools through the UI (Smithery JSON format supported). This happens dynamically, without restarting the application.
- - **Streaming Responses**: See agent responses and tool calls in real time
- - **Conversation History**: Track and manage conversations with the agent
-
- ## MCP Architecture
-
- MCP (Model Context Protocol) consists of three main components:
-
- 1. **MCP Host**: A program that wants to access data through MCP, such as Claude Desktop, an IDE, or LangChain/LangGraph.
-
- 2. **MCP Client**: A protocol client that maintains a 1:1 connection with a server, acting as an intermediary between the host and the server.
-
- 3. **MCP Server**: A lightweight program that exposes specific capabilities through the standardized Model Context Protocol and serves as the primary data source.
-
- ## Quick Start with Docker
-
- You can easily run this project with Docker, without setting up a local Python environment.
-
- ### Prerequisites (Docker Desktop)
-
- Install Docker Desktop from the link below.
-
- - [Install Docker Desktop](https://www.docker.com/products/docker-desktop/)
-
- ### Running with Docker Compose
-
- 1. Navigate to the `dockers` directory
-
- ```bash
- cd dockers
- ```
-
- 2. Create a `.env` file with your API keys in the project root directory.
-
- ```bash
- cp .env.example .env
- ```
-
- Enter the API keys you have been issued into the `.env` file.
-
- (Note) Not all API keys are required. Enter them only if you need them.
- - `ANTHROPIC_API_KEY`: If you enter an Anthropic API key, the "claude-3-7-sonnet-latest", "claude-3-5-sonnet-latest", and "claude-3-haiku-latest" models are used.
- - `OPENAI_API_KEY`: If you enter an OpenAI API key, the "gpt-4o" and "gpt-4o-mini" models are used.
- - `LANGSMITH_API_KEY`: If you enter a LangSmith API key, LangSmith tracing is used.
-
- ```bash
- ANTHROPIC_API_KEY=your_anthropic_api_key
- OPENAI_API_KEY=your_openai_api_key
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_PROJECT=LangGraph-MCP-Agents
- LANGSMITH_TRACING=true
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
- ```
-
- (New feature) Using the login/logout feature
-
- To use the login feature, set `USE_LOGIN` to `true` and enter `USER_ID` and `USER_PASSWORD`.
-
- ```bash
- USE_LOGIN=true
- USER_ID=admin
- USER_PASSWORD=admin123
- ```
-
- If you do not want to use the login feature, set `USE_LOGIN` to `false`.
-
- ```bash
- USE_LOGIN=false
- ```
-
- 3. Select the Docker Compose file that matches your system architecture.
-
- **AMD64/x86_64 architecture (Intel/AMD processors)**
-
- ```bash
- # Start the container
- docker compose -f docker-compose-KOR.yaml up -d
- ```
-
- **ARM64 architecture (Apple Silicon M1/M2/M3/M4)**
-
- ```bash
- # Start the container
- docker compose -f docker-compose-KOR-mac.yaml up -d
- ```
-
- 4. Access the application in your browser at http://localhost:8585
-
- (Note)
- - If you need to change the port or other settings, edit the corresponding docker-compose-KOR.yaml file before building.
-
- ## Install Directly from Source Code
-
- 1. Clone this repository
-
- ```bash
- git clone https://github.com/teddynote-lab/langgraph-mcp-agents.git
- cd langgraph-mcp-agents
- ```
-
- 2. Create a virtual environment and install dependencies using uv
-
- ```bash
- uv venv
- uv pip install -r requirements.txt
- source .venv/bin/activate  # On Windows: .venv\Scripts\activate
- ```
-
- 3. Create a `.env` file with your API keys (copy from `.env.example`)
-
- ```bash
- cp .env.example .env
- ```
-
- Enter the API keys you have been issued into the `.env` file.
-
- (Note) Not all API keys are required. Enter them only if you need them.
- - `ANTHROPIC_API_KEY`: If you enter an Anthropic API key, the "claude-3-7-sonnet-latest", "claude-3-5-sonnet-latest", and "claude-3-haiku-latest" models are used.
- - `OPENAI_API_KEY`: If you enter an OpenAI API key, the "gpt-4o" and "gpt-4o-mini" models are used.
- - `LANGSMITH_API_KEY`: If you enter a LangSmith API key, LangSmith tracing is used.
-
- ```bash
- ANTHROPIC_API_KEY=your_anthropic_api_key
- OPENAI_API_KEY=your_openai_api_key(optional)
- LANGSMITH_API_KEY=your_langsmith_api_key
- LANGSMITH_PROJECT=LangGraph-MCP-Agents
- LANGSMITH_TRACING=true
- LANGSMITH_ENDPOINT=https://api.smith.langchain.com
- ```
-
- 4. (New feature) Using the login/logout feature
-
- To use the login feature, set `USE_LOGIN` to `true` and enter `USER_ID` and `USER_PASSWORD`.
-
- ```bash
- USE_LOGIN=true
- USER_ID=admin
- USER_PASSWORD=admin123
- ```
-
- If you do not want to use the login feature, set `USE_LOGIN` to `false`.
-
- ```bash
- USE_LOGIN=false
- ```
-
- ## Usage
-
- 1. Start the Streamlit application. (The Korean version file is `app_KOR.py`.)
-
- ```bash
- streamlit run app_KOR.py
- ```
-
- 2. The application runs in the browser and displays the main interface.
-
- 3. Use the sidebar to add and configure MCP tools
-
- Visit [Smithery](https://smithery.ai/) to find useful MCP servers.
-
- First, select the tool you want to use.
-
- Click the COPY button in the JSON configuration on the right.
-
- ![copy from Smithery](./assets/smithery-copy-json.png)
-
- Paste the copied JSON string into the `Tool JSON` section.
-
- <img src="./assets/add-tools.png" alt="tool json" style="width: auto; height: auto;">
-
- Click the `Add Tool` button to add it to the "Registered Tools List" section.
-
- Finally, click the "Apply" button to apply the changes and initialize the agent with the new tools.
-
- <img src="./assets/apply-tool-configuration.png" alt="tool json" style="width: auto; height: auto;">
-
- 4. Check the agent's status.
-
- ![check status](./assets/check-status.png)
-
- 5. Interact with the ReAct agent, which uses the configured MCP tools, by asking questions in the chat interface.
-
- ![project demo](./assets/project-demo.png)
-
- ## Hands-On Tutorial
-
- For developers who want to dive deeper into how the MCP and LangGraph integration works, we provide a comprehensive Jupyter notebook tutorial:
-
- - Link: [MCP-HandsOn-KOR.ipynb](./MCP-HandsOn-KOR.ipynb)
-
- This hands-on tutorial covers:
-
- 1. **MCP Client Setup** - Learn how to configure and initialize MultiServerMCPClient to connect to MCP servers
- 2. **Local MCP Server Integration** - Connect to locally running MCP servers via the SSE and stdio methods
- 3. **RAG Integration** - Access a retriever tool through MCP for document search capabilities
- 4. **Mixed Transport Methods** - Combine different transport protocols (SSE and stdio) in a single agent
- 5. **LangChain Tools + MCP** - Integrate native LangChain tools alongside MCP tools
-
- This tutorial provides practical examples with step-by-step explanations that help you understand how to build and integrate MCP tools into LangGraph agents.
-
- ## License
-
- MIT License
-
- ## Watch the Tutorial Video (Korean)
-
- [![Tutorial Video Thumbnail](https://img.youtube.com/vi/ISrYHGg2C2c/maxresdefault.jpg)](https://youtu.be/ISrYHGg2C2c?si=eWmKFVUS1BLtPm5U)
-
- ## References
-
231
- - https://github.com/langchain-ai/langchain-mcp-adapters
232
-
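The sidebar step above accepts a tool JSON and later decides how to reach each server. A minimal sketch of that decision, mirroring the rule visible in `app.py` (a `url` key implies the `sse` transport, otherwise `stdio` is the default); the server names and paths below are illustrative assumptions, not part of the repository:

```python
# Sketch only: mirrors the sidebar rule in app.py — a "url" key selects the
# "sse" transport, otherwise "stdio" is the default.
def infer_transport(tool_config: dict) -> str:
    if "url" in tool_config:
        return "sse"
    return tool_config.get("transport", "stdio")


# Example tool configs like those pasted into the "Tool JSON" section
# (names/paths are illustrative)
local_tool = {"command": "python", "args": ["./mcp_server_time.py"]}
remote_tool = {"url": "http://localhost:8005/sse"}

print(infer_transport(local_tool))   # stdio
print(infer_transport(remote_tool))  # sse
```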
__pycache__/app.cpython-310.pyc ADDED
Binary file (21.6 kB)

__pycache__/app.cpython-312.pyc ADDED
Binary file (38.6 kB)

__pycache__/utils.cpython-310.pyc ADDED
Binary file (6.32 kB)

__pycache__/utils.cpython-312.pyc ADDED
Binary file (10.5 kB)
app.py CHANGED
@@ -52,10 +52,14 @@ def load_config_from_json():
         }
     }
 
+
+
     try:
         if os.path.exists(CONFIG_FILE_PATH):
             with open(CONFIG_FILE_PATH, "r", encoding="utf-8") as f:
-                return json.load(f)
+                config = json.load(f)
+
+                return config
         else:
             # Create file with default settings if it doesn't exist
             save_config_to_json(default_config)
@@ -186,8 +190,8 @@ Guidelines:
 OUTPUT_TOKEN_INFO = {
     "claude-3-5-sonnet-latest": {"max_tokens": 8192},
     "claude-3-5-haiku-latest": {"max_tokens": 8192},
-    "claude-3-7-sonnet-latest": {"max_tokens": 64000},
-    "gpt-4o": {"max_tokens": 16000},
+    "claude-3-5-sonnet-20241022": {"max_tokens": 64000},
+    "gpt-4o": {"max_tokens": 4096},  # 16000
     "gpt-4o-mini": {"max_tokens": 16000},
 }
 
@@ -198,10 +202,10 @@ if "session_initialized" not in st.session_state:
     st.session_state.history = []  # List for storing conversation history
     st.session_state.mcp_client = None  # Storage for MCP client object
     st.session_state.timeout_seconds = (
-        120  # Response generation time limit (seconds), default 120 seconds
+        30000  # Response generation time limit (seconds), default 120 seconds
     )
     st.session_state.selected_model = (
-        "claude-3-7-sonnet-latest"  # Default model selection
+        "claude-3-5-sonnet-20241022"  # Default model selection
     )
     st.session_state.recursion_limit = 100  # Recursion call limit, default 100
 
@@ -230,6 +234,9 @@ async def cleanup_mcp_client():
         # st.warning(traceback.format_exc())
 
 
+
+
+
 def print_message():
     """
     Displays chat history on the screen.
@@ -272,6 +279,7 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
 
     This function creates a callback function to display responses generated from the LLM in real-time.
     It displays text responses and tool call information in separate areas.
+    It also supports real-time streaming updates from MCP tools.
 
     Args:
         text_placeholder: Streamlit component to display text responses
@@ -288,9 +296,26 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
     def callback_func(message: dict):
         nonlocal accumulated_text, accumulated_tool
         message_content = message.get("content", None)
+
+        # Initialize data counter for tracking data: messages
+        if not hasattr(callback_func, '_data_counter'):
+            callback_func._data_counter = 0
+
+        # Initialize persistent storage for all processed data
+        if not hasattr(callback_func, '_persistent_data'):
+            callback_func._persistent_data = []
+            callback_func._persistent_data.append("🚀 **Session Started** - All data will be preserved\n")
+            callback_func._persistent_data.append("---\n")
+
+
+
+
 
         if isinstance(message_content, AIMessageChunk):
             content = message_content.content
+
+
+
             # If content is in list form (mainly occurs in Claude models)
             if isinstance(content, list) and len(content) > 0:
                 message_chunk = content[0]
@@ -320,12 +345,16 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
             ):
                 tool_call_info = message_content.tool_calls[0]
                 accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
+
+
+
                 with tool_placeholder.expander(
                     "🔧 Tool Call Information", expanded=True
                 ):
                     st.markdown("".join(accumulated_tool))
             # Process if content is a simple string
             elif isinstance(content, str):
+                # Regular text content
                 accumulated_text.append(content)
                 text_placeholder.markdown("".join(accumulated_text))
             # Process if invalid tool call information exists
@@ -345,9 +374,22 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
             and message_content.tool_call_chunks
         ):
             tool_call_chunk = message_content.tool_call_chunks[0]
+            tool_name = tool_call_chunk.get('name', 'Unknown')
+
+            # Only show tool call info if it's a new tool or has meaningful changes
+            if not hasattr(callback_func, '_last_tool_name') or callback_func._last_tool_name != tool_name:
+                accumulated_tool.append(
+                    f"\n🔧 **Tool Call**: {tool_name}\n"
+                )
+                callback_func._last_tool_name = tool_name
+
+            # Show tool call details in a more compact format
             accumulated_tool.append(
-                "\n```json\n" + str(tool_call_chunk) + "\n```\n"
+                f"```json\n{str(tool_call_chunk)}\n```\n"
             )
+
+
+
             with tool_placeholder.expander(
                 "🔧 Tool Call Information", expanded=True
             ):
@@ -359,17 +401,330 @@ def get_streaming_callback(text_placeholder, tool_placeholder):
         ):
             tool_call_info = message_content.additional_kwargs["tool_calls"][0]
             accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
+
+
+
             with tool_placeholder.expander(
                 "🔧 Tool Call Information", expanded=True
             ):
                 st.markdown("".join(accumulated_tool))
         # Process if it's a tool message (tool response)
         elif isinstance(message_content, ToolMessage):
-            accumulated_tool.append(
-                "\n```json\n" + str(message_content.content) + "\n```\n"
-            )
-            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
-                st.markdown("".join(accumulated_tool))
+            # Don't show Tool Completed immediately - wait for all streaming content
+            # Just store the tool name for later display
+            if not hasattr(callback_func, '_pending_tool_completion'):
+                callback_func._pending_tool_completion = []
+            callback_func._pending_tool_completion.append(message_content.name or "Unknown Tool")
+
+            # Convert streaming text to final result
+            streaming_text_items = [item for item in accumulated_tool if item.startswith("\n📊 **Streaming Text**:")]
+            if streaming_text_items:
+                # Get the last streaming text (most complete)
+                last_streaming = streaming_text_items[-1]
+                # Extract the text content
+                final_text = last_streaming.replace("\n📊 **Streaming Text**: ", "").strip()
+                if final_text:
+                    # Remove all streaming text entries
+                    accumulated_tool = [item for item in accumulated_tool if not item.startswith("\n📊 **Streaming Text**:")]
+                    # Add the final complete result
+                    accumulated_tool.append(f"\n📊 **Final Result**: {final_text}\n")
+
+            # Handle tool response content
+            tool_content = message_content.content
+
+
+            # Handle tool response content
+            if isinstance(tool_content, str):
+                # Look for SSE data patterns
+                if "data:" in tool_content:
+                    # Parse SSE data and extract meaningful content
+                    lines = tool_content.split('\n')
+                    for line in lines:
+                        line = line.strip()
+                        if line.startswith('data:'):
+                            # Increment data counter for each data: message
+                            callback_func._data_counter += 1
+
+                            try:
+                                # Extract JSON content from SSE data
+                                json_str = line[5:].strip()  # Remove 'data:' prefix
+                                if json_str:
+                                    # Try to parse as JSON
+                                    import json
+                                    try:
+                                        data_obj = json.loads(json_str)
+                                        if isinstance(data_obj, dict):
+                                            # Handle different types of SSE data
+                                            if data_obj.get("type") == "result":
+                                                content = data_obj.get("content", "")
+                                                if content:
+                                                    # Check for specific server output formats
+                                                    if "```bdd-long-task-start" in content:
+                                                        # Extract task info
+                                                        import re
+                                                        match = re.search(r'```bdd-long-task-start\s*\n(.*?)\n```', content, re.DOTALL)
+                                                        if match:
+                                                            try:
+                                                                task_info = json.loads(match.group(1))
+                                                                task_id = task_info.get('id', 'Unknown')
+                                                                task_label = task_info.get('label', 'Unknown task')
+                                                                accumulated_tool.append(f"\n🚀 **Task Started** [{task_id}]: {task_label}\n")
+                                                            except:
+                                                                accumulated_tool.append(f"\n🚀 **Task Started**: {content}\n")
+                                                        # Real-time UI update for task start
+                                                        with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                            # Show data counter at the top
+                                                            st.markdown(f"**📊 Total Data Messages: {callback_func._data_counter}**")
+                                                            st.markdown("---")
+                                                            st.markdown("".join(accumulated_tool))
+                                                    elif "```bdd-long-task-end" in content:
+                                                        # Extract task info
+                                                        import re
+                                                        match = re.search(r'```bdd-long-task-end\s*\n(.*?)\n```', content, re.DOTALL)
+                                                        if match:
+                                                            try:
+                                                                task_info = json.loads(match.group(1))
+                                                                task_id = task_info.get('id', 'Unknown')
+                                                                accumulated_tool.append(f"\n✅ **Task Completed** [{task_id}]\n")
+                                                            except:
+                                                                accumulated_tool.append(f"\n✅ **Task Completed**: {content}\n")
+                                                        # Real-time UI update for task completion
+                                                        with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                            # Show data counter at the top
+                                                            st.markdown(f"**📊 Total Data Messages: {callback_func._data_counter}**")
+                                                            st.markdown("---")
+                                                            st.markdown("".join(accumulated_tool))
+                                                    elif "```bdd-resource-lookup" in content:
+                                                        # Extract resource info
+                                                        import re
+                                                        match = re.search(r'```bdd-resource-lookup\s*\n(.*?)\n```', content, re.DOTALL)
+                                                        if match:
+                                                            try:
+                                                                resources = json.loads(match.group(1))
+                                                                if isinstance(resources, list):
+                                                                    accumulated_tool.append(f"\n📚 **Resources Found**: {len(resources)} items\n")
+                                                                    for i, resource in enumerate(resources[:3]):  # Show first 3
+                                                                        source = resource.get('source', 'Unknown')
+                                                                        doc_id = resource.get('docId', 'Unknown')
+                                                                        citation = resource.get('citation', '')
+                                                                        accumulated_tool.append(f"  - {source}: {doc_id} [citation:{citation}]\n")
+                                                                    if len(resources) > 3:
+                                                                        accumulated_tool.append(f"  ... and {len(resources) - 3} more\n")
+                                                            except:
+                                                                accumulated_tool.append(f"\n📚 **Resources**: {content}\n")
+                                                        # Real-time UI update for resources
+                                                        with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                            # Show data counter at the top
+                                                            st.markdown(f"**📊 Total Data Messages: {callback_func._data_counter}**")
+                                                            st.markdown("---")
+                                                            st.markdown("".join(accumulated_tool))
+                                                    elif "```bdd-chat-agent-task" in content:
+                                                        # Extract chat agent task info
+                                                        import re
+                                                        match = re.search(r'```bdd-chat-agent-task\s*\n(.*?)\n```', content, re.DOTALL)
+                                                        if match:
+                                                            try:
+                                                                task_info = json.loads(match.group(1))
+                                                                task_type = task_info.get('type', 'Unknown')
+                                                                task_label = task_info.get('label', 'Unknown')
+                                                                task_status = task_info.get('status', 'Unknown')
+                                                                accumulated_tool.append(f"\n🤖 **Agent Task** [{task_status}]: {task_type} - {task_label}\n")
+                                                            except:
+                                                                accumulated_tool.append(f"\n🤖 **Agent Task**: {content}\n")
+                                                    elif "ping - " in content:
+                                                        # Extract timestamp from ping messages
+                                                        timestamp = content.split("ping - ")[-1]
+                                                        accumulated_tool.append(f"⏱️ **Progress Update**: {timestamp}\n")
+                                                    elif data_obj.get("type") == "done":
+                                                        # Task completion
+                                                        accumulated_tool.append(f"\n🎯 **Task Done**: {content}\n")
+                                                    else:
+                                                        # Regular result content - accumulate text for better readability
+                                                        if not hasattr(callback_func, '_result_buffer'):
+                                                            callback_func._result_buffer = ""
+                                                        callback_func._result_buffer += content
+
+                                                        # For simple text streams (like health check or mock), update more frequently
+                                                        # Check if this is a simple text response (not BDD format)
+                                                        is_simple_text = not any(marker in content for marker in ['```bdd-', 'ping -', 'data:'])
+
+                                                        # For simple text streams, always update immediately to show all fragments
+                                                        if is_simple_text and content.strip():
+                                                            # Clear previous streaming text entries and add updated one
+                                                            accumulated_tool = [item for item in accumulated_tool if not item.startswith("\n📊 **Streaming Text**:")]
+
+                                                            # Add the updated complete streaming text in one line
+                                                            accumulated_tool.append(f"\n📊 **Streaming Text**: {callback_func._result_buffer}\n")
+
+                                                            # Immediate UI update for text streams
+                                                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                                st.markdown("".join(accumulated_tool))
+                                                        else:
+                                                            # For complex content, use timed updates
+                                                            update_interval = 0.2 if len(content.strip()) <= 10 else 0.5
+
+                                                            # Only update display periodically to avoid excessive updates
+                                                            if not hasattr(callback_func, '_last_update_time'):
+                                                                callback_func._last_update_time = 0
+
+                                                            import time
+                                                            current_time = time.time()
+                                                            if current_time - callback_func._last_update_time > update_interval:
+                                                                # For complex content, show accumulated buffer
+                                                                accumulated_tool.append(f"\n📊 **Result Update**:\n")
+                                                                accumulated_tool.append(f"```\n{callback_func._result_buffer}\n```\n")
+                                                                callback_func._last_update_time = current_time
+
+                                                            # Real-time UI update
+                                                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                                st.markdown("".join(accumulated_tool))
+                                            else:
+                                                # Handle other data types that are not "result" type
+                                                # This ensures ALL data: messages are processed and displayed
+                                                data_type = data_obj.get("type", "unknown")
+                                                data_content = data_obj.get("content", str(data_obj))
+
+                                                # Add timestamp for real-time tracking
+                                                import time
+                                                timestamp = time.strftime("%H:%M:%S")
+
+                                                # Format the data for display
+                                                data_entry = ""
+                                                if isinstance(data_content, str):
+                                                    data_entry = f"\n📡 **Data [{data_type}]** [{timestamp}]: {data_content}\n"
+                                                else:
+                                                    data_entry = f"\n📡 **Data [{data_type}]** [{timestamp}]:\n```json\n{json.dumps(data_obj, indent=2)}\n```\n"
+
+                                                # Add to both temporary and persistent storage
+                                                accumulated_tool.append(data_entry)
+                                                callback_func._persistent_data.append(data_entry)
+
+                                                # Immediate real-time UI update for any data: message
+                                                with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                    # Show data counter at the top
+                                                    st.markdown(f"**📊 Total Data Messages: {callback_func._data_counter}**")
+                                                    st.markdown("---")
+                                                    # Show persistent data first, then current accumulated data
+                                                    st.markdown("".join(callback_func._persistent_data))
+                                                    st.markdown("---")
+                                                    st.markdown("**🔄 Current Stream:**")
+                                                    st.markdown("".join(accumulated_tool))
+                                        else:
+                                            # Handle non-dict data objects
+                                            import time
+                                            timestamp = time.strftime("%H:%M:%S")
+                                            data_entry = f"\n📡 **Raw Data** [{timestamp}]:\n```json\n{json_str}\n```\n"
+
+                                            # Add to both temporary and persistent storage
+                                            accumulated_tool.append(data_entry)
+                                            callback_func._persistent_data.append(data_entry)
+
+                                            # Immediate real-time UI update
+                                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                # Show data counter at the top
+                                                st.markdown(f"**📊 Total Data Messages: {callback_func._data_counter}**")
+                                                st.markdown("---")
+                                                # Show persistent data first, then current accumulated data
+                                                st.markdown("".join(callback_func._persistent_data))
+                                                st.markdown("---")
+                                                st.markdown("**🔄 Current Stream:**")
+                                                st.markdown("".join(accumulated_tool))
+                                    except json.JSONDecodeError:
+                                        # If not valid JSON, check if it's streaming text content
+                                        if json_str and len(json_str.strip()) > 0:
+                                            # This might be streaming text, accumulate it
+                                            if not hasattr(callback_func, '_stream_buffer'):
+                                                callback_func._stream_buffer = ""
+                                            callback_func._stream_buffer += json_str
+
+                                            # Only show streaming content periodically
+                                            if not hasattr(callback_func, '_stream_update_time'):
+                                                callback_func._stream_update_time = 0
+
+                                            import time
+                                            current_time = time.time()
+                                            if current_time - callback_func._stream_update_time > 0.3:  # Update every 0.3 seconds for better responsiveness
+                                                # Add new streaming update without clearing previous ones
+                                                if callback_func._stream_buffer.strip():
+                                                    accumulated_tool.append(f"\n📝 **Streaming Update**: {callback_func._stream_buffer}\n")
+                                                callback_func._stream_update_time = current_time
+
+                                            # Real-time UI update
+                                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                st.markdown("".join(accumulated_tool))
+                                        else:
+                                            # Handle empty or whitespace-only data
+                                            import time
+                                            timestamp = time.strftime("%H:%M:%S")
+                                            accumulated_tool.append(f"\n📡 **Empty Data** [{timestamp}]: (empty or whitespace)\n")
+
+                                            # Immediate real-time UI update
+                                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                                st.markdown("".join(accumulated_tool))
+                            except Exception as e:
+                                # Fallback: treat as plain text, but only if it's meaningful
+                                import time
+                                timestamp = time.strftime("%H:%M:%S")
+                                if line.strip() and len(line.strip()) > 1:  # Only show non-trivial content
+                                    accumulated_tool.append(f"\n📝 **Info** [{timestamp}]: {line.strip()}\n")
+                                else:
+                                    accumulated_tool.append(f"\n⚠️ **Error** [{timestamp}]: {str(e)}\n")
+
+                                # Immediate real-time UI update for error cases
+                                with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                    st.markdown("".join(accumulated_tool))
+                        elif line.startswith('ping - '):
+                            # Handle ping messages directly
+                            timestamp = line.split('ping - ')[-1]
+                            accumulated_tool.append(f"⏱️ **Progress Update**: {timestamp}\n")
+
+                            # Immediate real-time UI update for ping messages
+                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                st.markdown("".join(accumulated_tool))
+                        elif line and not line.startswith(':'):
+                            # Other non-empty lines - capture any other data patterns
+                            import time
+                            timestamp = time.strftime("%H:%M:%S")
+
+                            # Check if this line contains any meaningful data
+                            if line.strip() and len(line.strip()) > 1:
+                                # Try to detect if it's JSON-like content
+                                if line.strip().startswith('{') or line.strip().startswith('['):
+                                    try:
+                                        # Try to parse as JSON for better formatting
+                                        import json
+                                        parsed_json = json.loads(line.strip())
+                                        accumulated_tool.append(f"\n📡 **JSON Data** [{timestamp}]:\n```json\n{json.dumps(parsed_json, indent=2)}\n```\n")
+                                    except:
+                                        # If not valid JSON, show as regular data
+                                        accumulated_tool.append(f"\n📡 **Data** [{timestamp}]: {line.strip()}\n")
+                                else:
+                                    # Regular text data
+                                    accumulated_tool.append(f"\n📝 **Info** [{timestamp}]: {line.strip()}\n")
+
+                            # Immediate real-time UI update for any captured data
+                            with tool_placeholder.expander("🔧 Tool Call Information", expanded=True):
+                                st.markdown("".join(accumulated_tool))
+                else:
+                    # Regular tool response content
+                    accumulated_tool.append(
+                        "\n```json\n" + str(tool_content) + "\n```\n"
+                    )
+            else:
+                # Non-string content
+                accumulated_tool.append(
+                    "\n```json\n" + str(tool_content) + "\n```\n"
+                )
+
+            # Show pending tool completion status after all streaming content
+            if hasattr(callback_func, '_pending_tool_completion') and callback_func._pending_tool_completion:
+                for tool_name in callback_func._pending_tool_completion:
+                    accumulated_tool.append(f"\n✅ **Tool Completed**: {tool_name}\n")
+                # Clear the pending list
+                callback_func._pending_tool_completion = []
+
+
         return None
 
     return callback_func, accumulated_text, accumulated_tool
@@ -412,8 +767,37 @@ async def process_query(query, text_placeholder, tool_placeholder, timeout_secon
             timeout=timeout_seconds,
         )
     except asyncio.TimeoutError:
-        error_msg = f"⏱️ Request time exceeded {timeout_seconds} seconds. Please try again later."
+        # On timeout, reset thread to avoid leaving an incomplete tool call in memory
+        st.session_state.thread_id = random_uuid()
+        error_msg = (
+            f"⏱️ Request time exceeded {timeout_seconds} seconds. Conversation was reset. Please retry."
+        )
         return {"error": error_msg}, error_msg, ""
+    except ValueError as e:
+        # Handle invalid chat history caused by incomplete tool calls
+        if "Found AIMessages with tool_calls" in str(e):
+            # Reset thread and retry once
+            st.session_state.thread_id = random_uuid()
+            try:
+                response = await asyncio.wait_for(
+                    astream_graph(
+                        st.session_state.agent,
+                        {"messages": [HumanMessage(content=query)]},
+                        callback=streaming_callback,
+                        config=RunnableConfig(
+                            recursion_limit=st.session_state.recursion_limit,
+                            thread_id=st.session_state.thread_id,
+                        ),
+                    ),
+                    timeout=timeout_seconds,
+                )
+            except Exception:
+                error_msg = (
+                    "⚠️ Conversation state was invalid and has been reset. Please try again."
+                )
+                return {"error": error_msg}, error_msg, ""
+        else:
+            raise
 
     final_text = "".join(accumulated_text_obj)
     final_tool = "".join(accumulated_tool_obj)
@@ -448,6 +832,7 @@ async def initialize_session(mcp_config=None):
     if mcp_config is None:
        # Load settings from config.json file
        mcp_config = load_config_from_json()
+
    client = MultiServerMCPClient(mcp_config)
    await client.__aenter__()
    tools = client.get_tools()
@@ -458,7 +843,7 @@
    selected_model = st.session_state.selected_model

    if selected_model in [
-        "claude-3-7-sonnet-latest",
+        "claude-3-5-sonnet-20241022",
        "claude-3-5-sonnet-latest",
        "claude-3-5-haiku-latest",
    ]:
@@ -469,6 +854,7 @@
        )
    else:  # Use OpenAI model
        model = ChatOpenAI(
+            base_url=os.environ.get("OPENAI_API_BASE"),
            model=selected_model,
            temperature=0.1,
            max_tokens=OUTPUT_TOKEN_INFO[selected_model]["max_tokens"],
@@ -497,7 +883,7 @@ with st.sidebar:
    if has_anthropic_key:
        available_models.extend(
            [
-                "claude-3-7-sonnet-latest",
+                "claude-3-5-sonnet-20241022",
                "claude-3-5-sonnet-latest",
                "claude-3-5-haiku-latest",
            ]
@@ -514,7 +900,7 @@ with st.sidebar:
            "⚠️ API keys are not configured. Please add ANTHROPIC_API_KEY or OPENAI_API_KEY to your .env file."
        )
        # Add Claude model as default (to show UI even without keys)
-        available_models = ["claude-3-7-sonnet-latest"]
+        available_models = ["claude-3-5-sonnet-20241022"]

    # Model selection dropdown
    previous_model = st.session_state.selected_model
@@ -542,7 +928,7 @@ with st.sidebar:
    st.session_state.timeout_seconds = st.slider(
        "⏱️ Response generation time limit (seconds)",
        min_value=60,
-        max_value=300,
+        max_value=300000,
        value=st.session_state.timeout_seconds,
        step=10,
        help="Set the maximum time for the agent to generate a response. Complex tasks may require more time.",
@@ -655,6 +1041,7 @@ with st.sidebar:
                    st.info(
                        f"URL detected in '{tool_name}' tool, setting transport to 'sse'."
                    )
+
                elif "transport" not in tool_config:
                    # Set default "stdio" if URL doesn't exist and transport isn't specified
                    tool_config["transport"] = "stdio"
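The streaming callback added in the diff above walks each tool response line by line, treating `data:`-prefixed lines as SSE events, counting them, and falling back to raw text when the payload is not JSON. A minimal, self-contained sketch of that parsing step (the event shape is an assumption inferred from the diff, not a documented protocol):

```python
import json

def parse_sse_lines(raw: str) -> list:
    """Sketch of the callback's 'data:' line handling: strip the prefix,
    decode the JSON payload when possible, otherwise keep the raw text."""
    events = []
    for line in raw.split("\n"):
        line = line.strip()
        if not line.startswith("data:"):
            continue  # the callback also handles 'ping - ' lines; omitted here
        payload = line[5:].strip()
        try:
            events.append(json.loads(payload))
        except json.JSONDecodeError:
            # Non-JSON payloads are treated as streaming text fragments
            events.append({"type": "raw", "content": payload})
    return events

events = parse_sse_lines('data: {"type": "result", "content": "hi"}\n\ndata: not-json')
print(events)
```

The real callback additionally buffers these fragments and pushes them into the Streamlit expander; this sketch only isolates the parsing rule.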
app_KOR.py DELETED
@@ -1,848 +0,0 @@
- import streamlit as st
- import asyncio
- import nest_asyncio
- import json
- import os
- import platform
-
- if platform.system() == "Windows":
-     asyncio.set_event_loop_policy(asyncio.WindowsProactorEventLoopPolicy())
-
- # Apply nest_asyncio: allow nested calls inside an already running event loop
- nest_asyncio.apply()
-
- # Create and reuse a global event loop (create once, then keep using it)
- if "event_loop" not in st.session_state:
-     loop = asyncio.new_event_loop()
-     st.session_state.event_loop = loop
-     asyncio.set_event_loop(loop)
-
- from langgraph.prebuilt import create_react_agent
- from langchain_anthropic import ChatAnthropic
- from langchain_openai import ChatOpenAI
- from langchain_core.messages import HumanMessage
- from dotenv import load_dotenv
- from langchain_mcp_adapters.client import MultiServerMCPClient
- from utils import astream_graph, random_uuid
- from langchain_core.messages.ai import AIMessageChunk
- from langchain_core.messages.tool import ToolMessage
- from langgraph.checkpoint.memory import MemorySaver
- from langchain_core.runnables import RunnableConfig
-
- # Load environment variables (API keys and other settings from the .env file)
- load_dotenv(override=True)
-
- # Path of the config.json file
- CONFIG_FILE_PATH = "config.json"
-
- # Function that loads the JSON configuration file
- def load_config_from_json():
-     """
-     Loads settings from the config.json file.
-     Creates the file with default settings if it does not exist.
-
-     Returns:
-         dict: the loaded configuration
-     """
-     default_config = {
-         "get_current_time": {
-             "command": "python",
-             "args": ["./mcp_server_time.py"],
-             "transport": "stdio"
-         }
-     }
-
-     try:
-         if os.path.exists(CONFIG_FILE_PATH):
-             with open(CONFIG_FILE_PATH, "r", encoding="utf-8") as f:
-                 return json.load(f)
-         else:
-             # Create the file with default settings if it does not exist
-             save_config_to_json(default_config)
-             return default_config
-     except Exception as e:
64
- st.error(f"์„ค์ • ํŒŒ์ผ ๋กœ๋“œ ์ค‘ ์˜ค๋ฅ˜ ๋ฐœ์ƒ: {str(e)}")
65
- return default_config
66
-
67
- # JSON ์„ค์ • ํŒŒ์ผ ์ €์žฅ ํ•จ์ˆ˜
68
- def save_config_to_json(config):
69
- """
70
- ์„ค์ •์„ config.json ํŒŒ์ผ์— ์ €์žฅํ•ฉ๋‹ˆ๋‹ค.
71
-
72
- ๋งค๊ฐœ๋ณ€์ˆ˜:
73
- config (dict): ์ €์žฅํ•  ์„ค์ •
74
-
75
- ๋ฐ˜ํ™˜๊ฐ’:
76
- bool: ์ €์žฅ ์„ฑ๊ณต ์—ฌ๋ถ€
77
- """
78
- try:
79
- with open(CONFIG_FILE_PATH, "w", encoding="utf-8") as f:
80
- json.dump(config, f, indent=2, ensure_ascii=False)
81
- return True
82
- except Exception as e:
83
- st.error(f"์„ค์ • ํŒŒ์ผ ์ €์žฅ ์ค‘ ์˜ค๋ฅ˜ ๋ฐœ์ƒ: {str(e)}")
84
- return False
85
-
86
- # ๋กœ๊ทธ์ธ ์„ธ์…˜ ๋ณ€์ˆ˜ ์ดˆ๊ธฐํ™”
87
- if "authenticated" not in st.session_state:
88
- st.session_state.authenticated = False
89
-
90
- # ๋กœ๊ทธ์ธ ํ•„์š” ์—ฌ๋ถ€ ํ™•์ธ
91
- use_login = os.environ.get("USE_LOGIN", "false").lower() == "true"
92
-
93
- # ๋กœ๊ทธ์ธ ์ƒํƒœ์— ๋”ฐ๋ผ ํŽ˜์ด์ง€ ์„ค์ • ๋ณ€๊ฒฝ
94
- if use_login and not st.session_state.authenticated:
95
- # ๋กœ๊ทธ์ธ ํŽ˜์ด์ง€๋Š” ๊ธฐ๋ณธ(narrow) ๋ ˆ์ด์•„์›ƒ ์‚ฌ์šฉ
96
- st.set_page_config(page_title="Agent with MCP Tools", page_icon="๐Ÿง ")
97
- else:
98
- # ๋ฉ”์ธ ์•ฑ์€ wide ๋ ˆ์ด์•„์›ƒ ์‚ฌ์šฉ
99
- st.set_page_config(page_title="Agent with MCP Tools", page_icon="๐Ÿง ", layout="wide")
100
-
101
- # ๋กœ๊ทธ์ธ ๊ธฐ๋Šฅ์ด ํ™œ์„ฑํ™”๋˜์–ด ์žˆ๊ณ  ์•„์ง ์ธ์ฆ๋˜์ง€ ์•Š์€ ๊ฒฝ์šฐ ๋กœ๊ทธ์ธ ํ™”๋ฉด ํ‘œ์‹œ
102
- if use_login and not st.session_state.authenticated:
103
- st.title("๐Ÿ” ๋กœ๊ทธ์ธ")
104
- st.markdown("์‹œ์Šคํ…œ์„ ์‚ฌ์šฉํ•˜๋ ค๋ฉด ๋กœ๊ทธ์ธ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค.")
105
-
106
- # ๋กœ๊ทธ์ธ ํผ์„ ํ™”๋ฉด ์ค‘์•™์— ์ข๊ฒŒ ๋ฐฐ์น˜
107
- with st.form("login_form"):
108
- username = st.text_input("์•„์ด๋””")
109
- password = st.text_input("๋น„๋ฐ€๋ฒˆํ˜ธ", type="password")
110
- submit_button = st.form_submit_button("๋กœ๊ทธ์ธ")
111
-
112
- if submit_button:
113
- expected_username = os.environ.get("USER_ID")
114
- expected_password = os.environ.get("USER_PASSWORD")
115
-
116
- if username == expected_username and password == expected_password:
117
- st.session_state.authenticated = True
118
- st.success("โœ… ๋กœ๊ทธ์ธ ์„ฑ๊ณต! ์ž ์‹œ๋งŒ ๊ธฐ๋‹ค๋ ค์ฃผ์„ธ์š”...")
119
- st.rerun()
120
- else:
121
- st.error("โŒ ์•„์ด๋”” ๋˜๋Š” ๋น„๋ฐ€๋ฒˆํ˜ธ๊ฐ€ ์˜ฌ๋ฐ”๋ฅด์ง€ ์•Š์Šต๋‹ˆ๋‹ค.")
122
-
123
- # ๋กœ๊ทธ์ธ ํ™”๋ฉด์—์„œ๋Š” ๋ฉ”์ธ ์•ฑ์„ ํ‘œ์‹œํ•˜์ง€ ์•Š์Œ
124
- st.stop()
125
-
126
- # ์‚ฌ์ด๋“œ๋ฐ” ์ตœ์ƒ๋‹จ์— ์ €์ž ์ •๋ณด ์ถ”๊ฐ€ (๋‹ค๋ฅธ ์‚ฌ์ด๋“œ๋ฐ” ์š”์†Œ๋ณด๋‹ค ๋จผ์ € ๋ฐฐ์น˜)
127
- st.sidebar.markdown("### โœ๏ธ Made by [ํ…Œ๋””๋…ธํŠธ](https://youtube.com/c/teddynote) ๐Ÿš€")
128
- st.sidebar.markdown(
129
- "### ๐Ÿ’ป [Project Page](https://github.com/teddynote-lab/langgraph-mcp-agents)"
130
- )
131
-
132
- st.sidebar.divider() # ๊ตฌ๋ถ„์„  ์ถ”๊ฐ€
133
-
134
- # ๊ธฐ์กด ํŽ˜์ด์ง€ ํƒ€์ดํ‹€ ๋ฐ ์„ค๋ช…
135
- st.title("๐Ÿ’ฌ MCP ๋„๊ตฌ ํ™œ์šฉ ์—์ด์ „ํŠธ")
136
- st.markdown("โœจ MCP ๋„๊ตฌ๋ฅผ ํ™œ์šฉํ•œ ReAct ์—์ด์ „ํŠธ์—๊ฒŒ ์งˆ๋ฌธํ•ด๋ณด์„ธ์š”.")
137
-
138
- SYSTEM_PROMPT = """<ROLE>
139
- You are a smart agent with an ability to use tools.
140
- You will be given a question and you will use the tools to answer the question.
141
- Pick the most relevant tool to answer the question.
142
- If you are failed to answer the question, try different tools to get context.
143
- Your answer should be very polite and professional.
144
- </ROLE>
145
-
146
- ----
147
-
148
- <INSTRUCTIONS>
149
- Step 1: Analyze the question
150
- - Analyze user's question and final goal.
151
- - If the user's question is consist of multiple sub-questions, split them into smaller sub-questions.
152
-
153
- Step 2: Pick the most relevant tool
154
- - Pick the most relevant tool to answer the question.
155
- - If you are failed to answer the question, try different tools to get context.
156
-
157
- Step 3: Answer the question
158
- - Answer the question in the same language as the question.
159
- - Your answer should be very polite and professional.
160
-
161
- Step 4: Provide the source of the answer(if applicable)
162
- - If you've used the tool, provide the source of the answer.
163
- - Valid sources are either a website(URL) or a document(PDF, etc).
164
-
165
- Guidelines:
166
- - If you've used the tool, your answer should be based on the tool's output(tool's output is more important than your own knowledge).
167
- - If you've used the tool, and the source is valid URL, provide the source(URL) of the answer.
168
- - Skip providing the source if the source is not URL.
169
- - Answer in the same language as the question.
170
- - Answer should be concise and to the point.
171
- - Avoid response your output with any other information than the answer and the source.
172
- </INSTRUCTIONS>
173
-
174
- ----
175
-
176
- <OUTPUT_FORMAT>
177
- (concise answer to the question)
178
-
179
- **Source**(if applicable)
180
- - (source1: valid URL)
181
- - (source2: valid URL)
182
- - ...
183
- </OUTPUT_FORMAT>
184
- """
185
-
186
- OUTPUT_TOKEN_INFO = {
187
- "claude-3-5-sonnet-latest": {"max_tokens": 8192},
188
- "claude-3-5-haiku-latest": {"max_tokens": 8192},
189
- "claude-3-7-sonnet-latest": {"max_tokens": 64000},
190
- "gpt-4o": {"max_tokens": 16000},
191
- "gpt-4o-mini": {"max_tokens": 16000},
192
- }
193
-
194
- # ์„ธ์…˜ ์ƒํƒœ ์ดˆ๊ธฐํ™”
195
- if "session_initialized" not in st.session_state:
196
- st.session_state.session_initialized = False # ์„ธ์…˜ ์ดˆ๊ธฐํ™” ์ƒํƒœ ํ”Œ๋ž˜๊ทธ
197
- st.session_state.agent = None # ReAct ์—์ด์ „ํŠธ ๊ฐ์ฒด ์ €์žฅ ๊ณต๊ฐ„
198
- st.session_state.history = [] # ๋Œ€ํ™” ๊ธฐ๋ก ์ €์žฅ ๋ฆฌ์ŠคํŠธ
199
- st.session_state.mcp_client = None # MCP ํด๋ผ์ด์–ธํŠธ ๊ฐ์ฒด ์ €์žฅ ๊ณต๊ฐ„
200
- st.session_state.timeout_seconds = 120 # ์‘๋‹ต ์ƒ์„ฑ ์ œํ•œ ์‹œ๊ฐ„(์ดˆ), ๊ธฐ๋ณธ๊ฐ’ 120์ดˆ
201
- st.session_state.selected_model = "claude-3-7-sonnet-latest" # ๊ธฐ๋ณธ ๋ชจ๋ธ ์„ ํƒ
202
- st.session_state.recursion_limit = 100 # ์žฌ๊ท€ ํ˜ธ์ถœ ์ œํ•œ, ๊ธฐ๋ณธ๊ฐ’ 100
203
-
204
- if "thread_id" not in st.session_state:
205
- st.session_state.thread_id = random_uuid()
206
-
207
-
208
- # --- ํ•จ์ˆ˜ ์ •์˜ ๋ถ€๋ถ„ ---
209
-
210
-
211
- async def cleanup_mcp_client():
212
- """
213
- ๊ธฐ์กด MCP ํด๋ผ์ด์–ธํŠธ๋ฅผ ์•ˆ์ „ํ•˜๊ฒŒ ์ข…๋ฃŒํ•ฉ๋‹ˆ๋‹ค.
214
-
215
- ๊ธฐ์กด ํด๋ผ์ด์–ธํŠธ๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์ •์ƒ์ ์œผ๋กœ ๋ฆฌ์†Œ์Šค๋ฅผ ํ•ด์ œํ•ฉ๋‹ˆ๋‹ค.
216
- """
217
- if "mcp_client" in st.session_state and st.session_state.mcp_client is not None:
218
- try:
219
-
220
- await st.session_state.mcp_client.__aexit__(None, None, None)
221
- st.session_state.mcp_client = None
222
- except Exception as e:
223
- import traceback
224
-
225
- # st.warning(f"MCP ํด๋ผ์ด์–ธํŠธ ์ข…๋ฃŒ ์ค‘ ์˜ค๋ฅ˜: {str(e)}")
226
- # st.warning(traceback.format_exc())
227
-
228
-
229
- def print_message():
230
- """
231
- ์ฑ„ํŒ… ๊ธฐ๋ก์„ ํ™”๋ฉด์— ์ถœ๋ ฅํ•ฉ๋‹ˆ๋‹ค.
232
-
233
- ์‚ฌ์šฉ์ž์™€ ์–ด์‹œ์Šคํ„ดํŠธ์˜ ๋ฉ”์‹œ์ง€๋ฅผ ๊ตฌ๋ถ„ํ•˜์—ฌ ํ™”๋ฉด์— ํ‘œ์‹œํ•˜๊ณ ,
234
- ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋Š” ์–ด์‹œ์Šคํ„ดํŠธ ๋ฉ”์‹œ์ง€ ์ปจํ…Œ์ด๋„ˆ ๋‚ด์— ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค.
235
- """
236
- i = 0
237
- while i < len(st.session_state.history):
238
- message = st.session_state.history[i]
239
-
240
- if message["role"] == "user":
241
- st.chat_message("user", avatar="๐Ÿง‘โ€๐Ÿ’ป").markdown(message["content"])
242
- i += 1
243
- elif message["role"] == "assistant":
244
- # ์–ด์‹œ์Šคํ„ดํŠธ ๋ฉ”์‹œ์ง€ ์ปจํ…Œ์ด๋„ˆ ์ƒ์„ฑ
245
- with st.chat_message("assistant", avatar="๐Ÿค–"):
246
- # ์–ด์‹œ์Šคํ„ดํŠธ ๋ฉ”์‹œ์ง€ ๋‚ด์šฉ ํ‘œ์‹œ
247
- st.markdown(message["content"])
248
-
249
- # ๋‹ค์Œ ๋ฉ”์‹œ์ง€๊ฐ€ ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด์ธ์ง€ ํ™•์ธ
250
- if (
251
- i + 1 < len(st.session_state.history)
252
- and st.session_state.history[i + 1]["role"] == "assistant_tool"
253
- ):
254
- # ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋ฅผ ๋™์ผํ•œ ์ปจํ…Œ์ด๋„ˆ ๋‚ด์— expander๋กœ ํ‘œ์‹œ
255
- with st.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=False):
256
- st.markdown(st.session_state.history[i + 1]["content"])
257
- i += 2 # ๋‘ ๋ฉ”์‹œ์ง€๋ฅผ ํ•จ๊ป˜ ์ฒ˜๋ฆฌํ–ˆ์œผ๋ฏ€๋กœ 2 ์ฆ๊ฐ€
258
- else:
259
- i += 1 # ์ผ๋ฐ˜ ๋ฉ”์‹œ์ง€๋งŒ ์ฒ˜๋ฆฌํ–ˆ์œผ๋ฏ€๋กœ 1 ์ฆ๊ฐ€
260
- else:
261
- # assistant_tool ๋ฉ”์‹œ์ง€๋Š” ์œ„์—์„œ ์ฒ˜๋ฆฌ๋˜๋ฏ€๋กœ ๊ฑด๋„ˆ๋œ€
262
- i += 1
263
-
264
-
265
- def get_streaming_callback(text_placeholder, tool_placeholder):
266
- """
267
- ์ŠคํŠธ๋ฆฌ๋ฐ ์ฝœ๋ฐฑ ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
268
-
269
- ์ด ํ•จ์ˆ˜๋Š” LLM์—์„œ ์ƒ์„ฑ๋˜๋Š” ์‘๋‹ต์„ ์‹ค์‹œ๊ฐ„์œผ๋กœ ํ™”๋ฉด์— ํ‘œ์‹œํ•˜๊ธฐ ์œ„ํ•œ ์ฝœ๋ฐฑ ํ•จ์ˆ˜๋ฅผ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
270
- ํ…์ŠคํŠธ ์‘๋‹ต๊ณผ ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋ฅผ ๊ฐ๊ฐ ๋‹ค๋ฅธ ์˜์—ญ์— ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค.
271
-
272
- ๋งค๊ฐœ๋ณ€์ˆ˜:
273
- text_placeholder: ํ…์ŠคํŠธ ์‘๋‹ต์„ ํ‘œ์‹œํ•  Streamlit ์ปดํฌ๋„ŒํŠธ
274
- tool_placeholder: ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋ฅผ ํ‘œ์‹œํ•  Streamlit ์ปดํฌ๋„ŒํŠธ
275
-
276
- ๋ฐ˜ํ™˜๊ฐ’:
277
- callback_func: ์ŠคํŠธ๋ฆฌ๋ฐ ์ฝœ๋ฐฑ ํ•จ์ˆ˜
278
- accumulated_text: ๋ˆ„์ ๋œ ํ…์ŠคํŠธ ์‘๋‹ต์„ ์ €์žฅํ•˜๋Š” ๋ฆฌ์ŠคํŠธ
279
- accumulated_tool: ๋ˆ„์ ๋œ ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋ฅผ ์ €์žฅํ•˜๋Š” ๋ฆฌ์ŠคํŠธ
280
- """
281
- accumulated_text = []
282
- accumulated_tool = []
283
-
284
- def callback_func(message: dict):
285
- nonlocal accumulated_text, accumulated_tool
286
- message_content = message.get("content", None)
287
-
288
- if isinstance(message_content, AIMessageChunk):
289
- content = message_content.content
290
- # ์ฝ˜ํ…์ธ ๊ฐ€ ๋ฆฌ์ŠคํŠธ ํ˜•ํƒœ์ธ ๊ฒฝ์šฐ (Claude ๋ชจ๋ธ ๋“ฑ์—์„œ ์ฃผ๋กœ ๋ฐœ์ƒ)
291
- if isinstance(content, list) and len(content) > 0:
292
- message_chunk = content[0]
293
- # ํ…์ŠคํŠธ ํƒ€์ž…์ธ ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ
294
- if message_chunk["type"] == "text":
295
- accumulated_text.append(message_chunk["text"])
296
- text_placeholder.markdown("".join(accumulated_text))
297
- # ๋„๊ตฌ ์‚ฌ์šฉ ํƒ€์ž…์ธ ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ
298
- elif message_chunk["type"] == "tool_use":
299
- if "partial_json" in message_chunk:
300
- accumulated_tool.append(message_chunk["partial_json"])
301
- else:
302
- tool_call_chunks = message_content.tool_call_chunks
303
- tool_call_chunk = tool_call_chunks[0]
304
- accumulated_tool.append(
305
- "\n```json\n" + str(tool_call_chunk) + "\n```\n"
306
- )
307
- with tool_placeholder.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=True):
308
- st.markdown("".join(accumulated_tool))
309
- # tool_calls ์†์„ฑ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ (OpenAI ๋ชจ๋ธ ๋“ฑ์—์„œ ์ฃผ๋กœ ๋ฐœ์ƒ)
310
- elif (
311
- hasattr(message_content, "tool_calls")
312
- and message_content.tool_calls
313
- and len(message_content.tool_calls[0]["name"]) > 0
314
- ):
315
- tool_call_info = message_content.tool_calls[0]
316
- accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
317
- with tool_placeholder.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=True):
318
- st.markdown("".join(accumulated_tool))
319
- # ๋‹จ์ˆœ ๋ฌธ์ž์—ด์ธ ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ
320
- elif isinstance(content, str):
321
- accumulated_text.append(content)
322
- text_placeholder.markdown("".join(accumulated_text))
323
- # ์œ ํšจํ•˜์ง€ ์•Š์€ ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ
324
- elif (
325
- hasattr(message_content, "invalid_tool_calls")
326
- and message_content.invalid_tool_calls
327
- ):
328
- tool_call_info = message_content.invalid_tool_calls[0]
329
- accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
330
- with tool_placeholder.expander(
331
- "๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด (์œ ํšจํ•˜์ง€ ์•Š์Œ)", expanded=True
332
- ):
333
- st.markdown("".join(accumulated_tool))
334
- # tool_call_chunks ์†์„ฑ์ด ์žˆ๋Š” ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ
335
- elif (
336
- hasattr(message_content, "tool_call_chunks")
337
- and message_content.tool_call_chunks
338
- ):
339
- tool_call_chunk = message_content.tool_call_chunks[0]
340
- accumulated_tool.append(
341
- "\n```json\n" + str(tool_call_chunk) + "\n```\n"
342
- )
343
- with tool_placeholder.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=True):
344
- st.markdown("".join(accumulated_tool))
345
- # additional_kwargs์— tool_calls๊ฐ€ ์žˆ๋Š” ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ (๋‹ค์–‘ํ•œ ๋ชจ๋ธ ํ˜ธํ™˜์„ฑ ์ง€์›)
346
- elif (
347
- hasattr(message_content, "additional_kwargs")
348
- and "tool_calls" in message_content.additional_kwargs
349
- ):
350
- tool_call_info = message_content.additional_kwargs["tool_calls"][0]
351
- accumulated_tool.append("\n```json\n" + str(tool_call_info) + "\n```\n")
352
- with tool_placeholder.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=True):
353
- st.markdown("".join(accumulated_tool))
354
- # ๋„๊ตฌ ๋ฉ”์‹œ์ง€์ธ ๊ฒฝ์šฐ ์ฒ˜๋ฆฌ (๋„๊ตฌ์˜ ์‘๋‹ต)
355
- elif isinstance(message_content, ToolMessage):
356
- accumulated_tool.append(
357
- "\n```json\n" + str(message_content.content) + "\n```\n"
358
- )
359
- with tool_placeholder.expander("๐Ÿ”ง ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด", expanded=True):
360
- st.markdown("".join(accumulated_tool))
361
- return None
362
-
363
- return callback_func, accumulated_text, accumulated_tool
364
-
365
-
366
- async def process_query(query, text_placeholder, tool_placeholder, timeout_seconds=60):
367
- """
368
- ์‚ฌ์šฉ์ž ์งˆ๋ฌธ์„ ์ฒ˜๋ฆฌํ•˜๊ณ  ์‘๋‹ต์„ ์ƒ์„ฑํ•ฉ๋‹ˆ๋‹ค.
369
-
370
- ์ด ํ•จ์ˆ˜๋Š” ์‚ฌ์šฉ์ž์˜ ์งˆ๋ฌธ์„ ์—์ด์ „ํŠธ์— ์ „๋‹ฌํ•˜๊ณ , ์‘๋‹ต์„ ์‹ค์‹œ๊ฐ„์œผ๋กœ ์ŠคํŠธ๋ฆฌ๋ฐํ•˜์—ฌ ํ‘œ์‹œํ•ฉ๋‹ˆ๋‹ค.
371
- ์ง€์ •๋œ ์‹œ๊ฐ„ ๋‚ด์— ์‘๋‹ต์ด ์™„๋ฃŒ๋˜์ง€ ์•Š์œผ๋ฉด ํƒ€์ž„์•„์›ƒ ์˜ค๋ฅ˜๋ฅผ ๋ฐ˜ํ™˜ํ•ฉ๋‹ˆ๋‹ค.
372
-
373
- ๋งค๊ฐœ๋ณ€์ˆ˜:
374
- query: ์‚ฌ์šฉ์ž๊ฐ€ ์ž…๋ ฅํ•œ ์งˆ๋ฌธ ํ…์ŠคํŠธ
375
- text_placeholder: ํ…์ŠคํŠธ ์‘๋‹ต์„ ํ‘œ์‹œํ•  Streamlit ์ปดํฌ๋„ŒํŠธ
376
- tool_placeholder: ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด๋ฅผ ํ‘œ์‹œํ•  Streamlit ์ปดํฌ๋„ŒํŠธ
377
- timeout_seconds: ์‘๋‹ต ์ƒ์„ฑ ์ œํ•œ ์‹œ๊ฐ„(์ดˆ)
378
-
379
- ๋ฐ˜ํ™˜๊ฐ’:
380
- response: ์—์ด์ „ํŠธ์˜ ์‘๋‹ต ๊ฐ์ฒด
381
- final_text: ์ตœ์ข… ํ…์ŠคํŠธ ์‘๋‹ต
382
- final_tool: ์ตœ์ข… ๋„๊ตฌ ํ˜ธ์ถœ ์ •๋ณด
383
- """
384
- try:
385
- if st.session_state.agent:
386
- streaming_callback, accumulated_text_obj, accumulated_tool_obj = (
387
- get_streaming_callback(text_placeholder, tool_placeholder)
388
- )
389
- try:
390
- response = await asyncio.wait_for(
391
- astream_graph(
392
- st.session_state.agent,
393
- {"messages": [HumanMessage(content=query)]},
394
- callback=streaming_callback,
395
- config=RunnableConfig(
396
- recursion_limit=st.session_state.recursion_limit,
397
- thread_id=st.session_state.thread_id,
398
- ),
399
- ),
400
- timeout=timeout_seconds,
401
- )
402
- except asyncio.TimeoutError:
403
- error_msg = f"โฑ๏ธ ์š”์ฒญ ์‹œ๊ฐ„์ด {timeout_seconds}์ดˆ๋ฅผ ์ดˆ๊ณผํ–ˆ์Šต๋‹ˆ๋‹ค. ๋‚˜์ค‘์— ๋‹ค์‹œ ์‹œ๋„ํ•ด ์ฃผ์„ธ์š”."
404
- return {"error": error_msg}, error_msg, ""
405
-
406
- final_text = "".join(accumulated_text_obj)
407
- final_tool = "".join(accumulated_tool_obj)
408
- return response, final_text, final_tool
409
- else:
410
- return (
411
- {"error": "๐Ÿšซ ์—์ด์ „ํŠธ๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค."},
412
- "๐Ÿšซ ์—์ด์ „ํŠธ๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค.",
413
- "",
414
- )
415
- except Exception as e:
416
- import traceback
417
-
418
- error_msg = f"โŒ ์ฟผ๋ฆฌ ์ฒ˜๋ฆฌ ์ค‘ ์˜ค๋ฅ˜ ๋ฐœ์ƒ: {str(e)}\n{traceback.format_exc()}"
419
- return {"error": error_msg}, error_msg, ""
420
-
421
-
422
- async def initialize_session(mcp_config=None):
423
- """
424
- MCP ์„ธ์…˜๊ณผ ์—์ด์ „ํŠธ๋ฅผ ์ดˆ๊ธฐํ™”ํ•ฉ๋‹ˆ๋‹ค.
425
-
426
- ๋งค๊ฐœ๋ณ€์ˆ˜:
427
- mcp_config: MCP ๋„๊ตฌ ์„ค์ • ์ •๋ณด(JSON). None์ธ ๊ฒฝ์šฐ ๊ธฐ๋ณธ ์„ค์ • ์‚ฌ์šฉ
428
-
429
- ๋ฐ˜ํ™˜๊ฐ’:
430
- bool: ์ดˆ๊ธฐํ™” ์„ฑ๊ณต ์—ฌ๋ถ€
431
- """
432
- with st.spinner("๐Ÿ”„ MCP ์„œ๋ฒ„์— ์—ฐ๊ฒฐ ์ค‘..."):
433
- # ๋จผ์ € ๊ธฐ์กด ํด๋ผ์ด์–ธํŠธ๋ฅผ ์•ˆ์ „ํ•˜๊ฒŒ ์ •๋ฆฌ
434
- await cleanup_mcp_client()
435
-
436
- if mcp_config is None:
437
- # config.json ํŒŒ์ผ์—์„œ ์„ค์ • ๋กœ๋“œ
438
- mcp_config = load_config_from_json()
439
- client = MultiServerMCPClient(mcp_config)
440
- await client.__aenter__()
441
- tools = client.get_tools()
442
- st.session_state.tool_count = len(tools)
443
- st.session_state.mcp_client = client
444
-
445
- # ์„ ํƒ๋œ ๋ชจ๋ธ์— ๋”ฐ๋ผ ์ ์ ˆํ•œ ๋ชจ๋ธ ์ดˆ๊ธฐํ™”
446
- selected_model = st.session_state.selected_model
447
-
448
- if selected_model in [
449
- "claude-3-7-sonnet-latest",
450
- "claude-3-5-sonnet-latest",
451
- "claude-3-5-haiku-latest",
452
- ]:
453
- model = ChatAnthropic(
454
- model=selected_model,
455
- temperature=0.1,
456
- max_tokens=OUTPUT_TOKEN_INFO[selected_model]["max_tokens"],
457
- )
458
- else: # OpenAI ๋ชจ๋ธ ์‚ฌ์šฉ
459
- model = ChatOpenAI(
460
- model=selected_model,
461
- temperature=0.1,
462
- max_tokens=OUTPUT_TOKEN_INFO[selected_model]["max_tokens"],
463
- )
464
- agent = create_react_agent(
465
- model,
466
- tools,
467
- checkpointer=MemorySaver(),
468
- prompt=SYSTEM_PROMPT,
469
- )
470
- st.session_state.agent = agent
471
- st.session_state.session_initialized = True
472
- return True
473
-
474
-
475
- # --- ์‚ฌ์ด๋“œ๋ฐ”: ์‹œ์Šคํ…œ ์„ค์ • ์„น์…˜ ---
476
- with st.sidebar:
477
- st.subheader("โš™๏ธ ์‹œ์Šคํ…œ ์„ค์ •")
478
-
479
- # ๋ชจ๋ธ ์„ ํƒ ๊ธฐ๋Šฅ
480
- # ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ ๋ชฉ๋ก ์ƒ์„ฑ
481
- available_models = []
482
-
483
- # Anthropic API ํ‚ค ํ™•์ธ
484
- has_anthropic_key = os.environ.get("ANTHROPIC_API_KEY") is not None
485
- if has_anthropic_key:
486
- available_models.extend(
487
- [
488
- "claude-3-7-sonnet-latest",
489
- "claude-3-5-sonnet-latest",
490
- "claude-3-5-haiku-latest",
491
- ]
492
- )
493
-
494
- # OpenAI API ํ‚ค ํ™•์ธ
495
- has_openai_key = os.environ.get("OPENAI_API_KEY") is not None
496
- if has_openai_key:
497
- available_models.extend(["gpt-4o", "gpt-4o-mini"])
498
-
499
- # ์‚ฌ์šฉ ๊ฐ€๋Šฅํ•œ ๋ชจ๋ธ์ด ์—†๋Š” ๊ฒฝ์šฐ ๋ฉ”์‹œ์ง€ ํ‘œ์‹œ
500
- if not available_models:
501
- st.warning(
502
- "โš ๏ธ API ํ‚ค๊ฐ€ ์„ค์ •๋˜์ง€ ์•Š์•˜์Šต๋‹ˆ๋‹ค. .env ํŒŒ์ผ์— ANTHROPIC_API_KEY ๋˜๋Š” OPENAI_API_KEY๋ฅผ ์ถ”๊ฐ€ํ•ด์ฃผ์„ธ์š”."
503
- )
504
- # ๊ธฐ๋ณธ๊ฐ’์œผ๋กœ Claude ๋ชจ๋ธ ์ถ”๊ฐ€ (ํ‚ค๊ฐ€ ์—†์–ด๋„ UI๋ฅผ ๋ณด์—ฌ์ฃผ๊ธฐ ์œ„ํ•จ)
505
- available_models = ["claude-3-7-sonnet-latest"]
506
-
507
- # ๋ชจ๋ธ ์„ ํƒ ๋“œ๋กญ๋‹ค์šด
508
- previous_model = st.session_state.selected_model
509
- st.session_state.selected_model = st.selectbox(
510
- "๐Ÿค– ์‚ฌ์šฉํ•  ๋ชจ๋ธ ์„ ํƒ",
511
- options=available_models,
512
- index=(
513
- available_models.index(st.session_state.selected_model)
514
- if st.session_state.selected_model in available_models
515
- else 0
516
- ),
517
- help="Anthropic ๋ชจ๋ธ์€ ANTHROPIC_API_KEY๊ฐ€, OpenAI ๋ชจ๋ธ์€ OPENAI_API_KEY๊ฐ€ ํ™˜๊ฒฝ๋ณ€์ˆ˜๋กœ ์„ค์ •๋˜์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.",
518
- )
519
-
520
- # ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜์—ˆ์„ ๋•Œ ์„ธ์…˜ ์ดˆ๊ธฐํ™” ํ•„์š” ์•Œ๋ฆผ
521
- if (
522
- previous_model != st.session_state.selected_model
523
- and st.session_state.session_initialized
524
- ):
525
- st.warning(
526
- "โš ๏ธ ๋ชจ๋ธ์ด ๋ณ€๊ฒฝ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. '์„ค์ • ์ ์šฉํ•˜๊ธฐ' ๋ฒ„ํŠผ์„ ๋ˆŒ๋Ÿฌ ๋ณ€๊ฒฝ์‚ฌํ•ญ์„ ์ ์šฉํ•˜์„ธ์š”."
527
- )
528
-
529
- # ํƒ€์ž„์•„์›ƒ ์„ค์ • ์Šฌ๋ผ์ด๋” ์ถ”๊ฐ€
530
- st.session_state.timeout_seconds = st.slider(
531
- "โฑ๏ธ ์‘๋‹ต ์ƒ์„ฑ ์ œํ•œ ์‹œ๊ฐ„(์ดˆ)",
532
- min_value=60,
533
- max_value=300,
534
- value=st.session_state.timeout_seconds,
535
- step=10,
536
- help="์—์ด์ „ํŠธ๊ฐ€ ์‘๋‹ต์„ ์ƒ์„ฑํ•˜๋Š” ์ตœ๋Œ€ ์‹œ๊ฐ„์„ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋ณต์žกํ•œ ์ž‘์—…์€ ๋” ๊ธด ์‹œ๊ฐ„์ด ํ•„์š”ํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.",
537
- )
538
-
539
- st.session_state.recursion_limit = st.slider(
540
- "โฑ๏ธ ์žฌ๊ท€ ํ˜ธ์ถœ ์ œํ•œ(ํšŸ์ˆ˜)",
541
- min_value=10,
542
- max_value=200,
543
- value=st.session_state.recursion_limit,
544
- step=10,
545
- help="์žฌ๊ท€ ํ˜ธ์ถœ ์ œํ•œ ํšŸ์ˆ˜๋ฅผ ์„ค์ •ํ•ฉ๋‹ˆ๋‹ค. ๋„ˆ๋ฌด ๋†’์€ ๊ฐ’์„ ์„ค์ •ํ•˜๋ฉด ๋ฉ”๋ชจ๋ฆฌ ๋ถ€์กฑ ๋ฌธ์ œ๊ฐ€ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค.",
546
- )
547
-
548
- st.divider() # ๊ตฌ๋ถ„์„  ์ถ”๊ฐ€
549
-
550
- # ๋„๊ตฌ ์„ค์ • ์„น์…˜ ์ถ”๊ฐ€
551
- st.subheader("๐Ÿ”ง ๋„๊ตฌ ์„ค์ •")
552
-
553
- # expander ์ƒํƒœ๋ฅผ ์„ธ์…˜ ์ƒํƒœ๋กœ ๊ด€๋ฆฌ
554
- if "mcp_tools_expander" not in st.session_state:
555
- st.session_state.mcp_tools_expander = False
556
-
557
- # MCP ๋„๊ตฌ ์ถ”๊ฐ€ ์ธํ„ฐํŽ˜์ด์Šค
558
- with st.expander("๐Ÿงฐ MCP ๋„๊ตฌ ์ถ”๊ฐ€", expanded=st.session_state.mcp_tools_expander):
559
- # config.json ํŒŒ์ผ์—์„œ ์„ค์ • ๋กœ๋“œํ•˜์—ฌ ํ‘œ์‹œ
560
- loaded_config = load_config_from_json()
561
- default_config_text = json.dumps(loaded_config, indent=2, ensure_ascii=False)
562
-
563
- # pending config๊ฐ€ ์—†์œผ๋ฉด ๊ธฐ์กด mcp_config_text ๊ธฐ๋ฐ˜์œผ๋กœ ์ƒ์„ฑ
564
- if "pending_mcp_config" not in st.session_state:
565
- try:
566
- st.session_state.pending_mcp_config = loaded_config
567
- except Exception as e:
568
- st.error(f"์ดˆ๊ธฐ pending config ์„ค์ • ์‹คํŒจ: {e}")
569
-
570
- # ๊ฐœ๋ณ„ ๋„๊ตฌ ์ถ”๊ฐ€๋ฅผ ์œ„ํ•œ UI
571
- st.subheader("๋„๊ตฌ ์ถ”๊ฐ€")
572
- st.markdown(
573
- """
574
- [์–ด๋–ป๊ฒŒ ์„ค์ • ํ•˜๋‚˜์š”?](https://teddylee777.notion.site/MCP-1d324f35d12980c8b018e12afdf545a1?pvs=4)
575
-
576
- โš ๏ธ **์ค‘์š”**: JSON์„ ๋ฐ˜๋“œ์‹œ ์ค‘๊ด„ํ˜ธ(`{}`)๋กœ ๊ฐ์‹ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค."""
577
- )
578
-
579
- # ๋ณด๋‹ค ๋ช…ํ™•ํ•œ ์˜ˆ์‹œ ์ œ๊ณต
580
- example_json = {
581
- "github": {
582
- "command": "npx",
583
- "args": [
584
- "-y",
585
- "@smithery/cli@latest",
586
- "run",
587
- "@smithery-ai/github",
588
- "--config",
589
- '{"githubPersonalAccessToken":"your_token_here"}',
590
- ],
591
- "transport": "stdio",
592
- }
593
- }
594
-
595
- default_text = json.dumps(example_json, indent=2, ensure_ascii=False)
596
-
597
- new_tool_json = st.text_area(
598
- "๋„๊ตฌ JSON",
599
- default_text,
600
- height=250,
601
- )
602
-
603
- # ์ถ”๊ฐ€ํ•˜๊ธฐ ๋ฒ„ํŠผ
604
- if st.button(
605
- "๋„๊ตฌ ์ถ”๊ฐ€",
606
- type="primary",
607
- key="add_tool_button",
608
- use_container_width=True,
609
- ):
610
- try:
611
- # ์ž…๋ ฅ๊ฐ’ ๊ฒ€์ฆ
612
- if not new_tool_json.strip().startswith(
613
- "{"
614
- ) or not new_tool_json.strip().endswith("}"):
615
- st.error("JSON์€ ์ค‘๊ด„ํ˜ธ({})๋กœ ์‹œ์ž‘ํ•˜๊ณ  ๋๋‚˜์•ผ ํ•ฉ๋‹ˆ๋‹ค.")
616
- st.markdown('์˜ฌ๋ฐ”๋ฅธ ํ˜•์‹: `{ "๋„๊ตฌ์ด๋ฆ„": { ... } }`')
617
- else:
618
- # JSON ํŒŒ์‹ฑ
619
- parsed_tool = json.loads(new_tool_json)
620
-
621
- # mcpServers ํ˜•์‹์ธ์ง€ ํ™•์ธํ•˜๊ณ  ์ฒ˜๋ฆฌ
622
- if "mcpServers" in parsed_tool:
623
- # mcpServers ์•ˆ์˜ ๋‚ด์šฉ์„ ์ตœ์ƒ์œ„๋กœ ์ด๋™
624
- parsed_tool = parsed_tool["mcpServers"]
625
- st.info(
626
- "'mcpServers' ํ˜•์‹์ด ๊ฐ์ง€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ž๋™์œผ๋กœ ๋ณ€ํ™˜ํ•ฉ๋‹ˆ๋‹ค."
627
- )
628
-
629
- # ์ž…๋ ฅ๋œ ๋„๊ตฌ ์ˆ˜ ํ™•์ธ
630
- if len(parsed_tool) == 0:
631
- st.error("์ตœ์†Œ ํ•˜๋‚˜ ์ด์ƒ์˜ ๋„๊ตฌ๋ฅผ ์ž…๋ ฅํ•ด์ฃผ์„ธ์š”.")
632
- else:
633
- # ๋ชจ๋“  ๋„๊ตฌ์— ๋Œ€ํ•ด ์ฒ˜๋ฆฌ
634
- success_tools = []
635
- for tool_name, tool_config in parsed_tool.items():
636
- # URL ํ•„๋“œ ํ™•์ธ ๋ฐ transport ์„ค์ •
637
- if "url" in tool_config:
638
- # URL์ด ์žˆ๋Š” ๊ฒฝ์šฐ transport๋ฅผ "sse"๋กœ ์„ค์ •
639
- tool_config["transport"] = "sse"
640
- st.info(
641
- f"'{tool_name}' ๋„๊ตฌ์— URL์ด ๊ฐ์ง€๋˜์–ด transport๋ฅผ 'sse'๋กœ ์„ค์ •ํ–ˆ์Šต๋‹ˆ๋‹ค."
642
- )
643
- elif "transport" not in tool_config:
644
- # URL์ด ์—†๊ณ  transport๋„ ์—†๋Š” ๊ฒฝ์šฐ ๊ธฐ๋ณธ๊ฐ’ "stdio" ์„ค์ •
645
- tool_config["transport"] = "stdio"
646
-
647
- # ํ•„์ˆ˜ ํ•„๋“œ ํ™•์ธ
648
- if (
649
- "command" not in tool_config
650
- and "url" not in tool_config
651
- ):
652
- st.error(
653
- f"'{tool_name}' ๋„๊ตฌ ์„ค์ •์—๋Š” 'command' ๋˜๋Š” 'url' ํ•„๋“œ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค."
654
- )
655
- elif "command" in tool_config and "args" not in tool_config:
656
- st.error(
657
- f"'{tool_name}' ๋„๊ตฌ ์„ค์ •์—๋Š” 'args' ํ•„๋“œ๊ฐ€ ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค."
658
- )
659
- elif "command" in tool_config and not isinstance(
660
- tool_config["args"], list
661
- ):
662
- st.error(
663
- f"'{tool_name}' ๋„๊ตฌ์˜ 'args' ํ•„๋“œ๋Š” ๋ฐ˜๋“œ์‹œ ๋ฐฐ์—ด([]) ํ˜•์‹์ด์–ด์•ผ ํ•ฉ๋‹ˆ๋‹ค."
664
- )
665
- else:
666
- # pending_mcp_config์— ๋„๊ตฌ ์ถ”๊ฐ€
667
- st.session_state.pending_mcp_config[tool_name] = (
668
- tool_config
669
- )
670
- success_tools.append(tool_name)
671
-
672
- # ์„ฑ๊ณต ๋ฉ”์‹œ์ง€
673
- if success_tools:
674
- if len(success_tools) == 1:
675
- st.success(
676
- f"{success_tools[0]} ๋„๊ตฌ๊ฐ€ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ ์šฉํ•˜๋ ค๋ฉด '์„ค์ • ์ ์šฉํ•˜๊ธฐ' ๋ฒ„ํŠผ์„ ๋ˆŒ๋Ÿฌ์ฃผ์„ธ์š”."
677
- )
678
- else:
679
- tool_names = ", ".join(success_tools)
680
- st.success(
681
- f"์ด {len(success_tools)}๊ฐœ ๋„๊ตฌ({tool_names})๊ฐ€ ์ถ”๊ฐ€๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ ์šฉํ•˜๋ ค๋ฉด '์„ค์ • ์ ์šฉํ•˜๊ธฐ' ๋ฒ„ํŠผ์„ ๋ˆŒ๋Ÿฌ์ฃผ์„ธ์š”."
682
- )
683
- # ์ถ”๊ฐ€๋˜๋ฉด expander๋ฅผ ์ ‘์–ด์คŒ
684
- st.session_state.mcp_tools_expander = False
685
- st.rerun()
686
- except json.JSONDecodeError as e:
687
- st.error(f"JSON ํŒŒ์‹ฑ ์—๋Ÿฌ: {e}")
688
- st.markdown(
689
- f"""
690
- **์ˆ˜์ • ๋ฐฉ๋ฒ•**:
691
- 1. JSON ํ˜•์‹์ด ์˜ฌ๋ฐ”๋ฅธ์ง€ ํ™•์ธํ•˜์„ธ์š”.
692
- 2. ๋ชจ๋“  ํ‚ค๋Š” ํฐ๋”ฐ์˜ดํ‘œ(")๋กœ ๊ฐ์‹ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค.
693
- 3. ๋ฌธ์ž์—ด ๊ฐ’๋„ ํฐ๋”ฐ์˜ดํ‘œ(")๋กœ ๊ฐ์‹ธ์•ผ ํ•ฉ๋‹ˆ๋‹ค.
694
- 4. ๋ฌธ์ž์—ด ๋‚ด์—์„œ ํฐ๋”ฐ์˜ดํ‘œ๋ฅผ ์‚ฌ์šฉํ•  ๊ฒฝ์šฐ ์ด์Šค์ผ€์ดํ”„(\\")ํ•ด์•ผ ํ•ฉ๋‹ˆ๋‹ค.
695
- """
696
- )
697
- except Exception as e:
698
- st.error(f"์˜ค๋ฅ˜ ๋ฐœ์ƒ: {e}")
699
-
700
- # ๋“ฑ๋ก๋œ ๋„๊ตฌ ๋ชฉ๋ก ํ‘œ์‹œ ๋ฐ ์‚ญ์ œ ๋ฒ„ํŠผ ์ถ”๊ฐ€
701
- with st.expander("๐Ÿ“‹ ๋“ฑ๋ก๋œ ๋„๊ตฌ ๋ชฉ๋ก", expanded=True):
702
- try:
703
- pending_config = st.session_state.pending_mcp_config
704
- except Exception as e:
705
- st.error("์œ ํšจํ•œ MCP ๋„๊ตฌ ์„ค์ •์ด ์•„๋‹™๋‹ˆ๋‹ค.")
706
- else:
707
- # pending config์˜ ํ‚ค(๋„๊ตฌ ์ด๋ฆ„) ๋ชฉ๋ก์„ ์ˆœํšŒํ•˜๋ฉฐ ํ‘œ์‹œ
708
- for tool_name in list(pending_config.keys()):
709
- col1, col2 = st.columns([8, 2])
710
- col1.markdown(f"- **{tool_name}**")
711
- if col2.button("์‚ญ์ œ", key=f"delete_{tool_name}"):
712
- # pending config์—์„œ ํ•ด๋‹น ๋„๊ตฌ ์‚ญ์ œ (์ฆ‰์‹œ ์ ์šฉ๋˜์ง€๋Š” ์•Š์Œ)
713
- del st.session_state.pending_mcp_config[tool_name]
714
- st.success(
715
- f"{tool_name} ๋„๊ตฌ๊ฐ€ ์‚ญ์ œ๋˜์—ˆ์Šต๋‹ˆ๋‹ค. ์ ์šฉํ•˜๋ ค๋ฉด '์„ค์ • ์ ์šฉํ•˜๊ธฐ' ๋ฒ„ํŠผ์„ ๋ˆŒ๋Ÿฌ์ฃผ์„ธ์š”."
716
- )
717
-
718
- st.divider() # ๊ตฌ๋ถ„์„  ์ถ”๊ฐ€
719
-
720
- # --- ์‚ฌ์ด๋“œ๋ฐ”: ์‹œ์Šคํ…œ ์ •๋ณด ๋ฐ ์ž‘์—… ๋ฒ„ํŠผ ์„น์…˜ ---
721
- with st.sidebar:
722
- st.subheader("๐Ÿ“Š ์‹œ์Šคํ…œ ์ •๋ณด")
723
- st.write(f"๐Ÿ› ๏ธ MCP ๋„๊ตฌ ์ˆ˜: {st.session_state.get('tool_count', '์ดˆ๊ธฐํ™” ์ค‘...')}")
724
- selected_model_name = st.session_state.selected_model
725
- st.write(f"๐Ÿง  ํ˜„์žฌ ๋ชจ๋ธ: {selected_model_name}")
726
-
727
- # ์„ค์ • ์ ์šฉํ•˜๊ธฐ ๋ฒ„ํŠผ์„ ์—ฌ๊ธฐ๋กœ ์ด๋™
728
- if st.button(
729
- "์„ค์ • ์ ์šฉํ•˜๊ธฐ",
730
- key="apply_button",
731
- type="primary",
732
- use_container_width=True,
733
- ):
734
- # ์ ์šฉ ์ค‘ ๋ฉ”์‹œ์ง€ ํ‘œ์‹œ
735
- apply_status = st.empty()
736
- with apply_status.container():
737
- st.warning("๐Ÿ”„ ๋ณ€๊ฒฝ์‚ฌํ•ญ์„ ์ ์šฉํ•˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ์ž ์‹œ๋งŒ ๊ธฐ๋‹ค๋ ค์ฃผ์„ธ์š”...")
738
- progress_bar = st.progress(0)
739
-
740
- # ์„ค์ • ์ €์žฅ
741
- st.session_state.mcp_config_text = json.dumps(
742
- st.session_state.pending_mcp_config, indent=2, ensure_ascii=False
743
- )
744
-
745
- # config.json ํŒŒ์ผ์— ์„ค์ • ์ €์žฅ
746
- save_result = save_config_to_json(st.session_state.pending_mcp_config)
747
- if not save_result:
748
- st.error("โŒ ์„ค์ • ํŒŒ์ผ ์ €์žฅ์— ์‹คํŒจํ–ˆ์Šต๋‹ˆ๋‹ค.")
749
-
750
- progress_bar.progress(15)
751
-
752
- # ์„ธ์…˜ ์ดˆ๊ธฐํ™” ์ค€๋น„
753
- st.session_state.session_initialized = False
754
- st.session_state.agent = None
755
-
756
- # ์ง„ํ–‰ ์ƒํƒœ ์—…๋ฐ์ดํŠธ
757
- progress_bar.progress(30)
758
-
759
- # ์ดˆ๊ธฐํ™” ์‹คํ–‰
760
- success = st.session_state.event_loop.run_until_complete(
761
- initialize_session(st.session_state.pending_mcp_config)
762
- )
763
-
764
- # ์ง„ํ–‰ ์ƒํƒœ ์—…๋ฐ์ดํŠธ
765
- progress_bar.progress(100)
766
-
767
- if success:
768
- st.success("โœ… ์ƒˆ๋กœ์šด ์„ค์ •์ด ์ ์šฉ๋˜์—ˆ์Šต๋‹ˆ๋‹ค.")
769
- # ๋„๊ตฌ ์ถ”๊ฐ€ expander ์ ‘๊ธฐ
770
- if "mcp_tools_expander" in st.session_state:
771
- st.session_state.mcp_tools_expander = False
772
- else:
773
- st.error("โŒ ์„ค์ • ์ ์šฉ์— ์‹คํŒจํ•˜์˜€์Šต๋‹ˆ๋‹ค.")
774
-
775
- # ํŽ˜์ด์ง€ ์ƒˆ๋กœ๊ณ ์นจ
776
- st.rerun()
777
-
778
- st.divider() # ๊ตฌ๋ถ„์„  ์ถ”๊ฐ€
779
-
780
- # ์ž‘์—… ๋ฒ„ํŠผ ์„น์…˜
781
- st.subheader("๐Ÿ”„ ์ž‘์—…")
782
-
783
- # ๋Œ€ํ™” ์ดˆ๊ธฐํ™” ๋ฒ„ํŠผ
784
- if st.button("๋Œ€ํ™” ์ดˆ๊ธฐํ™”", use_container_width=True, type="primary"):
785
- # thread_id ์ดˆ๊ธฐํ™”
786
- st.session_state.thread_id = random_uuid()
787
-
788
- # ๋Œ€ํ™” ํžˆ์Šคํ† ๋ฆฌ ์ดˆ๊ธฐํ™”
789
- st.session_state.history = []
790
-
791
- # ์•Œ๋ฆผ ๋ฉ”์‹œ์ง€
792
- st.success("โœ… ๋Œ€ํ™”๊ฐ€ ์ดˆ๊ธฐํ™”๋˜์—ˆ์Šต๋‹ˆ๋‹ค.")
793
-
794
- # ํŽ˜์ด์ง€ ์ƒˆ๋กœ๊ณ ์นจ
-            st.rerun()
-
-        # Show the logout button only when the login feature is enabled
-        if use_login and st.session_state.authenticated:
-            st.divider()  # separator
-            if st.button("Logout", use_container_width=True, type="secondary"):
-                st.session_state.authenticated = False
-                st.success("✅ You have been logged out.")
-                st.rerun()
-
-# --- Default session initialization (if not yet initialized) ---
-if not st.session_state.session_initialized:
-    st.info(
-        "The MCP server and agent are not initialized. Click the 'Apply Settings' button in the left sidebar to initialize them."
-    )
-
-
-# --- Print conversation history ---
-print_message()
-
-# --- Handle user input ---
-user_query = st.chat_input("💬 Enter your question")
-if user_query:
-    if st.session_state.session_initialized:
-        st.chat_message("user", avatar="🧑‍💻").markdown(user_query)
-        with st.chat_message("assistant", avatar="🤖"):
-            tool_placeholder = st.empty()
-            text_placeholder = st.empty()
-            resp, final_text, final_tool = (
-                st.session_state.event_loop.run_until_complete(
-                    process_query(
-                        user_query,
-                        text_placeholder,
-                        tool_placeholder,
-                        st.session_state.timeout_seconds,
-                    )
-                )
-            )
-            if "error" in resp:
-                st.error(resp["error"])
-            else:
-                st.session_state.history.append({"role": "user", "content": user_query})
-                st.session_state.history.append(
-                    {"role": "assistant", "content": final_text}
-                )
-                if final_tool.strip():
-                    st.session_state.history.append(
-                        {"role": "assistant_tool", "content": final_tool}
-                    )
-                st.rerun()
-    else:
-        st.warning(
-            "⚠️ The MCP server and agent are not initialized. Click the 'Apply Settings' button in the left sidebar to initialize them."
-        )
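The deleted app.py above bridges Streamlit's synchronous rerun model and an async agent call by keeping a dedicated event loop in `st.session_state` and driving it with `run_until_complete`. A minimal sketch of that pattern, with a hypothetical `process_query` stand-in (the real one streamed agent and tool output into Streamlit placeholders):

```python
import asyncio

# Hypothetical stand-in for the app's async agent call; the real
# process_query invoked the MCP agent and streamed its output.
async def process_query(query: str, timeout_seconds: float) -> dict:
    async def run_agent() -> str:
        await asyncio.sleep(0)  # placeholder for the actual agent call
        return f"echo: {query}"

    try:
        text = await asyncio.wait_for(run_agent(), timeout=timeout_seconds)
        return {"text": text}
    except asyncio.TimeoutError:
        return {"error": "request timed out"}

# Streamlit re-executes the script top to bottom on every rerun, so the
# app stored one loop in st.session_state and reused it across reruns;
# here we just create one directly.
loop = asyncio.new_event_loop()
resp = loop.run_until_complete(process_query("hello", timeout_seconds=5))
loop.close()
print(resp)
```

Reusing a single stored loop avoids "event loop is closed" errors across reruns, which is why the app kept it in session state rather than creating one per query.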
config.json CHANGED
@@ -5,5 +5,13 @@
       "./mcp_server_time.py"
     ],
     "transport": "stdio"
+  },
+  "qa": {
+    "transport": "sse",
+    "url": "http://10.15.56.148:8000/qa"
+  },
+  "review_generate": {
+    "transport": "sse",
+    "url": "http://10.15.56.148:8000/review"
   }
 }
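The config.json change registers two new SSE-transport MCP servers (`qa`, `review_generate`) alongside the existing stdio server. A sketch of validating this config shape before handing it to an MCP client such as langchain-mcp-adapters' `MultiServerMCPClient`; the stdio server's name (`"time"`) and its `"command": "python"` are assumptions, since those fields fall outside the diff hunk:

```python
import json

# Reconstructed config; "time" and "command" are guesses, the rest
# matches the diff above.
config = json.loads("""
{
  "time": {
    "command": "python",
    "args": ["./mcp_server_time.py"],
    "transport": "stdio"
  },
  "qa": {"transport": "sse", "url": "http://10.15.56.148:8000/qa"},
  "review_generate": {"transport": "sse", "url": "http://10.15.56.148:8000/review"}
}
""")

# Each entry needs either command/args (stdio) or a url (sse).
for name, server in config.items():
    if server["transport"] == "stdio":
        assert "command" in server and "args" in server, name
    elif server["transport"] == "sse":
        assert server["url"].startswith("http"), name

sse_urls = [s["url"] for s in config.values() if s["transport"] == "sse"]
print(sse_urls)
```

Checking the shape up front gives a clear error at startup instead of a connection failure deep inside the client.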
run.sh ADDED
@@ -0,0 +1 @@
+streamlit run app.py --server.address=0.0.0.0