stivenDR14 commited on
Commit
fb513c1
·
1 Parent(s): 81917a3
Files changed (5) hide show
  1. .gitignore +8 -0
  2. README_AGENT.md +180 -0
  3. agent.py +390 -0
  4. app.py +16 -13
  5. requirements.txt +19 -2
.gitignore ADDED
@@ -0,0 +1,8 @@
 
 
 
 
 
 
 
 
 
1
+ .env
2
+ .venv
3
+ __pycache__
4
+ *.pyc
5
+ *.pyo
6
+ *.pyd
7
+ *.pyw
8
+ *.pyz
README_AGENT.md ADDED
@@ -0,0 +1,180 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # 🤖 Advanced AI Agent with LlamaIndex
2
+
3
+ Este es un agente de IA avanzado construido con LlamaIndex que incluye capacidades de CodeAct, búsqueda web y herramientas matemáticas.
4
+
5
+ ## 🚀 Características
6
+
7
+ ### 🧠 Capacidades del Agente
8
+
9
+ - **Razonamiento Avanzado**: Utiliza LlamaIndex con modelos de Hugging Face
10
+ - **CodeAct Agent**: Puede escribir y ejecutar código Python para resolver problemas complejos
11
+ - **Herramientas Matemáticas**: Cálculos básicos y avanzados
12
+ - **Búsqueda Web**: Integración con DuckDuckGo y Wikipedia
13
+ - **Modo Fallback**: Funciona incluso sin configuración completa
14
+
15
+ ### 🛠 Herramientas Disponibles
16
+
17
+ 1. **Matemáticas**: suma, resta, multiplicación, división, potencias, porcentajes
18
+ 2. **Búsqueda DuckDuckGo**: Búsquedas web en tiempo real
19
+ 3. **Wikipedia**: Búsqueda de información factual
20
+ 4. **Peticiones Web**: Llamadas a APIs
21
+ 5. **Ejecución de Código**: Python con librerías como numpy, pandas, math
22
+
23
+ ## ⚙️ Configuración
24
+
25
+ ### 1. Instalar Dependencias
26
+
27
+ ```bash
28
+ pip install -r requirements.txt
29
+ ```
30
+
31
+ ### 2. Configurar Variables de Entorno
32
+
33
+ Crea un archivo `.env` con:
34
+
35
+ ```bash
36
+ # Token de Hugging Face (opcional pero recomendado)
37
+ HUGGINGFACE_TOKEN=tu_token_aqui
38
+ ```
39
+
40
+ Para obtener tu token:
41
+
42
+ 1. Ve a [Hugging Face Settings](https://huggingface.co/settings/tokens)
43
+ 2. Crea un nuevo token
44
+ 3. Copia el token en tu archivo `.env`
45
+
46
+ ### 3. Usar el Agente
47
+
48
+ #### Desde Python:
49
+
50
+ ```python
51
+ from agent import BasicAgent
52
+
53
+ # Inicializar el agente
54
+ agent = BasicAgent()
55
+
56
+ # Hacer una pregunta
57
+ result = agent("¿Cuál es la raíz cuadrada de 144?")
58
+ print(result)
59
+ ```
60
+
61
+ #### Desde la Interfaz Gradio:
62
+
63
+ ```bash
64
+ python app.py
65
+ ```
66
+
67
+ Luego ve a `http://127.0.0.1:7860`
68
+
69
+ ## 🎯 Formato de Respuestas
70
+
71
+ El agente está configurado para dar respuestas en el formato:
72
+
73
+ ```
74
+ [Explicación del razonamiento...]
75
+
76
+ FINAL ANSWER: [RESPUESTA_FINAL]
77
+ ```
78
+
79
+ ### Reglas para FINAL ANSWER:
80
+
81
+ - **Números**: Sin comas ni símbolos (ej: `42` no `42,000` o `$42`)
82
+ - **Texto**: Sin artículos ni abreviaciones (ej: `New York` no `NYC`)
83
+ - **Listas**: Separadas por comas siguiendo las reglas anteriores
84
+
85
+ ## 📝 Ejemplos de Uso
86
+
87
+ ### Matemáticas Básicas
88
+
89
+ ```python
90
+ agent("¿Cuánto es 15 + 27?")
91
+ # FINAL ANSWER: 42
92
+ ```
93
+
94
+ ### Cálculos Complejos
95
+
96
+ ```python
97
+ agent("Calcula el 15% de 240 y súmale 50")
98
+ # El agente escribirá código: (240 * 15 / 100) + 50
99
+ # FINAL ANSWER: 86
100
+ ```
101
+
102
+ ### Búsqueda de Información
103
+
104
+ ```python
105
+ agent("¿Cuál es la capital de Francia?")
106
+ # Buscará en Wikipedia
107
+ # FINAL ANSWER: Paris
108
+ ```
109
+
110
+ ### Problemas Complejos
111
+
112
+ ```python
113
+ agent("Necesito calcular la suma de los primeros 10 números de Fibonacci")
114
+ # El agente escribirá código para calcular Fibonacci
115
+ # FINAL ANSWER: 143
116
+ ```
117
+
118
+ ## 🔧 Modos de Funcionamiento
119
+
120
+ ### 1. Modo Completo (con HUGGINGFACE_TOKEN)
121
+
122
+ - Utiliza modelos de IA avanzados
123
+ - CodeAct Agent completo
124
+ - Todas las herramientas disponibles
125
+
126
+ ### 2. Modo Básico (sin token)
127
+
128
+ - Procesamiento básico con expresiones regulares
129
+ - Matemáticas simples
130
+ - Respuestas limitadas pero funcionales
131
+
132
+ ## 🚨 Importante para Producción
133
+
134
+ ⚠️ **Seguridad**: El agente puede ejecutar código Python. En producción:
135
+
136
+ - Usar sandboxing (Docker, contenedores)
137
+ - Validar entradas del usuario
138
+ - Limitar acceso a recursos del sistema
139
+
140
+ ## 🐛 Resolución de Problemas
141
+
142
+ ### Error: No se puede importar LlamaIndex
143
+
144
+ ```bash
145
+ pip install --upgrade llama-index-core llama-index-llms-huggingface
146
+ ```
147
+
148
+ ### Error: Token inválido
149
+
150
+ - Verifica que tu token de Hugging Face sea válido
151
+ - Asegúrate que el archivo `.env` esté en el directorio correcto
152
+
153
+ ### Error: Modelo no encontrado
154
+
155
+ - Algunos modelos requieren acceso especial
156
+ - Prueba con modelos públicos como `microsoft/DialoGPT-medium`
157
+
158
+ ## 📚 Personalización
159
+
160
+ ### Cambiar el Modelo
161
+
162
+ En `agent.py`, modifica:
163
+
164
+ ```python
165
+ model_name="tu-modelo-preferido"
166
+ ```
167
+
168
+ ### Agregar Nuevas Herramientas
169
+
170
+ 1. Define tu función
171
+ 2. Crea un `FunctionTool.from_defaults(fn=tu_funcion)`
172
+ 3. Agrégala a `self.tools`
173
+
174
+ ### Personalizar el Prompt
175
+
176
+ Modifica la variable `enhanced_prompt` en el método `__call__`
177
+
178
+ ---
179
+
180
+ ¡Tu agente de IA está listo para resolver problemas complejos! 🎉
agent.py ADDED
@@ -0,0 +1,390 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ """
2
+ Intelligent AI Agent using LlamaIndex with CodeAct capabilities
3
+ This module contains the agent class with advanced tools and reasoning.
4
+ """
5
+
6
+ import os
7
+ import asyncio
8
+ import io
9
+ import contextlib
10
+ import ast
11
+ import traceback
12
+ from typing import Any, Dict, Tuple, List
13
+
14
+ # Load environment variables from .env file
15
+ try:
16
+ from dotenv import load_dotenv
17
+ load_dotenv()
18
+ print("✅ .env file loaded successfully")
19
+ except ImportError:
20
+ print("⚠️ python-dotenv not available, .env file not loaded")
21
+ except Exception as e:
22
+ print(f"⚠️ Error loading .env file: {e}")
23
+
24
+ # LlamaIndex imports
25
+ try:
26
+ from llama_index.core.agent.workflow import CodeActAgent
27
+ from llama_index.core.workflow import Context
28
+ from llama_index.core.agent.workflow import (
29
+ ToolCall,
30
+ ToolCallResult,
31
+ AgentStream,
32
+ )
33
+ from llama_index.llms.huggingface import HuggingFaceLLM
34
+ from llama_index.core.tools import FunctionTool
35
+ from llama_index.tools.duckduckgo import DuckDuckGoSearchToolSpec
36
+ from llama_index.tools.wikipedia import WikipediaToolSpec
37
+ from llama_index.tools.requests import RequestsToolSpec
38
+ LLAMA_INDEX_AVAILABLE = True
39
+ except ImportError as e:
40
+ print(f"LlamaIndex imports not available: {e}")
41
+ LLAMA_INDEX_AVAILABLE = False
42
+
43
+ MODEL = "microsoft/Phi-3.5-mini-instruct"
44
+
45
+ class SimpleCodeExecutor:
46
+ """
47
+ A simple code executor that runs Python code with state persistence.
48
+ NOTE: not safe for production use! Use with caution.
49
+ """
50
+
51
+ def __init__(self, locals_dict: Dict[str, Any], globals_dict: Dict[str, Any]):
52
+ """Initialize the code executor."""
53
+ self.globals = globals_dict
54
+ self.locals = locals_dict
55
+
56
+ def execute(self, code: str) -> Tuple[bool, str, Any]:
57
+ """Execute Python code and capture output and return values."""
58
+ stdout = io.StringIO()
59
+ stderr = io.StringIO()
60
+
61
+ output = ""
62
+ return_value = None
63
+ try:
64
+ with contextlib.redirect_stdout(stdout), contextlib.redirect_stderr(stderr):
65
+ try:
66
+ tree = ast.parse(code)
67
+ last_node = tree.body[-1] if tree.body else None
68
+
69
+ if isinstance(last_node, ast.Expr):
70
+ last_line = code.rstrip().split("\n")[-1]
71
+ exec_code = (
72
+ code[: -len(last_line)]
73
+ + "\n__result__ = "
74
+ + last_line
75
+ )
76
+ exec(exec_code, self.globals, self.locals)
77
+ return_value = self.locals.get("__result__")
78
+ else:
79
+ exec(code, self.globals, self.locals)
80
+ except:
81
+ exec(code, self.globals, self.locals)
82
+
83
+ output = stdout.getvalue()
84
+ if stderr.getvalue():
85
+ output += "\n" + stderr.getvalue()
86
+
87
+ except Exception as e:
88
+ output = f"Error: {type(e).__name__}: {str(e)}\n"
89
+ output += traceback.format_exc()
90
+
91
+ if return_value is not None:
92
+ output += "\n\n" + str(return_value)
93
+
94
+ return output
95
+
96
+
97
+ class BasicAgent:
98
+ """
99
+ Advanced AI Agent using LlamaIndex with CodeAct capabilities and multiple tools.
100
+ """
101
+
102
+ def __init__(self):
103
+ """Initialize the agent with LLM, tools, and code executor."""
104
+ print("Initializing Advanced AI Agent with LlamaIndex...")
105
+
106
+ # Get Hugging Face token
107
+ self.hf_token = os.getenv("HUGGINGFACE_TOKEN")
108
+ print(self.hf_token)
109
+ if not self.hf_token:
110
+ print("Warning: HUGGINGFACE_TOKEN not found. Using default model.")
111
+
112
+ # Initialize LLM
113
+ self._initialize_llm()
114
+
115
+ # Initialize tools
116
+ self._initialize_tools()
117
+
118
+ # Initialize code executor
119
+ self._initialize_code_executor()
120
+
121
+ # Initialize CodeAct Agent
122
+ self._initialize_agent()
123
+
124
+ print("Advanced AI Agent initialized successfully.")
125
+
126
+ def _initialize_llm(self):
127
+ """Initialize the Hugging Face LLM."""
128
+ if not LLAMA_INDEX_AVAILABLE:
129
+ print("LlamaIndex not available, using basic mode")
130
+ self.llm = None
131
+ return
132
+
133
+ try:
134
+ # Using a capable model for reasoning and code generation
135
+ # Note: For production, consider using models like meta-llama/Llama-2-7b-chat-hf or similar
136
+ model_kwargs = {"temperature": 0.1, "max_length": 512}
137
+ generate_kwargs = {"temperature": 0.1, "do_sample": True}
138
+
139
+ if self.hf_token:
140
+ # Use token if available
141
+ self.llm = HuggingFaceLLM(
142
+ model_name=MODEL,
143
+ tokenizer_name=MODEL, # Explicitly use the same model for tokenizer
144
+ model_kwargs=model_kwargs,
145
+ generate_kwargs=generate_kwargs,
146
+ tokenizer_kwargs={"token": self.hf_token},
147
+ )
148
+ else:
149
+ # Try without token for public models
150
+ self.llm = HuggingFaceLLM(
151
+ model_name=MODEL,
152
+ tokenizer_name=MODEL, # Explicitly use the same model for tokenizer
153
+ model_kwargs=model_kwargs,
154
+ generate_kwargs=generate_kwargs,
155
+ )
156
+ print("✅ LLM initialized successfully")
157
+ except Exception as e:
158
+ print(f"Error initializing LLM: {e}")
159
+ # Fallback to a basic setup
160
+ self.llm = None
161
+
162
+ def _initialize_tools(self):
163
+ """Initialize all available tools."""
164
+ self.tools = []
165
+
166
+ # Store basic math functions for fallback mode
167
+ self.math_functions = {
168
+ 'add': lambda a, b: a + b,
169
+ 'subtract': lambda a, b: a - b,
170
+ 'multiply': lambda a, b: a * b,
171
+ 'divide': lambda a, b: a / b if b != 0 else "Error: Division by zero",
172
+ 'power': lambda a, b: a ** b,
173
+ 'percentage': lambda v, p: (v * p) / 100,
174
+ }
175
+
176
+ if not LLAMA_INDEX_AVAILABLE:
177
+ print("Tools initialization skipped - LlamaIndex not available")
178
+ return
179
+
180
+ # Mathematical tools
181
+ def add_numbers(a: float, b: float) -> float:
182
+ """Add two numbers together."""
183
+ return a + b
184
+
185
+ def subtract_numbers(a: float, b: float) -> float:
186
+ """Subtract second number from first number."""
187
+ return a - b
188
+
189
+ def multiply_numbers(a: float, b: float) -> float:
190
+ """Multiply two numbers."""
191
+ return a * b
192
+
193
+ def divide_numbers(a: float, b: float) -> float:
194
+ """Divide first number by second number."""
195
+ if b == 0:
196
+ return "Error: Division by zero"
197
+ return a / b
198
+
199
+ def power_numbers(a: float, b: float) -> float:
200
+ """Raise first number to the power of second number."""
201
+ return a ** b
202
+
203
+ def calculate_percentage(value: float, percentage: float) -> float:
204
+ """Calculate percentage of a value."""
205
+ return (value * percentage) / 100
206
+
207
+ # Create function tools
208
+ try:
209
+ math_tools = [
210
+ FunctionTool.from_defaults(fn=add_numbers),
211
+ FunctionTool.from_defaults(fn=subtract_numbers),
212
+ FunctionTool.from_defaults(fn=multiply_numbers),
213
+ FunctionTool.from_defaults(fn=divide_numbers),
214
+ FunctionTool.from_defaults(fn=power_numbers),
215
+ FunctionTool.from_defaults(fn=calculate_percentage),
216
+ ]
217
+ self.tools.extend(math_tools)
218
+ print("✅ Math tools initialized")
219
+ except Exception as e:
220
+ print(f"Warning: Could not initialize math tools: {e}")
221
+
222
+ # Initialize search tools
223
+ try:
224
+ # DuckDuckGo search
225
+ ddg_spec = DuckDuckGoSearchToolSpec()
226
+ ddg_tools = ddg_spec.to_tool_list()
227
+ self.tools.extend(ddg_tools)
228
+ print("✅ DuckDuckGo search tool initialized")
229
+ except Exception as e:
230
+ print(f"Warning: Could not initialize DuckDuckGo tool: {e}")
231
+
232
+ try:
233
+ # Wikipedia search
234
+ wiki_spec = WikipediaToolSpec()
235
+ wiki_tools = wiki_spec.to_tool_list()
236
+ self.tools.extend(wiki_tools)
237
+ print("✅ Wikipedia tool initialized")
238
+ except Exception as e:
239
+ print(f"Warning: Could not initialize Wikipedia tool: {e}")
240
+
241
+ try:
242
+ # Web requests tool
243
+ requests_spec = RequestsToolSpec()
244
+ requests_tools = requests_spec.to_tool_list()
245
+ self.tools.extend(requests_tools)
246
+ print("✅ Web requests tool initialized")
247
+ except Exception as e:
248
+ print(f"Warning: Could not initialize requests tool: {e}")
249
+
250
+ print(f"✅ Total {len(self.tools)} tools initialized")
251
+
252
+ def _initialize_code_executor(self):
253
+ """Initialize the code executor with necessary imports and functions."""
254
+ # Prepare locals with math functions
255
+ code_locals = {
256
+ "add_numbers": lambda a, b: a + b,
257
+ "subtract_numbers": lambda a, b: a - b,
258
+ "multiply_numbers": lambda a, b: a * b,
259
+ "divide_numbers": lambda a, b: a / b if b != 0 else "Error: Division by zero",
260
+ "power_numbers": lambda a, b: a ** b,
261
+ "calculate_percentage": lambda v, p: (v * p) / 100,
262
+ }
263
+
264
+ # Prepare globals with common imports
265
+ code_globals = {
266
+ "__builtins__": __builtins__,
267
+ "math": __import__("math"),
268
+ "datetime": __import__("datetime"),
269
+ "json": __import__("json"),
270
+ "re": __import__("re"),
271
+ }
272
+
273
+ try:
274
+ code_globals["numpy"] = __import__("numpy")
275
+ code_globals["np"] = code_globals["numpy"]
276
+ except ImportError:
277
+ pass
278
+
279
+ try:
280
+ code_globals["pandas"] = __import__("pandas")
281
+ code_globals["pd"] = code_globals["pandas"]
282
+ except ImportError:
283
+ pass
284
+
285
+ self.code_executor = SimpleCodeExecutor(code_locals, code_globals)
286
+ print("✅ Code executor initialized")
287
+
288
+ def _initialize_agent(self):
289
+ """Initialize the CodeAct Agent (deferred initialization)."""
290
+ if not self.llm:
291
+ print("Warning: No LLM available, using basic mode")
292
+ self.agent = None
293
+ self.context = None
294
+ return
295
+
296
+ # Store initialization parameters for deferred initialization
297
+ self._agent_params = {
298
+ 'code_execute_fn': self.code_executor.execute,
299
+ 'llm': self.llm,
300
+ 'tools': self.tools
301
+ }
302
+ self.agent = None
303
+ self.context = None
304
+ print("✅ CodeAct Agent parameters prepared (deferred initialization)")
305
+
306
+ def _ensure_agent_initialized(self):
307
+ """Ensure the CodeAct agent is initialized when needed."""
308
+ if self.agent is None and hasattr(self, '_agent_params'):
309
+ try:
310
+ # Reset any existing context to avoid conflicts
311
+ if hasattr(self, 'context') and self.context:
312
+ try:
313
+ # Clean up existing context if possible
314
+ self.context = None
315
+ except:
316
+ pass
317
+
318
+ # Create the CodeAct Agent without assuming event loop state
319
+ self.agent = CodeActAgent(**self._agent_params)
320
+ print("✅ CodeAct Agent initialized (deferred)")
321
+
322
+ except Exception as e:
323
+ print(f"Error in deferred agent initialization: {e}")
324
+ print("Continuing with fallback mode...")
325
+ return False
326
+ return self.agent is not None
327
+
328
+ async def __call__(self, question: str) -> str:
329
+ """
330
+ Main method that processes a question and returns an answer.
331
+ """
332
+ print(f"Agent received question (first 100 chars): {question[:100]}...")
333
+
334
+ # Ensure agent is initialized (for deferred initialization)
335
+ self._ensure_agent_initialized()
336
+
337
+ # Enhanced prompt with specific formatting requirements
338
+ enhanced_prompt = f"""
339
+ You are a general AI assistant. I will ask you a question. Report your thoughts, and finish your answer with the following template: FINAL ANSWER: [YOUR FINAL ANSWER].
340
+
341
+ YOUR FINAL ANSWER should be a number OR as few words as possible OR a comma separated list of numbers and/or strings. If you are asked for a number, don't use comma to write your number neither use units such as $ or percent sign unless specified otherwise. If you are asked for a string, don't use articles, neither abbreviations (e.g. for cities), and write the digits in plain text unless specified otherwise. If you are asked for a comma separated list, apply the above rules depending of whether the element to be put in the list is a number or a string.
342
+
343
+ Available tools and capabilities:
344
+ - Mathematical calculations (addition, subtraction, multiplication, division, powers, percentages)
345
+ - Web search using DuckDuckGo
346
+ - Wikipedia search for factual information
347
+ - Web requests for API calls
348
+ - Code execution for complex calculations and data processing
349
+ - Python libraries: math, datetime, json, re, numpy (if available), pandas (if available)
350
+
351
+ Question: {question}
352
+
353
+ Think step by step, use the available tools when necessary, and provide your final answer in the specified format.
354
+ """
355
+
356
+ if self.agent:
357
+ try:
358
+ # Use the CodeAct agent for advanced reasoning
359
+ response = await self._async_agent_run(enhanced_prompt)
360
+ return response
361
+ except Exception as e:
362
+ print(f"Error with CodeAct agent: {e}")
363
+ return f"FINAL ANSWER: Error processing question - {str(e)}"
364
+ else:
365
+ return "FINAL ANSWER: Agent not properly initialized"
366
+
367
+
368
+ async def _async_agent_run(self, prompt: str) -> str:
369
+ """Run the agent asynchronously."""
370
+ try:
371
+ # Create a fresh context for this run to avoid loop conflicts
372
+ context = Context(self.agent)
373
+ handler = self.agent.run(prompt, ctx=context)
374
+
375
+ async for event in handler.stream_events():
376
+ if isinstance(event, ToolCallResult):
377
+ print(
378
+ f"\n-----------\nCode execution result:\n{event.tool_output}"
379
+ )
380
+ elif isinstance(event, ToolCall):
381
+ print(f"\n-----------\nParsed code:\n{event.tool_kwargs['code']}")
382
+ elif isinstance(event, AgentStream):
383
+ print(f"{event.delta}", end="", flush=True)
384
+
385
+ return await handler
386
+ except Exception as e:
387
+ print(f"Async agent error: {e}")
388
+ return f"FINAL ANSWER: Error in agent processing - {str(e)}"
389
+
390
+
app.py CHANGED
@@ -3,23 +3,25 @@ import gradio as gr
3
  import requests
4
  import inspect
5
  import pandas as pd
 
 
 
 
 
 
 
 
 
 
 
 
 
6
 
7
  # (Keep Constants as is)
8
  # --- Constants ---
9
  DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"
10
 
11
- # --- Basic Agent Definition ---
12
- # ----- THIS IS WERE YOU CAN BUILD WHAT YOU WANT ------
13
- class BasicAgent:
14
- def __init__(self):
15
- print("BasicAgent initialized.")
16
- def __call__(self, question: str) -> str:
17
- print(f"Agent received question (first 50 chars): {question[:50]}...")
18
- fixed_answer = "This is a default answer."
19
- print(f"Agent returning fixed answer: {fixed_answer}")
20
- return fixed_answer
21
-
22
- def run_and_submit_all( profile: gr.OAuthProfile | None):
23
  """
24
  Fetches all questions, runs the BasicAgent on them, submits all answers,
25
  and displays the results.
@@ -73,6 +75,7 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
73
  results_log = []
74
  answers_payload = []
75
  print(f"Running agent on {len(questions_data)} questions...")
 
76
  for item in questions_data:
77
  task_id = item.get("task_id")
78
  question_text = item.get("question")
@@ -80,7 +83,7 @@ def run_and_submit_all( profile: gr.OAuthProfile | None):
80
  print(f"Skipping item with missing task_id or question: {item}")
81
  continue
82
  try:
83
- submitted_answer = agent(question_text)
84
  answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
85
  results_log.append({"Task ID": task_id, "Question": question_text, "Submitted Answer": submitted_answer})
86
  except Exception as e:
 
3
  import requests
4
  import inspect
5
  import pandas as pd
6
+ from typing import Optional
7
+
8
+ # Load environment variables from .env file
9
+ try:
10
+ from dotenv import load_dotenv
11
+ load_dotenv()
12
+ print("✅ .env file loaded successfully")
13
+ except ImportError:
14
+ print("⚠️ python-dotenv not available, .env file not loaded")
15
+ except Exception as e:
16
+ print(f"⚠️ Error loading .env file: {e}")
17
+
18
+ from agent import BasicAgent
19
 
20
  # (Keep Constants as is)
21
  # --- Constants ---
22
  DEFAULT_API_URL = "https://agents-course-unit4-scoring.hf.space"
23
 
24
+ async def run_and_submit_all(profile: Optional[gr.OAuthProfile]):
 
 
 
 
 
 
 
 
 
 
 
25
  """
26
  Fetches all questions, runs the BasicAgent on them, submits all answers,
27
  and displays the results.
 
75
  results_log = []
76
  answers_payload = []
77
  print(f"Running agent on {len(questions_data)} questions...")
78
+ questions_data = questions_data[:1]
79
  for item in questions_data:
80
  task_id = item.get("task_id")
81
  question_text = item.get("question")
 
83
  print(f"Skipping item with missing task_id or question: {item}")
84
  continue
85
  try:
86
+ submitted_answer = await agent(question_text)
87
  answers_payload.append({"task_id": task_id, "submitted_answer": submitted_answer})
88
  results_log.append({"Task ID": task_id, "Question": question_text, "Submitted Answer": submitted_answer})
89
  except Exception as e:
requirements.txt CHANGED
@@ -1,2 +1,19 @@
1
- gradio
2
- requests
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ gradio[oauth]
2
+ requests
3
+ pandas
4
+ python-dotenv
5
+ # LlamaIndex core dependencies
6
+ llama-index-core
7
+ llama-index-llms-huggingface
8
+ llama-index-workflows
9
+
10
+ # LlamaIndex tools
11
+ llama-index-tools-duckduckgo
12
+ llama-index-tools-wikipedia
13
+ llama-index-tools-requests
14
+
15
+ # Additional dependencies for ML and data processing
16
+ torch
17
+ transformers
18
+ numpy
19
+ asyncio