Priyanshi Saxena committed on
Commit 2c26464 · 1 Parent(s): 81a07ed

Add comprehensive logging and debug endpoints for tool calling

Files changed (3)
  1. README_old.md +0 -431
  2. app.py +126 -11
  3. src/tools/base_tool.py +22 -0
README_old.md DELETED
@@ -1,431 +0,0 @@
---
title: Web3 Research Co-Pilot
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: docker
sdk_version: "latest"
app_file: app.py
pinned: false
---

# 🚀 Web3 Research Co-Pilot

AI-powered cryptocurrency research assistant with real-time market data and professional analysis.

**🌐 Live Demo**: https://archcoder-web3-copilot.hf.space

## ⚡ Quick Start

### 1. Get API Key
Get your **Google Gemini API key** from [Google AI Studio](https://aistudio.google.com/)

### 2. Run Locally
```bash
git clone https://github.com/Transcendental-Programmer/web3-research-agent.git
cd web3-research-agent
pip install -r requirements.txt

# Create .env file
echo "GEMINI_API_KEY=your_api_key_here" > .env

# Run the app
python app.py
```

### 3. Access
Open http://localhost:7860 in your browser

## 🔧 Optional Setup

Add these to your `.env` for enhanced features:
```bash
COINGECKO_API_KEY=your_coingecko_key   # Higher rate limits
ETHERSCAN_API_KEY=your_etherscan_key   # Blockchain data
```

## 🐳 Deploy to HuggingFace Spaces

1. Fork this repo
2. Create a new [HuggingFace Space](https://huggingface.co/spaces) with Docker SDK
3. Set environment variables in Space settings
4. Connect your repo and deploy

## ✨ Features

- **AI Analysis**: Google Gemini-powered insights
- **Live Data**: CoinGecko, DeFiLlama, Etherscan APIs
- **Visualizations**: Interactive charts and graphs
- **Professional UI**: Clean, responsive interface
- **Multi-chain**: Ethereum, DeFi, Layer 2 support

## 🎯 Usage Examples

"What's Bitcoin's price and market sentiment?"
"Show me top DeFi protocols by TVL"
"Analyze Ethereum gas prices"
"Compare BTC vs ETH performance"

## 🐛 Troubleshooting

**Can't access localhost:7860?**
- Check if app is running: `ps aux | grep "python app.py"`
- Try your local IP: `hostname -I` then http://YOUR_IP:7860
- In cloud environments, use provided forwarded URL

**API errors?**
- Verify GEMINI_API_KEY in .env file
- Check API key permissions

## 📁 Project Structure

```
web3-research-agent/
├── app.py              # Main FastAPI app
├── requirements.txt    # Dependencies
├── Dockerfile          # Container config
├── test_suite.py       # Tests
└── src/                # Source code
    ├── research_agent.py
    ├── visualizations.py
    └── ...
```

---

**Built for the Web3 community** ❤️
pip install -r requirements.txt
```

3. **Set up environment variables**

   Create a `.env` file in the project root:
   ```bash
   cp .env.example .env
   ```

   Edit `.env` with your API keys:
   ```properties
   # Required
   GEMINI_API_KEY=your_gemini_api_key_here

   # Optional (for enhanced functionality)
   COINGECKO_API_KEY=your_coingecko_api_key_here
   ETHERSCAN_API_KEY=your_etherscan_api_key_here
   ```

4. **Run the application**
   ```bash
   python app.py
   ```

5. **Access the interface**
   - **Local Development**: http://localhost:7860
   - **Production/Cloud**: Check your hosting platform's port forwarding
   - **Docker**: http://localhost:7860

## 🚀 HuggingFace Spaces Deployment

This project is configured for HuggingFace Spaces deployment with Docker SDK.

### Quick Deploy to HF Spaces

1. **Fork/Clone this repository**
2. **Create a new HuggingFace Space**:
   - Go to [HuggingFace Spaces](https://huggingface.co/spaces)
   - Click "Create new Space"
   - Choose "Docker" as the SDK
   - Upload your repository files

3. **Configure Environment Variables**:
   In your Space settings, add:
   ```
   GEMINI_API_KEY=your_gemini_api_key
   COINGECKO_API_KEY=your_coingecko_api_key
   ETHERSCAN_API_KEY=your_etherscan_api_key
   ```

4. **Deploy via Git**:
   ```bash
   git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
   git push hf main
   ```

### HF Spaces Configuration

The project includes a proper `README.md` header for HF Spaces:
```yaml
title: Web3 Research Co-Pilot
emoji: 🚀
colorFrom: blue
colorTo: purple
sdk: docker
app_file: app.py
```

## 🐳 Docker Deployment

### Local Docker

```bash
# Build the image
docker build -t web3-research-copilot .

# Run the container
docker run -p 7860:7860 \
  -e GEMINI_API_KEY=your_key_here \
  -e COINGECKO_API_KEY=your_key_here \
  -e ETHERSCAN_API_KEY=your_key_here \
  web3-research-copilot
```

### HuggingFace Spaces

This project is optimized for HuggingFace Spaces deployment:

1. **Create a new Space**:
   - Go to [HuggingFace Spaces](https://huggingface.co/spaces)
   - Click "Create new Space"
   - Choose "Docker" as the SDK
   - Set visibility to "Public" or "Private"

2. **Configure Environment Variables**:
   In your Space settings, add:
   ```
   GEMINI_API_KEY=your_gemini_api_key
   COINGECKO_API_KEY=your_coingecko_api_key
   ETHERSCAN_API_KEY=your_etherscan_api_key
   ```

3. **Deploy**:
   ```bash
   git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
   git push hf main
   ```

**Live Example**: https://archcoder-web3-copilot.hf.space

## 🔄 Development Setup

### Adding HuggingFace Remote

To deploy to your HuggingFace Space:

```bash
# Add HuggingFace remote
git remote add hf https://huggingface.co/spaces/ArchCoder/web3-copilot

# Or for your own space
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME

# Deploy to HF Spaces
git push hf main
```

## 📊 Testing

Run the comprehensive test suite:

```bash
python test_suite.py
```

Expected output:
```
🚀 Web3 Research Co-Pilot - Test Suite
==================================================
✅ All imports successful
✅ Configuration validated
✅ Visualizations working
✅ Tools initialized
✅ Service functional
✅ API endpoints healthy
✅ Performance acceptable

Tests passed: 7/7
Success rate: 100.0%
🎉 All tests passed!
```

## 🏗️ Project Structure

```
web3-research-agent/
├── app.py                     # Main FastAPI application
├── requirements.txt           # Python dependencies
├── pyproject.toml             # Project configuration
├── uv.lock                    # Dependency lock file
├── test_suite.py              # Comprehensive test suite
├── .env.example               # Environment template
├── src/                       # Source code
│   ├── __init__.py
│   ├── research_agent.py      # Main agent logic
│   ├── enhanced_agent.py      # Enhanced agent features
│   ├── config.py              # Configuration management
│   ├── cache_manager.py       # Response caching
│   ├── api_clients.py         # External API clients
│   ├── defillama_client.py    # DeFiLlama integration
│   ├── news_aggregator.py     # News and social data
│   ├── portfolio_analyzer.py  # Portfolio analysis
│   └── visualizations.py      # Chart generation
└── README.md                  # This file
```

## 🔧 Configuration

### Required Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `GEMINI_API_KEY` | Google Gemini API key for AI analysis | ✅ |

### Optional Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `COINGECKO_API_KEY` | CoinGecko API key for enhanced rate limits | None |
| `ETHERSCAN_API_KEY` | Etherscan API key for blockchain data | None |
| `AIRAA_WEBHOOK_URL` | AIRAA integration webhook URL | None |
| `AIRAA_API_KEY` | AIRAA API authentication key | None |

### Getting API Keys

1. **Google Gemini API**:
   - Go to [Google AI Studio](https://aistudio.google.com/)
   - Create a new API key
   - Copy the key to your `.env` file

2. **CoinGecko API** (optional):
   - Sign up at [CoinGecko](https://www.coingecko.com/api)
   - Get your free API key
   - Provides higher rate limits

3. **Etherscan API** (optional):
   - Register at [Etherscan](https://etherscan.io/apis)
   - Create a free API key
   - Enables blockchain data queries

## 🎯 Usage Examples

### Market Analysis
```
"Analyze Bitcoin price trends and institutional adoption patterns"
```

### DeFi Research
```
"Compare top DeFi protocols by TVL, yield, and risk metrics"
```

### Layer 2 Analysis
```
"Evaluate Ethereum Layer 2 scaling solutions and adoption metrics"
```

### Yield Optimization
```
"Identify optimal yield farming strategies across multiple chains"
```

## 🌐 API Endpoints

| Endpoint | Method | Description |
|----------|--------|-------------|
| `/` | GET | Web interface |
| `/status` | GET | System status and configuration |
| `/query` | POST | Process research queries |
| `/health` | GET | Health check |

### Query API Example

```bash
curl -X POST "http://localhost:7860/query" \
  -H "Content-Type: application/json" \
  -d '{"query": "What is the current Bitcoin price?"}'
```

## 🐛 Troubleshooting

### Port Access Issues

**Problem**: Can't access the app on http://localhost:7860 or http://0.0.0.0:7860

**Solutions**:

1. **Check if the app is running**:
   ```bash
   ps aux | grep "python app.py"
   ```

2. **Verify port binding**:
   ```bash
   netstat -tlnp | grep :7860
   ```

3. **For Development Environments (VS Code, etc.)**:
   - Look for port forwarding notifications
   - Check your IDE's "Ports" or "Forwarded Ports" tab
   - Use the forwarded URL provided by your development environment

4. **For Cloud/Remote Environments**:
   - The app binds to `0.0.0.0:7860` for external access
   - Use your platform's provided URL (not localhost)
   - Check firewall rules if on a VPS/server

5. **Local Network Access**:
   ```bash
   # Find your local IP
   hostname -I
   # Access via: http://YOUR_IP:7860
   ```

### Common Issues

1. **"GEMINI_API_KEY not configured"**
   - Ensure you've set the API key in your `.env` file
   - Verify the key is valid and has proper permissions

2. **"Connection refused" on port 7860**
   - Check if another process is using port 7860: `lsof -i :7860`
   - Ensure the app started successfully: `python app.py`

3. **Import errors**
   - Ensure all dependencies are installed: `pip install -r requirements.txt`
   - Check Python version compatibility (3.11+)

4. **Slow visualization loading**
   - Check your internet connection
   - API rate limits may be affecting data retrieval

### Getting Help

- Check the test suite: `python test_suite.py`
- Review logs in the terminal output
- Verify API keys are configured correctly
- Open an issue on GitHub with error details

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests for new functionality
5. Run the test suite
6. Submit a pull request

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

## 🙏 Acknowledgments

- **Google Gemini** for AI capabilities
- **CoinGecko** for comprehensive market data
- **DeFiLlama** for DeFi protocol analytics
- **Etherscan** for blockchain data
- **FastAPI** for the web framework
- **Plotly** for interactive visualizations

---

**Built with ❤️ for the Web3 community**
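For reference, the Configuration section of the deleted README names `GEMINI_API_KEY` as the only required variable, with the CoinGecko, Etherscan, and AIRAA keys optional. A minimal startup check along those lines, assuming `python-dotenv` rather than the project's actual `src/config.py`:

```python
# Minimal sketch of the environment check described in the old README's
# Configuration section; assumes python-dotenv and is NOT the project's
# actual src/config.py.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the project root

if not os.getenv("GEMINI_API_KEY"):
    raise SystemExit("GEMINI_API_KEY is required for full AI analysis.")

for name in ("COINGECKO_API_KEY", "ETHERSCAN_API_KEY", "AIRAA_WEBHOOK_URL", "AIRAA_API_KEY"):
    if not os.getenv(name):
        print(f"{name} not set; the related integration stays disabled or uses defaults.")
```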
app.py CHANGED
@@ -71,9 +71,10 @@ class Web3CoPilotService:
 
     async def process_query(self, query: str) -> QueryResponse:
         """Process research query with visualizations"""
-        logger.info(f"Processing query: {query[:100]}...")
+        logger.info(f"🔍 Processing query: {query[:100]}...")
 
         if not query.strip():
+            logger.warning("⚠️ Empty query received")
             return QueryResponse(
                 success=False,
                 response="Please provide a research query.",
@@ -82,6 +83,7 @@ class Web3CoPilotService:
 
         try:
             if not self.enabled:
+                logger.info("ℹ️ Processing in limited mode (no GEMINI_API_KEY)")
                 response = """**Research Assistant - Limited Mode**
 
 API access available for basic cryptocurrency data:
@@ -92,28 +94,35 @@ API access available for basic cryptocurrency data:
 Configure GEMINI_API_KEY environment variable for full AI analysis."""
                 return QueryResponse(success=True, response=response, sources=["System"])
 
-            logger.info("Processing with AI research agent...")
+            logger.info("🤖 Processing with AI research agent...")
+            logger.info(f"🛠️ Available tools: {[tool.name for tool in self.agent.tools] if self.agent else []}")
+
             result = await self.agent.research_query(query)
+            logger.info(f"🔄 Agent research completed: success={result.get('success')}")
 
             if result.get("success"):
                 response = result.get("result", "No analysis generated")
                 sources = result.get("sources", [])
                 metadata = result.get("metadata", {})
 
+                logger.info(f"📊 Response generated: {len(response)} chars, {len(sources)} sources")
+
                 # Generate visualizations if relevant data is available
                 visualizations = []
                 if metadata:
+                    logger.info("📈 Checking for visualization data...")
                     vis_html = await self._generate_visualizations(metadata, query)
                     if vis_html:
                         visualizations.append(vis_html)
+                        logger.info("✅ Visualization generated")
 
                 # Send to AIRAA if enabled
                 if self.airaa and self.airaa.enabled:
                     try:
                         await self.airaa.send_research_data(query, response)
-                        logger.info("Data sent to AIRAA")
+                        logger.info("📤 Data sent to AIRAA")
                     except Exception as e:
-                        logger.warning(f"AIRAA integration failed: {e}")
+                        logger.warning(f"⚠️ AIRAA integration failed: {e}")
 
                 return QueryResponse(
                     success=True,
@@ -124,11 +133,11 @@ Configure GEMINI_API_KEY environment variable for full AI analysis."""
                 )
             else:
                 error_msg = result.get("error", "Research analysis failed")
-                logger.error(f"Research failed: {error_msg}")
+                logger.error(f"❌ Research failed: {error_msg}")
                 return QueryResponse(success=False, response=error_msg, error=error_msg)
 
         except Exception as e:
-            logger.error(f"Query processing error: {e}")
+            logger.error(f"💥 Query processing error: {e}", exc_info=True)
             error_msg = f"Processing error: {str(e)}"
             return QueryResponse(success=False, response=error_msg, error=error_msg)
 
@@ -677,34 +686,60 @@ async def get_homepage(request: Request):
     const sendBtn = document.getElementById('sendBtn');
     const query = input.value.trim();
 
-    if (!query) return;
+    if (!query) {
+        console.log('❌ Empty query, not sending');
+        return;
+    }
 
+    console.log('📤 Sending query:', query);
     addMessage('user', query);
     input.value = '';
 
+    // Update button state
    sendBtn.disabled = true;
    sendBtn.innerHTML = '<span class="loading">Processing</span>';
 
    try {
+        console.log('🔄 Making API request...');
+        const requestStart = Date.now();
+
        const response = await fetch('/query', {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ query, chat_history: chatHistory })
        });
 
+        const requestTime = Date.now() - requestStart;
+        console.log(`⏱️ Request completed in ${requestTime}ms`);
+
+        if (!response.ok) {
+            throw new Error(`HTTP ${response.status}: ${response.statusText}`);
+        }
+
        const result = await response.json();
+        console.log('📥 Response received:', {
+            success: result.success,
+            responseLength: result.response?.length || 0,
+            sources: result.sources?.length || 0,
+            visualizations: result.visualizations?.length || 0
+        });
 
        if (result.success) {
            addMessage('assistant', result.response, result.sources, result.visualizations);
+            console.log('✅ Message added successfully');
        } else {
-            addMessage('assistant', result.response || 'Analysis failed. Please try again.');
+            console.error('❌ Query failed:', result.error);
+            addMessage('assistant', result.response || 'Analysis failed. Please try again.', [], []);
        }
    } catch (error) {
-        addMessage('assistant', 'Connection error. Please check your network and try again.');
+        console.error('💥 Request error:', error);
+        addMessage('assistant', `Connection error: ${error.message}. Please check your network and try again.`);
    } finally {
+        // Reset button state
        sendBtn.disabled = false;
        sendBtn.innerHTML = 'Research';
        input.focus();
+        console.log('🔄 Button state reset');
    }
 }
 
@@ -790,8 +825,40 @@ async def get_status():
 
 @app.post("/query", response_model=QueryResponse)
 async def process_query(request: QueryRequest):
-    """Process research query"""
-    return await service.process_query(request.query)
+    """Process research query with detailed logging"""
+    # Log incoming request
+    logger.info(f"📥 Query received: {request.query[:100]}...")
+    logger.info(f"📊 Chat history length: {len(request.chat_history) if request.chat_history else 0}")
+
+    start_time = datetime.now()
+
+    try:
+        # Process the query
+        result = await service.process_query(request.query)
+
+        # Log result
+        processing_time = (datetime.now() - start_time).total_seconds()
+        logger.info(f"✅ Query processed in {processing_time:.2f}s - Success: {result.success}")
+
+        if result.success:
+            logger.info(f"📤 Response length: {len(result.response)} chars")
+            logger.info(f"🔗 Sources: {result.sources}")
+            if result.visualizations:
+                logger.info(f"📈 Visualizations: {len(result.visualizations)} charts")
+        else:
+            logger.error(f"❌ Query failed: {result.error}")
+
+        return result
+
+    except Exception as e:
+        processing_time = (datetime.now() - start_time).total_seconds()
+        logger.error(f"💥 Query processing exception after {processing_time:.2f}s: {e}")
+
+        return QueryResponse(
+            success=False,
+            response=f"System error: {str(e)}",
+            error=str(e)
+        )
 
 @app.get("/health")
 async def health_check():
@@ -803,6 +870,54 @@ async def health_check():
         "version": "2.0.0"
     }
 
+@app.get("/debug/tools")
+async def debug_tools():
+    """Debug endpoint to test tool availability and functionality"""
+    try:
+        if not service.enabled or not service.agent:
+            return {
+                "success": False,
+                "error": "AI agent not enabled",
+                "tools_available": False,
+                "gemini_configured": bool(config.GEMINI_API_KEY)
+            }
+
+        tools_info = []
+        for tool in service.agent.tools:
+            tools_info.append({
+                "name": tool.name,
+                "description": getattr(tool, 'description', 'No description'),
+                "enabled": getattr(tool, 'enabled', True)
+            })
+
+        # Test a simple API call
+        test_result = None
+        try:
+            test_result = await service.process_query("What is the current Bitcoin price?")
+        except Exception as e:
+            test_result = {"error": str(e)}
+
+        return {
+            "success": True,
+            "tools_count": len(service.agent.tools),
+            "tools_info": tools_info,
+            "test_query_result": {
+                "success": test_result.success if hasattr(test_result, 'success') else False,
+                "response_length": len(test_result.response) if hasattr(test_result, 'response') else 0,
+                "sources": test_result.sources if hasattr(test_result, 'sources') else [],
+                "error": test_result.error if hasattr(test_result, 'error') else None
+            },
+            "gemini_configured": bool(config.GEMINI_API_KEY),
+            "timestamp": datetime.now().isoformat()
+        }
+    except Exception as e:
+        logger.error(f"Debug tools error: {e}")
+        return {
+            "success": False,
+            "error": str(e),
+            "timestamp": datetime.now().isoformat()
+        }
+
 if __name__ == "__main__":
     import uvicorn
     logger.info("Starting Web3 Research Co-Pilot...")
src/tools/base_tool.py CHANGED
@@ -75,6 +75,28 @@ class BaseWeb3Tool(BaseTool, ABC):
     def _run(self, query: str, filters: Optional[Dict[str, Any]] = None) -> str:
         return asyncio.run(self._arun(query, filters))
 
+    def run(self, tool_input: Dict[str, Any]) -> str:
+        """Handle both direct calls and LangChain tool calling format"""
+        if isinstance(tool_input, dict):
+            query = tool_input.get('query', '')
+            filters = tool_input.get('filters')
+        else:
+            # Fallback for string input
+            query = str(tool_input)
+            filters = None
+        return self._run(query, filters)
+
+    async def arun(self, tool_input: Dict[str, Any]) -> str:
+        """Async version of run method"""
+        if isinstance(tool_input, dict):
+            query = tool_input.get('query', '')
+            filters = tool_input.get('filters')
+        else:
+            # Fallback for string input
+            query = str(tool_input)
+            filters = None
+        return await self._arun(query, filters)
+
     @abstractmethod
     async def _arun(self, query: str, filters: Optional[Dict[str, Any]] = None) -> str:
         pass
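The new `run`/`arun` overrides accept either the dict payload that LangChain tool calling produces or a bare string. A minimal sketch with a hypothetical `EchoTool` subclass (not a tool in this repository), assuming `BaseWeb3Tool` needs no constructor arguments beyond the usual LangChain `name`/`description` fields:

```python
# Hypothetical subclass used only to illustrate the run()/arun() dispatch
# added to BaseWeb3Tool above; EchoTool is not part of the repository.
import asyncio
from typing import Any, Dict, Optional

from src.tools.base_tool import BaseWeb3Tool


class EchoTool(BaseWeb3Tool):
    name: str = "echo_tool"
    description: str = "Echoes the query and filters back (demo only)."

    async def _arun(self, query: str, filters: Optional[Dict[str, Any]] = None) -> str:
        return f"query={query!r}, filters={filters!r}"


tool = EchoTool()

# Dict input (LangChain tool-calling format) is unpacked into query/filters.
print(tool.run({"query": "bitcoin price", "filters": {"vs_currency": "usd"}}))

# A bare string falls back to query-only handling with filters=None.
print(asyncio.run(tool.arun("ethereum gas")))
```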