---
title: Data Science Agent
emoji: 🐢
colorFrom: green
colorTo: red
sdk: gradio
app_file: app.py
license: mit
sdk_version: 5.33.0
short_description: Agent that solves your AI/ML/Data Science problems
pinned: true
tags:
  - agent-demo-track
  - Mistral
  - LlamaIndex
  - Sambanova
  - Modal

---

## Demo Video
🎥 Watch the [Demo Video here](https://drive.google.com/file/d/1FlvN_tV1BQ4OmFmGsWPSQt_H6Ok92dmy/view?usp=sharing)

For more details, watch the [Detailed Talk here](https://drive.google.com/file/d/1edHcyxhYKi6RV8MnCtTkbEf0UAhhhA3D/view?usp=sharing)

## Acknowledgements
Made with ❤️ by [Bhavish Pahwa](https://huggingface.co/bpHigh) & [Abhinav Bhatnagar](https://huggingface.co/Master-warrier)


## 🔧 How It Works

### 1. **Gather Requirements**

* The user engages in a conversation with the chatbot, describing their data science / AI / ML problem.
* The user and **Gemini-2.5-Pro** go back and forth iteratively: the model asks clarifying questions, the user responds, and the loop continues until Gemini-2.5-Pro judges the requirements complete. Only then does it issue a “satisfied” response and release the structured requirements (a minimal sketch of this loop follows below).
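
A minimal sketch of this requirements loop, assuming the `google-genai` SDK; the system prompt and the "SATISFIED:" marker are hypothetical stand-ins for whatever `app.py` actually uses:

```python
# Sketch of the iterative requirements-gathering loop (illustrative, not the
# exact app.py logic). Assumes the google-genai SDK: pip install google-genai
from google import genai

client = genai.Client(api_key="...")  # GEMINI_API_KEY in practice
chat = client.chats.create(model="gemini-2.5-pro")

SYSTEM_PROMPT = (  # hypothetical prompt, not the Space's actual wording
    "Interview the user about their data science problem. Ask clarifying "
    "questions one at a time. When requirements are complete, reply with "
    "'SATISFIED:' followed by the structured requirements."
)

reply = chat.send_message(SYSTEM_PROMPT).text
while not reply.startswith("SATISFIED:"):
    user_answer = input(f"{reply}\n> ")          # a Gradio chat turn in the app
    reply = chat.send_message(user_answer).text

requirements = reply.removeprefix("SATISFIED:").strip()
```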

### 2. 🛠️ **Generate Plan** (button)

* Clicking **Generate Plan** uses **LlamaIndex’s MCP integration**, which:

  * Discovers all tools exposed via MCP on the Hugging Face server (hf.co/mcp)
  * Prompts **Gemini-2.5-Pro** again to select the appropriate tools and construct the plan workflow and tool-call syntax.
* All logic for tool discovery, orchestration, and MCP communication is deployed as a **Modal app** (a sketch of the discovery step follows below).
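
A minimal sketch of the tool-discovery step, assuming the `llama-index-tools-mcp` package; the wiring is illustrative, not the deployed Modal app's exact code:

```python
# Sketch of MCP tool discovery with LlamaIndex (illustrative).
# Requires: pip install llama-index-tools-mcp
import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def discover_tools():
    # Connect to the Hugging Face MCP server and enumerate its tools.
    client = BasicMCPClient("https://hf.co/mcp")
    tool_spec = McpToolSpec(client=client)
    tools = await tool_spec.to_tool_list_async()
    for tool in tools:
        print(tool.metadata.name, "-", tool.metadata.description)
    return tools

tools = asyncio.run(discover_tools())
# The discovered tools are then handed to Gemini-2.5-Pro to assemble the plan.
```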

### 3. 🚀 **Generate Code** (button)

* When the user clicks **Generate Code**, the **Mistral Devstral** model (served via vLLM behind an OpenAI-compatible API) generates runnable code matching the plan and selected tools. Both the model and its serving stack are hosted on **Modal Labs** (a sketch of the client call follows below).
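
A minimal sketch of calling the Devstral endpoint, assuming the `openai` Python client; the base URL and model name are placeholders, not the actual Modal deployment:

```python
# Sketch of a call to the vLLM-served Devstral endpoint (illustrative; the
# base_url and model id below are placeholders, not the real deployment).
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-modal-app>.modal.run/v1",  # hypothetical Modal URL
    api_key="placeholder",
)

plan = "...plan text from the Generate Plan step..."

response = client.chat.completions.create(
    model="mistralai/Devstral-Small-2505",  # assumed model id on the server
    messages=[
        {"role": "system", "content": "Write runnable Python code for the plan."},
        {"role": "user", "content": plan},
    ],
    temperature=0.2,
)
generated_code = response.choices[0].message.content
```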

### 4. ▶️ **Execute Code** (button)

* The **Execute Code** button sends the generated script to a sandboxed environment on **Modal Labs**, where it runs in isolation. Execution results and any outputs are then returned to the user (a sketch using Modal’s Sandbox API follows below).
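
A minimal sketch of sandboxed execution with Modal’s Sandbox API; the app name and image are assumptions, not the Space's exact configuration:

```python
# Sketch of sandboxed execution via Modal's Sandbox API (illustrative).
import modal

generated_code = 'print("hello from the sandbox")'  # from the previous step

app = modal.App.lookup("ds-agent-sandbox", create_if_missing=True)  # hypothetical name
sandbox = modal.Sandbox.create(image=modal.Image.debian_slim(), app=app)

process = sandbox.exec("python", "-c", generated_code)
process.wait()                    # block until the script finishes
stdout = process.stdout.read()    # captured output shown to the user
stderr = process.stderr.read()

sandbox.terminate()               # always tear the sandbox down
print(stdout or stderr)
```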

The full pipeline runs user ↔ requirements gathering ↔ tool planning ↔ code generation ↔ secure execution, with each step backed by powerful LLMs (Gemini-2.5-Pro, Mistral Devstral), LlamaIndex + MCP, and Modal Labs deployment. SambaNova models with Cline were used as devtools / copilots during development.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623f2f5828672458f74879b3/DELDAtNnCJbS63b1-Fml8.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623f2f5828672458f74879b3/_xKyLcuJS42uBC2FqjeZP.png)

## License
This project is licensed under the MIT License – see the LICENSE file for details.

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference