---
title: Data Science Agent
emoji: 🐢
colorFrom: green
colorTo: red
sdk: gradio
app_file: app.py
license: mit
sdk_version: 5.33.0
short_description: Agent that solves your AI/ML/Data Science problems
pinned: true
tags:
  - agent-demo-track
  - Mistral
  - LlamaIndex
  - Sambanova
  - Modal
---

## Demo Video

🎥 Watch the [Demo Video here](https://drive.google.com/file/d/1FlvN_tV1BQ4OmFmGsWPSQt_H6Ok92dmy/view?usp=sharing)

For more details, watch the [Detailed Talk here](https://drive.google.com/file/d/1edHcyxhYKi6RV8MnCtTkbEf0UAhhhA3D/view?usp=sharing)

## Acknowledgements

Made with ❤️ by [Bhavish Pahwa](https://huggingface.co/bpHigh) & [Abhinav Bhatnagar](https://huggingface.co/Master-warrier)

## 🔧 How It Works

### 1. **Gather Requirements**

* The user converses with the chatbot, describing their data science / AI / ML problem.
* The user and **Gemini‑2.5‑Pro** then iterate: the model asks clarifying questions, the user answers, and the exchange continues until Gemini‑2.5‑Pro judges the requirements complete. Only then does it issue a "satisfied" response and release the structured requirements (see the sketch after this list).
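Below is a minimal, hypothetical sketch of this loop using the `google-generativeai` client; the model name, system prompt, and the `SATISFIED:` sentinel are illustrative assumptions, not the app's actual implementation.

```python
# Hypothetical requirements-gathering loop. The model name, prompt, and
# "SATISFIED:" sentinel are assumptions for illustration only.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel(
    "gemini-2.5-pro",
    system_instruction=(
        "You are a data-science requirements analyst. Ask one clarifying "
        "question at a time. When requirements are complete, reply starting "
        "with 'SATISFIED:' followed by a structured requirements summary."
    ),
)
chat = model.start_chat()

while True:
    reply = chat.send_message(input("You: ")).text
    print("Agent:", reply)
    if reply.startswith("SATISFIED:"):
        requirements = reply.removeprefix("SATISFIED:").strip()
        break  # the structured requirements now feed the planning step
```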

### 2. 🛠️ **Generate Plan** (button)

* Clicking **Generate Plan** uses **LlamaIndex’s MCP integration**, which:

  * Discovers all the tools exposed via MCP on the Hugging Face server at [hf.co/mcp](https://hf.co/mcp); a discovery sketch follows this list.
  * Prompts **Gemini‑2.5‑Pro** again to select the appropriate tools and construct the plan, including the workflow and the tool-call syntax.
* All of the logic for tool discovery, orchestration, and MCP communication is deployed as a **Modal app**.
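For illustration, a hedged sketch of the discovery step, assuming the `llama-index-tools-mcp` package; this is a sketch of the idea, not the deployed Modal app's code.

```python
# Sketch of MCP tool discovery with LlamaIndex (assumes llama-index-tools-mcp).
import asyncio
from llama_index.tools.mcp import BasicMCPClient, McpToolSpec

async def discover_tools():
    client = BasicMCPClient("https://hf.co/mcp")   # Hugging Face MCP server
    tool_spec = McpToolSpec(client=client)
    tools = await tool_spec.to_tool_list_async()   # LlamaIndex FunctionTool list
    for tool in tools:
        print(tool.metadata.name, "->", tool.metadata.description)
    return tools

tools = asyncio.run(discover_tools())
# The tool metadata is then handed to Gemini‑2.5‑Pro for plan construction.
```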

### 3. 🚀 **Generate Code** (button)

* When the user clicks **Generate Code**, the **Mistral Devstral** model (served via vLLM behind an OpenAI-compatible API) generates runnable code that matches the plan and the selected tools. Both the model and its serving stack are hosted on **Modal Labs** (a sketch of the call follows).
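A hedged sketch of such a call from the client side; the Modal base URL and model id below are placeholders, not the project's real values.

```python
# Hypothetical call to a vLLM OpenAI-compatible endpoint serving Devstral.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-workspace--devstral-serve.modal.run/v1",  # placeholder URL
    api_key="EMPTY",  # vLLM servers typically accept a dummy key
)

plan_text = "1. Load the data with pandas. 2. Train a baseline classifier."  # from step 2

response = client.chat.completions.create(
    model="mistralai/Devstral-Small-2505",  # assumed model id
    messages=[
        {"role": "system", "content": "Write runnable Python implementing the plan."},
        {"role": "user", "content": plan_text},
    ],
)
generated_code = response.choices[0].message.content
print(generated_code)
```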

### 4. ▶️ **Execute Code** (button)

* The **Execute Code** button sends the generated script to a sandboxed environment on **Modal Labs**, where it runs in isolation. Execution results and any outputs are then presented back to the user (sketched below).
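A minimal sketch of sandboxed execution using Modal Sandboxes; the app name, image, and dependencies are assumptions for illustration.

```python
# Hedged sketch: run untrusted generated code in a Modal Sandbox.
import modal

app = modal.App.lookup("ds-agent-sandbox", create_if_missing=True)  # hypothetical name
image = modal.Image.debian_slim().pip_install("pandas")             # assumed deps

generated_code = "print('hello from the sandbox')"  # stand-in for step 3's output

sandbox = modal.Sandbox.create(app=app, image=image)
process = sandbox.exec("python", "-c", generated_code)
process.wait()                  # block until the script finishes
print(process.stdout.read())    # surface stdout back to the user
print(process.stderr.read())    # and any errors
sandbox.terminate()
```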

The full pipeline runs: user conversation → requirements gathering → tool planning → code generation → secure execution, with each step backed by powerful LLMs (Gemini‑2.5‑Pro, Mistral Devstral), LlamaIndex + MCP, and Modal Labs deployment. Models served on SambaNova, paired with Cline, are used as devtools/copilots.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a8ef2a0b7f3a46176e8eb0/e7v5fWior4aQ4b4dBzBPB.png)

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62a8ef2a0b7f3a46176e8eb0/jamy6dcXG7wao6ugTpEBD.png)

## License

This project is licensed under the MIT License – see the LICENSE file for details.

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference