|
# CodeLlama Copilot (Web AI Coding Assistant) |
|
Conversational code assistant powered by Meta's CodeLlama-Instruct. |
|
|
|
## Features |
|
- Write, fix & explain code interactively |
|
- Multi-round chat (remembers context) |
|
- Runs entirely in your cloud workspace (private!) |
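The multi-round memory works by re-sending prior turns to the model on every request, folded into its instruction format. A minimal sketch of such a prompt builder — the `[INST]`/`<<SYS>>` template here is an assumption based on the Llama-2-style chat format that CodeLlama-Instruct uses, and `build_prompt` is an illustrative name, not part of any library:

```python
def build_prompt(history, user_msg, system=""):
    """Fold prior (user, assistant) turns plus the new user message into a
    single CodeLlama-Instruct prompt string (Llama-2-style [INST] tags)."""
    prompt = ""
    for i, (user, assistant) in enumerate(history):
        text = user
        if i == 0 and system:
            # The system prompt is conventionally embedded in the first turn.
            text = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user}"
        prompt += f"<s>[INST] {text} [/INST] {assistant} </s>"
    if not history and system:
        user_msg = f"<<SYS>>\n{system}\n<</SYS>>\n\n{user_msg}"
    # The open [INST] block at the end is what the model completes.
    prompt += f"<s>[INST] {user_msg} [/INST]"
    return prompt
```

Each Gradio chat callback would call this with the accumulated history before generating, which is how the "remembers context" behavior comes about.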
|
|
|
--- |
|
|
|
## How to deploy on Hugging Face Spaces (FOR FREE) |
|
Hugging Face provides **free hosting & free CPUs** (performance is much better with a Pro account + GPU, though).
|
|
|
### Step-by-step: |
|
1. **Sign up** or login at [huggingface.co](https://huggingface.co) |
|
2. Go to: https://huggingface.co/spaces |
|
3. Click **New Space** |
|
4. Name it, select **Gradio** as SDK |
|
5. Set it **Public** *(private needs a paid plan)* |
|
6. Once created, open the Space repo UI |
|
7. Upload **app.py** and **requirements.txt** exactly as above. |
|
8. Click **Commit** to save
|
9. Wait ~1–2 mins while the Space builds and launches. It will be served at:

`https://huggingface.co/spaces/<your-username>/<space-name>`
|
10. Use it live — share link with friends. |
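The exact files to upload are the ones shown earlier in this README. In case they aren't at hand, a typical `requirements.txt` for a Gradio + transformers Space looks something like this — the package list is an illustrative assumption, not a copy of the file above:

```text
gradio
transformers
torch
accelerate
```

Pinning versions (e.g. `gradio==4.x`) is optional but makes Space rebuilds reproducible.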
|
|
|
--- |
|
|
|
## Notes |
|
- **Free Spaces run on CPU, so large models can be slow or time out.**
|
- Upgrade to **Hugging Face Pro** (~$9/mo)

  → Choose an **A10G GPU** in the Space hardware settings for full performance.
|
- Absolutely **no server hassles**. Just upload files! |
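Since the same `app.py` gets deployed to both CPU and GPU hardware, it helps if it detects the hardware at startup rather than hard-coding a device. A minimal sketch — `pick_device` is an illustrative helper name, not a library function:

```python
import importlib.util

def pick_device():
    """Return 'cuda' when a GPU-enabled torch install sees a GPU,
    else 'cpu', so one app.py runs on both free and Pro hardware."""
    if importlib.util.find_spec("torch") is not None:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    return "cpu"
```

The result can be passed straight to `model.to(pick_device())` (or a `device_map`) when loading the model.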
|
|
|
--- |
|
|
|
## (Optional) Run locally: |
|
```bash
pip install -r requirements.txt
python app.py
```