Transcendental-Programmer committed
Commit · a43d90e
1 Parent(s): d773e1b
fix: readme config fixes
README.md CHANGED
@@ -1,37 +1,10 @@
-
-
-
-
-
-
-
-
-
-
-```bash
-bash start.sh
-```
-
-- The `requirements.txt` includes all necessary dependencies.
-- Make sure to set any required environment variables in the Hugging Face Space settings.
-
-## Using the Fine-tuned BART Large Model from Hugging Face Hub
-
-You can load the fine-tuned BART large model directly from Hugging Face Hub in your backend code as follows:
-
-```python
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
-model_name = "ArchCoder/fine-tuned-bart-large"
-tokenizer = AutoTokenizer.from_pretrained(model_name)
-model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
-```
-
-Replace `"ArchCoder/fine-tuned-bart-large"` with your actual model repository name if different.
-
-Make sure your backend code (e.g., in `llm_agent.py` or wherever the model is loaded) uses this method to load the model from the Hub instead of local files.
-
-## Notes
-
-- Static files are served from the `static` directory.
-- Adjust API URLs in the frontend to point to the deployed backend URL.
+---
+title: LLM-Integrated-Excel-Plotter-App
+emoji: 📊
+colorFrom: blue
+colorTo: indigo
+sdk: gradio
+sdk_version: "4.16.0"
+app_file: app.py
+pinned: false
+---
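The added front matter configures the repository as a Gradio Space (`sdk: gradio`, `sdk_version: "4.16.0"`) whose entry point is `app.py`. For context, here is a minimal sketch of an `app.py` that would satisfy this configuration, assuming it serves the fine-tuned BART model referenced in the removed README section; the actual app's Excel-plotting interface is not part of this diff, so the `generate` function and UI below are illustrative only.

```python
# Minimal sketch of an app.py entry point matching the Space config above.
# The model name comes from the removed README section; the interface shown
# here is a placeholder, not the app's real Excel-plotting UI.
import gradio as gr
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "ArchCoder/fine-tuned-bart-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def generate(prompt: str) -> str:
    # Tokenize the prompt, generate with the seq2seq model, and decode the output.
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

demo = gr.Interface(
    fn=generate,
    inputs="text",
    outputs="text",
    title="LLM-Integrated-Excel-Plotter-App",
)

if __name__ == "__main__":
    demo.launch()
```

With a file like this in place, the Space builds from `requirements.txt` and launches the Gradio app automatically; no separate `start.sh` is needed, which is consistent with the old startup instructions being removed in this commit.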