Spaces: Running on Zero
Update requirements.txt
requirements.txt CHANGED (+10 -5)
@@ -1,5 +1,10 @@
-
-
-
-
-
+--extra-index-url https://download.pytorch.org/whl/cu124  # grab a CUDA Torch wheel
+torch==2.5.1+cu124  # keep before flash-attn
+
+# FlashAttention pre-built wheel that matches: Torch 2.5 • CUDA 12 • cp310
+https://github.com/Dao-AILab/flash-attention/releases/download/v2.8.0.post2/flash_attn-2.8.0.post2+cu12torch2.5cxx11abiFALSE-cp310-cp310-linux_x86_64.whl  # <- 240 MB wheel
+
+transformers>=4.52.0
+accelerate>=0.30.2  # bug-fix for device_map edge case
+gradio>=4.44.0  # Zero-GPU queue fix PR #5698
+sentencepiece
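The FlashAttention wheel filename above encodes the compatibility constraints the diff's comment spells out (Torch 2.5, CUDA 12, CPython 3.10, cxx11 ABI off). A minimal sketch of pulling those tags back out of the filename; the helper name and regex are hypothetical, not part of flash-attn or pip:

```python
import re

def parse_flash_attn_wheel(name: str) -> dict:
    """Hypothetical helper: split a flash-attn wheel filename into its
    compatibility tags (version, CUDA, Torch, ABI, Python, platform)."""
    m = re.match(
        r"flash_attn-(?P<version>[\w.]+)"
        r"\+cu(?P<cuda>\d+)torch(?P<torch>[\d.]+)cxx11abi(?P<abi>TRUE|FALSE)"
        r"-(?P<python>cp\d+)-cp\d+-(?P<platform>\w+)\.whl",
        name,
    )
    if m is None:
        raise ValueError(f"unrecognized wheel name: {name}")
    return m.groupdict()

# The wheel pinned in requirements.txt above:
wheel = ("flash_attn-2.8.0.post2+cu12torch2.5cxx11abiFALSE"
         "-cp310-cp310-linux_x86_64.whl")
tags = parse_flash_attn_wheel(wheel)
print(tags["cuda"], tags["torch"], tags["python"])  # 12 2.5 cp310
```

Each of these tags must match the environment pip resolves into, which is why `torch==2.5.1+cu124` is pinned before the flash-attn URL: installing a different Torch or Python afterwards would leave the pre-built wheel ABI-incompatible.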