Space: olegshulyakov/gguf-my-repo (duplicated from ggml-org/gguf-my-repo) · 2 likes · Sleeping
15 contributors · History: 89 commits
Latest commit: fca7ea4 "Add F16 and BF16 quantization" by Oleg Shulyakov, 24 days ago
| Name | Size | Last commit | Updated |
|------|------|-------------|---------|
| downloads/ | - | Download fixes (#127) | 10 months ago |
| .dockerignore | 134 Bytes | Use llama.cpp image | 24 days ago |
| .gitattributes | 1.63 kB | imatrix support (#80) | about 1 year ago |
| .gitignore | 3.19 kB | Better isolation + various improvements (#133) | 10 months ago |
| Dockerfile | 644 Bytes | Fix Gradio error | 24 days ago |
| README.md | 458 Bytes | Format | 24 days ago |
| app.py | 16.7 kB | Add F16 and BF16 quantization | 24 days ago |
| docker-compose.yml | 274 Bytes | Rename CUDA parameter | 24 days ago |
| error.png | 740 kB | Upload error.png (#11) | over 1 year ago |
| llama.png | 1.8 MB (LFS) | Upload llama.png (#4) | over 1 year ago |
| requirements.txt | 82 Bytes | Fix Gradio error | 24 days ago |
| start.sh | 121 Bytes | Use llama.cpp image | 24 days ago |
| train_data.txt | 201 kB | Rename train data fallback file | 24 days ago |