spooner (spooner2)
6 followers · 95 following
[email protected]
AI & ML interests
None yet
Recent Activity
Reacted to mitkox's post with 🔥 (6 days ago):
"I got 370 tokens/sec of Qwen3-30B-A3B 2507 on my desktop Z8 GPU workstation. My target is 400 t/s, and the last 10% always tastes like victory!"
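For context, a tokens/sec figure like this can be reproduced with a short timing harness. The sketch below is a minimal, hypothetical example using llama-cpp-python against a GGUF build of the model; the file name, prompt, and token budget are assumptions, and timing the whole call lumps prompt processing in with decode, so it slightly understates pure decode speed.

```python
# Minimal sketch: measure generation throughput (tokens/sec) for a local
# GGUF model with llama-cpp-python. The model file name is hypothetical.
import time
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-30B-A3B-Q4_K_M.gguf",  # hypothetical local file
    n_gpu_layers=-1,  # offload all layers to the GPU
    verbose=False,
)

start = time.perf_counter()
out = llm("Explain mixture-of-experts routing in two sentences.", max_tokens=256)
elapsed = time.perf_counter() - start

# llama-cpp-python returns an OpenAI-style completion dict with usage counts.
generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.2f}s = {generated / elapsed:.1f} tok/s")
```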
Reacted to eaddario's post (about 1 month ago):
"Layer-wise and pruned versions of Qwen/Qwen3-30B-A3B:
* Tensor-wise: https://huggingface.co/eaddario/Qwen3-30B-A3B-GGUF
* Pruned: https://huggingface.co/eaddario/Qwen3-30B-A3B-pruned-GGUF
Even though the perplexity scores of the pruned version are three times higher, the ARC, HellaSwag, MMLU, TruthfulQA and WinoGrande scores are holding up remarkably well, considering two layers (5 and 39) were removed. This seems to support Xin Men et al.'s conclusions in ShortGPT: Layers in Large Language Models are More Redundant Than You Expect (2403.03853). The results summary is in the model card and the test results are in the ./scores directory. Questions and feedback are always welcome."
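The post describes depth pruning in the spirit of ShortGPT: deleting whole decoder layers and checking how benchmark scores degrade. Below is a minimal, hypothetical sketch of that operation with Hugging Face transformers; the layer indices come from the post, but the dtype, output path, and overall approach are assumptions, not eaddario's actual pipeline.

```python
# Sketch: drop decoder layers 5 and 39 from the model, then save the result
# for later conversion to GGUF and evaluation. Illustration only.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-30B-A3B", torch_dtype=torch.bfloat16
)

drop = {5, 39}  # layer indices the post reports pruning
model.model.layers = torch.nn.ModuleList(
    layer for i, layer in enumerate(model.model.layers) if i not in drop
)
model.config.num_hidden_layers = len(model.model.layers)

# Keep per-layer KV-cache indices contiguous after the deletion.
for i, layer in enumerate(model.model.layers):
    layer.self_attn.layer_idx = i

# Re-quantize to GGUF afterwards (e.g. with llama.cpp's convert scripts)
# before running perplexity and benchmark evaluations.
model.save_pretrained("Qwen3-30B-A3B-pruned")  # hypothetical output dir
```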
Liked a model (2 months ago): ubergarm/DeepSeek-R1-0528-GGUF
Organizations
None yet
Models: 0 (none public yet)
Datasets: 0 (none public yet)