DeepSeek-R1-Zero-AWQ 671B

This is a 4-bit AWQ quantization of the DeepSeek-R1-Zero 671B model. It is suitable for multi-GPU nodes such as 8xA100, 8xH20, or 8xH100, and can be served with vLLM or SGLang.

You can run this model on 8x H100 80GB using vLLM with:

```bash
vllm serve adamo1139/DeepSeek-R1-Zero-AWQ --tensor-parallel-size 8
```
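The checkpoint should also work with SGLang, as noted above. A minimal launch sketch follows; the `--tp 8` flag mirrors the tensor-parallel setting of the vLLM command, but verify the exact options against your installed SGLang version:

```bash
# Sketch: serving the same AWQ checkpoint with SGLang instead of vLLM.
# --tp 8 shards the model across the node's 8 GPUs (tensor parallelism).
python3 -m sglang.launch_server --model-path adamo1139/DeepSeek-R1-Zero-AWQ --tp 8
```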
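Once the server is up, it exposes an OpenAI-compatible HTTP API (vLLM defaults to http://localhost:8000/v1). A minimal request sketch, assuming the default port and an illustrative prompt and `max_tokens` value:

```bash
# Sketch: one chat completion against the OpenAI-compatible endpoint.
# The "model" field must match the repo name the server was launched with.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "adamo1139/DeepSeek-R1-Zero-AWQ",
        "messages": [{"role": "user", "content": "What is 17 * 24? Think step by step."}],
        "max_tokens": 512
      }'
```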

Made by DeepSeek with ❤️

