---
library_name: transformers
pipeline_tag: text-generation
tags:
- ONNX
- ONNXRuntime
license: mit
---
## DeepSeek-R1-Distill-Qwen ONNX models
This repository hosts the optimized versions of [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B/) and [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/) to accelerate inference with ONNX Runtime.
Optimized models are published here in [ONNX](https://onnx.ai) format to run with [ONNX Runtime](https://onnxruntime.ai/) on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets.
To get started with the model easily, you can use the ONNX Runtime Generate() API. See the instructions [here](https://github.com/microsoft/onnxruntime/blob/gh-pages/docs/genai/tutorials/deepseek-python.md).
For CPU:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download onnxruntime/DeepSeek-R1-Distill-ONNX --include deepseek-r1-distill-qwen-1.5B/cpu_and_mobile/* --local-dir .
# Install the CPU package of ONNX Runtime GenAI
pip install onnxruntime-genai
curl -O https://raw.githubusercontent.com/microsoft/onnxruntime-genai/refs/heads/main/examples/python/model-chat.py
# Please adjust the model directory (-m) accordingly
python model-chat.py -m /path/to/cpu-int4-rtn-block-32-acc-level-4/ -e cpu --chat_template "<|begin▁of▁sentence|><|User|>{input}<|Assistant|>"
```
For CUDA:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download onnxruntime/DeepSeek-R1-Distill-ONNX --include deepseek-r1-distill-qwen-1.5B/gpu/* --local-dir .
# Install the CUDA package of ONNX Runtime GenAI
pip install onnxruntime-genai-cuda
curl -O https://raw.githubusercontent.com/microsoft/onnxruntime-genai/refs/heads/main/examples/python/model-chat.py
# Please adjust the model directory (-m) accordingly
python model-chat.py -m /path/to/gpu-int4-rtn-block-32/ -e cuda --chat_template "<|begin▁of▁sentence|><|User|>{input}<|Assistant|>"
```
For DirectML:
```bash
# Download the model directly using the Hugging Face CLI
huggingface-cli download onnxruntime/DeepSeek-R1-Distill-ONNX --include deepseek-r1-distill-qwen-1.5B/gpu/* --local-dir .
# Install the DirectML package of ONNX Runtime GenAI
pip install onnxruntime-genai-directml
curl -O https://raw.githubusercontent.com/microsoft/onnxruntime-genai/refs/heads/main/examples/python/model-chat.py
# Please adjust the model directory (-m) accordingly
python model-chat.py -m /path/to/gpu-int4-rtn-block-32/ -e dml --chat_template "<|begin▁of▁sentence|><|User|>{input}<|Assistant|>"
```
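If you prefer to call the ONNX Runtime Generate() API directly instead of the `model-chat.py` sample, the sketch below shows a minimal Python flow. It assumes the 0.6-series `onnxruntime-genai` Python API (method names such as `append_tokens` have changed between releases) and a locally downloaded model folder; adjust the path and prompt as needed.
```python
import onnxruntime_genai as og

# Point this at one of the downloaded model folders (e.g. the CPU int4 variant)
model = og.Model("/path/to/cpu-int4-rtn-block-32-acc-level-4")
tokenizer = og.Tokenizer(model)
stream = tokenizer.create_stream()

# Apply the DeepSeek chat template manually, as model-chat.py does via --chat_template
prompt = "<|begin▁of▁sentence|><|User|>Why is the sky blue?<|Assistant|>"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=2048)

generator = og.Generator(model, params)
generator.append_tokens(input_tokens)

# Stream the completion token by token
while not generator.is_done():
    generator.generate_next_token()
    new_token = generator.get_next_tokens()[0]
    print(stream.decode(new_token), end="", flush=True)
print()
```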
## ONNX Models
Here are some of the optimized configurations we have added; a sketch of how such int4 variants can be generated with the model builder follows the list:
1. ONNX model for CPU and mobile using int4 quantization via RTN.
2. ONNX model for GPU using int4 quantization via RTN.
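These int4 variants can typically be produced with the model builder that ships in the `onnxruntime-genai` package. The Python sketch below invokes the builder module via `subprocess` for the 1.5B base model targeting CPU; the output folder name is illustrative and the builder's flags may change between releases, so treat this as a rough recipe rather than the exact one used for the files in this repository.
```python
import subprocess
import sys

# Run the onnxruntime-genai model builder to produce an int4 (RTN) ONNX model.
# Flags: -m base model on Hugging Face, -o output folder, -p precision,
# -e target execution provider, -c local cache directory for the download.
subprocess.run(
    [
        sys.executable, "-m", "onnxruntime_genai.models.builder",
        "-m", "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
        "-o", "./deepseek-r1-distill-qwen-1.5b-int4-cpu",  # illustrative output path
        "-p", "int4",
        "-e", "cpu",
        "-c", "./hf_cache",
    ],
    check=True,
)
```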
## Performance
ONNX enables you to run your models on-device across CPU, GPU, and NPU, on machines with silicon from any vendor (Qualcomm, AMD, Intel, NVIDIA, etc.).
See the table below for key benchmarks on the Windows GPU and CPU devices the ONNX models were tested on; a sketch of how throughput can be measured follows the table.
| **Model** | **Format** | **Precision** | **Execution Provider** | **Device** | **Token Generation Throughput (tokens/s)** | **Speed-up vs. base model** |
| :------------: | :------------: | :------------: | :------------: | :------------: | :------------: | :------------: |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-1.5B | ONNX | fp16 | CUDA | RTX 4090 | 197.195 | 4X |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-1.5B | ONNX | int4 | CUDA | RTX 4090 | 313.32 | 6.3X |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-1.5B | ONNX | int4 | CPU | Intel i9 | 11.749 | 1.4X |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-7B | ONNX | fp16 | CUDA | RTX 4090 | 57.316 | 1.3X |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-7B | ONNX | int4 | CUDA | RTX 4090 | 161.00 | 3.7X |
| deepseek-ai_DeepSeek-R1-Distill-Qwen-7B | ONNX | int4 | CPU | Intel i9 | 3.184 | 20X |
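Here, token generation throughput is the number of generated tokens per second, and the speed-up is relative to the base model on the same device. The Python sketch below shows one rough way to measure throughput yourself with the Generate() API (same 0.6-series `onnxruntime-genai` assumptions as above); expect numbers to vary with hardware, drivers, and prompt/generation lengths.
```python
import time
import onnxruntime_genai as og

model = og.Model("/path/to/cpu-int4-rtn-block-32-acc-level-4")
tokenizer = og.Tokenizer(model)

prompt = "<|begin▁of▁sentence|><|User|>Explain ONNX in one paragraph.<|Assistant|>"
input_tokens = tokenizer.encode(prompt)

params = og.GeneratorParams(model)
params.set_search_options(max_length=len(input_tokens) + 256)

generator = og.Generator(model, params)
generator.append_tokens(input_tokens)

# Time the decode loop; depending on the release, prompt processing may also
# land here, so this is only a rough steady-state estimate.
generated = 0
start = time.perf_counter()
while not generator.is_done():
    generator.generate_next_token()
    generated += 1
elapsed = time.perf_counter() - start

print(f"{generated} tokens in {elapsed:.2f} s -> {generated / elapsed:.2f} tokens/s")
```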
CPU build specs:
- onnxruntime-genai==0.6.0-dev
- transformers==4.46.2
- onnxruntime==1.20.1
CUDA build specs:
- onnxruntime-genai-cuda==0.6.0-dev
- transformers==4.46.2
- onnxruntime-gpu==1.20.1
## Model Description
- **Developed by:** ONNX Runtime
- **Model type:** ONNX
- **Language(s) (NLP):** Python, C, C++
- **License:** MIT
- **Model Description:** This is a conversion of the DeepSeek-R1-Distill-Qwen models for ONNX Runtime inference.
- **Disclaimer:** This model is only an optimization of the base model; any risk associated with the model is the responsibility of the user. Please verify and test for your scenarios. There may be a slight difference in output from the base model with the optimizations applied.
## Base Model Information
See HF links [DeepSeek-R1-Distill-Qwen-1.5B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B/) and [DeepSeek-R1-Distill-Qwen-7B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B/) for details.