Instructions for using amd/Kimi-K2.5-W4A8 with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use amd/Kimi-K2.5-W4A8 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="amd/Kimi-K2.5-W4A8", trust_remote_code=True)
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"},
        ],
    },
]
pipe(text=messages)
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("amd/Kimi-K2.5-W4A8", trust_remote_code=True, dtype="auto")
```
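Note that this card lists text-only input and output, so a plain chat call may be the more typical entry point. A minimal sketch, assuming the checkpoint works with the text-generation pipeline (the prompt is illustrative):

```python
# Hedged sketch: plain text chat through the text-generation pipeline.
from transformers import pipeline

pipe = pipeline("text-generation", model="amd/Kimi-K2.5-W4A8", trust_remote_code=True, dtype="auto")
messages = [{"role": "user", "content": "Summarize mixture-of-experts models in one sentence."}]
print(pipe(messages, max_new_tokens=64)[0]["generated_text"])
```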
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use amd/Kimi-K2.5-W4A8 with vLLM:
Install from pip and serve the model:
```bash
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "amd/Kimi-K2.5-W4A8"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/Kimi-K2.5-W4A8",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker
```bash
docker model run hf.co/amd/Kimi-K2.5-W4A8
```
- SGLang
How to use amd/Kimi-K2.5-W4A8 with SGLang:
Install from pip and serve the model:
```bash
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "amd/Kimi-K2.5-W4A8" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "amd/Kimi-K2.5-W4A8",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "Describe this image in one sentence."
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
            }
          }
        ]
      }
    ]
  }'
```

Use Docker images
```bash
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "amd/Kimi-K2.5-W4A8" \
    --host 0.0.0.0 \
    --port 30000

# Call the server with the same curl command shown above (OpenAI-compatible API).
```

- Docker Model Runner
How to use amd/Kimi-K2.5-W4A8 with Docker Model Runner:
```bash
docker model run hf.co/amd/Kimi-K2.5-W4A8
```
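The curl examples above can also be issued from Python. A minimal sketch using the openai client package (an assumption, not part of this card) against the vLLM server on port 8000; for the SGLang server, change the port to 30000:

```python
# Query the OpenAI-compatible endpoint started above.
# Assumes `pip install openai`; use port 30000 for SGLang.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="amd/Kimi-K2.5-W4A8",
    messages=[{"role": "user", "content": "Describe the Statue of Liberty in one sentence."}],
)
print(response.choices[0].message.content)
```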
Model Overview
- Model Architecture: Kimi-K2.5
- Input: Text
- Output: Text
- Supported Hardware Microarchitecture: AMD MI300/MI325/MI350/MI355
- ROCm: 7.1.0
- Operating System(s): Linux
- Inference Engine: vLLM
- Model Optimizer: AMD-Quark
- Weight quantization: MoE-only, INT4 per-channel & FP8 E4M3, static
- Activation quantization: MoE-only, FP8 E4M3, dynamic
This model was built from the Kimi-K2.5 model by applying AMD-Quark for INT4-FP8 quantization.
Model Quantization
The model was quantized from moonshotai/Kimi-K2.5 using AMD-Quark. The MoE weights are quantized to INT4 (per-channel, static) and the MoE activations to FP8 E4M3 (dynamic).
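For illustration only, a plain-PyTorch sketch of the W4A8 numerics described above; this is NOT AMD-Quark's actual API, and the function names and shapes are hypothetical:

```python
# Illustrative sketch: symmetric per-channel INT4 weights with static scales,
# per-tensor FP8 E4M3 activations with dynamic (runtime) scales.
import torch

def quantize_weight_int4(w: torch.Tensor):
    # One static scale per output channel; symmetric INT4 range is [-7, 7].
    scale = w.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(w / scale), -7, 7).to(torch.int8)  # packed to 4 bits in practice
    return q, scale

def quantize_activation_fp8(x: torch.Tensor):
    # Dynamic quantization: the scale is recomputed from each incoming tensor.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for E4M3
    scale = x.abs().amax().clamp(min=1e-8) / fp8_max
    return (x / scale).to(torch.float8_e4m3fn), scale

w = torch.randn(16, 64)   # stand-in for one MoE expert weight matrix
q_w, w_scale = quantize_weight_int4(w)
x = torch.randn(4, 64)    # stand-in for an activation batch
q_x, x_scale = quantize_activation_fp8(x)
print((q_w.float() * w_scale - w).abs().max())  # worst-case dequantization error
```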
Deployment
Use with vLLM
This model can be deployed efficiently using the vLLM backend.
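Beyond the OpenAI-compatible server, the model can also be loaded through vLLM's offline Python API. A minimal sketch; tensor_parallel_size=8 mirrors the serving command under Reproduction, so adjust it for your hardware:

```python
# Minimal offline-inference sketch using vLLM's Python API.
from vllm import LLM, SamplingParams

llm = LLM(model="amd/Kimi-K2.5-W4A8", tensor_parallel_size=8, trust_remote_code=True)
outputs = llm.generate(["What is 2 + 2?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```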
Evaluation
The model was evaluated on the GSM8K benchmark.
Accuracy
| Benchmark | Kimi-K2.5 | Kimi-K2.5-W4A8 (this model) | Recovery |
|---|---|---|---|
| GSM8K (flexible-extract) | 94.09 | 93.40 | 99.27% |
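Recovery is the quantized model's score as a fraction of the baseline: 93.40 / 94.09 ≈ 99.27%.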
Reproduction
The GSM8K results were obtained with the lm-evaluation-harness framework, based on the Docker image vllm/vllm-openai-rocm:v0.14.0.
First install vLLM (commit ecb4f822091a64b5084b3a4aff326906487a363f) and lm-eval (version 0.4.10) inside the container:
```bash
git clone https://github.com/vllm-project/vllm.git
cd vllm
git checkout ecb4f822091a64b5084b3a4aff326906487a363f
python3 setup.py develop
pip install lm-eval==0.4.10
```
Launching the server

```bash
VLLM_ROCM_USE_AITER_MLA=0 VLLM_ROCM_USE_AITER=1 VLLM_ROCM_USE_AITER_FUSION_SHARED_EXPERTS=0 VLLM_ROCM_USE_AITER_FP4BMM=0 vllm serve amd/Kimi-K2.5-W4A8 \
  --tensor-parallel-size 8 \
  --mm-encoder-tp-mode data \
  --tool-call-parser kimi_k2 \
  --reasoning-parser kimi_k2 \
  --trust-remote-code \
  --enforce-eager
```
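The VLLM_ROCM_USE_AITER* environment variables select which AMD AITER kernel paths vLLM uses on ROCm; the combination above is the one used for this evaluation.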
Evaluating the model in a new terminal

```bash
lm_eval \
  --model local-completions \
  --model_args "model=amd/Kimi-K2.5-W4A8,base_url=http://0.0.0.0:8000/v1/completions,tokenized_requests=False,tokenizer_backend=None,num_concurrent=32" \
  --tasks gsm8k \
  --num_fewshot 5 \
  --batch_size 1
```
License
Modifications Copyright (c) 2025 Advanced Micro Devices, Inc. All rights reserved.