Malaysian Qwen 2.5 72B Instruct Dynamic FP8

This is an FP8 dynamic quantization (A8W8: 8-bit weights, 8-bit activations) of https://huggingface.co/mesolitica/Malaysian-Qwen2.5-72B-Instruct
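
For reference, FP8 dynamic (A8W8) checkpoints of this kind are typically produced with llmcompressor's `QuantizationModifier` and the `FP8_DYNAMIC` scheme: weights are stored in FP8 and activations are quantized to FP8 per token at runtime, so no calibration set is required. The snippet below is only a minimal sketch of that flow under those assumptions; the exact recipe, import paths (which vary across llmcompressor versions), and save options used for this checkpoint are not documented here.

```python
# Minimal sketch (assumed recipe, not the exact one used for this checkpoint).
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "mesolitica/Malaysian-Qwen2.5-72B-Instruct"

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# FP8_DYNAMIC: FP8 weights, dynamic per-token FP8 activations -> no calibration data needed.
recipe = QuantizationModifier(targets="Linear", scheme="FP8_DYNAMIC", ignore=["lm_head"])
oneshot(model=model, recipe=recipe)

SAVE_DIR = "Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```

The resulting checkpoint can be loaded directly by vLLM, which runs the FP8 linear layers on Hopper GPUs such as the H100.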

Benchmark

MalayMMLU

Evaluated with vLLM using 0-shot exact first-token match (see the sketch after the results below):

Model  : Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic
Metric : full
Shot   : 0

Category         Accuracy (%)
STEM             79.82
Language         78.32
Social science   74.98
Others           74.24
Humanities       79.57
Average          77.04
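
The MalayMMLU harness itself is not reproduced here; the sketch below only illustrates what "0-shot exact first-token match" with vLLM means: decode a single greedy token per question and count it as correct when it matches the gold option letter. The prompt template, the question/answer layout, and the `tensor_parallel_size` setting are assumptions for illustration, not the actual evaluation script.

```python
# Illustrative sketch of 0-shot exact first-token match scoring with vLLM.
from vllm import LLM, SamplingParams

llm = LLM(
    model="mesolitica/Malaysian-Qwen2.5-72B-Instruct-FP8-Dynamic",
    tensor_parallel_size=8,  # e.g. one 8x H100 node
)
greedy_one_token = SamplingParams(temperature=0.0, max_tokens=1)

def first_token_accuracy(questions):
    """questions: list of {'prompt': str, 'answer': option letter such as 'A'} (assumed layout)."""
    outputs = llm.generate([q["prompt"] for q in questions], greedy_one_token)
    correct = sum(
        out.outputs[0].text.strip().upper().startswith(q["answer"])
        for q, out in zip(questions, outputs)
    )
    return 100.0 * correct / len(questions)
```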

Acknowledgement

Special thanks to https://www.sns.com.my and Nvidia for the 8x H100 node!
