Malaysian Qwen 2.5 72B Instruct

Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-72B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.

Improvement

  1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
  2. Able to code when prompted in Mandarin, Tamil, Jawi, Manglish, and the Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu dialects.
  3. Handles multi-turn conversations in Malaysian contexts, such as Malaysian legislation, politics, religions and languages.

Training session

Finetuned on mesolitica/Malaysian-SFT to make the model understand Malaysian context.

How we train

  1. LoRA on ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"].
  2. Rank 128 with alpha 256, i.e. a LoRA scaling factor (alpha/rank) of 2.0.
  3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination, and with position ids reset per document.
  4. Chunked Cut Cross-Entropy (CCE) loss for LoRA.
  5. WandB logs at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-72b-malaysian-8k?nw=nwuserhuseinzol05
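As a minimal sketch of the LoRA setup above (function and variable names here are illustrative, not taken from the training repo), a LoRA linear layer computes W x plus a low-rank update B A x scaled by alpha/rank, so rank 128 with alpha 256 gives a scaling factor of 2.0:

```python
import numpy as np

def lora_linear(x, W, A, B, rank=128, alpha=256):
    """Frozen base linear layer plus a low-rank LoRA update.

    The LoRA delta (B @ A) is scaled by alpha / rank, so
    rank=128 with alpha=256 yields a scaling factor of 2.0.
    """
    scaling = alpha / rank          # 256 / 128 = 2.0
    return x @ W.T + (x @ A.T) @ B.T * scaling

# Toy dimensions: hidden size 16, rank kept small for the demo.
rng = np.random.default_rng(0)
d, r = 16, 4
x = rng.normal(size=(2, d))
W = rng.normal(size=(d, d))
A = rng.normal(size=(r, d))     # down-projection, shape (rank, d_in)
B = np.zeros((d, r))            # up-projection initialized at zero, so
y = lora_linear(x, W, A, B, rank=r, alpha=2 * r)
# with B = 0 the output equals the frozen base layer's output.
```

Initializing B at zero is the standard LoRA choice: training starts exactly at the pretrained model and the adapter learns a delta on top.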

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
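The multipacking in step 3 can be sketched as follows (a minimal illustration, not the actual training code): documents packed into one sequence each restart their position ids at 0, and the causal mask is additionally blocked at document boundaries so tokens never attend across documents:

```python
import numpy as np

def pack_metadata(doc_lens):
    """Position ids and an SDPA-style boolean mask for a packed sequence.

    Attention is causal *within* each document but blocked across
    documents, preventing cross-document contamination.
    """
    position_ids = np.concatenate([np.arange(n) for n in doc_lens])
    doc_ids = np.concatenate([np.full(n, i) for i, n in enumerate(doc_lens)])
    total = sum(doc_lens)
    causal = np.tril(np.ones((total, total), dtype=bool))
    same_doc = doc_ids[:, None] == doc_ids[None, :]
    return position_ids, causal & same_doc

pos, mask = pack_metadata([3, 2])
# pos -> [0, 1, 2, 0, 1]: position ids restart at each document boundary.
# mask[3, 0] is False: the first token of doc 2 cannot see doc 1.
```

A mask like this can be passed as `attn_mask` to PyTorch's `scaled_dot_product_attention`, which is presumably what "proper SDPA causal masking" refers to here.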

Benchmark

MalayMMLU

Probability of next tokens

Based on the official 0-shot MalayMMLU first-token accuracy,

Model: Malaysian-Qwen2.5-72B-Instruct (0-shot, first-token metric, answers matched by letter)

Category         Accuracy (%)
STEM             81.620958
Language         80.820611
Social science   77.536860
Others           76.900935
Humanities       82.730375
Average          79.634907

Question counts: Social science 6918, Language 6288, Humanities 4395, Others 4169, STEM 2443

While the original model,

Model: Qwen2.5-72B-Instruct (0-shot, first-token metric, answers matched by letter)

Category         Accuracy (%)
STEM             80.884159
Language         79.103053
Social science   75.802255
Others           75.053970
Humanities       79.977247
Average          77.801181

Question counts: Social science 6918, Language 6288, Humanities 4395, Others 4169, STEM 2443
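The first-token probability metric can be sketched like this (illustrative only; the token ids and function names are made up, not MalayMMLU's actual harness): take the model's next-token logits after the prompt and predict the answer letter whose first token has the highest logit:

```python
import numpy as np

# Hypothetical token ids for the answer letters "A".."D" in some vocab.
LETTER_TOKEN_IDS = {"A": 32, "B": 33, "C": 34, "D": 35}

def first_token_accuracy(logits_batch, gold_letters):
    """Accuracy when the prediction is the option letter whose token
    has the highest next-token logit, restricted to the letter ids."""
    letters = list(LETTER_TOKEN_IDS)
    ids = np.array([LETTER_TOKEN_IDS[l] for l in letters])
    correct = 0
    for logits, gold in zip(logits_batch, gold_letters):
        pred = letters[int(np.argmax(logits[ids]))]
        correct += pred == gold
    return correct / len(gold_letters)

# Two toy questions over a vocab of 40 tokens.
logits = np.full((2, 40), -10.0)
logits[0, 33] = 5.0   # model favours "B"
logits[1, 35] = 3.0   # model favours "D"
acc = first_token_accuracy(logits, ["B", "A"])  # 1 of 2 correct -> 0.5
```

Restricting the argmax to the option-letter tokens means the metric never penalizes the model for preferring a non-answer token overall, only for ranking the wrong letter highest.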

First token match using vLLM

Based on the 0-shot exact first-token match using vLLM Guided Decoding,

Model: Malaysian-Qwen2.5-72B-Instruct (0-shot, full first-token match)

Category         Accuracy (%)
STEM             80.229226
Language         78.101145
Social science   75.252963
Others           74.358359
Humanities       80.477816
Average          77.289060

While the original model,

Model: Qwen2.5-72B-Instruct (0-shot, full first-token match)

Category         Accuracy (%)
STEM             81.129758
Language         78.975827
Social science   75.397514
Others           75.077956
Humanities       79.954494
Average          77.677281
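The exact-match scoring can be sketched as a post-processing step (illustrative; this is not the vLLM harness itself): guided decoding constrains generation to one of the option letters, and accuracy is then a per-category exact match between the generated letter and the gold letter:

```python
from collections import defaultdict

def per_category_accuracy(rows):
    """rows: (category, predicted_letter, gold_letter) triples, where
    predicted_letter comes from a generation constrained (e.g. via
    guided decoding) to the valid option letters."""
    hits, totals = defaultdict(int), defaultdict(int)
    for category, pred, gold in rows:
        totals[category] += 1
        hits[category] += pred == gold
    return {c: 100.0 * hits[c] / totals[c] for c in totals}

rows = [
    ("STEM", "A", "A"),
    ("STEM", "B", "C"),
    ("Language", "D", "D"),
]
acc = per_category_accuracy(rows)
# acc -> {"STEM": 50.0, "Language": 100.0}
```

Because guided decoding guarantees the output is one of the option letters, every generation is scoreable; no parsing heuristics are needed to extract the answer.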

Acknowledgement

Special thanks to https://www.sns.com.my for the 8x H100 node!

Model size: 72.7B params (Safetensors, BF16)