Malaysian Qwen 2.5 32B Instruct

Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-32B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.

Improvements

  1. Supports responding in Mandarin, Tamil, Jawi, Manglish, and the local dialects of Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
  2. Able to code in Mandarin, Tamil, Jawi, Manglish, and the same local dialects.
  3. Multi-turn conversations on Malaysian context, such as Malaysian legislation, politics, religions and languages.

Training session

Finetuned on mesolitica/Malaysian-SFT so the model understands Malaysian context.

How we train

  1. LoRA on ["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"].
  2. Rank 128 with alpha 256, i.e. an effective LoRA scaling of alpha / rank = 2.0.
  3. Multipacking at 8192 context length with proper SDPA causal masking to prevent cross-document contamination, together with proper position IDs.
  4. Chunked CCE loss for LoRA.
  5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-32b-malaysian-8k?nw=nwuserhuseinzol05
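The LoRA setup above can be sketched with Hugging Face PEFT. This is a minimal, illustrative configuration using only the hyperparameters stated in the list; the actual training script lives in the source repository linked below.

```python
# Minimal sketch of the LoRA setup above using Hugging Face PEFT.
# Illustrative only; see the linked source code for the real training script.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,            # LoRA rank
    lora_alpha=256,   # alpha; effective scaling = alpha / r = 2.0
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```

Note that `embed_tokens` and `lm_head` are included in `target_modules`, so the embedding and output layers are also adapted rather than kept frozen.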

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5
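Step 3 above (multipacking with proper masking and position IDs) can be sketched in pure Python: packed documents get position IDs that restart at each document boundary, and an attention mask that is block-diagonal causal so tokens never attend across document boundaries. The function names here are hypothetical, not from the training code.

```python
# Hypothetical sketch of per-document position IDs and a block-diagonal
# causal attention mask for multipacked sequences.

def packed_position_ids(doc_lengths):
    """Position IDs restart at 0 for every packed document."""
    return [pos for n in doc_lengths for pos in range(n)]

def packed_causal_mask(doc_lengths):
    """True where query token i may attend to key token j:
    same document AND j <= i (causal)."""
    doc_ids = [d for d, n in enumerate(doc_lengths) for _ in range(n)]
    total = len(doc_ids)
    return [
        [doc_ids[i] == doc_ids[j] and j <= i for j in range(total)]
        for i in range(total)
    ]

# Two documents of lengths 3 and 2 packed into one 5-token sequence.
print(packed_position_ids([3, 2]))  # [0, 1, 2, 0, 1]
mask = packed_causal_mask([3, 2])
print(mask[3])  # [False, False, False, True, False]
```

Without this block-diagonal mask, a plain causal mask over the packed sequence would let the second document attend to the first, which is the contamination the training setup avoids.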

Benchmark

MalayMMLU

Probability next tokens

Based on the official 0-shot MalayMMLU first-token accuracy,

                            Model   Accuracy   shot by_letter        category
0  Malaysian-Qwen2.5-32B-Instruct  79.451494  0shot      True            STEM
1  Malaysian-Qwen2.5-32B-Instruct  78.689567  0shot      True        Language
2  Malaysian-Qwen2.5-32B-Instruct  73.142527  0shot      True  Social science
3  Malaysian-Qwen2.5-32B-Instruct  73.063085  0shot      True          Others
4  Malaysian-Qwen2.5-32B-Instruct  78.998862  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Malaysian-Qwen2.5-32B-Instruct
Metric : first
Shot : 0shot
average accuracy 76.26894643373394
accuracy for STEM 79.45149406467458
accuracy for Language 78.68956743002545
accuracy for Social science 73.14252674183291
accuracy for Others 73.06308467258336
accuracy for Humanities 78.99886234357224

For comparison, the original model,

                  Model   Accuracy   shot by_letter        category
0  Qwen2.5-32B-Instruct  79.738027  0shot      True            STEM
1  Qwen2.5-32B-Instruct  76.940204  0shot      True        Language
2  Qwen2.5-32B-Instruct  72.390864  0shot      True  Social science
3  Qwen2.5-32B-Instruct  70.808347  0shot      True          Others
4  Qwen2.5-32B-Instruct  76.723549  0shot      True      Humanities
{'Social science': 6918, 'Language': 6288, 'Humanities': 4395, 'Others': 4169, 'STEM': 2443}
Model : Qwen2.5-32B-Instruct
Metric : first
Shot : 0shot
average accuracy 74.8275719654731
accuracy for STEM 79.73802701596398
accuracy for Language 76.94020356234097
accuracy for Social science 72.39086441167967
accuracy for Others 70.80834732549772
accuracy for Humanities 76.72354948805462
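First-token accuracy scores a question as correct when the option letter with the highest first-token probability matches the gold answer. A pure-Python sketch of the metric (the letter probabilities here are made up for illustration, not real model outputs):

```python
def first_token_accuracy(letter_probs, gold):
    """letter_probs: per-question dict mapping option letter -> probability
    of that letter as the model's first generated token.
    gold: the correct letters. Returns accuracy in percent."""
    correct = sum(
        max(probs, key=probs.get) == answer
        for probs, answer in zip(letter_probs, gold)
    )
    return 100.0 * correct / len(gold)

# Toy example with made-up probabilities for two questions.
probs = [
    {"A": 0.70, "B": 0.10, "C": 0.15, "D": 0.05},
    {"A": 0.20, "B": 0.25, "C": 0.40, "D": 0.15},
]
print(first_token_accuracy(probs, ["A", "B"]))  # 50.0
```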

First token match using vLLM

Based on 0-shot exact first-token match using vLLM guided decoding,

                            Model   Accuracy  shot        category
0  Malaysian-Qwen2.5-32B-Instruct  77.322964     0            STEM
1  Malaysian-Qwen2.5-32B-Instruct  75.286260     0        Language
2  Malaysian-Qwen2.5-32B-Instruct  69.904597     0  Social science
3  Malaysian-Qwen2.5-32B-Instruct  70.760374     0          Others
4  Malaysian-Qwen2.5-32B-Instruct  74.766780     0      Humanities
Model : Malaysian-Qwen2.5-32B-Instruct
Metric : full
Shot : 0
average accuracy 73.08057654978731
accuracy for STEM 77.32296356938191
accuracy for Language 75.28625954198473
accuracy for Social science 69.90459670424978
accuracy for Others 70.76037419045335
accuracy for Humanities 74.76678043230945

For comparison, the original model,

                  Model   Accuracy  shot        category
0  Qwen2.5-32B-Instruct  79.656160     0            STEM
1  Qwen2.5-32B-Instruct  75.986005     0        Language
2  Qwen2.5-32B-Instruct  72.058398     0  Social science
3  Qwen2.5-32B-Instruct  70.208683     0          Others
4  Qwen2.5-32B-Instruct  76.382253     0      Humanities
Model : Qwen2.5-32B-Instruct
Metric : full
Shot : 0
average accuracy 74.31132036509314
accuracy for STEM 79.65616045845272
accuracy for Language 75.98600508905852
accuracy for Social science 72.05839838103498
accuracy for Others 70.20868313744303
accuracy for Humanities 76.38225255972696
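The exact-match numbers above constrain generation so the model must emit one of the option letters. A minimal sketch using vLLM's guided decoding, assuming a recent vLLM release that exposes `GuidedDecodingParams`; the prompt is a placeholder, and this needs a GPU to actually run.

```python
# Sketch only: constrain generation to one of the option letters with
# vLLM guided decoding. Requires a GPU and a recent vLLM release.
from vllm import LLM, SamplingParams
from vllm.sampling_params import GuidedDecodingParams

llm = LLM(model="mesolitica/Malaysian-Qwen2.5-32B-Instruct")
params = SamplingParams(
    temperature=0.0,
    max_tokens=1,
    guided_decoding=GuidedDecodingParams(choice=["A", "B", "C", "D"]),
)
outputs = llm.generate(["<0-shot MalayMMLU question here>"], params)
print(outputs[0].outputs[0].text)  # one of the allowed letters
```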

Acknowledgement

Special thanks to https://www.sns.com.my for the 8x H100 node!
