JosephusCheung committed
Commit b430829 · 1 Parent(s): d3aecca

Update README.md

Files changed (1):
  1. README.md +9 -6
README.md CHANGED
@@ -42,6 +42,9 @@ Please note that the model was trained on unfiltered internet data. Since we do
 
 Bonus: The model underwent some fine-tuning on the prompt format introduced in LLaVA1.5 that is unrelated to image attention calculation. Therefore, aligning the ViT Projection module with a frozen LM under visual instructions would enable rapid implementation of effective multimodal capabilities.
 
+ ## PROMPT FORMAT:
+ [chatml](https://github.com/openai/openai-python/blob/main/chatml.md)
+
 ## MMLU:
 stem ACC: 56.83
 
@@ -54,17 +57,17 @@ social ACC: 72.41
 **AVERAGE ACC:63.82**
 
 ## CEval (Val):
- STEM ACC: 66.71
+ STEM acc: 61.67
 
- Social Science ACC: 85.10
+ Social Science acc: 81.94
 
- Humanities ACC: 76.68
+ Humanities acc: 77.19
 
- Other ACC: 70.23
+ Other acc: 68.35
 
- Hard ACC:54.71
+ Hard acc:48.03
 
- **AVERAGE ACC:73.10**
+ **AVERAGE acc:70.27**
 
 ## GSM8K
 
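For reference, the PROMPT FORMAT section added in this commit points to the ChatML spec linked above. The sketch below shows one way to assemble a ChatML prompt; it assumes only the `<|im_start|>`/`<|im_end|>` delimiters from chatml.md, and the `build_chatml_prompt` helper, the system message text, and the trailing open assistant header are illustrative choices rather than code from this repository.

```python
# Minimal sketch of a ChatML prompt builder, per the linked chatml.md.
# The helper name, system text, and trailing assistant header are illustrative.

def build_chatml_prompt(messages):
    """Render a list of {"role": ..., "content": ...} dicts into a ChatML string."""
    parts = []
    for message in messages:
        # Each turn is wrapped as <|im_start|>{role}\n{content}<|im_end|>.
        parts.append(f"<|im_start|>{message['role']}\n{message['content']}<|im_end|>")
    # Leave an open assistant header so generation continues from here.
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)


prompt = build_chatml_prompt([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What does the CEval Hard split measure?"},
])
print(prompt)
```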
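On the Bonus note above about aligning a ViT projection with the frozen LM, the following is a minimal sketch of what an LLaVA-1.5 style projector could look like. The two-layer MLP design follows the LLaVA-1.5 recipe; the dimensions, class name, and the commented-out `language_model` handle are assumptions for illustration, not code shipped with this model.

```python
import torch
import torch.nn as nn

# Hypothetical sizes: 1024 for ViT patch features, 4096 for the LM hidden size.
VIT_DIM, LM_DIM = 1024, 4096

class VisionProjector(nn.Module):
    """Two-layer MLP mapping ViT patch embeddings into the LM embedding space
    (LLaVA-1.5 style). Only this module is trained; the LM stays frozen."""

    def __init__(self, vit_dim: int = VIT_DIM, lm_dim: int = LM_DIM):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, vit_features: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vit_dim) -> (batch, num_patches, lm_dim)
        return self.proj(vit_features)


projector = VisionProjector()
# Freeze the language model and train only the projector, e.g.:
# for p in language_model.parameters():
#     p.requires_grad_(False)
optimizer = torch.optim.AdamW(projector.parameters(), lr=1e-3)
```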