applied-ai-018 committed
Commit 53ac5ac · verified · 1 parent: 5aba15c

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. ckpts/universal/global_step20/zero/21.mlp.dense_h_to_4h_swiglu.weight/fp32.pt +3 -0
  2. lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/_held_in_template_yaml +14 -0
  3. lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/flan_held_in.yaml +331 -0
  4. lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/flan_held_out.yaml +13 -0
  5. lm-evaluation-harness/lm_eval/tasks/benchmarks/minerva_math.yaml +9 -0
  6. lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md +43 -0
  7. lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/multimedqa.yaml +17 -0
  8. lm-evaluation-harness/lm_eval/tasks/benchmarks/openllm.yaml +18 -0
  9. lm-evaluation-harness/lm_eval/tasks/benchmarks/pythia.yaml +12 -0
  10. lm-evaluation-harness/lm_eval/tasks/benchmarks/t0_eval.yaml +127 -0
  11. lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md +48 -0
  12. lm-evaluation-harness/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml +16 -0
  13. lm-evaluation-harness/lm_eval/tasks/haerae/README.md +49 -0
  14. lm-evaluation-harness/lm_eval/tasks/haerae/_default_haerae_yaml +17 -0
  15. lm-evaluation-harness/lm_eval/tasks/haerae/haerae_gk.yaml +3 -0
  16. lm-evaluation-harness/lm_eval/tasks/haerae/haerae_hi.yaml +3 -0
  17. lm-evaluation-harness/lm_eval/tasks/haerae/haerae_lw.yaml +3 -0
  18. lm-evaluation-harness/lm_eval/tasks/haerae/haerae_rw.yaml +3 -0
  19. lm-evaluation-harness/lm_eval/tasks/haerae/haerae_sn.yaml +3 -0
  20. lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md +49 -0
  21. lm-evaluation-harness/lm_eval/tasks/hellaswag/__pycache__/utils.cpython-310.pyc +0 -0
  22. lm-evaluation-harness/lm_eval/tasks/hellaswag/hellaswag.yaml +22 -0
  23. lm-evaluation-harness/lm_eval/tasks/hellaswag/utils.py +25 -0
  24. lm-evaluation-harness/lm_eval/tasks/lambada/README.md +39 -0
  25. lm-evaluation-harness/lm_eval/tasks/lambada/lambada_openai.yaml +22 -0
  26. lm-evaluation-harness/lm_eval/tasks/lambada/lambada_standard.yaml +21 -0
  27. lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md +56 -0
  28. lm-evaluation-harness/lm_eval/tasks/lambada_cloze/lambada_openai_cloze.yaml +20 -0
  29. lm-evaluation-harness/lm_eval/tasks/lambada_cloze/lambada_standard_cloze.yaml +21 -0
  30. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_abstract_algebra.yaml +8 -0
  31. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_business_ethics.yaml +8 -0
  32. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_college_computer_science.yaml +8 -0
  33. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_elementary_mathematics.yaml +8 -0
  34. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_high_school_chemistry.yaml +8 -0
  35. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_high_school_us_history.yaml +8 -0
  36. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_jurisprudence.yaml +8 -0
  37. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_logical_fallacies.yaml +8 -0
  38. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_moral_disputes.yaml +8 -0
  39. lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_nutrition.yaml +8 -0
  40. lm-evaluation-harness/lm_eval/tasks/nq_open/README.md +0 -0
  41. lm-evaluation-harness/lm_eval/tasks/nq_open/nq_open.yaml +32 -0
  42. lm-evaluation-harness/lm_eval/tasks/paws-x/_generate_config.py +109 -0
  43. lm-evaluation-harness/lm_eval/tasks/paws-x/paws_de.yaml +7 -0
  44. lm-evaluation-harness/lm_eval/tasks/paws-x/paws_ko.yaml +6 -0
  45. lm-evaluation-harness/lm_eval/tasks/storycloze/README.md +73 -0
  46. lm-evaluation-harness/lm_eval/tasks/storycloze/storycloze_2016.yaml +18 -0
  47. lm-evaluation-harness/lm_eval/tasks/storycloze/storycloze_2018.yaml +16 -0
  48. lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md +51 -0
  49. lm-evaluation-harness/lm_eval/tasks/triviaqa/default.yaml +31 -0
  50. lm-evaluation-harness/lm_eval/tasks/xnli/README.md +78 -0
ckpts/universal/global_step20/zero/21.mlp.dense_h_to_4h_swiglu.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:995b8b4d667bd43acf2b892d7812e368ae4198287ad15f38b6e77e7f99b13bf6
+ size 33555533
lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/_held_in_template_yaml ADDED
@@ -0,0 +1,14 @@
+ output_type: generate_until
+ test_split: null
+ doc_to_choice: null
+ metric_list:
+   - metric: exact_match
+     aggregation: mean
+     higher_is_better: true
+ generation_kwargs:
+   until:
+     - "</s>"
+   do_sample: false
+   temperature: 0.0
+ metadata:
+   version: 1.0
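The template above is shared configuration: every prompt variant in `flan_held_in.yaml` below pulls it in through its `include: _held_in_template_yaml` key and then layers its own `doc_to_text`/`doc_to_target` on top. As a rough illustration of that composition (not the harness's actual loader, which also handles nesting, search paths, and `!function` tags), a shallow merge in which the task's own keys win looks like this:

```python
# Illustrative sketch only: resolve an `include:` key by shallow-merging the
# included template into the task config, letting task-level keys override.
import yaml


def resolve_include(task_cfg: dict, base_dir: str = ".") -> dict:
    template_name = task_cfg.pop("include", None)
    if template_name is None:
        return task_cfg
    with open(f"{base_dir}/{template_name}", encoding="utf-8") as f:
        template = yaml.safe_load(f)
    return {**template, **task_cfg}


# e.g. one of the anli_r1 prompt variants defined in flan_held_in.yaml:
cfg = resolve_include(
    {
        "task": "anli_r1",
        "task_alias": "prompt-0",
        "include": "_held_in_template_yaml",
        "doc_to_text": "{{premise}} ...",
        "doc_to_target": "...",
    }
)
print(cfg["output_type"])  # "generate_until", inherited from the template
```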
lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/flan_held_in.yaml ADDED
@@ -0,0 +1,331 @@
+ group: flan_held_in
+ group_alias: Flan (Held-In)
+ task:
+   # ANLI R1
+   - group: anli_r1_flan
+     group_alias: ANLI R1
+     task:
+       - task: anli_r1
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-7
+         include: _held_in_template_yaml
+         doc_to_text: "Can we draw the following hypothesis from the context (see options)? \n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r1
+         task_alias: prompt-8
+         include: _held_in_template_yaml
+         doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+   # ANLI R2
+   - group: anli_r2_flan
+     group_alias: ANLI R2
+     task:
+       - task: anli_r2
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-7
+         include: _held_in_template_yaml
+         doc_to_text: "Can we draw the following hypothesis from the context (see options)? \n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r2
+         task_alias: prompt-8
+         include: _held_in_template_yaml
+         doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+   # ANLI R3
+   - group: anli_r3_flan
+     group_alias: ANLI R3
+     task:
+       - task: anli_r3
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-7
+         include: _held_in_template_yaml
+         doc_to_text: "Can we draw the following hypothesis from the context (see options)? \n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+       - task: anli_r3
+         task_alias: prompt-8
+         include: _held_in_template_yaml
+         doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No"
+         doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}"
+   # Arc Easy
+   - group: arc_easy_flan
+     group_alias: Arc Easy
+     task:
+       - task: arc_easy
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "Question: {{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}\nAnswer:"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "Question: {{question}}\n\nWhat is the correct answer to the question from the following choices?\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "Q: {{question}}\nWhat is the correct answer to this question?\nOPTIONS:\n- {{choices.text|join('\n- ')}}...A:"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "Choose your answer?\n\n{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Answer the question\n\n{{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_easy
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "{{question}}\n\nPick the answer from these options\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+   # Arc Challenge
+   - group: arc_challenge_flan
+     group_alias: Arc Challenge
+     task:
+       - task: arc_challenge
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "Question: {{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}\nAnswer:"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "Question: {{question}}\n\nWhat is the correct answer to the question from the following choices?\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "Q: {{question}}\nWhat is the correct answer to this question?\nOPTIONS:\n- {{choices.text|join('\n- ')}}...A:"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "Choose your answer?\n\n{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Answer the question\n\n{{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+       - task: arc_challenge
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "{{question}}\n\nPick the answer from these options\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}"
+         doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}"
+   # BoolQ
+   - group: boolq_flan
+     group_alias: BoolQ
+     task:
+       - task: boolq
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\nCan we conclude that {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\nIs it true that {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\n{{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "Text: {{passage}}\n\nQuestion: {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\nWhat's the best answer to this question: {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\nBased on the above text what's the best answer to this question: {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\nAnswer this question making sure that the answer is supposed by the text: {{question}}?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-7
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\nIs the following statement correct based on the text\n\n{{question}}\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-8
+         include: _held_in_template_yaml
+         doc_to_text: "{{passage}}\n\nIs this statement correct \"{{question}}\"?\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+       - task: boolq
+         task_alias: prompt-9
+         include: _held_in_template_yaml
+         doc_to_text: "Is it true that {{question}} based on the following text?\n\n{{passage}}\n\nOPTIONS:\n- no\n- yes"
+         doc_to_target: "{{['no', 'yes'][label]}}"
+   # RTE
+   - group: rte_flan
+     group_alias: RTE
+     task:
+       - task: rte
+         task_alias: prompt-0
+         include: _held_in_template_yaml
+         doc_to_text: "{{sentence1}}\n\nQuestion with options: Based on the paragraph above can we conclude that \"{{sentence2}}\"?\n\nOPTIONS:\n- yes\n- no"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-1
+         include: _held_in_template_yaml
+         doc_to_text: "{{sentence1}}\n\nBased on that paragraph can we conclude that the sentence below is true?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-2
+         include: _held_in_template_yaml
+         doc_to_text: "{{sentence1}}\n\nQ with options: Can we draw the following conclusion?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-3
+         include: _held_in_template_yaml
+         doc_to_text: "{{sentence1}}\nDoes this next sentence follow, given the preceding text?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-4
+         include: _held_in_template_yaml
+         doc_to_text: "{{sentence1}}\nOPTIONS:\n- yes\n- no\nQuestion: Can we infer the following?\n{{sentence2}}"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-5
+         include: _held_in_template_yaml
+         doc_to_text: "Read the following paragraph and determine if the hypothesis is true. Select from options at the end:\n\n{{sentence1}}\n\nHypothesis: {{sentence2}}\nOPTIONS:\n- yes\n- no\nThe answer is"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-6
+         include: _held_in_template_yaml
+         doc_to_text: "Read the text and determine if the sentence is true:\n\n{{sentence1}}\n\nSentence: {{sentence2}}\nOPTIONS:\n- yes\n- no\nA:"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-7
+         include: _held_in_template_yaml
+         doc_to_text: "Question with options: can we draw the following hypothesis from the context? \n\nContext:\n\n{{sentence1}}\n\nHypothesis: {{sentence2}}\nOPTIONS:\n- yes\n- no\nA:"
+         doc_to_target: "{{['yes', 'no'][label]}}"
+       - task: rte
+         task_alias: prompt-8
+         include: _held_in_template_yaml
+         doc_to_text: "Determine if the sentence is true based on the text below. Choose from options.\n{{sentence2}}\n\n{{sentence1}}\nOPTIONS:\n- yes\n- no"
+         doc_to_target: "{{['yes', 'no'][label]}}"
lm-evaluation-harness/lm_eval/tasks/benchmarks/flan/flan_held_out.yaml ADDED
@@ -0,0 +1,13 @@
+ group: flan_held_out
+ task:
+   # BBH
+   - bbh_zeroshot
+   - bbh_fewshot
+   - bbh_cot_fewshot
+   - bbh_cot_zeroshot
+   # MMLU
+   - mmlu
+   - mmlu_flan_n_shot_generative
+   - mmlu_flan_n_shot_loglikelihood
+   - mmlu_flan_cot_zeroshot
+   - mmlu_flan_cot_fewshot
lm-evaluation-harness/lm_eval/tasks/benchmarks/minerva_math.yaml ADDED
@@ -0,0 +1,9 @@
+ group: minerva_math
+ task:
+   - minerva_math_algebra
+   - minerva_math_counting_and_prob
+   - minerva_math_geometry
+   - minerva_math_intermediate_algebra
+   - minerva_math_num_theory
+   - minerva_math_prealgebra
+   - minerva_math_precalc
lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/README.md ADDED
@@ -0,0 +1,43 @@
+ # MultiMedQA (multiple-choice subset)
+
+ ### Paper
+
+ Title: Large Language Models Encode Clinical Knowledge
+
+ Abstract: https://arxiv.org/abs/2212.13138
+
+ A benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries.
+
+ ### Citation
+
+ ```
+ @Article{Singhal2023,
+ author={Singhal, Karan and Azizi, Shekoofeh and Tu, Tao and Mahdavi, S. Sara and Wei, Jason and Chung, Hyung Won and Scales, Nathan and Tanwani, Ajay and Cole-Lewis, Heather and Pfohl, Stephen and Payne, Perry and Seneviratne, Martin and Gamble, Paul and Kelly, Chris and Babiker, Abubakr and Sch{\"a}rli, Nathanael and Chowdhery, Aakanksha and Mansfield, Philip and Demner-Fushman, Dina and Ag{\"u}era y Arcas, Blaise and Webster, Dale and Corrado, Greg S. and Matias, Yossi and Chou, Katherine and Gottweis, Juraj and Tomasev, Nenad and Liu, Yun and Rajkomar, Alvin and Barral, Joelle and Semturs, Christopher and Karthikesalingam, Alan and Natarajan, Vivek},
+ title={Large language models encode clinical knowledge},
+ journal={Nature},
+ year={2023},
+ month={Aug},
+ day={01},
+ volume={620},
+ number={7972},
+ pages={172-180},
+ issn={1476-4687},
+ doi={10.1038/s41586-023-06291-2},
+ url={https://doi.org/10.1038/s41586-023-06291-2}
+ }
+ ```
+
+ ### Tasks
+
+ * [PubMedQA](https://pubmedqa.github.io/) - 1,000 expert-labeled Q&A pairs where a question and a corresponding PubMed abstract are given as context and a yes/maybe/no answer must be produced. Unlike the rest of the tasks in this suite, PubMedQA is a closed-domain Q&A task.
+ * [MedQA](https://github.com/jind11/MedQA) - US Medical Licensing Examination (USMLE) questions with 4 or 5 possible answers. Typically, only the 4-option questions are used.
+ * [MedMCQA](https://medmcqa.github.io/) - 4-option multiple choice questions from Indian medical entrance examinations, >191k total questions.
+ * [MMLU](https://arxiv.org/abs/2009.03300) - 4-option multiple choice exam questions from a variety of domains. The following 6 domains are utilized here:
+   * Anatomy
+   * Clinical Knowledge
+   * College Medicine
+   * Medical Genetics
+   * Professional Medicine
+   * College Biology
+
+ Note that MultiMedQA also includes some short-form and long-form Q&A tasks (LiveQA, MedicationQA, HealthSearchQA). Those tasks are typically judged by human experts rather than scored automatically, and are therefore not included here.
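Since this group simply bundles existing harness tasks (see `multimedqa.yaml` below), it is run by name like any other task. The snippet below is a minimal sketch using the harness's Python entry point `lm_eval.simple_evaluate` (available in recent lm-evaluation-harness versions); the Pythia checkpoint is an arbitrary placeholder chosen only to keep the example small:

```python
# Minimal sketch: evaluate the multimedqa group via the lm-eval Python API.
# The model checkpoint is a small placeholder, not a recommendation.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["multimedqa"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])  # per-subtask metrics for the group
```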
lm-evaluation-harness/lm_eval/tasks/benchmarks/multimedqa/multimedqa.yaml ADDED
@@ -0,0 +1,17 @@
+ group: multimedqa
+ task:
+   - pubmedqa
+   - medmcqa
+   - medqa_4options
+   - task: mmlu_anatomy
+     task_alias: "anatomy (mmlu)"
+   - task: mmlu_clinical_knowledge
+     task_alias: "clinical_knowledge (mmlu)"
+   - task: mmlu_college_medicine
+     task_alias: "college_medicine (mmlu)"
+   - task: mmlu_medical_genetics
+     task_alias: "medical_genetics (mmlu)"
+   - task: mmlu_professional_medicine
+     task_alias: "professional_medicine (mmlu)"
+   - task: mmlu_college_biology
+     task_alias: "college_biology (mmlu)"
lm-evaluation-harness/lm_eval/tasks/benchmarks/openllm.yaml ADDED
@@ -0,0 +1,18 @@
+ group: openllm
+ group_alias: Open LLM Leaderboard
+ task:
+   - task: arc_challenge
+     fewshot_split: validation
+     num_fewshot: 25
+   - task: hellaswag
+     fewshot_split: train
+     num_fewshot: 10
+   - task: truthfulqa
+     num_fewshot: 0
+   - task: mmlu
+     num_fewshot: 5
+   - task: winogrande
+     fewshot_split: train
+     num_fewshot: 5
+   - task: gsm8k
+     num_fewshot: 5
lm-evaluation-harness/lm_eval/tasks/benchmarks/pythia.yaml ADDED
@@ -0,0 +1,12 @@
+ group: pythia
+ task:
+   - lambada_openai
+   - logiqa
+   - piqa
+   - sciq
+   - wikitext
+   - winogrande
+   - wsc
+   - ai2_arc
+   - blimp
+   - mmlu
lm-evaluation-harness/lm_eval/tasks/benchmarks/t0_eval.yaml ADDED
@@ -0,0 +1,127 @@
+ group: t0_eval
+ task:
+   # Coreference Resolution
+   - dataset_path: super_glue
+     dataset_name: wsc.fixed
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   # Coreference Resolution
+   - dataset_path: winogrande
+     dataset_name: winogrande_xl
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   # Natural Language Inference
+   - dataset_path: super_glue
+     dataset_name: cb
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   - dataset_path: super_glue
+     dataset_name: rte
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   - task: anli_r1
+     dataset_path: anli
+     use_prompt: promptsource:*
+     training_split: train_r1
+     validation_split: dev_r1
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   - task: anli_r2
+     dataset_path: anli
+     use_prompt: promptsource:*
+     training_split: train_r2
+     validation_split: dev_r2
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   - task: anli_r3
+     dataset_path: anli
+     use_prompt: promptsource:*
+     training_split: train_r3
+     validation_split: dev_r3
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   # Sentence Completion
+   - dataset_path: super_glue
+     dataset_name: copa
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   # Natural Language Inference
+   - dataset_path: hellaswag
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   # Word Sense Disambiguation
+   - dataset_path: super_glue
+     dataset_name: wic
+     use_prompt: promptsource:*
+     training_split: train
+     validation_split: validation
+     output_type: generate_until
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md ADDED
@@ -0,0 +1,48 @@
+ # EusProficiency
+
+ ### Paper
+
+ Title: Latxa: An Open Language Model and Evaluation Suite for Basque
+
+ Abstract: https://arxiv.org/abs/2403.20266
+
+ EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises from EGA exams through the years 1998 to 2008. Atarikoa is the first qualifying test of EGA, which measures different aspects of language competency, such as reading comprehension, grammar, vocabulary, spelling, and writing. Each test generally has 85 multiple-choice questions, with 4 choices and a single correct answer.
+
+ Homepage: https://github.com/hitz-zentroa/latxa
+
+
+ ### Citation
+
+ ```
+ @misc{etxaniz2024latxa,
+ title={Latxa: An Open Language Model and Evaluation Suite for Basque},
+ author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
+ year={2024},
+ eprint={2403.20266},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ There are no groups.
+
+ #### Tasks
+
+ * `eus_proficiency`: EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml ADDED
@@ -0,0 +1,16 @@
+ dataset_path: HiTZ/EusProficiency
+ dataset_name: default
+ task: eus_proficiency
+ doc_to_text: "Galdera: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nD: {{candidates[3]}}\nErantzuna:"
+ doc_to_choice: ["A", "B", "C", "D"]
+ validation_split: null
+ test_split: test
+ fewshot_split: test
+ output_type: multiple_choice
+ doc_to_target: answer
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 0.0
lm-evaluation-harness/lm_eval/tasks/haerae/README.md ADDED
@@ -0,0 +1,49 @@
+ # HAE-RAE BENCH
+
+ ### Paper
+
+ Title: `HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models`
+
+ Abstract: `Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the attention given to their evaluation methodologies remains limited. Current multilingual benchmarks often rely on back translations or re-implementations of English tests, limiting their capacity to capture unique cultural and linguistic nuances. To bridge this gap for the Korean language, we introduce HAE-RAE Bench, a dataset curated to challenge models lacking Korean cultural and contextual depth. The dataset encompasses six downstream tasks across four domains: vocabulary, history, general knowledge, and reading comprehension. Contrary to traditional evaluation suites focused on token or sequence classification and specific mathematical or logical reasoning, HAE-RAE Bench emphasizes a model's aptitude for recalling Korean-specific knowledge and cultural contexts. Comparative analysis with prior Korean benchmarks indicates that the HAE-RAE Bench presents a greater challenge to non-native models, by disturbing abilities and knowledge learned from English being transferred.`
+
+ Homepage: https://huggingface.co/datasets/HAERAE-HUB/HAE_RAE_BENCH
+
+ ### Citation
+
+ @misc{son2023haerae,
+ title={HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models},
+ author={Guijin Son and Hanwool Lee and Suwan Kim and Huiseo Kim and Jaecheol Lee and Je Won Yeom and Jihyu Jung and Jung Woo Kim and Songseong Kim},
+ year={2023},
+ eprint={2309.02706},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `haerae`: consists of five tasks from the HAE-RAE Bench paper. 'Reading Comprehension' was excluded from this implementation due to copyright issues and will be included in a future HAE-RAE update. For the other tasks, some of the data may be replaced or expanded with the release of HAE-RAE v1.1; please note this when using them.
+
+ #### Tasks
+
+ The following tasks evaluate subjects in the HAE-RAE dataset:
+
+ - `haerae_standard_nomenclature`
+ - `haerae_loan_word`
+ - `haerae_rare_word`
+ - `haerae_general_knowledge`
+ - `haerae_history`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [x] Is the task an existing benchmark in the literature?
+ * [x] Have you referenced the original paper that introduced the task?
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/haerae/_default_haerae_yaml ADDED
@@ -0,0 +1,17 @@
+ group: haerae
+ dataset_path: HAERAE-HUB/HAE_RAE_BENCH
+ test_split: test
+ fewshot_split: test
+ output_type: multiple_choice
+ doc_to_text: "{{query}}"
+ doc_to_choice: ["(A)", "(B)", "(C)", "(D)", "(E)"]
+ doc_to_target: "{{answer}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+   - metric: acc_norm
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/haerae/haerae_gk.yaml ADDED
@@ -0,0 +1,3 @@
+ "dataset_name": "general_knowledge"
+ "include": "_default_haerae_yaml"
+ "task": "haerae_general_knowledge"
lm-evaluation-harness/lm_eval/tasks/haerae/haerae_hi.yaml ADDED
@@ -0,0 +1,3 @@
+ "dataset_name": "history"
+ "include": "_default_haerae_yaml"
+ "task": "haerae_history"
lm-evaluation-harness/lm_eval/tasks/haerae/haerae_lw.yaml ADDED
@@ -0,0 +1,3 @@
+ "dataset_name": "loan_words"
+ "include": "_default_haerae_yaml"
+ "task": "haerae_loan_word"
lm-evaluation-harness/lm_eval/tasks/haerae/haerae_rw.yaml ADDED
@@ -0,0 +1,3 @@
+ "dataset_name": "rare_words"
+ "include": "_default_haerae_yaml"
+ "task": "haerae_rare_word"
lm-evaluation-harness/lm_eval/tasks/haerae/haerae_sn.yaml ADDED
@@ -0,0 +1,3 @@
+ "dataset_name": "standard_nomenclature"
+ "include": "_default_haerae_yaml"
+ "task": "haerae_standard_nomenclature"
lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md ADDED
@@ -0,0 +1,49 @@
+ # HellaSwag
+
+ ### Paper
+
+ Title: `HellaSwag: Can a Machine Really Finish Your Sentence?`
+
+ Abstract: https://arxiv.org/abs/1905.07830
+
+ Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select the most likely followup: "She sets her fingers on the keys." With the introduction of BERT, near human-level performance was reached. Does this mean that machines can perform human level commonsense inference?
+ In this paper, we show that commonsense inference still proves difficult for even state-of-the-art models, by presenting HellaSwag, a new challenge dataset. Though its questions are trivial for humans (>95% accuracy), state-of-the-art models struggle (<48%). We achieve this via Adversarial Filtering (AF), a data collection paradigm wherein a series of discriminators iteratively select an adversarial set of machine-generated wrong answers. AF proves to be surprisingly robust. The key insight is to scale up the length and complexity of the dataset examples towards a critical 'Goldilocks' zone wherein generated text is ridiculous to humans, yet often misclassified by state-of-the-art models.
+ Our construction of HellaSwag, and its resulting difficulty, sheds light on the inner workings of deep pretrained models. More broadly, it suggests a new path forward for NLP research, in which benchmarks co-evolve with the evolving state-of-the-art in an adversarial way, so as to present ever-harder challenges.
+
+ Homepage: `https://rowanzellers.com/hellaswag/`
+
+
+ ### Citation
+
+ ```
+ @inproceedings{zellers2019hellaswag,
+ title={HellaSwag: Can a Machine Really Finish Your Sentence?},
+ author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin},
+ booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
+ year={2019}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ - Not part of a group yet
+
+ #### Tasks
+
+ - `hellaswag`
+
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [x] Is the task an existing benchmark in the literature?
+ * [x] Have you referenced the original paper that introduced the task?
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/hellaswag/__pycache__/utils.cpython-310.pyc ADDED
Binary file (1.08 kB).
 
lm-evaluation-harness/lm_eval/tasks/hellaswag/hellaswag.yaml ADDED
@@ -0,0 +1,22 @@
+ group:
+   - multiple_choice
+ task: hellaswag
+ dataset_path: hellaswag
+ dataset_name: null
+ output_type: multiple_choice
+ training_split: train
+ validation_split: validation
+ test_split: null
+ process_docs: !function utils.process_docs
+ doc_to_text: "{{query}}"
+ doc_to_target: "{{label}}"
+ doc_to_choice: "choices"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+   - metric: acc_norm
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/hellaswag/utils.py ADDED
@@ -0,0 +1,25 @@
+ import re
+
+ import datasets
+
+
+ def preprocess(text):
+     text = text.strip()
+     # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag.
+     text = text.replace(" [title]", ". ")
+     text = re.sub("\\[.*?\\]", "", text)
+     text = text.replace("  ", " ")
+     return text
+
+
+ def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
+     def _process_doc(doc):
+         ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize()
+         out_doc = {
+             "query": preprocess(doc["activity_label"] + ": " + ctx),
+             "choices": [preprocess(ending) for ending in doc["endings"]],
+             "gold": int(doc["label"]),
+         }
+         return out_doc
+
+     return dataset.map(_process_doc)
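As a quick illustration of what `process_docs` produces, the snippet below runs the functions above over a single made-up record shaped like the HellaSwag schema (only the field names come from the dataset; the values are invented for the example):

```python
# Hypothetical single-record demo of process_docs(); values are invented.
import datasets

toy = datasets.Dataset.from_list([{
    "activity_label": "Removing ice from car",
    "ctx_a": "The man scrapes the windshield.",
    "ctx_b": "he",
    "endings": ["puts [title] the scraper away.", "starts the engine."],
    "label": "1",
}])

processed = process_docs(toy)
print(processed[0]["query"])    # "Removing ice from car: The man scrapes the windshield. He"
print(processed[0]["choices"])  # WikiHow "[title]" artifacts cleaned by preprocess()
print(processed[0]["gold"])     # 1
```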
lm-evaluation-harness/lm_eval/tasks/lambada/README.md ADDED
@@ -0,0 +1,39 @@
+ # LAMBADA
+
+ ### Paper
+ Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context`
+
+ Abstract: https://arxiv.org/pdf/1606.06031.pdf
+
+ LAMBADA is a dataset to evaluate the capabilities of computational models for text
+ understanding by means of a word prediction task. LAMBADA is a collection of narrative
+ passages sharing the characteristic that human subjects are able to guess their last
+ word if they are exposed to the whole passage, but not if they only see the last
+ sentence preceding the target word. To succeed on LAMBADA, computational models
+ cannot simply rely on local context, but must be able to keep track of information
+ in the broader discourse.
+
+ Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ - `lambada`
+
+ #### Tasks
+
+ - `lambada_openai`
+ - `lambada_standard`
+
+
+ ### Citation
+
+ @misc{
+ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
+ title={The LAMBADA dataset},
+ DOI={10.5281/zenodo.2630551},
+ publisher={Zenodo},
+ year={2016},
+ month={Aug}
+ }
lm-evaluation-harness/lm_eval/tasks/lambada/lambada_openai.yaml ADDED
@@ -0,0 +1,22 @@
+ group:
+   - lambada
+ task: lambada_openai
+ dataset_path: EleutherAI/lambada_openai
+ dataset_name: default
+ output_type: loglikelihood
+ test_split: test
+ doc_to_text: "{{text.split(' ')[:-1]|join(' ')}}"
+ doc_to_target: "{{' '+text.split(' ')[-1]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{text}}"
+ metric_list:
+   - metric: perplexity
+     aggregation: perplexity
+     higher_is_better: false
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
+ dataset_kwargs:
+   trust_remote_code: true
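The `doc_to_text`/`doc_to_target` templates above carry the whole task: they cut the final whitespace-delimited word off each passage and ask the model to score it as a continuation. An equivalent plain-Python rendering of that split (illustrative only; the harness evaluates the Jinja templates itself) would be:

```python
# Equivalent of the Jinja templates above: the context is everything except
# the last whitespace-delimited word; the target is that word, space-prefixed.
def lambada_context_and_target(text: str) -> tuple[str, str]:
    words = text.split(" ")
    context = " ".join(words[:-1])   # doc_to_text
    target = " " + words[-1]         # doc_to_target
    return context, target


ctx, tgt = lambada_context_and_target("He opened the door and saw the dog")
assert ctx.endswith("saw the") and tgt == " dog"
```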
lm-evaluation-harness/lm_eval/tasks/lambada/lambada_standard.yaml ADDED
@@ -0,0 +1,21 @@
+ group:
+   - lambada
+ task: lambada_standard
+ dataset_path: lambada
+ dataset_name: null
+ output_type: loglikelihood
+ validation_split: validation
+ test_split: test
+ doc_to_text: "{{text.split(' ')[:-1]|join(' ')}}"
+ doc_to_target: "{{' '+text.split(' ')[-1]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{text}}"
+ metric_list:
+   - metric: perplexity
+     aggregation: perplexity
+     higher_is_better: false
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md ADDED
@@ -0,0 +1,56 @@
+ # LAMBADA Cloze
+
+ ### Paper
+
+ Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context`
+
+ Abstract: https://arxiv.org/abs/1606.06031
+
+ Cloze-style LAMBADA dataset.
+ LAMBADA is a dataset to evaluate the capabilities of computational models for text
+ understanding by means of a word prediction task. LAMBADA is a collection of narrative
+ passages sharing the characteristic that human subjects are able to guess their last
+ word if they are exposed to the whole passage, but not if they only see the last
+ sentence preceding the target word. To succeed on LAMBADA, computational models
+ cannot simply rely on local context, but must be able to keep track of information
+ in the broader discourse.
+
+ Homepage: https://zenodo.org/record/2630551#.X4Xzn5NKjUI
+
+
+ ### Citation
+
+ ```
+ @misc{
+ author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
+ title={The LAMBADA dataset},
+ DOI={10.5281/zenodo.2630551},
+ publisher={Zenodo},
+ year={2016},
+ month={Aug}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `lambada_cloze`
+
+ #### Tasks
+
+ * `lambada_openai_cloze_yaml`
+ * `lambada_standard_cloze_yaml`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/lambada_cloze/lambada_openai_cloze.yaml ADDED
@@ -0,0 +1,20 @@
+ group:
+   - lambada_cloze
+ task: lambada_openai_cloze_yaml
+ dataset_path: EleutherAI/lambada_openai
+ dataset_name: default
+ output_type: loglikelihood
+ test_split: test
+ doc_to_text: "{{text.split(' ')[:-1]|join(' ')}} ____. ->"
+ doc_to_target: "{{' '+text.split(' ')[-1]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{text}}"
+ metric_list:
+   - metric: perplexity
+     aggregation: perplexity
+     higher_is_better: false
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/lambada_cloze/lambada_standard_cloze.yaml ADDED
@@ -0,0 +1,21 @@
+ group:
+   - lambada_cloze
+ task: lambada_standard_cloze_yaml
+ dataset_path: lambada
+ dataset_name: null
+ output_type: loglikelihood
+ validation_split: validation
+ test_split: test
+ doc_to_text: "{{text.split(' ')[:-1]|join(' ')}} ____. ->"
+ doc_to_target: "{{' '+text.split(' ')[-1]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{text}}"
+ metric_list:
+   - metric: perplexity
+     aggregation: perplexity
+     higher_is_better: false
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_abstract_algebra.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "abstract_algebra"
+ "description": "The following are multiple choice questions (with answers) about abstract\
+   \ algebra.\n\n"
+ "group": "mmlu_stem"
+ "group_alias": "stem"
+ "include": "_default_template_yaml"
+ "task": "mmlu_abstract_algebra"
+ "task_alias": "abstract_algebra"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_business_ethics.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "business_ethics"
+ "description": "The following are multiple choice questions (with answers) about business\
+   \ ethics.\n\n"
+ "group": "mmlu_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "mmlu_business_ethics"
+ "task_alias": "business_ethics"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_college_computer_science.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "college_computer_science"
+ "description": "The following are multiple choice questions (with answers) about college\
+   \ computer science.\n\n"
+ "group": "mmlu_stem"
+ "group_alias": "stem"
+ "include": "_default_template_yaml"
+ "task": "mmlu_college_computer_science"
+ "task_alias": "college_computer_science"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_elementary_mathematics.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "elementary_mathematics"
+ "description": "The following are multiple choice questions (with answers) about elementary\
+   \ mathematics.\n\n"
+ "group": "mmlu_stem"
+ "group_alias": "stem"
+ "include": "_default_template_yaml"
+ "task": "mmlu_elementary_mathematics"
+ "task_alias": "elementary_mathematics"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_high_school_chemistry.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "high_school_chemistry"
+ "description": "The following are multiple choice questions (with answers) about high\
+   \ school chemistry.\n\n"
+ "group": "mmlu_stem"
+ "group_alias": "stem"
+ "include": "_default_template_yaml"
+ "task": "mmlu_high_school_chemistry"
+ "task_alias": "high_school_chemistry"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_high_school_us_history.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "high_school_us_history"
+ "description": "The following are multiple choice questions (with answers) about high\
+   \ school us history.\n\n"
+ "group": "mmlu_humanities"
+ "group_alias": "humanities"
+ "include": "_default_template_yaml"
+ "task": "mmlu_high_school_us_history"
+ "task_alias": "high_school_us_history"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_jurisprudence.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "jurisprudence"
+ "description": "The following are multiple choice questions (with answers) about jurisprudence.\n\
+   \n"
+ "group": "mmlu_humanities"
+ "group_alias": "humanities"
+ "include": "_default_template_yaml"
+ "task": "mmlu_jurisprudence"
+ "task_alias": "jurisprudence"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_logical_fallacies.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "logical_fallacies"
+ "description": "The following are multiple choice questions (with answers) about logical\
+   \ fallacies.\n\n"
+ "group": "mmlu_humanities"
+ "group_alias": "humanities"
+ "include": "_default_template_yaml"
+ "task": "mmlu_logical_fallacies"
+ "task_alias": "logical_fallacies"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_moral_disputes.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "moral_disputes"
+ "description": "The following are multiple choice questions (with answers) about moral\
+   \ disputes.\n\n"
+ "group": "mmlu_humanities"
+ "group_alias": "humanities"
+ "include": "_default_template_yaml"
+ "task": "mmlu_moral_disputes"
+ "task_alias": "moral_disputes"
lm-evaluation-harness/lm_eval/tasks/mmlu/default/mmlu_nutrition.yaml ADDED
@@ -0,0 +1,8 @@
+ "dataset_name": "nutrition"
+ "description": "The following are multiple choice questions (with answers) about nutrition.\n\
+   \n"
+ "group": "mmlu_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "mmlu_nutrition"
+ "task_alias": "nutrition"
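Each of the MMLU subject files above only supplies per-subject fields and pulls the shared settings from `_default_template_yaml` via the `include` key. As a rough illustration of that layering, the effective config can be thought of as the shared template updated with the subject-specific overrides. This is a simplified sketch, not the harness's actual include resolution, and `_default_template_yaml` itself is not shown in this diff:

```python
# Simplified sketch of how an included template and a per-subject override file
# combine; the harness's real include handling may differ in its details.
import yaml

with open("mmlu_nutrition.yaml") as f:
    overrides = yaml.safe_load(f)      # the 8 per-subject keys shown above

with open(overrides["include"]) as f:  # "_default_template_yaml" (shared settings)
    template = yaml.safe_load(f) or {}

config = {**template, **overrides}     # subject-specific keys take precedence
print(config["task"], config["dataset_name"], config["group"])
```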
lm-evaluation-harness/lm_eval/tasks/nq_open/README.md ADDED
File without changes
lm-evaluation-harness/lm_eval/tasks/nq_open/nq_open.yaml ADDED
@@ -0,0 +1,32 @@
+ task: nq_open
+ dataset_path: nq_open
+ output_type: generate_until
+ training_split: train
+ validation_split: validation
+ description: "Answer these questions:\n\n"
+ doc_to_text: "Q: {{question}}?\nA:"
+ doc_to_target: "{{answer}}" # TODO: should be multi-target
+ fewshot_delimiter: "\n"
+ generation_kwargs:
+   until:
+     - "\n"
+     - "."
+     - ","
+   do_sample: false
+   temperature: 0.0
+ filter_list:
+   - name: remove_whitespace
+     filter:
+       - function: remove_whitespace
+       - function: take_first
+ target_delimiter: " "
+ metric_list:
+   - metric: exact_match
+     aggregation: mean
+     higher_is_better: true
+     ignore_case: true
+     ignore_punctuation: true
+     regexes_to_ignore:
+       - "\\b(?:The |the |An |A |a |an )"
+ metadata:
+   version: 3.0
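The `exact_match` options in this config (lowercasing, punctuation stripping, the leading-article regex) amount to normalizing both the generated answer and each gold alias before comparing them. The following is a minimal sketch of that behavior under the settings above; it is not the harness's actual metric code, and the exact order of operations there may differ:

```python
# Minimal sketch of the normalization implied by ignore_case, ignore_punctuation,
# and regexes_to_ignore in the nq_open config above.
import re
import string

ARTICLE_RE = re.compile(r"\b(?:The |the |An |A |a |an )")

def normalize(text: str) -> str:
    text = text.lower()                                                # ignore_case
    text = text.translate(str.maketrans("", "", string.punctuation))  # ignore_punctuation
    text = ARTICLE_RE.sub("", text)                                    # regexes_to_ignore
    return " ".join(text.split())

def exact_match(prediction: str, golds: list[str]) -> bool:
    # The TODO above notes the target should be multi-target; this sketch
    # already checks the prediction against every gold alias.
    return any(normalize(prediction) == normalize(g) for g in golds)

print(exact_match("The Eiffel Tower.", ["Eiffel Tower"]))  # True under these settings
```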
lm-evaluation-harness/lm_eval/tasks/paws-x/_generate_config.py ADDED
@@ -0,0 +1,109 @@
+ import argparse
+
+ import yaml
+
+
+ # Different languages that are part of PAWS-X.
+ # These correspond to dataset names (Subsets) on HuggingFace.
+ # A yaml file is generated by this script for each language.
+
+ LANGUAGES = {
+     "de": {  # German
+         "QUESTION_WORD": "richtig",
+         "YES": "Ja",
+         "NO": "Nein",
+     },
+     "en": {  # English
+         "QUESTION_WORD": "right",
+         "YES": "Yes",
+         "NO": "No",
+     },
+     "es": {  # Spanish
+         "QUESTION_WORD": "verdad",
+         "YES": "Sí",
+         "NO": "No",
+     },
+     "fr": {  # French
+         "QUESTION_WORD": "n'est-ce pas",
+         "YES": "Oui",
+         "NO": "No",
+     },
+     "ja": {  # Japanese
+         "QUESTION_WORD": "ですね",
+         "YES": "はい",
+         "NO": "いいえ",
+     },
+     "ko": {  # Korean
+         "QUESTION_WORD": "맞죠",
+         "YES": "예",
+         "NO": "아니요",
+     },
+     "zh": {  # Chinese
+         "QUESTION_WORD": "对吧",
+         "YES": "是",
+         "NO": "不是",
+     },
+ }
+
+
+ def gen_lang_yamls(output_dir: str, overwrite: bool) -> None:
+     """
+     Generate a yaml file for each language.
+
+     :param output_dir: The directory to output the files to.
+     :param overwrite: Whether to overwrite files if they already exist.
+     """
+     err = []
+     for lang in LANGUAGES.keys():
+         file_name = f"paws_{lang}.yaml"
+         try:
+             QUESTION_WORD = LANGUAGES[lang]["QUESTION_WORD"]
+             YES = LANGUAGES[lang]["YES"]
+             NO = LANGUAGES[lang]["NO"]
+             with open(
+                 f"{output_dir}/{file_name}", "w" if overwrite else "x", encoding="utf8"
+             ) as f:
+                 f.write("# Generated by utils.py\n")
+                 yaml.dump(
+                     {
+                         "include": "pawsx_template_yaml",
+                         "dataset_name": lang,
+                         "task": f"paws_{lang}",
+                         "doc_to_text": "",
+                         "doc_to_choice": f"{{{{["
+                         f"""sentence1+\", {QUESTION_WORD}? {YES}, \"+sentence2,"""
+                         f""" sentence1+\", {QUESTION_WORD}? {NO}, \"+sentence2"""
+                         f"]}}}}",
+                     },
+                     f,
+                     allow_unicode=True,
+                 )
+         except FileExistsError:
+             err.append(file_name)
+
+     if len(err) > 0:
+         raise FileExistsError(
+             "Files were not created because they already exist (use --overwrite flag):"
+             f" {', '.join(err)}"
+         )
+
+
+ def main() -> None:
+     """Parse CLI args and generate language-specific yaml files."""
+     parser = argparse.ArgumentParser()
+     parser.add_argument(
+         "--overwrite",
+         default=False,
+         action="store_true",
+         help="Overwrite files if they already exist",
+     )
+     parser.add_argument(
+         "--output-dir", default=".", help="Directory to write yaml files to"
+     )
+     args = parser.parse_args()
+
+     gen_lang_yamls(output_dir=args.output_dir, overwrite=args.overwrite)
+
+
+ if __name__ == "__main__":
+     main()
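The script above is driven by its two argparse flags. A hypothetical invocation, assuming it is run from inside `lm_eval/tasks/paws-x/`, is sketched below; the `paws_de.yaml` and `paws_ko.yaml` files that follow are examples of its output:

```python
# Equivalent to: python _generate_config.py --overwrite --output-dir .
# Module and function names are taken from the script above; running it from
# another directory would need the output directory adjusted.
from _generate_config import gen_lang_yamls

gen_lang_yamls(output_dir=".", overwrite=True)
```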
lm-evaluation-harness/lm_eval/tasks/paws-x/paws_de.yaml ADDED
@@ -0,0 +1,7 @@
+ # Generated by utils.py
+ dataset_name: de
+ doc_to_choice: '{{[sentence1+", richtig? Ja, "+sentence2, sentence1+", richtig? Nein,
+   "+sentence2]}}'
+ doc_to_text: ''
+ include: pawsx_template_yaml
+ task: paws_de
lm-evaluation-harness/lm_eval/tasks/paws-x/paws_ko.yaml ADDED
@@ -0,0 +1,6 @@
+ # Generated by utils.py
+ dataset_name: ko
+ doc_to_choice: '{{[sentence1+", 맞죠? 예, "+sentence2, sentence1+", 맞죠? 아니요, "+sentence2]}}'
+ doc_to_text: ''
+ include: pawsx_template_yaml
+ task: paws_ko
lm-evaluation-harness/lm_eval/tasks/storycloze/README.md ADDED
@@ -0,0 +1,73 @@
+ # StoryCloze
+
+ ### Paper
+
+ Title: `Few-shot Learning with Multilingual Language Models`
+ Abstract: `https://arxiv.org/abs/2112.10668`
+
+ XStoryCloze consists of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) professionally translated into 10 non-English languages. This dataset is released by Meta AI.
+
+ Homepage: https://github.com/facebookresearch/fairseq/pull/4820
+
+
+ ### Citation
+
+ ```
+ @article{DBLP:journals/corr/abs-2112-10668,
+   author     = {Xi Victoria Lin and
+                 Todor Mihaylov and
+                 Mikel Artetxe and
+                 Tianlu Wang and
+                 Shuohui Chen and
+                 Daniel Simig and
+                 Myle Ott and
+                 Naman Goyal and
+                 Shruti Bhosale and
+                 Jingfei Du and
+                 Ramakanth Pasunuru and
+                 Sam Shleifer and
+                 Punit Singh Koura and
+                 Vishrav Chaudhary and
+                 Brian O'Horo and
+                 Jeff Wang and
+                 Luke Zettlemoyer and
+                 Zornitsa Kozareva and
+                 Mona T. Diab and
+                 Veselin Stoyanov and
+                 Xian Li},
+   title      = {Few-shot Learning with Multilingual Language Models},
+   journal    = {CoRR},
+   volume     = {abs/2112.10668},
+   year       = {2021},
+   url        = {https://arxiv.org/abs/2112.10668},
+   eprinttype = {arXiv},
+   eprint     = {2112.10668},
+   timestamp  = {Tue, 04 Jan 2022 15:59:27 +0100},
+   biburl     = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib},
+   bibsource  = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `storycloze`
+
+ #### Tasks
+
+ * `storycloze_2016`
+ * `storycloze_2018`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/storycloze/storycloze_2016.yaml ADDED
@@ -0,0 +1,18 @@
+ group: storycloze
+ task: storycloze_2016
+ dataset_path: story_cloze
+ dataset_name: 2016
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}"
+ doc_to_target: "{{answer_right_ending-1}}"
+ doc_to_choice: "{{[sentence_quiz1, sentence_quiz2]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation-harness/lm_eval/tasks/storycloze/storycloze_2018.yaml ADDED
@@ -0,0 +1,16 @@
+ group: storycloze
+ task: storycloze_2018
+ dataset_path: story_cloze
+ dataset_name: 2018
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}"
+ doc_to_target: "{{answer_right_ending-1}}"
+ doc_to_choice: "{{[sentence_quiz1, sentence_quiz2]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
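To make the Jinja templates in the two StoryCloze configs above concrete, here is a sketch of how one dataset row becomes a context, a pair of candidate endings, and a 0-indexed target. The field names follow the `story_cloze` schema referenced in the configs; the example row itself is invented for illustration:

```python
# Sketch of how the storycloze templates map one row to a multiple-choice item.
doc = {
    "input_sentence_1": "Ann bought a small plant.",
    "input_sentence_2": "She watered it every day.",
    "input_sentence_3": "One week she forgot about it.",
    "input_sentence_4": "The plant wilted.",
    "sentence_quiz1": "Ann felt bad and watered it again.",
    "sentence_quiz2": "The plant won first prize at the fair.",
    "answer_right_ending": 1,  # 1-indexed in the dataset
}

context = " ".join(doc[f"input_sentence_{i}"] for i in range(1, 5))  # doc_to_text
choices = [doc["sentence_quiz1"], doc["sentence_quiz2"]]             # doc_to_choice
target = doc["answer_right_ending"] - 1                              # doc_to_target (0-indexed)
print(context, "->", choices[target])
```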
lm-evaluation-harness/lm_eval/tasks/triviaqa/README.md ADDED
@@ -0,0 +1,51 @@
+ # Trivia QA
+
+ ### Paper
+
+ Title: `TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension`
+ Abstract: https://arxiv.org/abs/1705.03551
+
+ TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence
+ triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts
+ and independently gathered evidence documents, six per question on average, that provide
+ high-quality distant supervision for answering the questions.
+
+ Homepage: https://nlp.cs.washington.edu/triviaqa/
+
+
+ ### Citation
+
+ ```
+ @InProceedings{JoshiTriviaQA2017,
+   author    = {Joshi, Mandar and Choi, Eunsol and Weld, Daniel S. and Zettlemoyer, Luke},
+   title     = {TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension},
+   booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics},
+   month     = {July},
+   year      = {2017},
+   address   = {Vancouver, Canada},
+   publisher = {Association for Computational Linguistics},
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * Not part of a group yet.
+
+ #### Tasks
+
+ * `triviaqa`: `Generate an answer based on the question.`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/triviaqa/default.yaml ADDED
@@ -0,0 +1,31 @@
+ task: triviaqa
+ dataset_path: trivia_qa
+ dataset_name: rc.nocontext
+ output_type: generate_until
+ training_split: train
+ validation_split: validation
+ doc_to_text: "Question: {{question}}?\nAnswer:"
+ doc_to_target: "{{answer.aliases}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: question
+ generation_kwargs:
+   until:
+     - "\n"
+     - "."
+     - ","
+   do_sample: false
+   temperature: 0.0
+ filter_list:
+   - name: remove_whitespace
+     filter:
+       - function: remove_whitespace
+       - function: take_first
+ target_delimiter: " "
+ metric_list:
+   - metric: exact_match
+     aggregation: mean
+     higher_is_better: true
+     ignore_case: true
+     ignore_punctuation: true
+ metadata:
+   version: 3.0
lm-evaluation-harness/lm_eval/tasks/xnli/README.md ADDED
@@ -0,0 +1,78 @@
+ # XNLI
+
+ ### Paper
+
+ Title: `XNLI: Evaluating Cross-lingual Sentence Representations`
+
+ Abstract: https://arxiv.org/abs/1809.05053
+
+ Based on the implementation of @yongzx (see https://github.com/EleutherAI/lm-evaluation-harness/pull/258)
+
+ Prompt format (same as XGLM and mGPT):
+
+ sentence1 + ", right? " + mask = (Yes|Also|No) + ", " + sentence2
+
+ The prediction is the full sequence with the highest likelihood.
+
+ Language-specific prompts are translated word-by-word with Google Translate
+ and may differ from the ones used by mGPT and XGLM (they do not provide their prompts).
+
+ Homepage: https://github.com/facebookresearch/XNLI
+
+
+ ### Citation
+
+ ```
+ @InProceedings{conneau2018xnli,
+   author    = "Conneau, Alexis
+                and Rinott, Ruty
+                and Lample, Guillaume
+                and Williams, Adina
+                and Bowman, Samuel R.
+                and Schwenk, Holger
+                and Stoyanov, Veselin",
+   title     = "XNLI: Evaluating Cross-lingual Sentence Representations",
+   booktitle = "Proceedings of the 2018 Conference on Empirical Methods
+                in Natural Language Processing",
+   year      = "2018",
+   publisher = "Association for Computational Linguistics",
+   location  = "Brussels, Belgium",
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `xnli`
+
+ #### Tasks
+
+ * `xnli_ar`: Arabic
+ * `xnli_bg`: Bulgarian
+ * `xnli_de`: German
+ * `xnli_el`: Greek
+ * `xnli_en`: English
+ * `xnli_es`: Spanish
+ * `xnli_fr`: French
+ * `xnli_hi`: Hindi
+ * `xnli_ru`: Russian
+ * `xnli_sw`: Swahili
+ * `xnli_th`: Thai
+ * `xnli_tr`: Turkish
+ * `xnli_ur`: Urdu
+ * `xnli_vi`: Vietnamese
+ * `xnli_zh`: Chinese
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?