Add files using upload-large-folder tool
(This view is limited to 50 files because the commit contains too many changes.)
- lm-evaluation-harness/lm_eval/tasks/belebele/README.md +49 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_als_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ceb_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_est_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_grn_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_guj_Gujr.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hin_Deva.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hin_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hrv_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hye_Armn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ind_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_isl_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_kaz_Cyrl.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_khm_Khmr.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_lao_Laoo.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_lug_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_mal_Mlym.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_mkd_Cyrl.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_npi_Deva.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ory_Orya.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_pan_Guru.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_pol_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_sin_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_slv_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_snd_Arab.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_sot_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_swe_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_swh_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tam_Taml.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tgl_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tir_Ethi.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tso_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_war_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_wol_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_xho_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/belebele/belebele_zsm_Latn.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/coqa/README.md +43 -0
- lm-evaluation-harness/lm_eval/tasks/coqa/default.yaml +24 -0
- lm-evaluation-harness/lm_eval/tasks/coqa/utils.py +77 -0
- lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md +59 -0
- lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml +34 -0
- lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-zeroshot.yaml +44 -0
- lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot.yaml +51 -0
- lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k.yaml +45 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/README.md +94 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/cot_yaml +36 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_en.yaml +12 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_es.yaml +12 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_fr.yaml +12 -0
- lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_ru.yaml +12 -0
lm-evaluation-harness/lm_eval/tasks/belebele/README.md
ADDED
@@ -0,0 +1,49 @@
# Belebele

### Paper

The Belebele Benchmark for Massively Multilingual NLU Evaluation
https://arxiv.org/abs/2308.16884

Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-200 dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.

Homepage: https://github.com/facebookresearch/belebele

### Citation

```bibtex
@misc{bandarkar2023belebele,
      title={The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants},
      author={Lucas Bandarkar and Davis Liang and Benjamin Muller and Mikel Artetxe and Satya Narayan Shukla and Donald Husa and Naman Goyal and Abhinandan Krishnan and Luke Zettlemoyer and Madian Khabsa},
      year={2023},
      eprint={2308.16884},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

- `belebele`: All 122 languages of the Belebele dataset, evaluated following the methodology in MMLU's original implementation.

#### Tasks

The following tasks evaluate languages in the Belebele dataset using loglikelihood-based multiple-choice scoring:
- `belebele_{language}`

The variant evaluated here is the 0-shot or few-shot evaluation with English instructions.

### Checklist

* [x] Is the task an existing benchmark in the literature?
* [x] Have you referenced the original paper that introduced the task?
* [x] If yes, does the original paper provide a reference implementation?
* [ ] Yes, original implementation contributed by author of the benchmark

If other tasks on this dataset are already supported:
* [x] Is the "Main" variant of this task clearly denoted?
* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
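As a usage illustration (not part of the committed files): the sketch below shows one way to run a single Belebele language task through the harness's Python entry point. It assumes `lm_eval.simple_evaluate` behaves as described in the lm-evaluation-harness README; the model id is a placeholder.

```python
# Hypothetical usage sketch: evaluate one Belebele language task 5-shot.
# Assumes lm_eval is installed and exposes simple_evaluate() as in its README;
# "EleutherAI/pythia-1b" is only a placeholder model id.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-1b",
    tasks=["belebele_als_Latn"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"]["belebele_als_Latn"])  # per-language accuracy metrics
```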
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_als_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "als_Latn"
"include": "_default_template_yaml"
"task": "belebele_als_Latn"
"test_split": "als_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ceb_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "ceb_Latn"
"include": "_default_template_yaml"
"task": "belebele_ceb_Latn"
"test_split": "ceb_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_est_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "est_Latn"
"include": "_default_template_yaml"
"task": "belebele_est_Latn"
"test_split": "est_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_grn_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "grn_Latn"
"include": "_default_template_yaml"
"task": "belebele_grn_Latn"
"test_split": "grn_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_guj_Gujr.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "guj_Gujr"
"include": "_default_template_yaml"
"task": "belebele_guj_Gujr"
"test_split": "guj_Gujr"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hin_Deva.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "hin_Deva"
"include": "_default_template_yaml"
"task": "belebele_hin_Deva"
"test_split": "hin_Deva"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hin_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "hin_Latn"
"include": "_default_template_yaml"
"task": "belebele_hin_Latn"
"test_split": "hin_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hrv_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "hrv_Latn"
"include": "_default_template_yaml"
"task": "belebele_hrv_Latn"
"test_split": "hrv_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_hye_Armn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "hye_Armn"
"include": "_default_template_yaml"
"task": "belebele_hye_Armn"
"test_split": "hye_Armn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ind_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "ind_Latn"
"include": "_default_template_yaml"
"task": "belebele_ind_Latn"
"test_split": "ind_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_isl_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "isl_Latn"
"include": "_default_template_yaml"
"task": "belebele_isl_Latn"
"test_split": "isl_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_kaz_Cyrl.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "kaz_Cyrl"
"include": "_default_template_yaml"
"task": "belebele_kaz_Cyrl"
"test_split": "kaz_Cyrl"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_khm_Khmr.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "khm_Khmr"
"include": "_default_template_yaml"
"task": "belebele_khm_Khmr"
"test_split": "khm_Khmr"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_lao_Laoo.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "lao_Laoo"
"include": "_default_template_yaml"
"task": "belebele_lao_Laoo"
"test_split": "lao_Laoo"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_lug_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "lug_Latn"
"include": "_default_template_yaml"
"task": "belebele_lug_Latn"
"test_split": "lug_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_mal_Mlym.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "mal_Mlym"
"include": "_default_template_yaml"
"task": "belebele_mal_Mlym"
"test_split": "mal_Mlym"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_mkd_Cyrl.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "mkd_Cyrl"
"include": "_default_template_yaml"
"task": "belebele_mkd_Cyrl"
"test_split": "mkd_Cyrl"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_npi_Deva.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "npi_Deva"
"include": "_default_template_yaml"
"task": "belebele_npi_Deva"
"test_split": "npi_Deva"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_ory_Orya.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "ory_Orya"
"include": "_default_template_yaml"
"task": "belebele_ory_Orya"
"test_split": "ory_Orya"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_pan_Guru.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "pan_Guru"
"include": "_default_template_yaml"
"task": "belebele_pan_Guru"
"test_split": "pan_Guru"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_pol_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "pol_Latn"
"include": "_default_template_yaml"
"task": "belebele_pol_Latn"
"test_split": "pol_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_sin_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "sin_Latn"
"include": "_default_template_yaml"
"task": "belebele_sin_Latn"
"test_split": "sin_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_slv_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "slv_Latn"
"include": "_default_template_yaml"
"task": "belebele_slv_Latn"
"test_split": "slv_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_snd_Arab.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "snd_Arab"
"include": "_default_template_yaml"
"task": "belebele_snd_Arab"
"test_split": "snd_Arab"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_sot_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "sot_Latn"
"include": "_default_template_yaml"
"task": "belebele_sot_Latn"
"test_split": "sot_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_swe_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "swe_Latn"
"include": "_default_template_yaml"
"task": "belebele_swe_Latn"
"test_split": "swe_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_swh_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "swh_Latn"
"include": "_default_template_yaml"
"task": "belebele_swh_Latn"
"test_split": "swh_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tam_Taml.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "tam_Taml"
"include": "_default_template_yaml"
"task": "belebele_tam_Taml"
"test_split": "tam_Taml"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tgl_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "tgl_Latn"
"include": "_default_template_yaml"
"task": "belebele_tgl_Latn"
"test_split": "tgl_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tir_Ethi.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "tir_Ethi"
"include": "_default_template_yaml"
"task": "belebele_tir_Ethi"
"test_split": "tir_Ethi"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_tso_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "tso_Latn"
"include": "_default_template_yaml"
"task": "belebele_tso_Latn"
"test_split": "tso_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_war_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "war_Latn"
"include": "_default_template_yaml"
"task": "belebele_war_Latn"
"test_split": "war_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_wol_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "wol_Latn"
"include": "_default_template_yaml"
"task": "belebele_wol_Latn"
"test_split": "wol_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_xho_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "xho_Latn"
"include": "_default_template_yaml"
"task": "belebele_xho_Latn"
"test_split": "xho_Latn"
lm-evaluation-harness/lm_eval/tasks/belebele/belebele_zsm_Latn.yaml
ADDED
@@ -0,0 +1,4 @@
"fewshot_split": "zsm_Latn"
"include": "_default_template_yaml"
"task": "belebele_zsm_Latn"
"test_split": "zsm_Latn"
lm-evaluation-harness/lm_eval/tasks/coqa/README.md
ADDED
@@ -0,0 +1,43 @@
# CoQA

### Paper

Title: `CoQA: A Conversational Question Answering Challenge`

Abstract: https://arxiv.org/pdf/1808.07042.pdf

CoQA is a large-scale dataset for building Conversational Question Answering
systems. The goal of the CoQA challenge is to measure the ability of machines to
understand a text passage and answer a series of interconnected questions that
appear in a conversation.

Homepage: https://stanfordnlp.github.io/coqa/

### Citation

```
BibTeX-formatted citation goes here
```

### Groups and Tasks

#### Groups

* Not part of a group yet

#### Tasks

* `coqa`

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/coqa/default.yaml
ADDED
@@ -0,0 +1,24 @@
task: coqa
dataset_path: EleutherAI/coqa
output_type: generate_until
training_split: train
validation_split: validation
doc_to_text: !function utils.doc_to_text
doc_to_target: !function utils.doc_to_target
process_results: !function utils.process_results
should_decontaminate: true
doc_to_decontamination_query: "{{story}} {{question.input_text|join('\n')}}"
generation_kwargs:
  until:
    - "\nQ:"
metric_list:
  - metric: em
    aggregation: mean
    higher_is_better: true
  - metric: f1
    aggregation: mean
    higher_is_better: true
metadata:
  version: 3.0
dataset_kwargs:
  trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/coqa/utils.py
ADDED
@@ -0,0 +1,77 @@
from itertools import zip_longest

import transformers.data.metrics.squad_metrics as squad_metrics


def doc_to_text(doc):
    # Given a passage p, the conversation history {q1, a1, ..., q_{i-1}, a_{i-1}}
    # and a question q_i, the task is to predict the answer a_i.
    doc_text = doc["story"] + "\n\n"
    for q, a in zip_longest(
        doc["questions"]["input_text"], doc["answers"]["input_text"][:-1]
    ):  # omit target answer a_i
        question = f"Q: {q}\n\n"
        answer = f"A: {a}\n\n" if a is not None else "A:"
        doc_text += question + answer
    return doc_text


def doc_to_target(doc):
    turn_id = len(doc["questions"]["input_text"])
    # Return the unique answer and any valid alternatives
    # (some questions in CoQA have multiple valid answers).
    answers = []
    answer_for_turn = doc["answers"]["input_text"][turn_id - 1]
    answers.append(answer_for_turn)

    additional_answers = doc.get("additional_answers")
    if additional_answers:
        for key in additional_answers:
            additional_answer_for_turn = additional_answers[key]["input_text"][
                turn_id - 1
            ]
            if additional_answer_for_turn.lower() not in map(str.lower, answers):
                answers.append(additional_answer_for_turn)
    return answers


def em(gold_list, pred):
    # Exact match on the normalised answer (squad_metrics.compute_exact).
    em_sum = 0.0
    if len(gold_list) > 1:
        for i in range(len(gold_list)):
            gold_answers = gold_list[0:i] + gold_list[i + 1 :]
            # The prediction is compared against the remaining golds and the maximum is taken.
            em_sum += max(squad_metrics.compute_exact(a, pred) for a in gold_answers)
    else:
        em_sum += max(squad_metrics.compute_exact(a, pred) for a in gold_list)

    return em_sum / max(1, len(gold_list))


def compute_scores(gold_list, pred):
    # Exact match on the normalised answer (compute_exact)
    # and token-overlap F1 (compute_f1).
    f1_sum = 0.0
    em_sum = 0.0
    if len(gold_list) > 1:
        for i in range(len(gold_list)):
            gold_answers = gold_list[0:i] + gold_list[i + 1 :]
            # The prediction is compared against the remaining golds and the maximum is taken.
            em_sum += max(squad_metrics.compute_exact(a, pred) for a in gold_answers)
            f1_sum += max(squad_metrics.compute_f1(a, pred) for a in gold_answers)
    else:
        em_sum += max(squad_metrics.compute_exact(a, pred) for a in gold_list)
        f1_sum += max(squad_metrics.compute_f1(a, pred) for a in gold_list)

    return {
        "em": em_sum / max(1, len(gold_list)),
        "f1": f1_sum / max(1, len(gold_list)),
    }


def process_results(doc, results):
    gold_list = doc_to_target(doc)
    pred = results[0].strip().split("\n")[0]

    scores = compute_scores(gold_list, pred)
    return scores
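As an illustration (not part of the committed file): a made-up single-turn CoQA-style record run through the helpers above, assuming the module is importable as `utils` (for example, when run from the same directory).

```python
# Demonstration only: a toy CoQA-style record invented here, not real dataset data.
from utils import doc_to_text, doc_to_target, process_results  # the module defined above

toy_doc = {
    "story": "Anna has a red bike. She rides it to school every day.",
    "questions": {"input_text": ["What color is Anna's bike?"]},
    "answers": {"input_text": ["red"]},
    "additional_answers": None,
}

print(doc_to_text(toy_doc))    # story, then "Q: ...\n\nA:" awaiting the model's answer
print(doc_to_target(toy_doc))  # ["red"]
# The raw completion is truncated at the first newline before scoring:
print(process_results(toy_doc, ["red\nQ: something else"]))  # {'em': 1.0, 'f1': 1.0}
```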
lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md
ADDED
@@ -0,0 +1,59 @@
# GSM8k

## Paper
Training Verifiers to Solve Math Word Problems
https://arxiv.org/abs/2110.14168

State-of-the-art language models can match human performance on many tasks, but
they still struggle to robustly perform multi-step mathematical reasoning. To
diagnose the failures of current models and support research, we introduce GSM8K,
a dataset of 8.5K high quality linguistically diverse grade school math word problems.
We find that even the largest transformer models fail to achieve high test performance,
despite the conceptual simplicity of this problem distribution.

NOTE: See the official implementation of the task:
https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py
for how to make use of the dataset's calculator annotations in your language
model's sample/generation function.

Homepage: https://github.com/openai/grade-school-math


## Citation
```
@misc{cobbe2021training,
      title={Training Verifiers to Solve Math Word Problems},
      author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
      year={2021},
      eprint={2110.14168},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
```

### Groups and Tasks

#### Groups

- `math_word_problems`
- `chain_of_thought`
- `self_consistency`

#### Tasks

- `gsm8k_yaml`
- `gsm8k_cot`: GSM8K with Chain-of-Thought
- `gsm8k_cot_self_consistency`: GSM8K with Chain-of-Thought and Self-Consistency

### Checklist

- [x] Is in Eval-harness v1.0?
- [ ] Has been checked for regression from v1.0?
- [ ] Has been checked for equivalence with original paper methodology?
- [ ] "Main" checked variant clearly denoted?

### Variant Wishlist

- [ ] Variant with Calculator (see https://github.com/openai/grade-school-math/blob/master/grade_school_math/calculator.py for example implementation)
- [ ] Using Verifiers
- [ ] Majority voting "without CoT"
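As an aside (not part of the committed file): GSM8K reference solutions embed calculator annotations of the form `<<48/2=24>>`. The sketch below is an assumed illustration of stripping those annotations, not the official calculator.py linked above, which instead evaluates the annotated expression during generation.

```python
import re

# Minimal sketch (assumption for illustration; not the official calculator.py):
# remove "<<expr=result>>" calculator annotations from a GSM8K solution string,
# e.g. before showing solutions as few-shot exemplars.
def strip_calculator_annotations(solution: str) -> str:
    return re.sub(r"<<[^<>]*>>", "", solution)

print(strip_calculator_annotations("She sold 48/2 = <<48/2=24>>24 clips in May."))
# -> "She sold 48/2 = 24 clips in May."
```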
lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml
ADDED
@@ -0,0 +1,34 @@
include: gsm8k-cot.yaml
group:
  - chain_of_thought
  - self_consistency
task: gsm8k_cot_self_consistency
generation_kwargs:
  until:
    - "Q:"
    - "\n\n"
  do_sample: true
  temperature: 0.2
repeats: 64
filter_list:
  - name: "score-first" # pick only the first response, and report metrics on that
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "take_first"
  - name: "maj@64"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
  - name: "maj@8" # get maj@8 by selecting the first 8 responses. Using a better estimator would be optimal.
    filter:
      - function: "take_first_k"
        k: 8
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
metadata:
  version: 2.0
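For intuition (not part of the committed file), here is a standalone Python sketch of what the `take_first_k` → `regex` → `majority_vote` → `take_first` filter chains above amount to; it is an assumed illustration, not the harness's own filter implementation.

```python
import re
from collections import Counter

# Assumed illustration of the maj@k filter chains above (not lm_eval's filter classes).
ANSWER_RE = re.compile(r"The answer is (\-?[0-9\.\,]*[0-9]+)")

def extract_answer(completion):
    # "regex" step: pull the final numeric answer out of one sampled completion.
    match = ANSWER_RE.search(completion)
    return match.group(1) if match else None

def maj_at_k(completions, k):
    # "take_first_k" -> "regex" -> "majority_vote" -> "take_first"
    answers = [a for a in (extract_answer(c) for c in completions[:k]) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None

samples = ["... The answer is 42.", "... The answer is 41.", "... The answer is 42."]
print(maj_at_k(samples, k=3))  # "42"
```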
lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot-zeroshot.yaml
ADDED
@@ -0,0 +1,44 @@
group:
  - math_word_problems
task: gsm8k_cot_zeroshot
dataset_path: gsm8k
dataset_name: main
output_type: generate_until
training_split: train
fewshot_split: train
test_split: test
doc_to_text: "Q: {{question}}\nA: Let's think step by step."
doc_to_target: "{{answer}}" #" {{answer.split('### ')[-1].rstrip()}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: false
    regexes_to_ignore:
      - ","
      - "\\$"
      - "(?s).*#### "
      - "\\.$"
generation_kwargs:
  until:
    - "Q:"
    - "</s>"
    - "<|im_end|>"
  do_sample: false
repeats: 1
num_fewshot: 0
filter_list:
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)."
      - function: "take_first"
  - name: "flexible-extract"
    filter:
      - function: "regex"
        group_select: -1
        regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
      - function: "take_first"
metadata:
  version: 3.0
lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k-cot.yaml
ADDED
@@ -0,0 +1,51 @@
group:
  - chain_of_thought
task: gsm8k_cot
dataset_path: gsm8k
dataset_name: main
output_type: generate_until
test_split: test
doc_to_text: "Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today?\nA: There are 15 trees originally. Then there were 21 trees after some more were planted. So there must have been 21 - 15 = 6. The answer is 6.\n\n\
  Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot?\nA: There are originally 3 cars. 2 more cars arrive. 3 + 2 = 5. The answer is 5.\n\n\
  Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total?\nA: Originally, Leah had 32 chocolates. Her sister had 42. So in total they had 32 + 42 = 74. After eating 35, they had 74 - 35 = 39. The answer is 39.\n\n\
  Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?\nA: Jason started with 20 lollipops. Then he had 12 after giving some to Denny. So he gave Denny 20 - 12 = 8. The answer is 8.\n\n\
  Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now?\nA: Shawn started with 5 toys. If he got 2 toys each from his mom and dad, then that is 4 more toys. 5 + 4 = 9. The answer is 9.\n\n\
  Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room?\nA: There were originally 9 computers. For each of 4 days, 5 more computers were added. So 5 * 4 = 20 computers were added. 9 + 20 is 29. The answer is 29.\n\n\
  Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday?\nA: Michael started with 58 golf balls. After losing 23 on tuesday, he had 58 - 23 = 35. After losing 2 more, he had 35 - 2 = 33 golf balls. The answer is 33.\n\n\
  Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left?\nA: Olivia had 23 dollars. 5 bagels for 3 dollars each will be 5 x 3 = 15 dollars. So she has 23 - 15 dollars left. 23 - 15 is 8. The answer is 8.\n\n\
  Q: {{question}}\nA:"
doc_to_target: "{{answer.split('####')[-1].strip()}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: false
    regexes_to_ignore:
      - ","
      - "\\$"
      - "(?s).*#### "
      - "\\.$"
generation_kwargs:
  until:
    - "Q:"
    - "</s>"
    - "<|im_end|>"
  do_sample: false
repeats: 1
num_fewshot: 0
filter_list:
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)."
      - function: "take_first"
  - name: "flexible-extract"
    filter:
      - function: "regex"
        group_select: -1
        regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
      - function: "take_first"
metadata:
  version: 3.0
  num_fewshot: 8
lm-evaluation-harness/lm_eval/tasks/gsm8k/gsm8k.yaml
ADDED
@@ -0,0 +1,45 @@
group:
  - math_word_problems
task: gsm8k
dataset_path: gsm8k
dataset_name: main
output_type: generate_until
training_split: train
fewshot_split: train
test_split: test
doc_to_text: "Question: {{question}}\nAnswer:"
doc_to_target: "{{answer}}" #" {{answer.split('### ')[-1].rstrip()}}"
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: false
    regexes_to_ignore:
      - ","
      - "\\$"
      - "(?s).*#### "
      - "\\.$"
generation_kwargs:
  until:
    - "Question:"
    - "</s>"
    - "<|im_end|>"
  do_sample: false
  temperature: 0.0
repeats: 1
num_fewshot: 5
filter_list:
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "#### (\\-?[0-9\\.\\,]+)"
      - function: "take_first"
  - name: "flexible-extract"
    filter:
      - function: "regex"
        group_select: -1
        regex_pattern: "(-?[$0-9.,]{2,})|(-?[0-9]+)"
      - function: "take_first"
metadata:
  version: 3.0
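For intuition (not part of the committed file): a small Python sketch, on a made-up model completion in the GSM8K few-shot style, of what the two answer-extraction filters above do; it is illustrative only, not the harness's filter implementation.

```python
import re

# Made-up model completion (GSM8K completions are trained to end with "#### <number>").
completion = "Natalia sold 48 clips in April and 48 / 2 = 24 in May. 48 + 24 = 72. #### 72"

# "strict-match": only accept the canonical "#### N" format.
strict = re.search(r"#### (\-?[0-9\.\,]+)", completion)
print(strict.group(1) if strict else "[invalid]")          # "72"

# "flexible-extract" (group_select: -1): fall back to the last number in the text.
matches = re.findall(r"(-?[$0-9.,]{2,})|(-?[0-9]+)", completion)
print(matches[-1] if matches else "[invalid]")             # ('72', '') -> last numeric hit
```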
lm-evaluation-harness/lm_eval/tasks/mgsm/README.md
ADDED
@@ -0,0 +1,94 @@
# MGSM

### Paper

Title: `Language Models are Multilingual Chain-of-Thought Reasoners`

Abstract: https://arxiv.org/abs/2210.03057

Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://arxiv.org/abs/2210.03057).

The same 250 problems from [GSM8K](https://arxiv.org/abs/2110.14168) are each translated by human annotators into 10 languages. The 10 languages are:
- Spanish
- French
- German
- Russian
- Chinese
- Japanese
- Thai
- Swahili
- Bengali
- Telugu

GSM8K (Grade School Math 8K) is a dataset of 8.5K high quality linguistically diverse grade school math word problems. The dataset was created to support the task of question answering on basic mathematical problems that require multi-step reasoning.

You can find the inputs and targets for each of the ten languages (and English) as `.tsv` files.
Few-shot exemplars, also manually translated for each language, are included in `exemplars.py`.

Homepage: https://github.com/google-research/url-nlp/tree/main/mgsm


### Citation

```
@misc{cobbe2021training,
      title={Training Verifiers to Solve Math Word Problems},
      author={Karl Cobbe and Vineet Kosaraju and Mohammad Bavarian and Jacob Hilton and Reiichiro Nakano and Christopher Hesse and John Schulman},
      year={2021},
      eprint={2110.14168},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}
@misc{shi2022language,
      title={Language Models are Multilingual Chain-of-Thought Reasoners},
      author={Freda Shi and Mirac Suzgun and Markus Freitag and Xuezhi Wang and Suraj Srivats and Soroush Vosoughi and Hyung Won Chung and Yi Tay and Sebastian Ruder and Denny Zhou and Dipanjan Das and Jason Wei},
      year={2022},
      eprint={2210.03057},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Groups and Tasks

#### Groups

* `mgsm_direct`: Direct question
  * `mgsm_direct_bn`: Bengali
  * `mgsm_direct_de`: German
  * `mgsm_direct_en`: English
  * `mgsm_direct_es`: Spanish
  * `mgsm_direct_fr`: French
  * `mgsm_direct_ja`: Japanese
  * `mgsm_direct_ru`: Russian
  * `mgsm_direct_sw`: Swahili
  * `mgsm_direct_te`: Telugu
  * `mgsm_direct_th`: Thai
  * `mgsm_direct_zh`: Chinese
* `mgsm_cot_native`: Question with answer, followed by a CoT prompt in the same language as the dataset.
  * `mgsm_cot_native_bn`: Bengali
  * `mgsm_cot_native_de`: German
  * `mgsm_cot_native_en`: English
  * `mgsm_cot_native_es`: Spanish
  * `mgsm_cot_native_fr`: French
  * `mgsm_cot_native_ja`: Japanese
  * `mgsm_cot_native_ru`: Russian
  * `mgsm_cot_native_sw`: Swahili
  * `mgsm_cot_native_te`: Telugu
  * `mgsm_cot_native_th`: Thai
  * `mgsm_cot_native_zh`: Chinese

Exemplar samples: https://github.com/google-research/url-nlp/blob/main/mgsm/exemplars.py

### Checklist

For adding novel benchmarks/datasets to the library:
* [ ] Is the task an existing benchmark in the literature?
* [ ] Have you referenced the original paper that introduced the task?
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?

If other tasks on this dataset are already supported:
* [ ] Is the "Main" variant of this task clearly denoted?
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/cot_yaml
ADDED
@@ -0,0 +1,36 @@
# This file will be included in the generated language-specific task configs.
# It doesn't have a yaml file extension as it is not meant to be imported directly
# by the harness.
group: mgsm_cot_native
dataset_path: juletxara/mgsm
dataset_name: null # Overridden by language-specific config.
output_type: generate_until
training_split: train
test_split: test
generation_kwargs:
  until:
    - "\n\n"
    - "\n"
  do_sample: false
  temperature: 0.0
target_delimiter: " "
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: true
filter_list:
  - name: "strict-match"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]+)"
      - function: "take_first"
  - name: flexible-extract
    filter:
      - function: regex
        group_select: -1
        regex_pattern: (-?[$0-9.,]{2,})|(-?[0-9]+)
      - function: take_first
metadata:
  version: 2.0
lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_en.yaml
ADDED
@@ -0,0 +1,12 @@
# Generated by utils.py
dataset_name: en
doc_to_target: '{% if answer is not none %}{{answer[21:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
generation_kwargs:
  do_sample: false
  until:
    - 'Question:'
    - </s>
    - <|im_end|>
include: cot_yaml
task: mgsm_en_cot_en
lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_es.yaml
ADDED
@@ -0,0 +1,12 @@
# Generated by utils.py
dataset_name: es
doc_to_target: '{% if answer is not none %}{{answer[23:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Pregunta: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
generation_kwargs:
  do_sample: false
  until:
    - 'Pregunta:'
    - </s>
    - <|im_end|>
include: cot_yaml
task: mgsm_en_cot_es
lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_fr.yaml
ADDED
@@ -0,0 +1,12 @@
# Generated by utils.py
dataset_name: fr
doc_to_target: '{% if answer is not none %}{{answer[26:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Question : "+question+"\nStep-by-Step Answer:"}}{% endif %}'
generation_kwargs:
  do_sample: false
  until:
    - 'Question :'
    - </s>
    - <|im_end|>
include: cot_yaml
task: mgsm_en_cot_fr
lm-evaluation-harness/lm_eval/tasks/mgsm/en_cot/mgsm_en_cot_ru.yaml
ADDED
@@ -0,0 +1,12 @@
# Generated by utils.py
dataset_name: ru
doc_to_target: '{% if answer is not none %}{{answer[18:]}}{% else %}{{answer_number|string}}{% endif %}'
doc_to_text: '{% if answer is not none %}{{question+"\nStep-by-Step Answer:"}}{% else %}{{"Задача: "+question+"\nStep-by-Step Answer:"}}{% endif %}'
generation_kwargs:
  do_sample: false
  until:
    - 'Задача:'
    - </s>
    - <|im_end|>
include: cot_yaml
task: mgsm_en_cot_ru