diff --git a/lm-evaluation/lm_eval/tasks/aexams/README.md b/lm-evaluation/lm_eval/tasks/aexams/README.md new file mode 100644 index 0000000000000000000000000000000000000000..799c6c1ff90b9b38d71c92a30e787b073e139073 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/aexams/README.md @@ -0,0 +1,53 @@ +# Arabic EXAMS + +### Paper + +EXAMS is a resource specialized in multilingual high school exam questions. +Original paper: [EXAMS](https://aclanthology.org/2020.emnlp-main.438/) + +The Arabic EXAMS dataset includes five subjects: + + - Islamic studies + - Biology + - Physics + - Science + - Social + +Original dataset: [EXAMS-QA](https://github.com/mhardalov/exams-qa) + +EXAMS is a benchmark dataset for cross-lingual and multilingual question answering for high school examinations. +It provides 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. +EXAMS offers a unique fine-grained evaluation framework across multiple languages and subjects. + +Homepage for Arabic EXAMS: [EXAMS Arabic Homepage](https://github.com/FreedomIntelligence/AceGPT/tree/main/eval/benchmark_eval/benchmarks/EXAMS_Arabic) + +### Citation + + +### Groups and Tasks + +#### Groups + +- `EXAMS Arabic`: includes the Islamic Studies, Biology, Science, Physics, and Social subjects. + +#### Tasks + + +The following tasks evaluate subjects in the Arabic EXAMS dataset using loglikelihood-based multiple-choice scoring: +- `aexams_IslamicStudies` +- `aexams_Biology` +- `aexams_Science` +- `aexams_Physics` +- `aexams_Social` + +### Checklist + +* [x] Is the task an existing benchmark in the literature? + * [x] Have you referenced the original paper that introduced the task? + * [x] If yes, does the original paper provide a reference implementation? + * [x] Yes, original implementation contributed by author of the benchmark + +If other tasks on this dataset are already supported: +* [x] Is the "Main" variant of this task clearly denoted? +* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
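As a usage illustration for the README above, here is a minimal sketch of running the five Arabic EXAMS tasks through lm-eval's Python entry point. It assumes lm-eval's `simple_evaluate` API (v0.4-style); the model id is only a placeholder.

```python
# Sketch only: evaluate the aexams tasks with lm-eval's Python API.
# Assumes lm-eval >= 0.4 (lm_eval.simple_evaluate); the model id is a placeholder.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",  # placeholder; substitute an Arabic-capable model
    tasks=[
        "aexams_IslamicStudies",
        "aexams_Biology",
        "aexams_Science",
        "aexams_Physics",
        "aexams_Social",
    ],
    num_fewshot=0,
)

# Each task reports accuracy from loglikelihood-based multiple-choice scoring.
for task_name, metrics in results["results"].items():
    print(task_name, metrics)
```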
diff --git a/lm-evaluation/lm_eval/tasks/aexams/aexams_Physics.yaml b/lm-evaluation/lm_eval/tasks/aexams/aexams_Physics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f2764a06ef2680a1c81ccca0e76dcbcf1ba52672 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/aexams/aexams_Physics.yaml @@ -0,0 +1,4 @@ +"dataset_name": "Physics" +"description": "قم بالإجابة على مايلي في مجال الفيزياء \n\n" +"include": "_default_template_yaml" +"task": "aexams_Physics" diff --git a/lm-evaluation/lm_eval/tasks/aexams/aexams_Social.yaml b/lm-evaluation/lm_eval/tasks/aexams/aexams_Social.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3042a419e6e3902ddd0090028fc4b875a148a213 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/aexams/aexams_Social.yaml @@ -0,0 +1,4 @@ +"dataset_name": "Social" +"description": "قم بالإجابة على مايلي في مجال العلوم الإجتماعية \n\n" +"include": "_default_template_yaml" +"task": "aexams_Social" diff --git a/lm-evaluation/lm_eval/tasks/eus_proficiency/README.md b/lm-evaluation/lm_eval/tasks/eus_proficiency/README.md new file mode 100644 index 0000000000000000000000000000000000000000..6671bda477e4533204c8ba154323e40d3df23f79 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/eus_proficiency/README.md @@ -0,0 +1,48 @@ +# EusProficiency + +### Paper + +Title: Latxa: An Open Language Model and Evaluation Suite for Basque + +Abstract: https://arxiv.org/abs/2403.20266 + +EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises from EGA exams through the years 1998 to 2008. Atarikoa is the first qualifying test of EGA, which measures different aspects of language competency, such as reading comprehension, grammar, vocabulary, spelling, and writing. Each test generally has 85 multiple-choice questions, with 4 choices and a single correct answer. + +Homepage: https://github.com/hitz-zentroa/latxa + + +### Citation + +``` +@misc{etxaniz2024latxa, + title={Latxa: An Open Language Model and Evaluation Suite for Basque}, + author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, + year={2024}, + eprint={2403.20266}, + archivePrefix={arXiv}, + primaryClass={cs.CL} +} +``` + +### Groups and Tasks + +#### Groups + +There are no groups. + +#### Tasks + +* `eus_proficiency`: EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? 
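Before the EusProficiency config itself, a small sketch of what its `doc_to_text` template produces for one row may help. The template string is copied from the YAML that follows; the example row and its field values are placeholders, not real dataset content.

```python
# Sketch: how the eus_proficiency prompt template in the YAML below renders
# one dataset row. The row here is a made-up placeholder; real rows come from
# the HiTZ/EusProficiency dataset and are in Basque.
from jinja2 import Template

DOC_TO_TEXT = (
    "Galdera: {{question}}\n"
    "A: {{candidates[0]}}\n"
    "B: {{candidates[1]}}\n"
    "C: {{candidates[2]}}\n"
    "D: {{candidates[3]}}\n"
    "Erantzuna:"
)

doc = {
    "question": "…",                     # placeholder question text
    "candidates": ["…", "…", "…", "…"],  # four answer options
}

print(Template(DOC_TO_TEXT).render(**doc))
# The harness then scores the continuations " A", " B", " C", " D"
# (doc_to_choice) by loglikelihood and takes the argmax as the prediction.
```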
diff --git a/lm-evaluation/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml b/lm-evaluation/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml new file mode 100644 index 0000000000000000000000000000000000000000..18cf5d2ab313a2ac907738185b5e39036402c7e2 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml @@ -0,0 +1,16 @@ +dataset_path: HiTZ/EusProficiency +dataset_name: default +task: eus_proficiency +doc_to_text: "Galdera: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nD: {{candidates[3]}}\nErantzuna:" +doc_to_choice: ["A", "B", "C", "D"] +validation_split: null +test_split: test +fewshot_split: test +output_type: multiple_choice +doc_to_target: answer +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_agricultural_sciences.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_agricultural_sciences.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5bf1fa4b56fdc58cd4219164cc90b11f50886bc1 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_agricultural_sciences.yaml @@ -0,0 +1,3 @@ +dataset_name: Agricultural-Sciences +include: _direct_kmmlu_yaml +task: kmmlu_direct_agricultural_sciences diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_chemical_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_chemical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..e5875bb7e8be076e5f7a1076b01b21bf308b5acd --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_chemical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Chemical-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_chemical_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..db4d78405a6079273f8042350fd4f785c9fe4bed --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml @@ -0,0 +1,3 @@ +dataset_name: Economics +include: _direct_kmmlu_yaml +task: kmmlu_direct_economics diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electronics_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electronics_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b45aa3083cb269c964b4beff2c48a9d1cfcc973c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electronics_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Electronics-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_electronics_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_energy_management.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_energy_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b4fb806b3808d2cb47ea68534030b9432e998b74 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_energy_management.yaml @@ -0,0 +1,3 @@ +dataset_name: Energy-Management +include: _direct_kmmlu_yaml +task: kmmlu_direct_energy_management diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_fashion.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_fashion.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..aef8043aa4605573b074b96b711b6f321d179f44 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_fashion.yaml @@ -0,0 +1,3 @@ +dataset_name: Fashion +include: _direct_kmmlu_yaml +task: kmmlu_direct_fashion diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c42e80eda1ad438d65d1d656671d5fb1542018da --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml @@ -0,0 +1,3 @@ +dataset_name: Information-Technology +include: _direct_kmmlu_yaml +task: kmmlu_direct_information_technology diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_interior_architecture_and_design.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_interior_architecture_and_design.yaml new file mode 100644 index 0000000000000000000000000000000000000000..842534aa0a4e87d6aa4bb43b0261b85b7e47676f --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_interior_architecture_and_design.yaml @@ -0,0 +1,3 @@ +dataset_name: Interior-Architecture-and-Design +include: _direct_kmmlu_yaml +task: kmmlu_direct_interior_architecture_and_design diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_management.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7352a1360b2a0cb32a85e88351cccfad62c142d3 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_management.yaml @@ -0,0 +1,3 @@ +dataset_name: Management +include: _direct_kmmlu_yaml +task: kmmlu_direct_management diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_materials_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_materials_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f04e0975a0700c13d9e816c5d37981d22d8f1b6c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_materials_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Materials-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_materials_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_mechanical_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_mechanical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a253535adb6c44a8fa8340b106539205cbe6c689 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_mechanical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Mechanical-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_mechanical_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_psychology.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_psychology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..140302d01f32ab5d0e55cfe01748659536a2262c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_psychology.yaml @@ -0,0 +1,3 @@ +dataset_name: Psychology +include: _direct_kmmlu_yaml +task: kmmlu_direct_psychology diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_public_safety.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_public_safety.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..5bb16a90d1f5303b919e8f348b3eb79a9f7cf296 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_public_safety.yaml @@ -0,0 +1,3 @@ +dataset_name: Public-Safety +include: _direct_kmmlu_yaml +task: kmmlu_direct_public_safety diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_refrigerating_machinery.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_refrigerating_machinery.yaml new file mode 100644 index 0000000000000000000000000000000000000000..44f9e428bbd8d8c7eb33617a6498d2856a6e1c1a --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_refrigerating_machinery.yaml @@ -0,0 +1,3 @@ +dataset_name: Refrigerating-Machinery +include: _direct_kmmlu_yaml +task: kmmlu_direct_refrigerating_machinery diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml new file mode 100644 index 0000000000000000000000000000000000000000..69e71d6dfa6284cc701221c5c187969be5e92832 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml @@ -0,0 +1,3 @@ +dataset_name: Taxation +include: _direct_kmmlu_yaml +task: kmmlu_direct_taxation diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_telecommunications_and_wireless_technology.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_telecommunications_and_wireless_technology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f4d1fd05c876bf269c0aae1f3590f8801f7e9955 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_telecommunications_and_wireless_technology.yaml @@ -0,0 +1,3 @@ +dataset_name: Telecommunications-and-Wireless-Technology +include: _direct_kmmlu_yaml +task: kmmlu_direct_telecommunications_and_wireless_technology diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_aviation_engineering_and_maintenance.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_aviation_engineering_and_maintenance.yaml new file mode 100644 index 0000000000000000000000000000000000000000..87b3845f28561d4be1a3437995ad08015ac1ae0c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_aviation_engineering_and_maintenance.yaml @@ -0,0 +1,3 @@ +dataset_name: aviation_engineering_and_maintenance +include: _hard_kmmlu_yaml +task: kmmlu_hard_aviation_engineering_and_maintenance diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_chemical_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_chemical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8fc448a81ab4d883e1e7fe6456d5371541356f1e --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_chemical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: chemical_engineering +include: _hard_kmmlu_yaml +task: kmmlu_hard_chemical_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_civil_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_civil_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ba1a15ad8cb268adc0aeaa96a06418d18209ecda --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_civil_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: civil_engineering +include: _hard_kmmlu_yaml +task: kmmlu_hard_civil_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_computer_science.yaml 
b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_computer_science.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4e1f12135248d2cdabf32771fcc4bcbb62de68f5 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_computer_science.yaml @@ -0,0 +1,3 @@ +dataset_name: computer_science +include: _hard_kmmlu_yaml +task: kmmlu_hard_computer_science diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_construction.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_construction.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8331379cf222bacb760e18388dd2c21c53a231da --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_construction.yaml @@ -0,0 +1,3 @@ +dataset_name: construction +include: _hard_kmmlu_yaml +task: kmmlu_hard_construction diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_economics.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_economics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4f1bfba0658e65f3485264af2f92eac3105d93dc --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_economics.yaml @@ -0,0 +1,3 @@ +dataset_name: economics +include: _hard_kmmlu_yaml +task: kmmlu_hard_economics diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_education.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_education.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f6a6a80780dfbaada0f21303e08935f89d2871f --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_education.yaml @@ -0,0 +1,3 @@ +dataset_name: education +include: _hard_kmmlu_yaml +task: kmmlu_hard_education diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electrical_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electrical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..51625c1ec372785ceea741d6aaff21c47316458d --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electrical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: electrical_engineering +include: _hard_kmmlu_yaml +task: kmmlu_hard_electrical_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electronics_engineering.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electronics_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..252ecc19d5e0bb91763e5efa5ea4edd083967ba8 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electronics_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: electronics_engineering +include: _hard_kmmlu_yaml +task: kmmlu_hard_electronics_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_energy_management.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_energy_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..062204f1dea6473a74eeae80db0ed1017b0ccbe2 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_energy_management.yaml @@ -0,0 +1,3 @@ +dataset_name: energy_management +include: _hard_kmmlu_yaml +task: kmmlu_hard_energy_management diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_environmental_science.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_environmental_science.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d7f32dc5b518796f78896eec6fdd2e1dbf3d2b83 --- /dev/null +++ 
b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_environmental_science.yaml @@ -0,0 +1,3 @@ +dataset_name: environmental_science +include: _hard_kmmlu_yaml +task: kmmlu_hard_environmental_science diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_fashion.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_fashion.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9448efcf8c4775eab3822be73635d80ba35d0c12 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_fashion.yaml @@ -0,0 +1,3 @@ +dataset_name: fashion +include: _hard_kmmlu_yaml +task: kmmlu_hard_fashion diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_health.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_health.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c5e2ba98addb3794fccfa9b58bfdd1bb869e1acc --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_health.yaml @@ -0,0 +1,3 @@ +dataset_name: health +include: _hard_kmmlu_yaml +task: kmmlu_hard_health diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_industrial_engineer.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_industrial_engineer.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d3cbef78bfe12d8ac674972b6ae9ebab0ce5ff67 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_industrial_engineer.yaml @@ -0,0 +1,3 @@ +dataset_name: industrial_engineer +include: _hard_kmmlu_yaml +task: kmmlu_hard_industrial_engineer diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_korean_history.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_korean_history.yaml new file mode 100644 index 0000000000000000000000000000000000000000..60ff94e7ff39c5d24bcc4be97d11c4ddcbd608a5 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_korean_history.yaml @@ -0,0 +1,3 @@ +dataset_name: korean_history +include: _hard_kmmlu_yaml +task: kmmlu_hard_korean_history diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_machine_design_and_manufacturing.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_machine_design_and_manufacturing.yaml new file mode 100644 index 0000000000000000000000000000000000000000..222f89bacd4c549ced153434568fb4b065353c51 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_machine_design_and_manufacturing.yaml @@ -0,0 +1,3 @@ +dataset_name: machine_design_and_manufacturing +include: _hard_kmmlu_yaml +task: kmmlu_hard_machine_design_and_manufacturing diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_management.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8e9e866499e8d3287c107147472b1ceb89199525 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_management.yaml @@ -0,0 +1,3 @@ +dataset_name: management +include: _hard_kmmlu_yaml +task: kmmlu_hard_management diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_math.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_math.yaml new file mode 100644 index 0000000000000000000000000000000000000000..e563717686f991baa06323a0e9f1d415a74df128 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_math.yaml @@ -0,0 +1,3 @@ +dataset_name: math +include: _hard_kmmlu_yaml +task: kmmlu_hard_math diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_mechanical_engineering.yaml 
b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_mechanical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9b3adca0b644ef7f6a8ede8a2918a46f40707c1b --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_mechanical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: mechanical_engineering +include: _hard_kmmlu_yaml +task: kmmlu_hard_mechanical_engineering diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_patent.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_patent.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3fcdcd96b136e0872cd530b5261760492b29a5e2 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_patent.yaml @@ -0,0 +1,3 @@ +dataset_name: patent +include: _hard_kmmlu_yaml +task: kmmlu_hard_patent diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_political_science_and_sociology.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_political_science_and_sociology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6bb907cb10792070f6043eeeed8f629cd503cbe9 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_political_science_and_sociology.yaml @@ -0,0 +1,3 @@ +dataset_name: political_science_and_sociology +include: _hard_kmmlu_yaml +task: kmmlu_hard_political_science_and_sociology diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_social_welfare.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_social_welfare.yaml new file mode 100644 index 0000000000000000000000000000000000000000..12502a573e51dc7ab45fc42f6ee97e92e9b78b58 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_social_welfare.yaml @@ -0,0 +1,3 @@ +dataset_name: social_welfare +include: _hard_kmmlu_yaml +task: kmmlu_hard_social_welfare diff --git a/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_telecommunications_and_wireless_technology.yaml b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_telecommunications_and_wireless_technology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0cb519d11ec046aa947fef00738bdcc062c836fd --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_telecommunications_and_wireless_technology.yaml @@ -0,0 +1,3 @@ +dataset_name: telecommunications_and_wireless_technology +include: _hard_kmmlu_yaml +task: kmmlu_hard_telecommunications_and_wireless_technology diff --git a/lm-evaluation/lm_eval/tasks/qa4mre/README.md b/lm-evaluation/lm_eval/tasks/qa4mre/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3b8dc9fc9c38c09c48d52b2899fd74d639216765 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/qa4mre/README.md @@ -0,0 +1,55 @@ +# QA4MRE + +### Paper + +Title: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation` + +Abstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf + +This covers the English-only QA4MRE challenge, which was run as a Lab at CLEF 2011-2013. +The main objective of this exercise is to develop a methodology for evaluating +Machine Reading systems through Question Answering and Reading Comprehension +Tests. Systems should be able to extract knowledge from large volumes of text +and use this knowledge to answer questions. Four different tasks were +organized over these years: Main Task, Processing Modality and Negation for +Machine Reading, Machine Reading of Biomedical Texts about Alzheimer's disease, +and Entrance Exam. 
+ +Homepage: http://nlp.uned.es/clef-qa/repository/qa4mre.php + + +### Citation + +``` +@inproceedings{Peas2013QA4MRE2O, + title={QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation}, + author={Anselmo Pe{\~n}as and Eduard H. Hovy and Pamela Forner and {\'A}lvaro Rodrigo and Richard F. E. Sutcliffe and Roser Morante}, + booktitle={CLEF}, + year={2013} +} +``` + +### Groups and Tasks + +#### Groups + +* `qa4mre` + +#### Tasks + +* `qa4mre_2011` +* `qa4mre_2012` +* `qa4mre_2013` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? diff --git a/lm-evaluation/lm_eval/tasks/qa4mre/preprocess_qa4mre.py b/lm-evaluation/lm_eval/tasks/qa4mre/preprocess_qa4mre.py new file mode 100644 index 0000000000000000000000000000000000000000..3e07db422b1e20f3d456f0da9f806c76feb1c557 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/qa4mre/preprocess_qa4mre.py @@ -0,0 +1,6 @@ +def qa4mre_process(doc): + return int(doc["correct_answer_id"]) - 1 + + +def doc_to_target(doc): + return doc["answer_options"]["answer_str"][qa4mre_process(doc)] diff --git a/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2011.yaml b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2011.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b9ceb78094abcf60b378d695936f1548a2d69188 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2011.yaml @@ -0,0 +1,22 @@ +group: + - qa4mre +task: qa4mre_2011 +dataset_path: qa4mre +dataset_name: 2011.main.EN +output_type: multiple_choice +test_split: train +# doc_to_text: "{{document_str.strip()}}\nQuestion: {{question_str}}\nChoices:\n- {{answer_choices|join('\n- ')}}\nAnswer:" +doc_to_text: "{{document_str.strip()}}\nQuestion: {{question_str}}\nAnswer:" +doc_to_target: "{{correct_answer_id|int - 1}}" +doc_to_choice: "{{answer_options.answer_str}}" +should_decontaminate: true +doc_to_decontamination_query: "{{document_str.strip()}} + ' ' + {{question_str}}" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2012.yaml b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2012.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ec015651675e34e3f51b221ef2b35d60092bbc3f --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2012.yaml @@ -0,0 +1,4 @@ +include: qa4mre_2011.yaml +task: qa4mre_2012 +dataset_path: qa4mre +dataset_name: 2012.main.EN diff --git a/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2013.yaml b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2013.yaml new file mode 100644 index 0000000000000000000000000000000000000000..08b96e306dcd47e02e06c451692665aef97869ba --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2013.yaml @@ -0,0 +1,4 @@ +include: qa4mre_2011.yaml +task: qa4mre_2013 +dataset_path: qa4mre +dataset_name: 2013.main.EN 
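To make the QA4MRE target mapping above concrete, here is a small sketch of what `preprocess_qa4mre.py` and the `doc_to_target`/`doc_to_choice` templates compute for one document. The example document is invented; the field names follow the HF `qa4mre` dataset as used in the YAML.

```python
# Sketch: mapping a QA4MRE document to its multiple-choice target.
# The doc below is a made-up stand-in for one row of the HF "qa4mre" dataset
# (e.g. config 2011.main.EN); only the two fields used here are shown.

def qa4mre_process(doc):
    # correct_answer_id is 1-based in the dataset; choices are 0-indexed.
    return int(doc["correct_answer_id"]) - 1

doc = {
    "correct_answer_id": "3",
    "answer_options": {"answer_str": ["choice 1", "choice 2", "choice 3", "choice 4", "choice 5"]},
}

target_index = qa4mre_process(doc)                                  # -> 2
target_string = doc["answer_options"]["answer_str"][target_index]   # -> "choice 3"
print(target_index, target_string)
```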
diff --git a/lm-evaluation/lm_eval/tasks/super_glue/README.md b/lm-evaluation/lm_eval/tasks/super_glue/README.md new file mode 100644 index 0000000000000000000000000000000000000000..c8e807718af5abcec3cbb0ac91af2aab6cb4a3fc --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/README.md @@ -0,0 +1,77 @@ +# SuperGLUE + +### Paper + +Title: `SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems` +Abstract: `https://w4ngatang.github.io/static/papers/superglue.pdf` + +SuperGLUE is a benchmark styled after GLUE with a new set of more difficult language +understanding tasks. + +Homepage: https://super.gluebenchmark.com/ + +### Citation + +``` +@inproceedings{NEURIPS2019_4496bf24, + author = {Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel}, + booktitle = {Advances in Neural Information Processing Systems}, + editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett}, + pages = {}, + publisher = {Curran Associates, Inc.}, + title = {SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems}, + url = {https://proceedings.neurips.cc/paper/2019/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf}, + volume = {32}, + year = {2019} +} +``` + +### Groups and Tasks + +#### Groups + +* `super-glue-lm-eval-v1`: SuperGLUE eval adapted from LM Eval V1 +* `super-glue-t5-prompt`: SuperGLUE prompt and evaluation that matches the T5 paper (if using accelerate, will error if record is included.) + +#### Tasks + +Comparison between validation split score on T5x and LM-Eval (T5x models converted to HF) +| T5V1.1 Base | SGLUE | BoolQ | CB | Copa | MultiRC | ReCoRD | RTE | WiC | WSC | +| ----------- | ------| ----- | --------- | ---- | ------- | ------ | --- | --- | --- | +| T5x | 69.47 | 78.47(acc) | 83.93(f1) 87.5(acc) | 50(acc) | 73.81(f1) 33.26(em) | 70.09(em) 71.34(f1) | 78.7(acc) | 63.64(acc) | 75(acc) | +| LM-Eval | 71.35 | 79.36(acc) | 83.63(f1) 87.5(acc) | 63(acc) | 73.45(f1) 33.26(em) | 69.85(em) 68.86(f1) | 78.34(acc) | 65.83(acc) | 75.96(acc) | + + + +* `super-glue-lm-eval-v1` + - `boolq` + - `cb` + - `copa` + - `multirc` + - `record` + - `rte` + - `wic` + - `wsc` + +* `super-glue-t5-prompt` + - `super_glue-boolq-t5-prompt` + - `super_glue-cb-t5-prompt` + - `super_glue-copa-t5-prompt` + - `super_glue-multirc-t5-prompt` + - `super_glue-record-t5-prompt` + - `super_glue-rte-t5-prompt` + - `super_glue-wic-t5-prompt` + - `super_glue-wsc-t5-prompt` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? 
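The two groups listed above differ mainly in prompt format and scoring. As a rough illustration (the passage and question below are invented, and the templates are copied from the BoolQ configs that follow), this is what the same document looks like under each group:

```python
# Sketch: the same BoolQ document under the two prompt styles defined in the
# configs below. Passage/question are invented placeholders.
doc = {
    "passage": "The harbour usually freezes over by late December.",
    "question": "does the harbour freeze in winter",
}

# super-glue-lm-eval-v1 (output_type: multiple_choice):
# the harness compares the loglikelihood of " no" vs. " yes" after this prompt.
v1_prompt = f"{doc['passage']}\nQuestion: {doc['question']}?\nAnswer:"

# super-glue-t5-prompt (output_type: generate_until):
# the model must generate the literal string "True" or "False".
t5_prompt = f"boolq passage: {doc['passage']} question: {doc['question']}"

print(v1_prompt)
print(t5_prompt)
```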
diff --git a/lm-evaluation/lm_eval/tasks/super_glue/boolq/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/boolq/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f26e4682c40ff7c7ba1183fecaadb5718206dbfd --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/boolq/default.yaml @@ -0,0 +1,17 @@ +group: + - super-glue-lm-eval-v1 +task: boolq +dataset_path: super_glue +dataset_name: boolq +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:" +doc_to_target: label +doc_to_choice: ["no", "yes"] +should_decontaminate: true +doc_to_decontamination_query: passage +metric_list: + - metric: acc +metadata: + version: 2.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/boolq/seq2seq.yaml b/lm-evaluation/lm_eval/tasks/super_glue/boolq/seq2seq.yaml new file mode 100644 index 0000000000000000000000000000000000000000..569316cb31b909755ba6916dea4e54f80fc95df1 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/boolq/seq2seq.yaml @@ -0,0 +1,26 @@ +group: + - super-glue-lm-eval-v1-seq2seq +task: "boolq-seq2seq" +dataset_path: super_glue +dataset_name: boolq +output_type: generate_until +training_split: train +validation_split: validation +doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:" +doc_to_target: label +doc_to_choice: [' no', ' yes'] +target_delimiter: "" +generation_kwargs: + until: + - "\n\n" + - "\n" + do_sample: false + temperature: 0.0 +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/boolq/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/boolq/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7089381ad86c05913b111d1888878b721a33a222 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/boolq/t5-prompt.yaml @@ -0,0 +1,22 @@ +group: + - super-glue-t5-prompt +task: super_glue-boolq-t5-prompt +dataset_path: super_glue +dataset_name: boolq +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "boolq passage: {{passage}} question: {{question}}" +doc_to_target: label +doc_to_choice: ['False', 'True'] +generation_kwargs: + until: + - "" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/cb/aggregate.py b/lm-evaluation/lm_eval/tasks/super_glue/cb/aggregate.py new file mode 100644 index 0000000000000000000000000000000000000000..4b99849f9bfa8307006879666ecf971b17b511b2 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/cb/aggregate.py @@ -0,0 +1,13 @@ +import numpy as np +import sklearn + + +def cb_multi_fi(items): + preds, golds = zip(*items) + preds = np.array(preds) + golds = np.array(golds) + f11 = sklearn.metrics.f1_score(y_true=golds == 0, y_pred=preds == 0) + f12 = sklearn.metrics.f1_score(y_true=golds == 1, y_pred=preds == 1) + f13 = sklearn.metrics.f1_score(y_true=golds == 2, y_pred=preds == 2) + avg_f1 = np.mean([f11, f12, f13]) + return avg_f1 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/cb/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/cb/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c575e9872aa712eff69f779a7114d5baed487706 --- /dev/null +++ 
b/lm-evaluation/lm_eval/tasks/super_glue/cb/default.yaml @@ -0,0 +1,17 @@ +group: + - super-glue-lm-eval-v1 +task: cb +dataset_path: super_glue +dataset_name: cb +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{premise}}\nQuestion: {{hypothesis}}. True, False, or Neither?\nAnswer:" +doc_to_target: label +doc_to_choice: ['True', 'False', 'Neither'] +metric_list: + - metric: acc + - metric: f1 + aggregation: !function "aggregate.cb_multi_fi" +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/cb/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/cb/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..984e17935ad2479fb9d48dabfeb14f14269da2db --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/cb/t5-prompt.yaml @@ -0,0 +1,25 @@ +group: + - super-glue-t5-prompt +task: super_glue-cb-t5-prompt +dataset_path: super_glue +dataset_name: cb +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "cb hypothesis: {{hypothesis}} premise: {{premise}}" +doc_to_target: label +doc_to_choice: ['entailment', 'contradiction', 'neutral'] +generation_kwargs: + until: + - "" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + - metric: !function "t5_utils.mean_3class_f1" + aggregation: !function "t5_utils.agg_mean_3class_f1" + higher_is_better: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/cb/t5_utils.py b/lm-evaluation/lm_eval/tasks/super_glue/cb/t5_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..ec02e34538e15f71861f354b437060da5390544e --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/cb/t5_utils.py @@ -0,0 +1,30 @@ +import sklearn.metrics + + +def mean_3class_f1(predictions, references): # This is a passthrough function + string_label = ["entailment", "contradiction", "neutral"] + predictions = ( + string_label.index(predictions[0]) if predictions[0] in string_label else 0 + ) + references = string_label.index(references[0]) + + return (predictions, references) + + +def agg_mean_3class_f1(items): + predictions, references = zip(*items) + + """Computes the unweighted average of the F1 per class.""" + metric_str = "fbeta_score" + metric_fn_kwargs = { + "beta": 1, + "labels": range(3), + "average": "macro", + } + + def _fn(predictions, references): + metric_fn = getattr(sklearn.metrics, metric_str) + metric_val = metric_fn(references, predictions, **metric_fn_kwargs) + return metric_val + + return _fn(predictions, references) diff --git a/lm-evaluation/lm_eval/tasks/super_glue/copa/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/copa/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1af5dbf47258e203e7a1b506e7ba6e91351a61e4 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/copa/default.yaml @@ -0,0 +1,15 @@ +group: + - super-glue-lm-eval-v1 +task: copa +dataset_path: super_glue +dataset_name: copa +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: !function utils.doc_to_text +doc_to_target: !function utils.doc_to_target +doc_to_choice: !function utils.doc_to_choice +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/copa/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/copa/t5-prompt.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..20a90db98d28a78307b7e46b99834eaf98cc3f9e --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/copa/t5-prompt.yaml @@ -0,0 +1,22 @@ +group: + - super-glue-t5-prompt +task: super_glue-copa-t5-prompt +dataset_path: super_glue +dataset_name: copa +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "copa choice1: {{choice1}} choice2: {{choice2}} premise: {{premise}} question: {{question}}" +doc_to_target: label +doc_to_choice: ['choice1', 'choice2'] +generation_kwargs: + until: + - "" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/copa/utils.py b/lm-evaluation/lm_eval/tasks/super_glue/copa/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..3afc868eb486c47c51b0036ce955502bc377c9c4 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/copa/utils.py @@ -0,0 +1,21 @@ +def convert_choice(choice): + return choice[0].lower() + choice[1:] + + +def doc_to_text(doc): + # Drop the period + connector = { + "cause": "because", + "effect": "therefore", + }[doc["question"]] + return doc["premise"].strip()[:-1] + f" {connector}" + + +def doc_to_target(doc): + correct_choice = doc["choice1"] if doc["label"] == 0 else doc["choice2"] + # Connect the sentences + return " " + convert_choice(correct_choice) + + +def doc_to_choice(doc): + return [" " + convert_choice(doc["choice1"]), " " + convert_choice(doc["choice2"])] diff --git a/lm-evaluation/lm_eval/tasks/super_glue/multirc/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/multirc/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5a388299f6496673a3edc9c5047fddd1a14302e4 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/multirc/default.yaml @@ -0,0 +1,15 @@ +group: + - super-glue-lm-eval-v1 +task: multirc +dataset_path: super_glue +dataset_name: multirc +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{paragraph}}\nQuestion: {{question}}\nAnswer:" +doc_to_target: label +doc_to_choice: "['''{{answer}}\\nIs the answer correct? yes''', '''{{answer}}\\nIs the answer correct? 
no''']" +metric_list: + - metric: acc +metadata: + version: 2.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..927a357158abf96502f955470fcd8afbe0eee49c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5-prompt.yaml @@ -0,0 +1,23 @@ +group: + - super-glue-t5-prompt +task: super_glue-multirc-t5-prompt +dataset_path: super_glue +dataset_name: multirc +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "multirc question: {{question}} answer: {{answer}} paragraph: {{paragraph}}" +doc_to_target: label +doc_to_choice: "{% set group_id = idx.question|string %}{{[group_id+'_False', group_id+'_True']}}" +generation_kwargs: + until: + - "" +metric_list: + - metric: !function t5_utils.f1 + aggregation: !function t5_utils.agg_f1 + higher_is_better: true + - metric: !function t5_utils.em + aggregation: !function t5_utils.agg_em + higher_is_better: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5_utils.py b/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..d17d498fa25db9a6d7f56e03c43c9e661d66f9f1 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/multirc/t5_utils.py @@ -0,0 +1,53 @@ +import collections + +import numpy as np +import sklearn.metrics + + +def f1(predictions, references): # This is a passthrough function + _prediction = predictions[0] + _reference = references[0].split("_")[-1] + string_label = ["False", "True"] + reference = string_label.index(_reference) + prediction = ( + string_label.index(_prediction) + if _prediction in string_label + else not bool(reference) + ) + + return (prediction, reference) + + +def agg_f1(items): + predictions, references = zip(*items) + references, predictions = np.asarray(references), np.asarray(predictions) + + return sklearn.metrics.f1_score(references, predictions) + + +def em(predictions, references): # This is a passthrough function + _prediction = predictions[0] + _group, _reference = references[0].split("_") + string_label = ["False", "True"] + reference = string_label.index(_reference) + prediction = ( + string_label.index(_prediction) + if _prediction in string_label + else not bool(reference) + ) + + return (_group, prediction, reference) + + +def agg_em(items): + grouped_values = collections.defaultdict(lambda: ([], [])) + for group, prediction, reference in items: + grouped_values[group][0].append(reference) + grouped_values[group][1].append(prediction) + + group_scores = [] + for group, (targets, predictions) in grouped_values.items(): + score = float(np.array_equal(targets, predictions)) + group_scores.append(score) + + return np.mean(group_scores) diff --git a/lm-evaluation/lm_eval/tasks/super_glue/record/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/record/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ca978fd2ab4db0661ac12185169bc9b8517d1fe8 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/record/default.yaml @@ -0,0 +1,21 @@ +group: + - super-glue-lm-eval-v1 +task: record +dataset_path: super_glue +dataset_name: record +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: !function util.doc_to_text +doc_to_target: !function util.doc_to_target +doc_to_choice: !function util.doc_to_choice 
+process_docs: !function util.process_docs +process_results: !function util.process_results +metric_list: + - metric: f1 + aggregation: mean + - metric: em + higher_is_better: True + aggregation: mean +metadata: + version: 2.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/record/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/record/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c999bc90301ecc92ec36292a9544733a370b5e69 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/record/t5-prompt.yaml @@ -0,0 +1,22 @@ +group: + - super-glue-t5-prompt +task: super_glue-record-t5-prompt +dataset_path: super_glue +dataset_name: record +validation_split: validation +output_type: generate_until +process_docs: !function t5_utils.process_docs +doc_to_text: !function t5_utils.doc_to_text +doc_to_target: "{{idx.passage|string}}+{{idx.query}}_{{answers}}" +generation_kwargs: + until: + - "" +metric_list: + - metric: !function t5_utils.em + aggregation: !function t5_utils.squad_em_agg + higher_is_better: true + - metric: !function t5_utils.f1 + aggregation: !function t5_utils.squad_f1_agg + higher_is_better: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/record/t5_utils.py b/lm-evaluation/lm_eval/tasks/super_glue/record/t5_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..e1a29a9498cad497c7f19d4a24b0e55d287992be --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/record/t5_utils.py @@ -0,0 +1,132 @@ +import collections +import re +import string + +import numpy as np +from datasets import Dataset + +from lm_eval.api.metrics import metric_max_over_ground_truths + + +def doc_to_text(doc): + passage = doc["passage"] + passage = re.sub(r"(\.|\?|\!|\"|\')\n@highlight\n", r"\1 ", passage) + passage = re.sub(r"\n@highlight\n", ". 
", passage) + + return " ".join( + [ + "record query:", + doc["query"], + "entities:", + ", ".join(doc["entities"]), + "passage:", + passage, + ] + ) + + +def process_docs(dataset): + def split_answers(doc): + split_doc = { + **{k: [] for k in doc.keys()}, + } + answers = doc.pop("answers") + for idx, answer in enumerate(answers): + for key in split_doc.keys(): + if key in doc: + split_doc[key].append(doc[key]) + + split_doc["answers"].append(answer) + return split_doc + + dataset = dataset.map(split_answers) + new_dataset = {} + for key in dataset.features.keys(): + new_dataset[key] = [x for row in dataset[key] for x in row] + + return Dataset.from_dict(new_dataset) + + +def normalize_squad(answer): + """Normalization used in official SQuAD evaluation script.""" + + def _normalize_answer(text, punc_chars, punc_repl): + """Lower text and remove punctuation, articles and extra whitespace.""" + + def remove_articles(s): + return re.sub(r"\b(a|an|the)\b", " ", s) + + def replace_punctuation(s): + to_replace = set(punc_chars) + return "".join(punc_repl if ch in to_replace else ch for ch in s) + + def white_space_fix(s): + return " ".join(s.split()) + + text = text.lower() + text = replace_punctuation(text) + text = remove_articles(text) + text = white_space_fix(text) + + return text + + return _normalize_answer(answer, punc_chars=string.punctuation, punc_repl="") + + +def em(predictions, references): # This is a passthrough function + return (predictions[0], references[0]) + + +def f1(predictions, references): # This is a passthrough function + return (predictions[0], references[0]) + + +def squad_em_agg(items): + def _exact_match_score(prediction, target): + return target == prediction + + grouped_values = collections.defaultdict(lambda: ([], [])) + for prediction, reference in items: + group, reference = reference.split("_") + # if group not in grouped_values: + grouped_values[group][0].append(normalize_squad(prediction)) + grouped_values[group][1].append(normalize_squad(reference)) + + em = [] + for group in grouped_values.keys(): + predictions, targets = grouped_values[group] + for p in predictions: + em.append(metric_max_over_ground_truths(_exact_match_score, p, targets)) + + return np.mean(em) + + +def squad_f1_agg(items): + def _f1_score(prediction, target): + """Computes token f1 score for a single target and prediction.""" + prediction_tokens = prediction.split() + target_tokens = target.split() + common = collections.Counter(prediction_tokens) & collections.Counter( + target_tokens + ) + num_same = sum(common.values()) + if num_same == 0: + return 0 + precision = 1.0 * num_same / len(prediction_tokens) + recall = 1.0 * num_same / len(target_tokens) + f1 = (2 * precision * recall) / (precision + recall) + return f1 + + grouped_values = collections.defaultdict(lambda: ([], [])) + for prediction, reference in items: + group, reference = reference.split("_") + if group not in grouped_values: + grouped_values[group][0].append(normalize_squad(prediction)) + grouped_values[group][1].append(normalize_squad(reference)) + + f1 = [] + for group in grouped_values.keys(): + p, t = grouped_values[group] + f1.append(metric_max_over_ground_truths(_f1_score, p[0], t)) + + return np.mean(f1) diff --git a/lm-evaluation/lm_eval/tasks/super_glue/record/util.py b/lm-evaluation/lm_eval/tasks/super_glue/record/util.py new file mode 100644 index 0000000000000000000000000000000000000000..252dba44eb1b8a806209b4d5519ea2ba79d12e17 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/record/util.py @@ -0,0 
+1,60 @@ +import datasets +import numpy as np +import transformers.data.metrics.squad_metrics as squad_metrics + +from lm_eval.api.metrics import metric_max_over_ground_truths + + +def doc_to_text(doc): + initial_text, *highlights = doc["passage"].strip().split("\n@highlight\n") + text = initial_text + "\n\n" + for highlight in highlights: + text += f" - {highlight}.\n" + return text + + +def format_answer(query, entity): + return f" - {query}".replace("@placeholder", entity) + + +def doc_to_target(doc): + # We only output the first correct entity in a doc + return format_answer(query=doc["query"], entity=doc["answers"][0]) + + +def doc_to_choice(doc): + return [format_answer(query=doc["query"], entity=ans) for ans in doc["entities"]] + + +def process_docs(dataset: datasets.Dataset): + def _process_doc(doc): + return { + "passage": doc["passage"], + "query": doc["query"], + "entities": sorted(list(set(doc["entities"]))), + "answers": sorted(list(set(doc["answers"]))), + } + + return dataset.map(_process_doc) + + +def process_results(doc, results): + # ReCoRD's evaluation is actually deceptively simple: + # - Pick the maximum likelihood prediction entity + # - Evaluate the accuracy and token F1 PER EXAMPLE + # - Average over all examples + max_idx = np.argmax(np.array([result[0] for result in results])) + + prediction = doc["entities"][max_idx] + gold_label_set = doc["answers"] + f1 = metric_max_over_ground_truths( + squad_metrics.compute_f1, prediction, gold_label_set + ) + em = metric_max_over_ground_truths( + squad_metrics.compute_exact, prediction, gold_label_set + ) + + return { + "f1": f1, + "em": em, + } diff --git a/lm-evaluation/lm_eval/tasks/super_glue/rte/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/rte/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6754af1a1e5688110ab9853e1d53e833ef02dd29 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/rte/default.yaml @@ -0,0 +1,15 @@ +group: + - super-glue-lm-eval-v1 +task: sglue_rte +dataset_path: super_glue +dataset_name: rte +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{premise}}\nQuestion: {{hypothesis}} True or False?\nAnswer:" +doc_to_target: label +doc_to_choice: ['True', 'False'] +metric_list: + - metric: acc +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/rte/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/rte/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9e80686e2a36cbe2a3851ba18fe12130894b7ad7 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/rte/t5-prompt.yaml @@ -0,0 +1,22 @@ +group: + - super-glue-t5-prompt +task: super_glue-rte-t5-prompt +dataset_path: super_glue +dataset_name: rte +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "rte hypothesis: {{hypothesis}} premise: {{premise}}" +doc_to_target: label +doc_to_choice: ['entailment', 'not_entailment'] +generation_kwargs: + until: + - "" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wic/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/wic/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f86855a7811ca1e2c11f61201237f8d10ed524c --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wic/default.yaml @@ -0,0 +1,15 @@ +group: + - 
super-glue-lm-eval-v1 +task: "wic" +dataset_path: super_glue +dataset_name: wic +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\nQuestion: Is the word '{{sentence1[start1:end1]}}' used in the same way in the two sentences above?\nAnswer:" +doc_to_target: label +doc_to_choice: ['no', 'yes'] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wic/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/wic/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3a0dbb2f7fd64f2ec3ae3e6d58c4dd7e0963edc2 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wic/t5-prompt.yaml @@ -0,0 +1,22 @@ +group: + - super-glue-t5-prompt +task: super_glue-wic-t5-prompt +dataset_path: super_glue +dataset_name: wic +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: "wic sentence1: {{sentence1}} sentence2: {{sentence2}} word: {{word}}" +doc_to_target: label +doc_to_choice: ['False', 'True'] +generation_kwargs: + until: + - "" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 0.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wsc/default.yaml b/lm-evaluation/lm_eval/tasks/super_glue/wsc/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b9c7ec347c2beccb8fdc54ada1082a763c9cfe0d --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wsc/default.yaml @@ -0,0 +1,15 @@ +group: + - super-glue-lm-eval-v1 +task: wsc +dataset_path: super_glue +dataset_name: wsc.fixed +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: !function preprocess_wsc.default_doc_to_text +doc_to_target: label +doc_to_choice: ['no', 'yes'] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wsc/preprocess_wsc.py b/lm-evaluation/lm_eval/tasks/super_glue/wsc/preprocess_wsc.py new file mode 100644 index 0000000000000000000000000000000000000000..c62c25676a51fd8e60a4d9fc6f8755041bba7534 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wsc/preprocess_wsc.py @@ -0,0 +1,17 @@ +from lm_eval.utils import general_detokenize + + +def default_doc_to_text(x): + raw_passage = x["text"] + # NOTE: HuggingFace span indices are word-based not character-based. 
+ pre = " ".join(raw_passage.split()[: x["span2_index"]]) + post = raw_passage[len(pre) + len(x["span2_text"]) + 1 :] + passage = general_detokenize(pre + " *{}*".format(x["span2_text"]) + post) + noun = x["span1_text"] + pronoun = x["span2_text"] + text = ( + f"Passage: {passage}\n" + + f'Question: In the passage above, does the pronoun "*{pronoun}*" refer to "*{noun}*"?\n' + + "Answer:" + ) + return text diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5-prompt.yaml b/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5-prompt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6030d1faf210da7b9aab301d059a74978a411a1f --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5-prompt.yaml @@ -0,0 +1,20 @@ +group: + - super-glue-t5-prompt +task: super_glue-wsc-t5-prompt +dataset_path: super_glue +dataset_name: wsc.fixed +training_split: train +validation_split: validation +output_type: generate_until +doc_to_text: !function "t5_utils.doc_to_text" +process_results: !function "t5_utils.process_results" +doc_to_target: label +generation_kwargs: + until: + - "" +metric_list: + - metric: accuracy + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5_utils.py b/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5_utils.py new file mode 100644 index 0000000000000000000000000000000000000000..2860a2a903944a11fff0e981c5135214a8cf8f17 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/super_glue/wsc/t5_utils.py @@ -0,0 +1,104 @@ +import re +from typing import List + + +def doc_to_text(x): + text = re.sub(r" X ", " *" + x["span2_text"] + "* ", _wsc_inputs(x)) + return "wsc: " + text + + +def _wsc_inputs(x): + words = x["text"].split(" ") + + # We would need some special logic to handle the case where the pronoun is the + # first or last word in the text. None of the examples in WSC seem to have + # this, so we are ignoring these cases. + assert x["span2_index"] > 0 + assert x["span2_index"] < len(words) + pronoun_index = x["span2_index"] + + def create_input(): + assert words[pronoun_index] == x["span2_text"] + + return " ".join( + [ + " ".join(words[:pronoun_index]), + "X", + " ".join(words[pronoun_index + 1 :]), + ] + ) + + # Handle some special cases. + if ( + x["text"] + == 'The boy continued to whip the pony , and eventually the pony threw him over. John laughed out quite loud. "Good for him," he said. ' + ): + return ( + "The boy continued to whip the pony , and eventually the pony threw " + 'him over. John laughed out quite loud. "Good for X ," he said.' + ) + + # Using the span2_index, we get 'use' instead of 'it'. + if ( + x["text"] + == "When they had eventually calmed down a bit , and had gotten home, Mr. Farley put the magic pebble in an iron safe . Some day they might want to use it , but really for now, what more could they wish for?" + ): + return ( + "When they had eventually calmed down a bit , and had gotten home, " + "Mr. Farley put the magic pebble in an iron safe . Some day they might " + "want to use X , but really for now, what more could they wish for?" 
+ ) + + return create_input() + + +DETERMINERS = { + "a", + "an", + "few", + "her", + "his", + "each", + "every", + "many", + "much", + "my", + "our", + "some", + "that", + "the", + "their", + "these", + "this", + "those", + "which", + "whose", + "your", +} + + +def clean(s: str) -> str: + """Ignore capitalization and determiners.""" + s = s.strip().lower() + return " ".join([w for w in s.split(" ") if w not in DETERMINERS]) + + +def process_results(docs: dict, resps: List): + prediction = clean(resps[0]) + reference = clean(docs["span1_text"]) + + if ("'" in prediction) != ("'" in reference): + # Treat an apostrophe mismatch (e.g. prediction "Bob" vs. referent "Bob's hat") as not predicting the referent. + predicted_referent = False + else: + prediction_words = set(prediction.split(" ")) + referent_words = set(reference.split(" ")) + + # Handle cases where the prediction is "fuzzy bunny" and the referent is + # "bunny". + predicted_referent = prediction_words.issubset( + referent_words + ) or referent_words.issubset(prediction_words) + + acc = 1.0 if predicted_referent == docs["label"] else 0.0 + return {"accuracy": acc}
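Finally, a short sketch of what the WSC `process_results` above returns for one generation. It assumes the `t5_utils` module from this diff is importable under a path mirroring its file location (an assumption, not guaranteed by the package layout); the document and the model output are invented.

```python
# Sketch: scoring one WSC generation with the postprocessing defined above.
# Assumed import path mirroring lm_eval/tasks/super_glue/wsc/t5_utils.py.
from lm_eval.tasks.super_glue.wsc.t5_utils import process_results

doc = {"span1_text": "the pony", "label": 1}   # invented example
generation = ["pony"]                          # model's decoded answer

# clean() lowercases and drops determiners, so "the pony" and "pony" agree
# under the subset test, giving predicted_referent=True and accuracy 1.0.
print(process_results(doc, generation))        # -> {"accuracy": 1.0}
```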