applied-ai-018 committed
Commit 851905f · verified · 1 parent: 8ced3f5

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes; see the raw diff for the complete set.
Files changed (50):
  1. lm-evaluation/lm_eval/tasks/aexams/README.md +53 -0
  2. lm-evaluation/lm_eval/tasks/aexams/aexams_Physics.yaml +4 -0
  3. lm-evaluation/lm_eval/tasks/aexams/aexams_Social.yaml +4 -0
  4. lm-evaluation/lm_eval/tasks/eus_proficiency/README.md +48 -0
  5. lm-evaluation/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml +16 -0
  6. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_agricultural_sciences.yaml +3 -0
  7. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_chemical_engineering.yaml +3 -0
  8. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml +3 -0
  9. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electronics_engineering.yaml +3 -0
  10. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_energy_management.yaml +3 -0
  11. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_fashion.yaml +3 -0
  12. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml +3 -0
  13. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_interior_architecture_and_design.yaml +3 -0
  14. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_management.yaml +3 -0
  15. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_materials_engineering.yaml +3 -0
  16. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_mechanical_engineering.yaml +3 -0
  17. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_psychology.yaml +3 -0
  18. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_public_safety.yaml +3 -0
  19. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_refrigerating_machinery.yaml +3 -0
  20. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml +3 -0
  21. lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_telecommunications_and_wireless_technology.yaml +3 -0
  22. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_aviation_engineering_and_maintenance.yaml +3 -0
  23. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_chemical_engineering.yaml +3 -0
  24. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_civil_engineering.yaml +3 -0
  25. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_computer_science.yaml +3 -0
  26. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_construction.yaml +3 -0
  27. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_economics.yaml +3 -0
  28. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_education.yaml +3 -0
  29. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electrical_engineering.yaml +3 -0
  30. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electronics_engineering.yaml +3 -0
  31. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_energy_management.yaml +3 -0
  32. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_environmental_science.yaml +3 -0
  33. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_fashion.yaml +3 -0
  34. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_health.yaml +3 -0
  35. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_industrial_engineer.yaml +3 -0
  36. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_korean_history.yaml +3 -0
  37. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_machine_design_and_manufacturing.yaml +3 -0
  38. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_management.yaml +3 -0
  39. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_math.yaml +3 -0
  40. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_mechanical_engineering.yaml +3 -0
  41. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_patent.yaml +3 -0
  42. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_political_science_and_sociology.yaml +3 -0
  43. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_social_welfare.yaml +3 -0
  44. lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_telecommunications_and_wireless_technology.yaml +3 -0
  45. lm-evaluation/lm_eval/tasks/qa4mre/README.md +55 -0
  46. lm-evaluation/lm_eval/tasks/qa4mre/preprocess_qa4mre.py +6 -0
  47. lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2011.yaml +22 -0
  48. lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2012.yaml +4 -0
  49. lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2013.yaml +4 -0
  50. lm-evaluation/lm_eval/tasks/super_glue/README.md +77 -0
lm-evaluation/lm_eval/tasks/aexams/README.md ADDED
@@ -0,0 +1,53 @@
+ # Arabic EXAMS
+
+ ### Paper
+
+ EXAMS is a resource specialized in multilingual high school exam questions.
+ Original paper: [EXAMS](https://aclanthology.org/2020.emnlp-main.438/)
+
+ The Arabic EXAMS dataset includes five subjects:
+
+ - Islamic studies
+ - Biology
+ - Physics
+ - Science
+ - Social
+
+ Original dataset: [EXAMS-QA](https://github.com/mhardalov/exams-qa)
+
+ EXAMS is a benchmark dataset for cross-lingual and multilingual question answering for high school examinations.
+ It contains 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from the Natural Sciences and Social Sciences, among others.
+ EXAMS offers a unique fine-grained evaluation framework across multiple languages and subjects.
+
+ Homepage for Arabic EXAMS: [EXAMS Arabic Homepage](https://github.com/FreedomIntelligence/AceGPT/tree/main/eval/benchmark_eval/benchmarks/EXAMS_Arabic)
+
+ ### Citation
+
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ - `EXAMS Arabic`: includes IslamicStudies, Biology, Science, Physics, Social.
+
+ #### Tasks
+
+
+ The following tasks evaluate subjects in the Arabic EXAMS dataset using loglikelihood-based multiple-choice scoring:
+ - `aexams_IslamicStudies`
+ - `aexams_Biology`
+ - `aexams_Science`
+ - `aexams_Physics`
+ - `aexams_Social`
+
+ ### Checklist
+
+ * [x] Is the task an existing benchmark in the literature?
+ * [x] Have you referenced the original paper that introduced the task?
+ * [x] If yes, does the original paper provide a reference implementation?
+ * [x] Yes, the original implementation was contributed by the author of the benchmark.
+
+ If other tasks on this dataset are already supported:
+ * [x] Is the "Main" variant of this task clearly denoted?
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
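The aexams tasks listed in this README are selected by name at evaluation time. A minimal sketch of how one might run them, assuming the lm-evaluation-harness v0.4 Python API (`lm_eval.simple_evaluate`); the model checkpoint is only a placeholder:

```python
# Hedged sketch: evaluate two of the aexams tasks defined in this folder.
# Assumes lm-evaluation-harness v0.4+; "gpt2" is a placeholder model, not a recommendation.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                 # Hugging Face transformers backend
    model_args="pretrained=gpt2",               # substitute the model under test
    tasks=["aexams_Physics", "aexams_Social"],  # task names from the YAMLs below
    num_fewshot=0,
)
print(results["results"])
```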
lm-evaluation/lm_eval/tasks/aexams/aexams_Physics.yaml ADDED
@@ -0,0 +1,4 @@
+ "dataset_name": "Physics"
+ "description": "قم بالإجابة على مايلي في مجال الفيزياء \n\n"
+ "include": "_default_template_yaml"
+ "task": "aexams_Physics"
lm-evaluation/lm_eval/tasks/aexams/aexams_Social.yaml ADDED
@@ -0,0 +1,4 @@
+ "dataset_name": "Social"
+ "description": "قم بالإجابة على مايلي في مجال العلوم الإجتماعية \n\n"
+ "include": "_default_template_yaml"
+ "task": "aexams_Social"
lm-evaluation/lm_eval/tasks/eus_proficiency/README.md ADDED
@@ -0,0 +1,48 @@
+ # EusProficiency
+
+ ### Paper
+
+ Title: Latxa: An Open Language Model and Evaluation Suite for Basque
+
+ Abstract: https://arxiv.org/abs/2403.20266
+
+ EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises from EGA exams through the years 1998 to 2008. Atarikoa is the first qualifying test of EGA, which measures different aspects of language competency, such as reading comprehension, grammar, vocabulary, spelling, and writing. Each test generally has 85 multiple-choice questions, with 4 choices and a single correct answer.
+
+ Homepage: https://github.com/hitz-zentroa/latxa
+
+
+ ### Citation
+
+ ```
+ @misc{etxaniz2024latxa,
+   title={Latxa: An Open Language Model and Evaluation Suite for Basque},
+   author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
+   year={2024},
+   eprint={2403.20266},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ There are no groups.
+
+ #### Tasks
+
+ * `eus_proficiency`: EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation/lm_eval/tasks/eus_proficiency/eus_proficiency.yaml ADDED
@@ -0,0 +1,16 @@
+ dataset_path: HiTZ/EusProficiency
+ dataset_name: default
+ task: eus_proficiency
+ doc_to_text: "Galdera: {{question}}\nA: {{candidates[0]}}\nB: {{candidates[1]}}\nC: {{candidates[2]}}\nD: {{candidates[3]}}\nErantzuna:"
+ doc_to_choice: ["A", "B", "C", "D"]
+ validation_split: null
+ test_split: test
+ fewshot_split: test
+ output_type: multiple_choice
+ doc_to_target: answer
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 0.0
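To see what the `doc_to_text` template above produces, here is a small sketch that renders it with plain Jinja2 against a dummy record. The field names follow the YAML; the placeholder strings are invented, and the harness's own template environment may differ in minor ways:

```python
# Render the eus_proficiency doc_to_text template against a dummy record.
# Field names mirror the YAML above; the values are invented placeholders.
from jinja2 import Template

doc_to_text = (
    "Galdera: {{question}}\n"
    "A: {{candidates[0]}}\n"
    "B: {{candidates[1]}}\n"
    "C: {{candidates[2]}}\n"
    "D: {{candidates[3]}}\n"
    "Erantzuna:"
)

dummy_doc = {
    "question": "<question text>",
    "candidates": ["<option A>", "<option B>", "<option C>", "<option D>"],
}

print(Template(doc_to_text).render(**dummy_doc))
```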
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_agricultural_sciences.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Agricultural-Sciences
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_agricultural_sciences
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_chemical_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Chemical-Engineering
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_chemical_engineering
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Economics
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_economics
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electronics_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Electronics-Engineering
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_electronics_engineering
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_energy_management.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Energy-Management
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_energy_management
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_fashion.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Fashion
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_fashion
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Information-Technology
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_information_technology
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_interior_architecture_and_design.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Interior-Architecture-and-Design
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_interior_architecture_and_design
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_management.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Management
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_management
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_materials_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Materials-Engineering
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_materials_engineering
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_mechanical_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Mechanical-Engineering
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_mechanical_engineering
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_psychology.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Psychology
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_psychology
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_public_safety.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Public-Safety
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_public_safety
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_refrigerating_machinery.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Refrigerating-Machinery
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_refrigerating_machinery
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Taxation
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_taxation
lm-evaluation/lm_eval/tasks/kmmlu/direct/kmmlu_direct_telecommunications_and_wireless_technology.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: Telecommunications-and-Wireless-Technology
+ include: _direct_kmmlu_yaml
+ task: kmmlu_direct_telecommunications_and_wireless_technology
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_aviation_engineering_and_maintenance.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: aviation_engineering_and_maintenance
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_aviation_engineering_and_maintenance
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_chemical_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: chemical_engineering
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_chemical_engineering
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_civil_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: civil_engineering
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_civil_engineering
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_computer_science.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: computer_science
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_computer_science
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_construction.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: construction
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_construction
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_economics.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: economics
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_economics
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_education.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: education
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_education
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electrical_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: electrical_engineering
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_electrical_engineering
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_electronics_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: electronics_engineering
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_electronics_engineering
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_energy_management.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: energy_management
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_energy_management
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_environmental_science.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: environmental_science
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_environmental_science
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_fashion.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: fashion
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_fashion
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_health.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: health
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_health
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_industrial_engineer.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: industrial_engineer
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_industrial_engineer
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_korean_history.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: korean_history
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_korean_history
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_machine_design_and_manufacturing.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: machine_design_and_manufacturing
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_machine_design_and_manufacturing
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_management.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: management
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_management
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_math.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: math
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_math
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_mechanical_engineering.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: mechanical_engineering
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_mechanical_engineering
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_patent.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: patent
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_patent
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_political_science_and_sociology.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: political_science_and_sociology
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_political_science_and_sociology
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_social_welfare.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: social_welfare
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_social_welfare
lm-evaluation/lm_eval/tasks/kmmlu/hard/kmmlu_hard_telecommunications_and_wireless_technology.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: telecommunications_and_wireless_technology
+ include: _hard_kmmlu_yaml
+ task: kmmlu_hard_telecommunications_and_wireless_technology
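Every KMMLU YAML above is a three-line stub: it names a dataset subset and a task, and pulls the rest of the configuration from a shared base file (`_direct_kmmlu_yaml` or `_hard_kmmlu_yaml`, which is not part of this diff). Conceptually, the stub's own keys override the included defaults. A hedged sketch of that merge, with an invented base config standing in for the real one:

```python
# Conceptual sketch of include-based config resolution: the shared base
# supplies defaults, the per-subject stub overrides them. The base fields
# below are invented placeholders (the real _direct_kmmlu_yaml is not in
# this diff), and the harness's actual loader is more involved than this.
import yaml

base_yaml = """
output_type: multiple_choice
doc_to_choice: ["A", "B", "C", "D"]
"""

stub_yaml = """
dataset_name: Agricultural-Sciences
include: _direct_kmmlu_yaml
task: kmmlu_direct_agricultural_sciences
"""

base = yaml.safe_load(base_yaml)
stub = yaml.safe_load(stub_yaml)

# Stub keys win; the include key itself is consumed during resolution.
resolved = {**base, **{k: v for k, v in stub.items() if k != "include"}}
print(resolved)
```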
lm-evaluation/lm_eval/tasks/qa4mre/README.md ADDED
@@ -0,0 +1,55 @@
+ # QA4MRE
+
+ ### Paper
+
+ Title: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation`
+
+ Abstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf
+
+ The (English-only) QA4MRE challenge was run as a Lab at CLEF 2011-2013.
+ The main objective of this exercise is to develop a methodology for evaluating
+ Machine Reading systems through Question Answering and Reading Comprehension
+ Tests. Systems should be able to extract knowledge from large volumes of text
+ and use this knowledge to answer questions. Four different tasks have been
+ organized during these years: Main Task, Processing Modality and Negation for
+ Machine Reading, Machine Reading of Biomedical Texts about Alzheimer's disease,
+ and Entrance Exam.
+
+ Homepage: http://nlp.uned.es/clef-qa/repository/qa4mre.php
+
+
+ ### Citation
+
+ ```
+ @inproceedings{Peas2013QA4MRE2O,
+   title={QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation},
+   author={Anselmo Pe{\~n}as and Eduard H. Hovy and Pamela Forner and {\'A}lvaro Rodrigo and Richard F. E. Sutcliffe and Roser Morante},
+   booktitle={CLEF},
+   year={2013}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `qa4mre`
+
+ #### Tasks
+
+ * `qa4mre_2011`
+ * `qa4mre_2012`
+ * `qa4mre_2013`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation/lm_eval/tasks/qa4mre/preprocess_qa4mre.py ADDED
@@ -0,0 +1,6 @@
+ def qa4mre_process(doc):
+     return int(doc["correct_answer_id"]) - 1
+
+
+ def doc_to_target(doc):
+     return doc["answer_options"]["answer_str"][qa4mre_process(doc)]
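For reference, `correct_answer_id` in the source data is 1-indexed; `qa4mre_process` shifts it to a 0-indexed choice and `doc_to_target` looks up the corresponding answer string. A small usage sketch with a dummy record (real qa4mre rows carry more fields than shown here):

```python
# Usage sketch for the helpers above; run with qa4mre_process and
# doc_to_target from preprocess_qa4mre.py in scope. The dummy record only
# contains the fields those helpers touch.
doc = {
    "correct_answer_id": "2",  # 1-indexed in the dataset
    "answer_options": {"answer_str": ["first option", "second option", "third option"]},
}

assert qa4mre_process(doc) == 1               # 0-indexed choice
assert doc_to_target(doc) == "second option"  # gold answer string
```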
lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2011.yaml ADDED
@@ -0,0 +1,22 @@
+ group:
+   - qa4mre
+ task: qa4mre_2011
+ dataset_path: qa4mre
+ dataset_name: 2011.main.EN
+ output_type: multiple_choice
+ test_split: train
+ # doc_to_text: "{{document_str.strip()}}\nQuestion: {{question_str}}\nChoices:\n- {{answer_choices|join('\n- ')}}\nAnswer:"
+ doc_to_text: "{{document_str.strip()}}\nQuestion: {{question_str}}\nAnswer:"
+ doc_to_target: "{{correct_answer_id|int - 1}}"
+ doc_to_choice: "{{answer_options.answer_str}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{document_str.strip()}} + ' ' + {{question_str}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+   - metric: acc_norm
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2012.yaml ADDED
@@ -0,0 +1,4 @@
+ include: qa4mre_2011.yaml
+ task: qa4mre_2012
+ dataset_path: qa4mre
+ dataset_name: 2012.main.EN
lm-evaluation/lm_eval/tasks/qa4mre/qa4mre_2013.yaml ADDED
@@ -0,0 +1,4 @@
+ include: qa4mre_2011.yaml
+ task: qa4mre_2013
+ dataset_path: qa4mre
+ dataset_name: 2013.main.EN
lm-evaluation/lm_eval/tasks/super_glue/README.md ADDED
@@ -0,0 +1,77 @@
+ # SuperGLUE
+
+ ### Paper
+
+ Title: `SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems`
+ Abstract: `https://w4ngatang.github.io/static/papers/superglue.pdf`
+
+ SuperGLUE is a benchmark styled after GLUE with a new set of more difficult language
+ understanding tasks.
+
+ Homepage: https://super.gluebenchmark.com/
+
+ ### Citation
+
+ ```
+ @inproceedings{NEURIPS2019_4496bf24,
+   author = {Wang, Alex and Pruksachatkun, Yada and Nangia, Nikita and Singh, Amanpreet and Michael, Julian and Hill, Felix and Levy, Omer and Bowman, Samuel},
+   booktitle = {Advances in Neural Information Processing Systems},
+   editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. Garnett},
+   pages = {},
+   publisher = {Curran Associates, Inc.},
+   title = {SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems},
+   url = {https://proceedings.neurips.cc/paper/2019/file/4496bf24afe7fab6f046bf4923da8de6-Paper.pdf},
+   volume = {32},
+   year = {2019}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `super-glue-lm-eval-v1`: SuperGLUE evaluation adapted from LM Eval v1
+ * `super-glue-t5-prompt`: SuperGLUE prompts and evaluation matching the T5 paper (if using accelerate, it will error if `record` is included)
+
+ #### Tasks
+
+ Comparison of validation-split scores between T5x and LM-Eval (T5x models converted to HF):
+ | T5V1.1 Base | SGLUE | BoolQ | CB | Copa | MultiRC | ReCoRD | RTE | WiC | WSC |
+ | ----------- | ----- | ----- | --------- | ---- | ------- | ------ | --- | --- | --- |
+ | T5x | 69.47 | 78.47 (acc) | 83.93 (f1), 87.5 (acc) | 50 (acc) | 73.81 (f1), 33.26 (em) | 70.09 (em), 71.34 (f1) | 78.7 (acc) | 63.64 (acc) | 75 (acc) |
+ | LM-Eval | 71.35 | 79.36 (acc) | 83.63 (f1), 87.5 (acc) | 63 (acc) | 73.45 (f1), 33.26 (em) | 69.85 (em), 68.86 (f1) | 78.34 (acc) | 65.83 (acc) | 75.96 (acc) |
+
+
+
+ * `super-glue-lm-eval-v1`
+   - `boolq`
+   - `cb`
+   - `copa`
+   - `multirc`
+   - `record`
+   - `rte`
+   - `wic`
+   - `wsc`
+
+ * `super-glue-t5-prompt`
+   - `super_glue-boolq-t5-prompt`
+   - `super_glue-cb-t5-prompt`
+   - `super_glue-copa-t5-prompt`
+   - `super_glue-multirc-t5-prompt`
+   - `super_glue-record-t5-prompt`
+   - `super_glue-rte-t5-prompt`
+   - `super_glue-wic-t5-prompt`
+   - `super_glue-wsc-t5-prompt`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
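The two groups in this README select different prompt formats over the same underlying SuperGLUE data. A hedged sketch of picking one of them, again assuming the lm-evaluation-harness v0.4 Python API; the checkpoint is a placeholder, and `super-glue-t5-prompt` can be swapped in for the T5-style prompts (keeping in mind the `record` caveat noted above):

```python
# Hedged sketch: evaluate the SuperGLUE grouping that reproduces the v1
# harness prompts. Assumes lm-evaluation-harness v0.4+; the model below is
# only a placeholder for whatever checkpoint is being compared.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",      # placeholder checkpoint
    tasks=["super-glue-lm-eval-v1"],   # or ["super-glue-t5-prompt"]
    batch_size=8,
)
print(results["results"])
```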