applied-ai-018 committed
Commit 6d27cc4 (verified) · Parent: 42efda9

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/README.md +72 -0
  2. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/bec.yaml +16 -0
  3. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/bhtc.yaml +16 -0
  4. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/coref.yaml +16 -0
  5. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/qnli.yaml +16 -0
  6. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/utils.py +78 -0
  7. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/vaxx.yaml +16 -0
  8. lm-evaluation/build/lib/lm_eval/tasks/basqueglue/wic.yaml +17 -0
  9. lm-evaluation/build/lib/lm_eval/tasks/belebele/_default_template_yaml +19 -0
  10. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_acm_Arab.yaml +4 -0
  11. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ary_Arab.yaml +4 -0
  12. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_arz_Arab.yaml +4 -0
  13. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_azj_Latn.yaml +4 -0
  14. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_bul_Cyrl.yaml +4 -0
  15. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_gaz_Latn.yaml +4 -0
  16. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_guj_Gujr.yaml +4 -0
  17. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hye_Armn.yaml +4 -0
  18. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ilo_Latn.yaml +4 -0
  19. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_kea_Latn.yaml +4 -0
  20. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lin_Latn.yaml +4 -0
  21. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_mar_Deva.yaml +4 -0
  22. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nob_Latn.yaml +4 -0
  23. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ory_Orya.yaml +4 -0
  24. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_slk_Latn.yaml +4 -0
  25. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_sun_Latn.yaml +4 -0
  26. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tha_Thai.yaml +4 -0
  27. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tir_Ethi.yaml +4 -0
  28. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tur_Latn.yaml +4 -0
  29. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ukr_Cyrl.yaml +4 -0
  30. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_urd_Latn.yaml +4 -0
  31. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_wol_Latn.yaml +4 -0
  32. lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_zul_Latn.yaml +4 -0
  33. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/README.md +101 -0
  34. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english.yaml +23 -0
  35. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_autre.yaml +4 -0
  36. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_gender.yaml +4 -0
  37. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_physical_appearance.yaml +4 -0
  38. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_race_color.yaml +4 -0
  39. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_sexual_orientation.yaml +4 -0
  40. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_socioeconomic.yaml +4 -0
  41. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_age.yaml +4 -0
  42. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_autre.yaml +4 -0
  43. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_gender.yaml +4 -0
  44. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_nationality.yaml +4 -0
  45. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_physical_appearance.yaml +4 -0
  46. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_race_color.yaml +4 -0
  47. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_religion.yaml +4 -0
  48. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_sexual_orientation.yaml +4 -0
  49. lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/utils.py +64 -0
  50. lm-evaluation/build/lib/lm_eval/tasks/eus_trivia/README.md +54 -0
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/README.md ADDED
@@ -0,0 +1,72 @@
+ # BasqueGLUE
+
+ ### Paper
+
+ Title: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque`
+
+ Abstract: `https://aclanthology.org/2022.lrec-1.172/`
+
+ Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license.
+
+ Homepage: `https://github.com/orai-nlp/BasqueGLUE`
+
+ Title: `Latxa: An Open Language Model and Evaluation Suite for Basque`
+
+ Abstract: `https://arxiv.org/abs/2403.20266`
+
+ This paper presents the use of BasqueGLUE for evaluating decoder models in Basque.
+
+ Homepage: `https://github.com/hitz-zentroa/latxa`
+
+ ### Citation
+
+ ```
+ @InProceedings{urbizu2022basqueglue,
+   author    = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor},
+   title     = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque},
+   booktitle = {Proceedings of the Language Resources and Evaluation Conference},
+   month     = {June},
+   year      = {2022},
+   address   = {Marseille, France},
+   publisher = {European Language Resources Association},
+   pages     = {1603--1612},
+   url       = {https://aclanthology.org/2022.lrec-1.172}
+ }
+
+ @misc{etxaniz2024latxa,
+   title={Latxa: An Open Language Model and Evaluation Suite for Basque},
+   author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
+   year={2024},
+   eprint={2403.20266},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `basque-glue`: First version of the implementation
+
+ #### Tasks
+
+ * `bhtc_v2`: Topic classification of news extracts with 12 categories.
+ * `bec`: Sentiment analysis on tweets about the campaign for the 2016 Basque elections.
+ * `vaxx_stance`: Stance detection on tweets around the anti-vaccine movement.
+ * `qnlieu`: Q&A NLI as in [glue/qnli](../glue/qnli).
+ * `wiceu`: Word-in-Context as in [super_glue/wic](../super_glue/wic).
+ * `epec_koref_bin`: Coreference detection as in [super_glue/wsc](../super_glue/wsc).
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
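The task files in this commit are standard lm-evaluation-harness YAML tasks, so once installed they can be invoked through the harness' Python API. A minimal sketch, assuming an lm-eval 0.4-style install; the model name is a placeholder, not a recommendation:

```python
import lm_eval

# Run one BasqueGLUE task; swap in the group name "basque-glue" to run them all.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-70m",  # placeholder model
    tasks=["bec2016eu"],
    num_fewshot=0,
)
print(results["results"])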
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/bec.yaml ADDED
@@ -0,0 +1,16 @@
+ group: basque-glue
+ task: bec2016eu
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: bec
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak?\nErantzuna:"
+ doc_to_target: label
+ doc_to_choice: ['negatiboa', 'neutrala', 'positiboa']
+ metric_list:
+   - metric: f1
+     aggregation: !function utils.micro_f1_score
+     higher_is_better: true
+ metadata:
+   - version: 1.0
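The `doc_to_text` field above is a Jinja2 template that the harness renders against each dataset row. A small sketch of the rendered prompt, with a made-up example tweet:

```python
from jinja2 import Template

doc = {"text": "Oso pozik nago emaitzekin!"}  # hypothetical example row
template = Template(
    "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak?\nErantzuna:"
)
print(template.render(**doc))
# Testua: Oso pozik nago emaitzekin!
# Galdera: Nolako jarrera agertzen du aurreko testuak?
# Erantzuna:
```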
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/bhtc.yaml ADDED
@@ -0,0 +1,16 @@
+ group: basque-glue
+ task: bhtc_v2
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: bhtc
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "Testua: {{text}}\nGaldera: Zein da aurreko testuaren gaia?\nErantzuna:"
+ doc_to_target: label
+ doc_to_choice: ['Ekonomia', 'Euskal Herria', 'Euskara', 'Gizartea', 'Historia', 'Ingurumena', 'Iritzia', 'Komunikazioa', 'Kultura', 'Nazioartea', 'Politika', 'Zientzia']
+ metric_list:
+   - metric: f1
+     aggregation: !function utils.micro_f1_score
+     higher_is_better: true
+ metadata:
+   - version: 1.0
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/coref.yaml ADDED
@@ -0,0 +1,16 @@
+ group: basque-glue
+ task: epec_koref_bin
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: coref
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: !function utils.coref_doc_to_text
+ doc_to_target: label
+ doc_to_choice: ['ez', 'bai']
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   - version: 1.0
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/qnli.yaml ADDED
@@ -0,0 +1,16 @@
+ group: basque-glue
+ task: qnlieu
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: qnli
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "{{question}}\n{{sentence}}\nGaldera: aurreko galderari erantzuten al dio emandako testuak?\nErantzuna:"
+ doc_to_target: label
+ doc_to_choice: ['bai', 'ez']
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   - version: 1.0
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/utils.py ADDED
@@ -0,0 +1,78 @@
+ import html
+ import re
+
+ from datasets import load_metric
+
+
+ def general_detokenize(string):
+     string = re.sub(r"\s+([.,;:!?)])", r"\1", string)
+     string = re.sub(r"(\s+|^)\(\s+([^)]+)\s+\)", r"\1(\2)", string)
+     string = re.sub(r"(\s+|^)\[\s+([^)]+)\s+\]", r"\1[\2]", string)
+     string = re.sub(r'(\s+|^)"\s+([^"]+)\s+"', r'\1"\2"', string)
+     string = re.sub(r"(\s+|^)'\s+([^']+)\s+'", r"\1'\2'", string)
+     return string
+
+
+ def process_doc(string):
+     string = html.unescape(string)
+     string = general_detokenize(string)
+     return string
+
+
+ def process_wic_docs(dataset):
+     def _helper(doc):
+         # there's some issues with the encoding on this one
+         doc["sentence1"] = (
+             process_doc(doc["sentence1"]).encode("latin-1").decode("utf-8")
+         )
+         doc["sentence2"] = (
+             process_doc(doc["sentence2"]).encode("latin-1").decode("utf-8")
+         )
+         return doc
+
+     return dataset.map(_helper)
+
+
+ def coref_doc_to_text(x):
+     def _span_in_context(span_index, span_text):
+         span_start = span_index
+         span_end = span_start + len(span_text.split(" ")) - 1
+         tokens[span_start] = f"*{tokens[span_start]}"
+         tokens[span_end] = f"{tokens[span_end]}*"
+
+     tokens = x["text"].split(" ")
+     _span_in_context(x["span1_index"], x["span1_text"])
+     _span_in_context(
+         x["span2_index"] - 1, x["span2_text"]
+     )  # span1_index is 0-based but span2_index is 1-based ??
+     context = process_doc(" ".join(tokens))
+     span_1 = process_doc(x["span1_text"])
+     span_2 = process_doc(x["span2_text"])
+     text = (
+         f"Testua: {context}\n"
+         + f'Galdera: Aurreko testuan, "*{span_1}*" eta "*{span_2}*" gauza bera dira?\n'
+         + "Erantzuna:"
+     )
+     return text
+
+
+ # Measure F1 as in the benchmark repo: https://github.com/orai-nlp/BasqueGLUE/blob/main/eval_basqueglue.py
+
+
+ def micro_f1_score(items):
+     f1_metric = load_metric("f1")
+     golds, preds = list(zip(*items))
+     f1_score = f1_metric.compute(references=golds, predictions=preds, average="micro")[
+         "f1"
+     ]
+     return f1_score
+
+
+ def vaxx_f1_score(items):
+     f1_metric = load_metric("f1")
+     golds, preds = list(zip(*items))
+     f1_class = f1_metric.compute(
+         references=golds, predictions=preds, labels=[0, 2], average=None
+     )["f1"]
+     f1_score = sum(f1_class) / len(f1_class)
+     return f1_score
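To see how `coref_doc_to_text` marks the two spans, here is a small sketch with a made-up document. It assumes the module above is saved as `utils.py` on the import path; note that `span2_index` is passed 1-based, per the comment in the code:

```python
import utils  # the basqueglue utils module above

doc = {
    "text": "Mikel etorri da eta bera pozik dago",
    "span1_index": 0,   # 0-based token index
    "span1_text": "Mikel",
    "span2_index": 5,   # 1-based token index
    "span2_text": "bera",
}
print(utils.coref_doc_to_text(doc))
# Testua: *Mikel* etorri da eta *bera* pozik dago
# Galdera: Aurreko testuan, "*Mikel*" eta "*bera*" gauza bera dira?
# Erantzuna:
```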
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/vaxx.yaml ADDED
@@ -0,0 +1,16 @@
+ group: basque-glue
+ task: vaxx_stance
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: vaxx
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak txertoei buruz?\nErantzuna:"
+ doc_to_target: label
+ doc_to_choice: ['aurka', 'neutrala', 'alde']
+ metric_list:
+   - metric: f1
+     aggregation: !function utils.vaxx_f1_score
+     higher_is_better: true
+ metadata:
+   - version: 1.0
lm-evaluation/build/lib/lm_eval/tasks/basqueglue/wic.yaml ADDED
@@ -0,0 +1,17 @@
+ group: basque-glue
+ task: wiceu
+ dataset_path: orai-nlp/basqueGLUE
+ dataset_name: wic
+ output_type: multiple_choice
+ validation_split: validation
+ test_split: test
+ process_docs: !function utils.process_wic_docs
+ doc_to_text: "1. esaldia: {{sentence1}}\n2. esaldia: {{sentence2}}\nGaldera: Aurreko bi esaldietan, \"{{word}}\" hitzak esanahi berdina du?\nErantzuna:"
+ doc_to_target: label
+ doc_to_choice: ['ez', 'bai']
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   - version: 1.0
lm-evaluation/build/lib/lm_eval/tasks/belebele/_default_template_yaml ADDED
@@ -0,0 +1,19 @@
+ group: belebele
+ dataset_path: facebook/belebele
+ fewshot_config:
+   sampler: first_n
+ output_type: multiple_choice
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{question}}"
+ doc_to_text: "P: {{flores_passage}}\nQ: {{question.strip()}}\nA: {{mc_answer1}}\nB: {{mc_answer2}}\nC: {{mc_answer3}}\nD: {{mc_answer4}}\nAnswer:"
+ doc_to_choice: ["A", "B", "C", "D"]
+ doc_to_target: "{{['1', '2', '3', '4'].index(correct_answer_num)}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+   - metric: acc_norm
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 0.0
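The `doc_to_target` expression maps the dataset's `correct_answer_num` string onto a 0-based index into `doc_to_choice`. A quick Python check of the same logic:

```python
# Same mapping as the Jinja expression in doc_to_target above.
choices = ["A", "B", "C", "D"]
correct_answer_num = "3"  # hypothetical dataset value
target = ["1", "2", "3", "4"].index(correct_answer_num)
assert target == 2 and choices[target] == "C"
```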
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_acm_Arab.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "acm_Arab"
+ "include": "_default_template_yaml"
+ "task": "belebele_acm_Arab"
+ "test_split": "acm_Arab"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ary_Arab.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "ary_Arab"
+ "include": "_default_template_yaml"
+ "task": "belebele_ary_Arab"
+ "test_split": "ary_Arab"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_arz_Arab.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "arz_Arab"
+ "include": "_default_template_yaml"
+ "task": "belebele_arz_Arab"
+ "test_split": "arz_Arab"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_azj_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "azj_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_azj_Latn"
+ "test_split": "azj_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_bul_Cyrl.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "bul_Cyrl"
+ "include": "_default_template_yaml"
+ "task": "belebele_bul_Cyrl"
+ "test_split": "bul_Cyrl"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_gaz_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "gaz_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_gaz_Latn"
+ "test_split": "gaz_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_guj_Gujr.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "guj_Gujr"
+ "include": "_default_template_yaml"
+ "task": "belebele_guj_Gujr"
+ "test_split": "guj_Gujr"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hye_Armn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "hye_Armn"
+ "include": "_default_template_yaml"
+ "task": "belebele_hye_Armn"
+ "test_split": "hye_Armn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ilo_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "ilo_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_ilo_Latn"
+ "test_split": "ilo_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_kea_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "kea_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_kea_Latn"
+ "test_split": "kea_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lin_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "lin_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_lin_Latn"
+ "test_split": "lin_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_mar_Deva.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "mar_Deva"
+ "include": "_default_template_yaml"
+ "task": "belebele_mar_Deva"
+ "test_split": "mar_Deva"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nob_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "nob_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_nob_Latn"
+ "test_split": "nob_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ory_Orya.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "ory_Orya"
+ "include": "_default_template_yaml"
+ "task": "belebele_ory_Orya"
+ "test_split": "ory_Orya"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_slk_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "slk_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_slk_Latn"
+ "test_split": "slk_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_sun_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "sun_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_sun_Latn"
+ "test_split": "sun_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tha_Thai.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "tha_Thai"
+ "include": "_default_template_yaml"
+ "task": "belebele_tha_Thai"
+ "test_split": "tha_Thai"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tir_Ethi.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "tir_Ethi"
+ "include": "_default_template_yaml"
+ "task": "belebele_tir_Ethi"
+ "test_split": "tir_Ethi"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tur_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "tur_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_tur_Latn"
+ "test_split": "tur_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ukr_Cyrl.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "ukr_Cyrl"
+ "include": "_default_template_yaml"
+ "task": "belebele_ukr_Cyrl"
+ "test_split": "ukr_Cyrl"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_urd_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "urd_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_urd_Latn"
+ "test_split": "urd_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_wol_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "wol_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_wol_Latn"
+ "test_split": "wol_Latn"
lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_zul_Latn.yaml ADDED
@@ -0,0 +1,4 @@
+ "fewshot_split": "zul_Latn"
+ "include": "_default_template_yaml"
+ "task": "belebele_zul_Latn"
+ "test_split": "zul_Latn"
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/README.md ADDED
@@ -0,0 +1,101 @@
+ # CrowS-Pairs
+
+ ### Paper
+
+ CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
+ https://aclanthology.org/2020.emnlp-main.154/
+
+ French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than English
+ https://aclanthology.org/2022.acl-long.583/
+
+ CrowS-Pairs is a challenge set for evaluating language models (LMs) on their tendency to generate biased outputs. CrowS-Pairs comes in two languages, and the English subset has a newer version which fixes some of the issues with the original version.
+
+ Homepage: https://github.com/nyu-mll/crows-pairs, https://gitlab.inria.fr/french-crows-pairs
+
+ ### Citation
+
+ ```bibtex
+ @inproceedings{nangia-etal-2020-crows,
+     title = "{C}row{S}-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models",
+     author = "Nangia, Nikita and
+       Vania, Clara and
+       Bhalerao, Rasika and
+       Bowman, Samuel R.",
+     booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
+     month = nov,
+     year = "2020",
+     address = "Online",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2020.emnlp-main.154",
+     doi = "10.18653/v1/2020.emnlp-main.154",
+     pages = "1953--1967",
+     abstract = "Pretrained language models, especially masked language models (MLMs) have seen success across many NLP tasks. However, there is ample evidence that they use the cultural biases that are undoubtedly present in the corpora they are trained on, implicitly creating harm with biased representations. To measure some forms of social bias in language models against protected demographic groups in the US, we introduce the Crowdsourced Stereotype Pairs benchmark (CrowS-Pairs). CrowS-Pairs has 1508 examples that cover stereotypes dealing with nine types of bias, like race, religion, and age. In CrowS-Pairs a model is presented with two sentences: one that is more stereotyping and another that is less stereotyping. The data focuses on stereotypes about historically disadvantaged groups and contrasts them with advantaged groups. We find that all three of the widely-used MLMs we evaluate substantially favor sentences that express stereotypes in every category in CrowS-Pairs. As work on building less biased models advances, this dataset can be used as a benchmark to evaluate progress.",
+ }
+
+ @inproceedings{neveol-etal-2022-french,
+     title = "{F}rench {C}row{S}-Pairs: Extending a challenge dataset for measuring social bias in masked language models to a language other than {E}nglish",
+     author = {N{\'e}v{\'e}ol, Aur{\'e}lie and
+       Dupont, Yoann and
+       Bezan{\c{c}}on, Julien and
+       Fort, Kar{\"e}n},
+     booktitle = "Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = may,
+     year = "2022",
+     address = "Dublin, Ireland",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2022.acl-long.583",
+     doi = "10.18653/v1/2022.acl-long.583",
+     pages = "8521--8531",
+     abstract = "Warning: This paper contains explicit statements of offensive stereotypes which may be upsetting. Much work on biases in natural language processing has addressed biases linked to the social and cultural experience of English speaking individuals in the United States. We seek to widen the scope of bias studies by creating material to measure social bias in language models (LMs) against specific demographic groups in France. We build on the US-centered CrowS-pairs dataset to create a multilingual stereotypes dataset that allows for comparability across languages while also characterizing biases that are specific to each country and language. We introduce 1,679 sentence pairs in French that cover stereotypes in ten types of bias like gender and age. 1,467 sentence pairs are translated from CrowS-pairs and 212 are newly crowdsourced. The sentence pairs contrast stereotypes concerning underadvantaged groups with the same sentence concerning advantaged groups. We find that four widely used language models (three French, one multilingual) favor sentences that express stereotypes in most bias categories. We report on the translation process from English into French, which led to a characterization of stereotypes in CrowS-pairs including the identification of US-centric cultural traits. We offer guidelines to further extend the dataset to other languages and cultural environments.",
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ - `crows_pairs_english`: The entire English subset of the CrowS-Pairs dataset.
+ - `crows_pairs_french`: The entire French subset of the CrowS-Pairs dataset.
+
+ #### Tasks
+
+ The following tasks evaluate sub-areas of bias in the English CrowS-Pairs dataset:
+ - `crows_pairs_english_age`
+ - `crows_pairs_english_autre`
+ - `crows_pairs_english_disability`
+ - `crows_pairs_english_gender`
+ - `crows_pairs_english_nationality`
+ - `crows_pairs_english_physical_appearance`
+ - `crows_pairs_english_race_color`
+ - `crows_pairs_english_religion`
+ - `crows_pairs_english_sexual_orientation`
+ - `crows_pairs_english_socioeconomic`
+
+ The following tasks evaluate sub-areas of bias in the French CrowS-Pairs dataset:
+ - `crows_pairs_french_age`
+ - `crows_pairs_french_autre`
+ - `crows_pairs_french_disability`
+ - `crows_pairs_french_gender`
+ - `crows_pairs_french_nationality`
+ - `crows_pairs_french_physical_appearance`
+ - `crows_pairs_french_race_color`
+ - `crows_pairs_french_religion`
+ - `crows_pairs_french_sexual_orientation`
+ - `crows_pairs_french_socioeconomic`
+
+ All tasks evaluate the percentage of more-stereotypical sentences that a model rates as more likely than their less-stereotypical counterparts (`pct_stereotype`), as well as the average absolute difference in loglikelihoods between the sentences in each pair (`likelihood_diff`).
+
+ ### Checklist
+
+ * [x] Is the task an existing benchmark in the literature?
+   * [x] Have you referenced the original paper that introduced the task?
+   * [x] If yes, does the original paper provide a reference implementation?
+     * [x] The original paper does not provide one for causal language models, so this is a novel formulation of the task for autoregressive LMs.
+
+ If other tasks on this dataset are already supported:
+ * [x] Is the "Main" variant of this task clearly denoted?
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
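The `pct_stereotype` and `likelihood_diff` numbers described above reduce to simple aggregates over per-pair loglikelihoods. A toy sketch with made-up values:

```python
# (loglikelihood of sent_more, loglikelihood of sent_less); values are made up.
pairs = [(-31.2, -33.5), (-28.0, -27.1), (-40.3, -44.9)]

pct_stereotype = sum(ll_more > ll_less for ll_more, ll_less in pairs) / len(pairs)
likelihood_diff = sum(abs(a - b) for a, b in pairs) / len(pairs)
print(pct_stereotype, likelihood_diff)  # 0.666..., 2.6
```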
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english.yaml ADDED
@@ -0,0 +1,23 @@
+ group:
+   - crows_pairs
+   - social_bias
+   - loglikelihood
+ task: crows_pairs_english
+ dataset_path: BigScienceBiasEval/crows_pairs_multilingual
+ dataset_name: english
+ test_split: test
+ output_type: multiple_choice
+ doc_to_text: ""
+ doc_to_target: 0
+ doc_to_choice: !function utils.doc_to_choice
+ target_delimiter: ""
+ process_results: !function utils.process_results
+ metric_list:
+   - metric: likelihood_diff
+     aggregation: mean
+     higher_is_better: false
+   - metric: pct_stereotype
+     aggregation: mean
+     higher_is_better: false
+ metadata:
+   version: 1.0
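With `doc_to_text: ""` and `target_delimiter: ""`, each choice is scored as a full standalone sentence, so the comparison comes down to summed token logprobs. A minimal sketch of that scoring with `transformers` (the model name is a placeholder, and the two sentences stand in for a CrowS-Pairs row):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def sentence_loglikelihood(sentence: str) -> float:
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # log-probability of each token given its prefix
    logprobs = torch.log_softmax(logits[0, :-1], dim=-1)
    return logprobs.gather(1, ids[0, 1:].unsqueeze(-1)).sum().item()

ll_more = sentence_loglikelihood("...")  # sent_more from a dataset row
ll_less = sentence_loglikelihood("...")  # sent_less
print(abs(ll_more - ll_less), ll_more > ll_less)
```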
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_autre.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_autre
+ dataset_name: english
+ process_docs: !function utils.filter_autre
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_gender.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_gender
+ dataset_name: english
+ process_docs: !function utils.filter_gender
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_physical_appearance.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_physical_appearance
+ dataset_name: english
+ process_docs: !function utils.filter_appearance
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_race_color.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_race_color
+ dataset_name: english
+ process_docs: !function utils.filter_race_color
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_sexual_orientation.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_sexual_orientation
+ dataset_name: english
+ process_docs: !function utils.filter_orientation
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_english_socioeconomic.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_english_socioeconomic
+ dataset_name: english
+ process_docs: !function utils.filter_socio
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_age.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_age
+ dataset_name: french
+ process_docs: !function utils.filter_age
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_autre.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_autre
+ dataset_name: french
+ process_docs: !function utils.filter_autre
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_gender.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_gender
+ dataset_name: french
+ process_docs: !function utils.filter_gender
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_nationality.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_nationality
+ dataset_name: french
+ process_docs: !function utils.filter_nationality
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_physical_appearance.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_physical_appearance
+ dataset_name: french
+ process_docs: !function utils.filter_appearance
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_race_color.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_race_color
+ dataset_name: french
+ process_docs: !function utils.filter_race_color
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_religion.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_religion
+ dataset_name: french
+ process_docs: !function utils.filter_religion
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/crows_pairs_french_sexual_orientation.yaml ADDED
@@ -0,0 +1,4 @@
+ include: crows_pairs_english.yaml
+ task: crows_pairs_french_sexual_orientation
+ dataset_name: french
+ process_docs: !function utils.filter_orientation
lm-evaluation/build/lib/lm_eval/tasks/crows_pairs/utils.py ADDED
@@ -0,0 +1,64 @@
+ import datasets
+
+
+ def process_results(doc, results):
+     lls, _ = zip(*results)
+
+     likelihood1, likelihood2 = lls
+
+     # Calculate the absolute difference in loglikelihoods
+     diff = abs(likelihood1 - likelihood2)
+
+     # if stereotypical sentence more likely (loglikelihood higher)
+     # then treat this as predicting stereotyped sentence
+     acc = 1.0 if likelihood1 > likelihood2 else 0.0
+
+     return {"likelihood_diff": diff, "pct_stereotype": acc}
+
+
+ def doc_to_choice(doc):
+     return [doc["sent_more"], doc["sent_less"]]
+
+
+ def filter_dataset(dataset: datasets.Dataset, bias_type: str) -> datasets.Dataset:
+     return dataset.filter(lambda example: example["bias_type"].startswith(bias_type))
+
+
+ def filter_race_color(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "race-color")
+
+
+ def filter_socio(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "socioeconomic")
+
+
+ def filter_gender(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "gender")
+
+
+ def filter_age(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "age")
+
+
+ def filter_religion(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "religion")
+
+
+ def filter_disability(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "disability")
+
+
+ def filter_orientation(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "sexual-orientation")
+
+
+ def filter_nationality(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "nationality")
+
+
+ def filter_appearance(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "physical-appearance")
+
+
+ def filter_autre(dataset: datasets.Dataset) -> datasets.Dataset:
+     return filter_dataset(dataset, "autre")
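A quick sketch of how the category filters above behave, using a small in-memory dataset (column names follow the fields used above; the module is assumed importable as `utils`):

```python
import datasets
import utils  # the crows_pairs utils module above

ds = datasets.Dataset.from_dict(
    {
        "sent_more": ["...", "..."],   # placeholder sentence pairs
        "sent_less": ["...", "..."],
        "bias_type": ["race-color", "gender"],
    }
)
print(utils.filter_race_color(ds).num_rows)  # 1
print(utils.filter_gender(ds).num_rows)      # 1
```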
lm-evaluation/build/lib/lm_eval/tasks/eus_trivia/README.md ADDED
@@ -0,0 +1,54 @@
+ # EusTrivia
+
+ ### Paper
+
+ Title: Latxa: An Open Language Model and Evaluation Suite for Basque
+
+ Abstract: https://arxiv.org/abs/2403.20266
+
+ EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3% of the questions are elementary level (grades 3-6), while the rest are considered challenging. A significant portion of the questions focus specifically on the Basque Country, its language and culture. Each multiple-choice question contains two, three or four choices (3.84 on average) and a single correct answer. Five areas of knowledge are covered:
+
+ - **Humanities and Natural Sciences** (27.8%): This category encompasses questions about history, geography, biology, ecology and other social and natural sciences.
+ - **Leisure and Art** (24.5%): This category includes questions on sports and athletes, performative and plastic arts and artists, architecture, cultural events, and related topics.
+ - **Music** (16.0%): This category groups all questions about music and musicians, both classical and contemporary.
+ - **Language and Literature** (17.1%): This category is concerned with all kinds of literary productions and writers, as well as metalinguistic questions (e.g., definitions, synonyms, and word usage).
+ - **Mathematics and ICT** (14.5%): This category covers mathematical problems and questions about ICT, as well as questions about people known for their contributions to these fields of knowledge.
+
+ Homepage: https://github.com/hitz-zentroa/latxa
+
+ ### Citation
+
+ ```
+ @misc{etxaniz2024latxa,
+   title={Latxa: An Open Language Model and Evaluation Suite for Basque},
+   author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
+   year={2024},
+   eprint={2403.20266},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ There are no groups.
+
+ #### Tasks
+
+ * `eus_trivia`: EusTrivia consists of 1,715 trivia questions from multiple online sources.
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
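Since the category shares above are percentages of a fixed pool of 1,715 questions, approximate per-category counts follow directly. A quick sketch (counts rounded; shares as quoted above, which sum to 99.9% due to rounding):

```python
# Category shares quoted in the README above.
total = 1715
shares = {
    "Humanities and Natural Sciences": 27.8,
    "Leisure and Art": 24.5,
    "Music": 16.0,
    "Language and Literature": 17.1,
    "Mathematics and ICT": 14.5,
}
for name, pct in shares.items():
    print(f"{name}: ~{round(total * pct / 100)} questions")
print(sum(shares.values()))  # 99.9
```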