diff --git a/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md b/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e6e5aeec0403b8c854233089498c9248cf38f089 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md @@ -0,0 +1,56 @@ +# ASDiv + +### Paper + +Title: `ASDiv: A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers` + +Abstract: https://arxiv.org/abs/2106.15772 + +ASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language +patterns and problem types) English math word problem (MWP) corpus for evaluating +the capability of various MWP solvers. Existing MWP corpora for studying AI progress +remain limited either in language usage patterns or in problem types. We thus present +a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem +types taught in elementary school. Each MWP is annotated with its problem type and grade +level (for indicating the level of difficulty). + +NOTE: We currently ignore formulas for answer generation. + +Homepage: https://github.com/chaochun/nlu-asdiv-dataset + + +### Citation + +``` +@misc{miao2021diverse, + title={A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers}, + author={Shen-Yun Miao and Chao-Chun Liang and Keh-Yih Su}, + year={2021}, + eprint={2106.15772}, + archivePrefix={arXiv}, + primaryClass={cs.AI} +} +``` + +### Groups and Tasks + +#### Groups + +* Not part of a group yet. + +#### Tasks + +* `asdiv` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? 
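The `default.yaml` added below maps each record to a prompt and a target with two small Jinja templates. As a rough, hedged sketch (the record is invented, and the `answer` format "value (annotation)" is an assumption based on the `answer.split(' (')[0]` target template and the note above about ignoring formulas):

```python
# Illustrative sketch only; not part of the task code.
# Assumed record shape for EleutherAI/asdiv; field contents are made up.
doc = {
    "body": "Seven red apples and two green apples are in the basket.",
    "question": "How many apples are in the basket?",
    "answer": "9 (apples)",  # assumed "value (annotation)" format
}

prompt = f"{doc['body']}\nQuestion:{doc['question']}\nAnswer:"  # mirrors doc_to_text
target = doc["answer"].split(" (")[0]                           # mirrors doc_to_target -> "9"
print(prompt)
print(target)
```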
diff --git a/lm-evaluation-harness/lm_eval/tasks/asdiv/default.yaml b/lm-evaluation-harness/lm_eval/tasks/asdiv/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bd3917c3c228dd8cca64fc40ffd27de55608f457 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/asdiv/default.yaml @@ -0,0 +1,16 @@ +task: asdiv +dataset_path: EleutherAI/asdiv +output_type: loglikelihood +validation_split: validation +doc_to_text: "{{body}}\nQuestion:{{question}}\nAnswer:" +doc_to_target: "{{answer.split(' (')[0]}}" +should_decontaminate: true +doc_to_decontamination_query: "{{body}} {{question}}" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 +dataset_kwargs: + trust_remote_code: true diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md b/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md new file mode 100644 index 0000000000000000000000000000000000000000..04583b1dad5875011d9dda3f96c2ccd7c6038b5c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md @@ -0,0 +1,72 @@ +# BasqueGLUE + +### Paper + +Title: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque` + +Abstract: `https://aclanthology.org/2022.lrec-1.172/` + +Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this improvement in a robust and general way. These benchmarks take into account a wide and diverse set of NLU tasks that require some form of language understanding, beyond the detection of superficial, textual clues. However, they are costly to develop and language-dependent, and therefore they are only available for a small number of languages. In this paper, we present BasqueGLUE, the first NLU benchmark for Basque, a less-resourced language, which has been elaborated from previously existing datasets and following similar criteria to those used for the construction of GLUE and SuperGLUE. We also report the evaluation of two state-of-the-art language models for Basque on BasqueGLUE, thus providing a strong baseline to compare upon. BasqueGLUE is freely available under an open license. + +Homepage: `https://github.com/orai-nlp/BasqueGLUE` + +Title: `Latxa: An Open Language Model and Evaluation Suite for Basque` + +Abstract: `https://arxiv.org/abs/2403.20266` + +The use of BasqueGLUE for evaluating the performance of decoder models in Basque is presented in this paper. 
+ +Homepage: `https://github.com/hitz-zentroa/latxa` + +### Citation + +``` +@InProceedings{urbizu2022basqueglue, + author = {Urbizu, Gorka and San Vicente, Iñaki and Saralegi, Xabier and Agerri, Rodrigo and Soroa, Aitor}, + title = {BasqueGLUE: A Natural Language Understanding Benchmark for Basque}, + booktitle = {Proceedings of the Language Resources and Evaluation Conference}, + month = {June}, + year = {2022}, + address = {Marseille, France}, + publisher = {European Language Resources Association}, + pages = {1603--1612}, + url = {https://aclanthology.org/2022.lrec-1.172} +} + +@misc{etxaniz2024latxa, + title={Latxa: An Open Language Model and Evaluation Suite for Basque}, + author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa}, + year={2024}, + eprint={2403.20266}, + archivePrefix={arXiv}, + primaryClass={cs.CL} +} +``` + +### Groups and Tasks + +#### Groups + +* `basque-glue`: First version of the implementation + +#### Tasks + +* `bhtc_v2`: Topic classification of news extracts with 12 categories. +* `bec2016eu`: Sentiment analysis on tweets about the campaign for the 2016 Basque elections. +* `vaxx_stance`: Stance detection on tweets around the anti-vaccine movement. +* `qnlieu`: Q&A NLI as in [glue/qnli](../glue/qnli). +* `wiceu`: Word-in-Context as in [super_glue/wic](../super_glue/wic). +* `epec_koref_bin`: Coreference detection as in [super_glue/wsc](../super_glue/wsc). + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
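To make the configs below concrete, here is a hedged sketch of the prompt and gold choice that `bec.yaml` would produce for a single record; the tweet text is invented, and only the `doc_to_text` / `doc_to_choice` fields of the config are mirrored:

```python
# Hedged illustration of the bec2016eu prompt construction (invented example).
doc = {"text": "Oso pozik nago hauteskunde kanpainarekin!", "label": 2}

prompt = (
    f"Testua: {doc['text']}\n"
    "Galdera: Nolako jarrera agertzen du aurreko testuak?\n"
    "Erantzuna:"
)
choices = ["negatiboa", "neutrala", "positiboa"]  # doc_to_choice
gold = choices[doc["label"]]                      # -> "positiboa"
print(prompt)
print(gold)
```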
diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/bec.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/bec.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a078300f0f55e75c353332aecabb8bd72a679fd6 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/bec.yaml @@ -0,0 +1,16 @@ +group: basque-glue +task: bec2016eu +dataset_path: orai-nlp/basqueGLUE +dataset_name: bec +output_type: multiple_choice +validation_split: validation +test_split: test +doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak?\nErantzuna:" +doc_to_target: label +doc_to_choice: ['negatiboa', 'neutrala', 'positiboa'] +metric_list: + - metric: f1 + aggregation: !function utils.micro_f1_score + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/bhtc.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/bhtc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b069d62f4d8c9bcb09aa95dc9db4f50f554f80b5 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/bhtc.yaml @@ -0,0 +1,16 @@ +group: basque-glue +task: bhtc_v2 +dataset_path: orai-nlp/basqueGLUE +dataset_name: bhtc +output_type: multiple_choice +validation_split: validation +test_split: test +doc_to_text: "Testua: {{text}}\nGaldera: Zein da aurreko testuaren gaia?\nErantzuna:" +doc_to_target: label +doc_to_choice: ['Ekonomia', 'Euskal Herria', 'Euskara', 'Gizartea', 'Historia', 'Ingurumena', 'Iritzia', 'Komunikazioa', 'Kultura', 'Nazioartea', 'Politika', 'Zientzia'] +metric_list: + - metric: f1 + aggregation: !function utils.micro_f1_score + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/coref.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/coref.yaml new file mode 100644 index 0000000000000000000000000000000000000000..721691ab43d654d1e9ef7d3965095bc977a08632 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/coref.yaml @@ -0,0 +1,16 @@ +group: basque-glue +task: epec_koref_bin +dataset_path: orai-nlp/basqueGLUE +dataset_name: coref +output_type: multiple_choice +validation_split: validation +test_split: test +doc_to_text: !function utils.coref_doc_to_text +doc_to_target: label +doc_to_choice: ['ez', 'bai'] +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/qnli.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/qnli.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f3cfe84c16ae7aadd7ad2847c808c4764a6415e8 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/qnli.yaml @@ -0,0 +1,16 @@ +group: basque-glue +task: qnlieu +dataset_path: orai-nlp/basqueGLUE +dataset_name: qnli +output_type: multiple_choice +validation_split: validation +test_split: test +doc_to_text: "{{question}}\n{{sentence}}\nGaldera: aurreko galderari erantzuten al dio emandako testuak?\nErantzuna:" +doc_to_target: label +doc_to_choice: ['bai', 'ez'] +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/utils.py b/lm-evaluation-harness/lm_eval/tasks/basqueglue/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..401375f709f765dba749ea275df16bcb19643d9c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/utils.py @@ -0,0 +1,78 @@ +import html +import re 
+ +from datasets import load_metric + + +def general_detokenize(string): + string = re.sub(r"\s+([.,;:!?)])", r"\1", string) + string = re.sub(r"(\s+|^)\(\s+([^)]+)\s+\)", r"\1(\2)", string) + string = re.sub(r"(\s+|^)\[\s+([^)]+)\s+\]", r"\1[\2]", string) + string = re.sub(r'(\s+|^)"\s+([^"]+)\s+"', r'\1"\2"', string) + string = re.sub(r"(\s+|^)'\s+([^']+)\s+'", r"\1'\2'", string) + return string + + +def process_doc(string): + string = html.unescape(string) + string = general_detokenize(string) + return string + + +def process_wic_docs(dataset): + def _helper(doc): + # there's some issues with the encoding on this one + doc["sentence1"] = ( + process_doc(doc["sentence1"]).encode("latin-1").decode("utf-8") + ) + doc["sentence2"] = ( + process_doc(doc["sentence2"]).encode("latin-1").decode("utf-8") + ) + return doc + + return dataset.map(_helper) + + +def coref_doc_to_text(x): + def _span_in_context(span_index, span_text): + span_start = span_index + span_end = span_start + len(span_text.split(" ")) - 1 + tokens[span_start] = f"*{tokens[span_start]}" + tokens[span_end] = f"{tokens[span_end]}*" + + tokens = x["text"].split(" ") + _span_in_context(x["span1_index"], x["span1_text"]) + _span_in_context( + x["span2_index"] - 1, x["span2_text"] + ) # span1_index is 0-based but span2_index is 1-based ?? + context = process_doc(" ".join(tokens)) + span_1 = process_doc(x["span1_text"]) + span_2 = process_doc(x["span2_text"]) + text = ( + f"Testua: {context}\n" + + f'Galdera: Aurreko testuan, "*{span_1}*" eta "*{span_2}*" gauza bera dira?\n' + + "Erantzuna:" + ) + return text + + +# Measure F1 as in the benchmark repo: https://github.com/orai-nlp/BasqueGLUE/blob/main/eval_basqueglue.py + + +def micro_f1_score(items): + f1_metric = load_metric("f1") + golds, preds = list(zip(*items)) + f1_score = f1_metric.compute(references=golds, predictions=preds, average="micro")[ + "f1" + ] + return f1_score + + +def vaxx_f1_score(items): + f1_metric = load_metric("f1") + golds, preds = list(zip(*items)) + f1_class = f1_metric.compute( + references=golds, predictions=preds, labels=[0, 2], average=None + )["f1"] + f1_score = sum(f1_class) / len(f1_class) + return f1_score diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/vaxx.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/vaxx.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f66f530dad5e07dd0af77a56ddc40d72e2d5929c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/vaxx.yaml @@ -0,0 +1,16 @@ +group: basque-glue +task: vaxx_stance +dataset_path: orai-nlp/basqueGLUE +dataset_name: vaxx +output_type: multiple_choice +validation_split: validation +test_split: test +doc_to_text: "Testua: {{text}}\nGaldera: Nolako jarrera agertzen du aurreko testuak txertoei buruz?\nErantzuna:" +doc_to_target: label +doc_to_choice: ['aurka', 'neutrala', 'alde'] +metric_list: + - metric: f1 + aggregation: !function utils.vaxx_f1_score + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/basqueglue/wic.yaml b/lm-evaluation-harness/lm_eval/tasks/basqueglue/wic.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7ec2681ac22f53265fb49206917e332538b9d900 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/basqueglue/wic.yaml @@ -0,0 +1,17 @@ +group: basque-glue +task: wiceu +dataset_path: orai-nlp/basqueGLUE +dataset_name: wic +output_type: multiple_choice +validation_split: validation +test_split: test +process_docs: !function utils.process_wic_docs 
+doc_to_text: "1. esaldia: {{sentence1}}\n2. esaldia: {{sentence2}}\nGaldera: Aurreko bi esaldietan, \"{{word}}\" hitzak esanahi berdina du?\nErantzuna:" +doc_to_target: label +doc_to_choice: ['ez', 'bai'] +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + - version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_generate_configs.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_generate_configs.py new file mode 100644 index 0000000000000000000000000000000000000000..bda00784cc2fa26b5f0d488cf7b6aea37243353d --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_generate_configs.py @@ -0,0 +1,26 @@ +import yaml +from tqdm import tqdm + + +def main() -> None: + subset = ["extended", "diamond", "main"] + setting = "cot_zeroshot" + for task in tqdm(subset): + file_name = f"gpqa_{task}_{setting}.yaml" + try: + with open(f"{file_name}", "w") as f: + f.write("# Generated by _generate_configs.py\n") + yaml.dump( + { + "include": f"_gpqa_{setting}_yaml", + "task": f"gpqa_{task}_{setting}", + "dataset_name": f"gpqa_{task}", + }, + f, + ) + except FileExistsError: + pass + + +if __name__ == "__main__": + main() diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_gpqa_cot_zeroshot_yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_gpqa_cot_zeroshot_yaml new file mode 100644 index 0000000000000000000000000000000000000000..df99f272c99a343d4250c44e3618f85e9e2a0682 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/_gpqa_cot_zeroshot_yaml @@ -0,0 +1,38 @@ +dataset_path: Idavidrein/gpqa +group: gpqa +output_type: generate_until +process_docs: !function utils.process_docs +training_split: train +# Because huggingface dataset only has train split +validation_split: train +test_split: null +doc_to_text: "What is the correct answer to this question:{{Question}}\nChoices:\n(A) {{choice1}}\n(B) {{choice2}}\n(C) {{choice3}}\n(D) {{choice4}}\nLet's think step by step: " +doc_to_target: answer +filter_list: + - name: "strict-match" + filter: + - function: "regex" + regex_pattern: "(?<=The answer is )(.*)(?=.)" + - function: "take_first" + - name: "flexible-extract" + filter: + - function: "multi_choice_regex" + group_select: -1 + ignore_case: true + ignore_punctuation: true + regex_pattern: "(\\([A-Z]\\))" + - function: "take_first" +generation_kwargs: + until: + - "" + do_sample: false + temperature: 0.0 +num_fewshot: 0 +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_diamond_cot_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_diamond_cot_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..e6a840fa1815096f5fa180ed06223e3523a06214 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_diamond_cot_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_diamond +include: _gpqa_cot_zeroshot_yaml +task: gpqa_diamond_cot_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_extended_cot_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_extended_cot_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9f542a6148f231e2d7e7e2a5a3437047459e3856 --- /dev/null +++ 
b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_extended_cot_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_extended +include: _gpqa_cot_zeroshot_yaml +task: gpqa_extended_cot_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_main_cot_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_main_cot_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8c14604854294c4551e2602e573488c6a7fef254 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/gpqa_main_cot_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_main +include: _gpqa_cot_zeroshot_yaml +task: gpqa_main_cot_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/utils.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..96bcd52b140fd0a5896f55c0a52ea2fd5453fd53 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/cot_zeroshot/utils.py @@ -0,0 +1,39 @@ +import random +import re + +import datasets + + +def preprocess(text): + if text is None: + return " " + text = text.strip() + text = text.replace(" [title]", ". ") + text = re.sub("\\[.*?\\]", "", text) + text = text.replace(" ", " ") + return text + + +def process_docs(dataset: datasets.Dataset) -> datasets.Dataset: + def _process_doc(doc): + choices = [ + preprocess(doc["Incorrect Answer 1"]), + preprocess(doc["Incorrect Answer 2"]), + preprocess(doc["Incorrect Answer 3"]), + preprocess(doc["Correct Answer"]), + ] + + random.shuffle(choices) + correct_answer_index = choices.index(preprocess(doc["Correct Answer"])) + + out_doc = { + "choice1": choices[0], + "choice2": choices[1], + "choice3": choices[2], + "choice4": choices[3], + "choices": [choices[0], choices[1], choices[2], choices[3]], + "answer": f"({chr(65 + correct_answer_index)})", + } + return out_doc + + return dataset.map(_process_doc) diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/_generate_configs.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/_generate_configs.py new file mode 100644 index 0000000000000000000000000000000000000000..c01f208e767cb813e6d2116caf74c3d0b2fccfb3 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/_generate_configs.py @@ -0,0 +1,26 @@ +import yaml +from tqdm import tqdm + + +def main() -> None: + subset = ["extended", "diamond", "main"] + + for task in tqdm(subset): + file_name = f"gpqa_{task}_n_shot.yaml" + try: + with open(f"{file_name}", "w") as f: + f.write("# Generated by _generate_configs.py\n") + yaml.dump( + { + "include": "_gpqa_n_shot_yaml", + "task": f"gpqa_{task}_n_shot", + "dataset_name": f"gpqa_{task}", + }, + f, + ) + except FileExistsError: + pass + + +if __name__ == "__main__": + main() diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_diamond_n_shot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_diamond_n_shot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3043a7e53647ff72d535abc113dfccebaa1bd43c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_diamond_n_shot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_diamond +include: _gpqa_n_shot_yaml +task: gpqa_diamond_n_shot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_extended_n_shot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_extended_n_shot.yaml new file 
mode 100644 index 0000000000000000000000000000000000000000..5d16b505b355bccb3d6fd70eb16b307c12d06a09 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_extended_n_shot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_extended +include: _gpqa_n_shot_yaml +task: gpqa_extended_n_shot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_main_n_shot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_main_n_shot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7e5f3e9532ab41c0158409e6afb47393806c4177 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/gpqa_main_n_shot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_main +include: _gpqa_n_shot_yaml +task: gpqa_main_n_shot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/utils.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..e0b886d2879216094214ce534438e4db0c5e60f8 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/n_shot/utils.py @@ -0,0 +1,41 @@ +import random +import re + +import datasets + + +def preprocess(text): + if text is None: + return " " + text = text.strip() + text = text.replace(" [title]", ". ") + text = re.sub("\\[.*?\\]", "", text) + text = text.replace(" ", " ") + return text + + +rng = random.Random(42) + + +def process_docs(dataset: datasets.Dataset) -> datasets.Dataset: + def _process_doc(doc): + choices = [ + preprocess(doc["Incorrect Answer 1"]), + preprocess(doc["Incorrect Answer 2"]), + preprocess(doc["Incorrect Answer 3"]), + preprocess(doc["Correct Answer"]), + ] + + rng.shuffle(choices) + correct_answer_index = choices.index(preprocess(doc["Correct Answer"])) + + out_doc = { + "choice1": choices[0], + "choice2": choices[1], + "choice3": choices[2], + "choice4": choices[3], + "answer": f"({chr(65 + correct_answer_index)})", + } + return out_doc + + return dataset.map(_process_doc) diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_generate_configs.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_generate_configs.py new file mode 100644 index 0000000000000000000000000000000000000000..79afbd6f1d8d4b2eb54455d734f6245357580bd3 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_generate_configs.py @@ -0,0 +1,26 @@ +import yaml +from tqdm import tqdm + + +def main() -> None: + subset = ["extended", "diamond", "main"] + setting = "zeroshot" + for task in tqdm(subset): + file_name = f"gpqa_{task}_{setting}.yaml" + try: + with open(f"{file_name}", "w") as f: + f.write("# Generated by _generate_configs.py\n") + yaml.dump( + { + "include": f"_gpqa_{setting}_yaml", + "task": f"gpqa_{task}_{setting}", + "dataset_name": f"gpqa_{task}", + }, + f, + ) + except FileExistsError: + pass + + +if __name__ == "__main__": + main() diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_gpqa_zeroshot_yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_gpqa_zeroshot_yaml new file mode 100644 index 0000000000000000000000000000000000000000..707641b5f0c6243d48f77c6a4a56d5ec824baa4e --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/_gpqa_zeroshot_yaml @@ -0,0 +1,21 @@ +dataset_path: Idavidrein/gpqa +group: gpqa +output_type: multiple_choice +process_docs: !function utils.process_docs +training_split: train +# Because huggingface dataset only has train split +validation_split: train +test_split: null +doc_to_text: "What is the correct 
answer to this question:{{Question}}\nChoices:\n(A) {{choice1}}\n(B) {{choice2}}\n(C) {{choice3}}\n(D) {{choice4}}\nAnswer:" +doc_to_target: answer +doc_to_choice: ["(A)", "(B)", "(C)", "(D)"] +num_fewshot: 0 +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_diamond_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_diamond_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c3a7921c30b3ff09e82aacb4c0e915010f698966 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_diamond_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_diamond +include: _gpqa_zeroshot_yaml +task: gpqa_diamond_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_extended_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_extended_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5e7347f11154351ad4560200a3f3bf54106a1a8f --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_extended_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_extended +include: _gpqa_zeroshot_yaml +task: gpqa_extended_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_main_zeroshot.yaml b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_main_zeroshot.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1a8d7fb59025d148130f2a468cb1bbdfad959102 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/gpqa_main_zeroshot.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: gpqa_main +include: _gpqa_zeroshot_yaml +task: gpqa_main_zeroshot diff --git a/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/utils.py b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..c2317e02efd132aea27ec8c8fad284df55ccd382 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/gpqa/zeroshot/utils.py @@ -0,0 +1,38 @@ +import random +import re + +import datasets + + +def preprocess(text): + if text is None: + return " " + text = text.strip() + text = text.replace(" [title]", ". ") + text = re.sub("\\[.*?\\]", "", text) + text = text.replace("  ", " ") + return text + + +def process_docs(dataset: datasets.Dataset) -> datasets.Dataset: + def _process_doc(doc): + choices = [ + preprocess(doc["Incorrect Answer 1"]), + preprocess(doc["Incorrect Answer 2"]), + preprocess(doc["Incorrect Answer 3"]), + preprocess(doc["Correct Answer"]), + ] + + random.shuffle(choices) + correct_answer_index = choices.index(preprocess(doc["Correct Answer"])) + + out_doc = { + "choice1": choices[0], + "choice2": choices[1], + "choice3": choices[2], + "choice4": choices[3], + "answer": f"({chr(65 + correct_answer_index)})", + } + return out_doc + + return dataset.map(_process_doc) diff --git a/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md b/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md new file mode 100644 index 0000000000000000000000000000000000000000..a93054011b1baabd9d3a1b11afd90649d6c2e013 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md @@ -0,0 +1,52 @@ +# LogiQA 2.0 + +### Paper + +LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding https://ieeexplore.ieee.org/document/10174688 + + +The dataset is an amendment and re-annotation of the original 2020 LogiQA, a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil Service Examination. This new version has an increased data size; the texts have been refined through manual translation by professionals and improved by removing items with distinctive cultural features such as Chinese idioms. + +Furthermore, a two-way natural language inference (NLI) task is introduced, resulting in 35k premise-hypothesis pairs with gold labels, making it the first large-scale NLI dataset for complex logical reasoning. + +Homepage: https://github.com/csitfun/LogiQA2.0 + +### Citation + +```bibtex +@ARTICLE{10174688, + author={Liu, Hanmeng and Liu, Jian and Cui, Leyang and Teng, Zhiyang and Duan, Nan and Zhou, Ming and Zhang, Yue}, + journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, + title={LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding}, + year={2023}, + volume={}, + number={}, + pages={1-16}, + doi={10.1109/TASLP.2023.3293046}} +``` + +### Groups and Tasks + +#### Groups + +* Not part of a group yet + +#### Tasks + +* `logiqa2_zh`: The original dataset in Chinese. +* `logiqa2_NLI`: The NLI version of the dataset converted from the MRC version. +* `logieval`: Prompt-based; https://github.com/csitfun/LogiEval + +NOTE! The subtasks have not been verified yet. + +### Checklist + +* [x] Is the task an existing benchmark in the literature? + * [x] Have you referenced the original paper that introduced the task? + * [x] If yes, does the original paper provide a reference implementation? + * [x] The original paper does not. There is another implementation of this task, but it is designed for instruction-tuned models: https://github.com/csitfun/LogiEval + +If other tasks on this dataset are already supported: +* [x] Is the "Main" variant of this task clearly denoted? +* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
diff --git a/lm-evaluation-harness/lm_eval/tasks/logiqa2/logieval.yaml b/lm-evaluation-harness/lm_eval/tasks/logiqa2/logieval.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f83f274b658341c2b1f8685f47138f84d5830a82 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/logiqa2/logieval.yaml @@ -0,0 +1,29 @@ +task: logieval +dataset_path: baber/logiqa2 +dataset_name: logieval +output_type: generate_until +training_split: train +test_split: test +# Instructions + {content} +doc_to_text: "Instructions: You will be presented with a passage and a question about that passage. There are four options to be chosen from, you need to choose the only correct option to answer that question. If the first option is right, you generate the answer 'A', if the second option is right, you generate the answer 'B', if the third option is right, you generate the answer 'C', if the fourth option is right, you generate the answer 'D'. Read the question and options thoroughly and select the correct answer from the four answer labels. Read the passage thoroughly to ensure you know what the passage entails.\n{{content}}" +doc_to_target: "{{ideal}}" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true +generation_kwargs: + do_sample: false +num_fewshot: 1 +filter_list: + - name: "get-answer" + filter: + - function: "regex" + # starts with A-D excluding leading spaces + # original implementation uses a.startswith(b) + # https://github.com/openai/evals/blob/305b237cdb3884c7ddb6a5d12cb184a83551fcba/evals/api.py#L84 + regex_pattern: "^\\s*([A-D])" + - function: "take_first" +metadata: + version: 0.0 +dataset_kwargs: + trust_remote_code: true diff --git a/lm-evaluation-harness/lm_eval/tasks/logiqa2/logiqa2.yaml b/lm-evaluation-harness/lm_eval/tasks/logiqa2/logiqa2.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0bcd97b131dd96144ec41731d9c9f4100ebd0a77 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/logiqa2/logiqa2.yaml @@ -0,0 +1,21 @@ +task: logiqa2 +dataset_path: baber/logiqa2 +dataset_name: logiqa2 +output_type: multiple_choice +training_split: train +validation_split: validation +test_split: test +doc_to_choice: "{{options}}" +doc_to_text: !function utils_logiqa2.doc_to_text +doc_to_target: "{{answer}}" +doc_to_decontamination_query: "{{context}}" +should_decontaminate: false +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true +metadata: + version: 0.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/logiqa2/utils_logiqa2.py b/lm-evaluation-harness/lm_eval/tasks/logiqa2/utils_logiqa2.py new file mode 100644 index 0000000000000000000000000000000000000000..8d88e361e4a96401f2c5ce022c565673d196889c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/logiqa2/utils_logiqa2.py @@ -0,0 +1,27 @@ +# Copied from Master +def doc_to_text(doc) -> str: + """ + Passage: + Question: + A. + B. + C. + D. + Answer: + """ + choices = ["a", "b", "c", "d"] + prompt = "Passage: " + doc["text"] + "\n" + prompt += "Question: " + doc["question"] + "\n" + for choice, option in zip(choices, doc["options"]): + prompt += f"{choice.upper()}. 
{option}\n" + prompt += "Answer:" + return prompt + + +# # https://github.com/csitfun/LogiQA2.0/blob/main/logiqa2nli/nli-prompt.py +# def doc_to_textNLI(doc): +# maj_premise = ' '.join(list(doc['major_premise'])) +# min_premise = ' '.join(list(doc['minor_premise'])) +# hypo = doc['conclusion'] +# prompt_input = "Given the fact: " + maj_premise + ' ' + min_premise + " Does it follow that: " + hypo + " Yes or no?" +# return prompt_input diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-for-self-improvement.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-for-self-improvement.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ceea5cc779f941a49e35a01e17602d747e6f5531 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-for-self-improvement.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: desire-for-self-improvement +include: _template_yaml +task: persona_desire-for-self-improvement diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-to-persuade-people-to-be-less-harmful-to-others.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-to-persuade-people-to-be-less-harmful-to-others.yaml new file mode 100644 index 0000000000000000000000000000000000000000..953b2e5817cde858d3fb36d99b88936e826855fa --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/desire-to-persuade-people-to-be-less-harmful-to-others.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: desire-to-persuade-people-to-be-less-harmful-to-others +include: _template_yaml +task: persona_desire-to-persuade-people-to-be-less-harmful-to-others diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/has-serious-disability.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/has-serious-disability.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0bfd6b27b8b5045f1bbe68ceec9cb333bf2d1a0b --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/has-serious-disability.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: has-serious-disability +include: _template_yaml +task: persona_has-serious-disability diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-art.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-art.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bbd4e814618f3b33c66544c9a2bdaec210ec2d67 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-art.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: interest-in-art +include: _template_yaml +task: persona_interest-in-art diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-math.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-math.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ee280f0b05d5ec44cf12e6bf897aa84c93ec0b18 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-math.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: interest-in-math +include: _template_yaml +task: persona_interest-in-math diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-sports.yaml 
b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-sports.yaml new file mode 100644 index 0000000000000000000000000000000000000000..46fe4dfe71434aa0b1bedfa69d4f7a5877f2d9b2 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/interest-in-sports.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: interest-in-sports +include: _template_yaml +task: persona_interest-in-sports diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/machiavellianism.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/machiavellianism.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ccccd995d04bceb2548cb81e52e7041d50cab8a4 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/machiavellianism.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: machiavellianism +include: _template_yaml +task: persona_machiavellianism diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/risk-averse.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/risk-averse.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f1dedb61c6f458f911748c39e43776f34a940da2 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/risk-averse.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: risk-averse +include: _template_yaml +task: persona_risk-averse diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-Atheism.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-Atheism.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7ce6adbdf1f2c4dab5d1e422d7294fbaf4299126 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-Atheism.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: subscribes-to-Atheism +include: _template_yaml +task: persona_subscribes-to-Atheism diff --git a/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-act-utilitarianism.yaml b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-act-utilitarianism.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9cd29d352e756f3c0edfee3a3fa3526bc2fdb5ef --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/model_written_evals/persona/subscribes-to-act-utilitarianism.yaml @@ -0,0 +1,4 @@ +# Generated by _generate_configs.py +dataset_name: subscribes-to-act-utilitarianism +include: _template_yaml +task: persona_subscribes-to-act-utilitarianism diff --git a/lm-evaluation-harness/lm_eval/tasks/swag/README.md b/lm-evaluation-harness/lm_eval/tasks/swag/README.md new file mode 100644 index 0000000000000000000000000000000000000000..ba1e71af5c93431a4fc051c7abc078d058d06827 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/swag/README.md @@ -0,0 +1,52 @@ +# SWAG + +### Paper + +Title: `SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference` + +Abstract: https://arxiv.org/pdf/1808.05326.pdf + +SWAG (Situations With Adversarial Generations) is an adversarial dataset +that consists of 113k multiple choice questions about grounded situations. Each +question is a video caption from LSMDC or ActivityNet Captions, with four answer +choices about what might happen next in the scene. 
The correct answer is the +(real) video caption for the next event in the video; the three incorrect +answers are adversarially generated and human verified, so as to fool machines +but not humans. + +Homepage: https://rowanzellers.com/swag/ + + +### Citation + +``` +@inproceedings{zellers2018swagaf, + title={SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference}, + author={Zellers, Rowan and Bisk, Yonatan and Schwartz, Roy and Choi, Yejin}, + booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)", + year={2018} +} +``` + +### Groups and Tasks + +#### Groups + +* Not a part of a task yet. + +#### Tasks + +* `swag` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? diff --git a/lm-evaluation-harness/lm_eval/tasks/swag/swag.yaml b/lm-evaluation-harness/lm_eval/tasks/swag/swag.yaml new file mode 100644 index 0000000000000000000000000000000000000000..13e30566eaf91fc6ab51ac169c41ede3d9c2bedc --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/swag/swag.yaml @@ -0,0 +1,19 @@ +task: swag +dataset_path: swag +dataset_name: regular +output_type: multiple_choice +training_split: train +validation_split: validation +test_split: null +doc_to_text: startphrase +doc_to_target: label +doc_to_choice: "{{[ending0, ending1, ending2, ending3]}}" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_default_template_yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_default_template_yaml new file mode 100644 index 0000000000000000000000000000000000000000..7ece2e2d84cb43f6e1d7403ae83a73be41e164f7 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_default_template_yaml @@ -0,0 +1,19 @@ +dataset_path: ZoneTwelve/tmmluplus # a copy of `ikala/tmmluplus` +test_split: test +fewshot_split: train +fewshot_config: + sampler: first_n +output_type: multiple_choice +process_docs: !function utils.process_docs +doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. 
{{choices[3]}}\nAnswer:" +doc_to_choice: ["A", "B", "C", "D"] +doc_to_target: answer +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true +metadata: + version: 0.1 diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_generate_configs.py b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_generate_configs.py new file mode 100644 index 0000000000000000000000000000000000000000..e313e9b1ea053b4a97f19d8dcbcdfe2cf86f856a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/_generate_configs.py @@ -0,0 +1,210 @@ +""" +Take in a YAML, and output all "other" splits with this YAML +""" +import argparse +import os + +import pandas as pd +import yaml +from tqdm import tqdm + + +# Copy from https://github.com/iKala/ievals/blob/main/ievals/settings.py +# from TMMLU+ offical example +categories = { + "STEM": [ + "physics", + "chemistry", + "biology", + "computer science", + "math", + "engineering", + ], + "humanities": ["history", "philosophy", "law"], + "social_sciences": [ + "politics", + "culture", + "economics", + "geography", + "psychology", + "education", + ], + "other": ["other", "business", "health"], # (business, health, misc.) +} + +task_list = [ + "engineering_math", + "dentistry", + "traditional_chinese_medicine_clinical_medicine", + "clinical_psychology", + "technical", + "culinary_skills", + "mechanical", + "logic_reasoning", + "real_estate", + "general_principles_of_law", + "finance_banking", + "anti_money_laundering", + "ttqav2", + "marketing_management", + "business_management", + "organic_chemistry", + "advance_chemistry", + "physics", + "secondary_physics", + "human_behavior", + "national_protection", + "jce_humanities", + "politic_science", + "agriculture", + "official_document_management", + "financial_analysis", + "pharmacy", + "educational_psychology", + "statistics_and_machine_learning", + "management_accounting", + "introduction_to_law", + "computer_science", + "veterinary_pathology", + "accounting", + "fire_science", + "optometry", + "insurance_studies", + "pharmacology", + "taxation", + "education_(profession_level)", + "economics", + "veterinary_pharmacology", + "nautical_science", + "occupational_therapy_for_psychological_disorders", + "trust_practice", + "geography_of_taiwan", + "physical_education", + "auditing", + "administrative_law", + "basic_medical_science", + "macroeconomics", + "trade", + "chinese_language_and_literature", + "tve_design", + "junior_science_exam", + "junior_math_exam", + "junior_chinese_exam", + "junior_social_studies", + "tve_mathematics", + "tve_chinese_language", + "tve_natural_sciences", + "junior_chemistry", + "music", + "education", + "three_principles_of_people", + "taiwanese_hokkien", +] +subject2name = {} +# subject2category = {} +SUBJECTS = {} + + +def parse_args(): + parser = argparse.ArgumentParser() + parser.add_argument("--base_yaml_path", required=True) + parser.add_argument("--save_prefix_path", default="tmmluplus") + parser.add_argument("--cot_prompt_path", default=None) + parser.add_argument("--task_prefix", default="") + parser.add_argument("--group_prefix", default="") + parser.add_argument("--subject_file", default="subject.tsv") + return parser.parse_args() + + +if __name__ == "__main__": + args = parse_args() + from pathlib import Path + + # Initialization + SUBJECT_FILE = Path(__file__).parent / Path(args.subject_file) + + df = pd.read_csv(SUBJECT_FILE, delimiter="\t") + + for _, row in df.iterrows(): 
+ for _c in categories: + if row["subject"] in SUBJECTS: + raise ValueError("Duplicate tasks.") + if row["category"] in categories[_c]: # append new item into SUBJECTS + SUBJECTS[row["subject"]] = _c + subject2name[row["subject"]] = row["name"] + break + # End of SUBJECTS initialization + + # get filename of base_yaml so we can `"include": ` it in our "other" YAMLs. + base_yaml_name = os.path.split(args.base_yaml_path)[-1] + with open(args.base_yaml_path) as f: + base_yaml = yaml.full_load(f) + + if args.cot_prompt_path is not None: + import json + + with open(args.cot_prompt_path) as f: + cot_file = json.load(f) + + ALL_CATEGORIES = [] + for subject, category in tqdm(SUBJECTS.items()): + if category not in ALL_CATEGORIES: + ALL_CATEGORIES.append(category) + + if args.cot_prompt_path is not None: + description = cot_file[subject] + else: + name_of_subject = subject2name[subject].replace("_", " ") + description = f"以下為{name_of_subject}的單選題,請提供正確答案的選項。\n\n" + # description = f"The following are multiple choice questions (with answers) about {' '.join(subject.split('_'))}.\n\n" + + yaml_dict = { + "include": base_yaml_name, + "group": f"tmmluplus_{args.task_prefix}_{category}" + if args.task_prefix != "" + else f"tmmluplus_{category}", + "group_alias": category.replace("_", " "), + "task": f"tmmluplus_{args.task_prefix}_{subject}" + if args.task_prefix != "" + else f"tmmluplus_{subject}", + "task_alias": subject.replace("_", " "), + "dataset_name": subject, + "description": description, + } + + file_save_path = args.save_prefix_path + f"_{subject}.yaml" + # eval_logger.info(f"Saving yaml for subset {subject} to {file_save_path}") + with open(file_save_path, "w") as yaml_file: + yaml.dump( + yaml_dict, + yaml_file, + # width=float("inf"), + allow_unicode=True, + default_style='"', + ) + + if args.task_prefix != "": + mmlu_subcategories = [ + f"tmmluplus_{args.task_prefix}_{category}" for category in ALL_CATEGORIES + ] + else: + mmlu_subcategories = [f"tmmluplus_{category}" for category in ALL_CATEGORIES] + + if args.group_prefix != "": + file_save_path = args.group_prefix + ".yaml" + else: + file_save_path = args.save_prefix_path + ".yaml" + + # eval_logger.info(f"Saving benchmark config to {file_save_path}") + with open(file_save_path, "w") as yaml_file: + yaml.dump( + { + "group": f"tmmluplus_{args.task_prefix}" + if args.task_prefix != "" + else "tmmluplus", + "task": mmlu_subcategories, + }, + yaml_file, + indent=4, + default_flow_style=False, + ) diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus.yaml new file mode 100644 index 0000000000000000000000000000000000000000..105cf98aff37b28535e8166ae685e5fac105eaed --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus.yaml @@ -0,0 +1,6 @@ +group: tmmluplus +task: +- tmmluplus_other +- tmmluplus_social_sciences +- tmmluplus_humanities +- tmmluplus_STEM diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_clinical_psychology.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_clinical_psychology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f8194feb7dee9c2100f6ecf50b602235d1ac0a2a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_clinical_psychology.yaml @@ -0,0 +1,7 @@ +"dataset_name": "clinical_psychology" +"description": "以下為臨床心理學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social 
sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_clinical_psychology" +"task_alias": "clinical psychology" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c55f6a4a3ae23aaa1fb4a31941f9d5020df892f4 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml @@ -0,0 +1,7 @@ +"dataset_name": "computer_science" +"description": "以下為資訊工程的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_STEM" +"group_alias": "STEM" +"include": "_default_template_yaml" +"task": "tmmluplus_computer_science" +"task_alias": "computer science" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_dentistry.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_dentistry.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d6295240fc3a37046d0c8d0038eb58130667a807 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_dentistry.yaml @@ -0,0 +1,7 @@ +"dataset_name": "dentistry" +"description": "以下為牙醫學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_dentistry" +"task_alias": "dentistry" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml new file mode 100644 index 0000000000000000000000000000000000000000..f986517b66c9f46443655b940c251007ba782c50 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml @@ -0,0 +1,7 @@ +"dataset_name": "education_(profession_level)" +"description": "以下為教育專業的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_education_(profession_level)" +"task_alias": "education (profession level)" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_financial_analysis.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_financial_analysis.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9990ab5d0447b969c6f5ae026d5db0d388a00b29 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_financial_analysis.yaml @@ -0,0 +1,7 @@ +"dataset_name": "financial_analysis" +"description": "以下為財務分析的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_financial_analysis" +"task_alias": "financial analysis" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_human_behavior.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_human_behavior.yaml new file mode 100644 index 0000000000000000000000000000000000000000..54aaa80fa3b24df4452b3a5c2c75fdb29bb51cdb --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_human_behavior.yaml @@ -0,0 +1,7 @@ +"dataset_name": "human_behavior" +"description": "以下為人類行為與社會的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_human_behavior" +"task_alias": "human behavior" diff --git 
a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_insurance_studies.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_insurance_studies.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fa23be46c1af606deb01c860a4703d30edda019d --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_insurance_studies.yaml @@ -0,0 +1,7 @@ +"dataset_name": "insurance_studies" +"description": "以下為保險學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_insurance_studies" +"task_alias": "insurance studies" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_jce_humanities.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_jce_humanities.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2ff3bed0731b042baaaed575011b1c0ea6a26aff --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_jce_humanities.yaml @@ -0,0 +1,7 @@ +"dataset_name": "jce_humanities" +"description": "以下為指考人文科目的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_humanities" +"group_alias": "humanities" +"include": "_default_template_yaml" +"task": "tmmluplus_jce_humanities" +"task_alias": "jce humanities" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_social_studies.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_social_studies.yaml new file mode 100644 index 0000000000000000000000000000000000000000..760ff0e794a401489dc7a7ddd3e258f2a707edde --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_social_studies.yaml @@ -0,0 +1,7 @@ +"dataset_name": "junior_social_studies" +"description": "以下為國中會考基測社會科的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_junior_social_studies" +"task_alias": "junior social studies" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_macroeconomics.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_macroeconomics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..91009abe691ffcc0c910729244620557ccad2d6c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_macroeconomics.yaml @@ -0,0 +1,7 @@ +"dataset_name": "macroeconomics" +"description": "以下為總經的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_macroeconomics" +"task_alias": "macroeconomics" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_marketing_management.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_marketing_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..da39f0a879b33956012c8f2fefba88586a9c4b4d --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_marketing_management.yaml @@ -0,0 +1,7 @@ +"dataset_name": "marketing_management" +"description": "以下為行銷管理的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_marketing_management" +"task_alias": "marketing management" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_music.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_music.yaml new file mode 100644 
index 0000000000000000000000000000000000000000..72864c0035da8cb92b491773be7a8a5e8a3b1685 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_music.yaml @@ -0,0 +1,7 @@ +"dataset_name": "music" +"description": "以下為音樂科的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_music" +"task_alias": "music" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_national_protection.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_national_protection.yaml new file mode 100644 index 0000000000000000000000000000000000000000..62e98266d83a247c9e56f119316780dedac1369e --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_national_protection.yaml @@ -0,0 +1,7 @@ +"dataset_name": "national_protection" +"description": "以下為軍事的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_national_protection" +"task_alias": "national protection" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_optometry.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_optometry.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7e3b78b7edd3136d3ed8a10d5e959d3fb72bc7bd --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_optometry.yaml @@ -0,0 +1,7 @@ +"dataset_name": "optometry" +"description": "以下為視光學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_optometry" +"task_alias": "optometry" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fb3558e9baa4cf6ee4c1f19a244341a3a484861c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml @@ -0,0 +1,7 @@ +"dataset_name": "physical_education" +"description": "以下為體育的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_physical_education" +"task_alias": "physical education" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_real_estate.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_real_estate.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ba90b7aa565bf0102967508392d286e13c25a747 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_real_estate.yaml @@ -0,0 +1,7 @@ +"dataset_name": "real_estate" +"description": "以下為房地產的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_real_estate" +"task_alias": "real estate" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taiwanese_hokkien.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taiwanese_hokkien.yaml new file mode 100644 index 0000000000000000000000000000000000000000..89297df3158681f837462d90ead8660b563ee3e0 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taiwanese_hokkien.yaml @@ -0,0 +1,7 @@ +"dataset_name": "taiwanese_hokkien" +"description": "以下為閩南語的單選題,請提供正確答案的選項。\n\n" 
+"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_taiwanese_hokkien" +"task_alias": "taiwanese hokkien" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taxation.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taxation.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f54520270fb40d6c91b9ef508235a938b87be190 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_taxation.yaml @@ -0,0 +1,7 @@ +"dataset_name": "taxation" +"description": "以下為稅務的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_humanities" +"group_alias": "humanities" +"include": "_default_template_yaml" +"task": "tmmluplus_taxation" +"task_alias": "taxation" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_technical.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_technical.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6167a8fe0f63000a8d714ef2ed286ed950297d54 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_technical.yaml @@ -0,0 +1,7 @@ +"dataset_name": "technical" +"description": "以下為技術工相關的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_technical" +"task_alias": "technical" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_three_principles_of_people.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_three_principles_of_people.yaml new file mode 100644 index 0000000000000000000000000000000000000000..de50db700ba23b2941ece04a5f0d4eb0999ffe10 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_three_principles_of_people.yaml @@ -0,0 +1,7 @@ +"dataset_name": "three_principles_of_people" +"description": "以下為三民主義的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_three_principles_of_people" +"task_alias": "three principles of people" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b5a3fdf197c6f64ecda03af7c6119721ae18df11 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml @@ -0,0 +1,7 @@ +"dataset_name": "traditional_chinese_medicine_clinical_medicine" +"description": "以下為中醫臨床醫學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_traditional_chinese_medicine_clinical_medicine" +"task_alias": "traditional chinese medicine clinical medicine" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_veterinary_pharmacology.yaml b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_veterinary_pharmacology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..45c6553b2985013ae44ddaa401b7e2a10cfa59ee --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_veterinary_pharmacology.yaml @@ -0,0 +1,7 @@ +"dataset_name": "veterinary_pharmacology" +"description": "以下為獸醫藥理學的單選題,請提供正確答案的選項。\n\n" +"group": 
"tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_veterinary_pharmacology" +"task_alias": "veterinary pharmacology" diff --git a/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/utils.py b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..e406d28293586763eaf73d4452a221ce97948041 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/utils.py @@ -0,0 +1,16 @@ +import datasets + + +def process_docs(dataset: datasets.Dataset) -> datasets.Dataset: + def _helper(doc): + # modifies the contents of a single + # document in our dataset. + answer_list = ["A", "B", "C", "D"] + out_doc = { + "questions": doc["question"], + "choices": [doc["A"], doc["B"], doc["C"], doc["D"]], + "goal": answer_list.index(doc["answer"]), + } + return out_doc + + return dataset.map(_helper) # returns back a datasets.Dataset object diff --git a/lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_chem.yaml b/lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_chem.yaml new file mode 100644 index 0000000000000000000000000000000000000000..788d6d618bb6f7328841374b2a98a675f9f51849 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_chem.yaml @@ -0,0 +1,4 @@ +"task": "wmdp_chem" +"dataset_name": "wmdp-chem" +"include": "_default_template_yaml" +"description": "The following are multiple choice questions (with answers) about chemistry.\n\n" diff --git a/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md new file mode 100644 index 0000000000000000000000000000000000000000..4efffa3ca786370e96fa09d81393cb97722bc502 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/README.md @@ -0,0 +1,50 @@ +# XNLIeu + +### Paper + +Title: XNLIeu: a dataset for cross-lingual NLI in Basque + +Abstract: https://arxiv.org/abs/2404.06996 + +XNLI is a popular Natural Language Inference (NLI) benchmark widely used to evaluate cross-lingual Natural Language Understanding (NLU) capabilities across languages. In this paper, we expand XNLI to include Basque, a low-resource language that can greatly benefit from transfer-learning approaches. The new dataset, dubbed XNLIeu, has been developed by first machine-translating the English XNLI corpus into Basque, followed by a manual post-edition step. We have conducted a series of experiments using mono- and multilingual LLMs to assess a) the effect of professional post-edition on the MT system; b) the best cross-lingual strategy for NLI in Basque; and c) whether the choice of the best cross-lingual strategy is influenced by the fact that the dataset is built by translation. The results show that post-edition is necessary and that the translate-train cross-lingual strategy obtains better results overall, although the gain is lower when tested in a dataset that has been built natively from scratch. Our code and datasets are publicly available under open licenses at https://github.com/hitz-zentroa/xnli-eu. 
+ +Homepage: https://github.com/hitz-zentroa/xnli-eu + + +### Citation + +```bibtex +@misc{heredia2024xnlieu, + title={XNLIeu: a dataset for cross-lingual NLI in Basque}, + author={Maite Heredia and Julen Etxaniz and Muitze Zulaika and Xabier Saralegi and Jeremy Barnes and Aitor Soroa}, + year={2024}, + eprint={2404.06996}, + archivePrefix={arXiv}, + primaryClass={cs.CL} +} +``` + +### Groups and Tasks + +#### Groups + +* `xnli_eu_mt_native`: Includes MT and Native variants of the XNLIeu dataset. + +#### Tasks + +* `xnli_eu`: XNLI in Basque postedited from MT. +* `xnli_eu_mt`: XNLI in Basque machine translated from English. +* `xnli_eu_native`: XNLI in Basque natively created. + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [x] Is the task an existing benchmark in the literature? + * [x] Have you referenced the original paper that introduced the task? + * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? diff --git a/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_common_yaml b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_common_yaml new file mode 100644 index 0000000000000000000000000000000000000000..fe2a43afe381984289584aa7207c4405d762d0a2 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_common_yaml @@ -0,0 +1,16 @@ +group: xnli +task: null +dataset_path: xnli +dataset_name: null +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: null +doc_to_target: label +doc_to_choice: null +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu.yaml b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b78eb7e771b48577a3fca3a29c6a9e921c6a8d26 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu.yaml @@ -0,0 +1,8 @@ +include: xnli_common_yaml +task: xnli_eu +dataset_path: HiTZ/xnli-eu +dataset_name: eu +doc_to_choice: '{{[premise+", ezta? Bai, "+hypothesis,premise+", ezta? Gainera, +"+hypothesis,premise+", ezta? Ez, "+hypothesis]}}' +doc_to_text: "" +test_split: test diff --git a/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_mt.yaml b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_mt.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4674157ccc3f94962c3f3353f0705b383fd11366 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_mt.yaml @@ -0,0 +1,4 @@ +include: xnli_eu.yaml +group: xnli_eu_mt_native +task: xnli_eu_mt +dataset_name: eu_mt diff --git a/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_native.yaml b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_native.yaml new file mode 100644 index 0000000000000000000000000000000000000000..86e5eb06c5dd5878ad097144e54a4a9eb39226e3 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xnli_eu/xnli_eu_native.yaml @@ -0,0 +1,6 @@ +include: xnli_eu.yaml +group: xnli_eu_mt_native +task: xnli_eu_native +training_split: null +validation_split: null +dataset_name: eu_native
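
For readers unfamiliar with the Jinja-based `doc_to_choice` fields used above, here is a minimal sketch, not part of the patch, of how the `xnli_eu` template expands into the three candidate continuations the harness scores. The premise/hypothesis pair below is a hypothetical illustration; real pairs come from the `HiTZ/xnli-eu` dataset, and a local `jinja2` install is assumed.

```python
# Minimal sketch (illustrative only): render the xnli_eu doc_to_choice template
# for one hypothetical document to see the three choice strings.
from jinja2 import Template

# Hypothetical Basque premise/hypothesis pair (not taken from the dataset).
doc = {"premise": "Euria ari du", "hypothesis": "kalea bustita dago"}

choice_template = Template(
    '{{[premise+", ezta? Bai, "+hypothesis,'
    ' premise+", ezta? Gainera, "+hypothesis,'
    ' premise+", ezta? Ez, "+hypothesis]}}'
)

# Prints a list-like string with the "Bai", "Gainera" and "Ez" continuations;
# since doc_to_text is empty, each choice is scored as its own completion.
print(choice_template.render(**doc))
```

The three continuations should line up with the XNLI labels inherited via `doc_to_target: label` in `xnli_common_yaml`: entailment ("Bai"), neutral ("Gainera") and contradiction ("Ez").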