diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/README.md b/lm-evaluation/build/lib/lm_eval/tasks/agieval/README.md new file mode 100644 index 0000000000000000000000000000000000000000..faaf47b6beab877c7ee341a8dc2fc3e14a04b021 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/README.md @@ -0,0 +1,114 @@ +# AGIEval + +### Paper + +Title: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models + +Abstract: https://arxiv.org/abs/2304.06364.pdf + +AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. +This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., Chinese College Entrance Exam (Gaokao) and American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams. + +Homepage: https://github.com/ruixiangcui/AGIEval + +### Citation + +``` +@misc{zhong2023agieval, + title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models}, + author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan}, + year={2023}, + eprint={2304.06364}, + archivePrefix={arXiv}, + primaryClass={cs.CL} +} +``` + +Please make sure to cite all the individual datasets in your paper when you use them. We provide the relevant citation information below: + +``` +@inproceedings{ling-etal-2017-program, + title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems", + author = "Ling, Wang and + Yogatama, Dani and + Dyer, Chris and + Blunsom, Phil", + booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", + month = jul, + year = "2017", + address = "Vancouver, Canada", + publisher = "Association for Computational Linguistics", + url = "https://aclanthology.org/P17-1015", + doi = "10.18653/v1/P17-1015", + pages = "158--167", + abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. 
Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.", +} + +@inproceedings{hendrycksmath2021, + title={Measuring Mathematical Problem Solving With the MATH Dataset}, + author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt}, + journal={NeurIPS}, + year={2021} +} + +@inproceedings{Liu2020LogiQAAC, + title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning}, + author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang}, + booktitle={International Joint Conference on Artificial Intelligence}, + year={2020} +} + +@inproceedings{zhong2019jec, + title={JEC-QA: A Legal-Domain Question Answering Dataset}, + author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong}, + booktitle={Proceedings of AAAI}, + year={2020}, +} + +@article{Wang2021FromLT, + title={From LSAT: The Progress and Challenges of Complex Reasoning}, + author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan}, + journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing}, + year={2021}, + volume={30}, + pages={2201-2216} +} +``` + +### Groups and Tasks + +#### Groups + +- `agieval`: Evaluates all tasks listed below. + +- `agieval_en`: Evaluates all English subtasks: `agieval_aqua_rat`, `agieval_gaokao_english`, `agieval_logiqa_en`, `agieval_lsat_*`, `agieval_sat_*`, `agieval_math` + +- `agieval_cn`: Evaluates all Chinese subtasks: +`agieval_gaokao_biology`, `agieval_gaokao_chemistry`, `agieval_gaokao_chinese`, `agieval_gaokao_geography`, +`agieval_gaokao_history`, `agieval_gaokao_mathqa`, `agieval_gaokao_mathcloze`, `agieval_gaokao_physics`, `agieval_jec_qa_ca`, `agieval_jec_qa_kd`, `agieval_logiqa_zh` + +- `agieval_nous`: Evaluates a specific subset of AGIEval tasks (multiple-choice and english-only), namely those in https://github.com/teknium1/LLM-Benchmark-Logs/blob/main/benchmark-logs/Mistral-7B-Base.md + +#### Tasks + +- `agieval_aqua_rat` +- `agieval_gaokao_biology` +- `agieval_gaokao_chemistry` +- `agieval_gaokao_chinese` +- `agieval_gaokao_english` +- `agieval_gaokao_geography` +- `agieval_gaokao_history` +- `agieval_gaokao_mathqa` +- `agieval_gaokao_mathcloze` +- `agieval_gaokao_physics` +- `agieval_jec_qa_ca` +- `agieval_jec_qa_kd` +- `agieval_logiqa_en` +- `agieval_logiqa_zh` +- `agieval_lsat_ar` +- `agieval_lsat_lr` +- `agieval_lsat_rc` +- `agieval_sat_en` +- `agieval_sat_en_without_passage` +- `agieval_sat_math` +- `agieval_math` diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/aqua-rat.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/aqua-rat.yaml new file mode 100644 index 0000000000000000000000000000000000000000..babebf638edcf0e9c5a2432adb6a2fdaf4793c1d --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/aqua-rat.yaml @@ -0,0 +1,24 @@ +group: + - agieval + - agieval_en + - agieval_nous +task: agieval_aqua_rat +dataset_path: hails/agieval-aqua-rat +dataset_name: null +output_type: multiple_choice +training_split: null +validation_split: null +test_split: test +doc_to_text: "{{query}}" +doc_to_target: "{{gold}}" +doc_to_choice: "{{choices}}" +process_results: !function utils.process_results_mcqa +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + 
higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-biology.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-biology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..36c44cbbeeb730f05c9d425c20f02c78acc81563 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-biology.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_biology +dataset_path: hails/agieval-gaokao-biology diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chemistry.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chemistry.yaml new file mode 100644 index 0000000000000000000000000000000000000000..69810122eb274cdcb285232330a19807886ee50d --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chemistry.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_chemistry +dataset_path: hails/agieval-gaokao-chemistry diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chinese.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chinese.yaml new file mode 100644 index 0000000000000000000000000000000000000000..30d249b9d5544a3441e50284929aac6f081d6b76 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-chinese.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_chinese +dataset_path: hails/agieval-gaokao-chinese diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-english.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-english.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a540fcf25f503be64d3f5810be7b037a2e7c0504 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-english.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_en # categorizing as EN because the AGIEval codebase lists this as in `english_qa_tasks` +task: agieval_gaokao_english +dataset_path: hails/agieval-gaokao-english diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-geography.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-geography.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2fe43bfd2cb620328dfb28ba4a4e9e6d6d093c07 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-geography.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_geography +dataset_path: hails/agieval-gaokao-geography diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-history.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-history.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b9c9c630fa2c843da5c8311b1e0570bb1cc267f9 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-history.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_history +dataset_path: hails/agieval-gaokao-history diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathcloze.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathcloze.yaml new file mode 100644 index 0000000000000000000000000000000000000000..74cbad1c0325c4fb9fe78df83304741553c06134 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathcloze.yaml @@ -0,0 +1,25 @@ +group: + - agieval + - agieval_cn +task: agieval_gaokao_mathcloze +dataset_path: 
hails/agieval-gaokao-mathcloze +dataset_name: null +output_type: generate_until +training_split: null +validation_split: null +test_split: test +doc_to_text: "{{query}}" +doc_to_target: "{{answer}}" +process_results: !function utils.process_results +generation_kwargs: + max_gen_toks: 32 + do_sample: False + temperature: 0.0 + until: + - "Q:" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathqa.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathqa.yaml new file mode 100644 index 0000000000000000000000000000000000000000..aa94e8eec85a931e5acbdb843730b58e8c1506e5 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-mathqa.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_mathqa +dataset_path: hails/agieval-gaokao-mathqa diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-physics.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-physics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..175dd6cca03fab93107e0bab827ea356ceb127eb --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/gaokao-physics.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_gaokao_physics +dataset_path: hails/agieval-gaokao-physics diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-ca.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-ca.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f93b47a5b1418d839933b71e71b523fd38696691 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-ca.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_jec_qa_ca +dataset_path: hails/agieval-jec-qa-ca diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-kd.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-kd.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0458eb7ea8356df569ac6c3b50af0bd4097ea857 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/jec-qa-kd.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_jec_qa_kd +dataset_path: hails/agieval-jec-qa-kd diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-en.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-en.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7112418659c4478c4e59f9bdcdebb6d64e7b9bb6 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-en.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_logiqa_en +dataset_path: hails/agieval-logiqa-en diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-zh.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-zh.yaml new file mode 100644 index 0000000000000000000000000000000000000000..82e688006b8272e015a74b01412ad35cfe33561e --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/logiqa-zh.yaml @@ -0,0 +1,6 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_cn +task: agieval_logiqa_zh +dataset_path: hails/agieval-logiqa-zh diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-ar.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-ar.yaml new file mode 100644 index 0000000000000000000000000000000000000000..302f9b519ee268831c1725fb96322d6628b9fdf9 
--- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-ar.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_lsat_ar +dataset_path: hails/agieval-lsat-ar diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-lr.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-lr.yaml new file mode 100644 index 0000000000000000000000000000000000000000..62158e5cec196c0c7887a7236e1020ba2946da26 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-lr.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_lsat_lr +dataset_path: hails/agieval-lsat-lr diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-rc.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-rc.yaml new file mode 100644 index 0000000000000000000000000000000000000000..de155af78aa8d5ad3b14849d8a2807a7194f6744 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/lsat-rc.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_lsat_rc +dataset_path: hails/agieval-lsat-rc diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/math.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/math.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c8ec9eec608c4eaced456c36dcb5dc9047ccd84e --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/math.yaml @@ -0,0 +1,25 @@ +group: + - agieval + - agieval_en +task: agieval_math +dataset_path: hails/agieval-math +dataset_name: null +output_type: generate_until +training_split: null +validation_split: null +test_split: test +doc_to_text: "{{query}}" +doc_to_target: "{{answer}}" +process_results: !function utils.process_results +generation_kwargs: + max_gen_toks: 32 + do_sample: False + temperature: 0.0 + until: + - "Q:" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en-without-passage.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en-without-passage.yaml new file mode 100644 index 0000000000000000000000000000000000000000..01490d9ee10aba867a1863e9d6a74b678f4f5588 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en-without-passage.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_sat_en_without_passage +dataset_path: hails/agieval-sat-en-without-passage diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a45dba1507a562ace2f56f9a0096ff25f767f1e6 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-en.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_sat_en +dataset_path: hails/agieval-sat-en diff --git a/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-math.yaml b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-math.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f5b644ee062975dbdb74870428d71189e297343a --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/sat-math.yaml @@ -0,0 +1,7 @@ +include: aqua-rat.yaml +group: + - agieval + - agieval_nous + - agieval_en +task: agieval_sat_math +dataset_path: hails/agieval-sat-math diff --git 
a/lm-evaluation/build/lib/lm_eval/tasks/agieval/utils.py b/lm-evaluation/build/lib/lm_eval/tasks/agieval/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..aa6e544f1a7e15e853b99be2fe01502baadefcee --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/agieval/utils.py @@ -0,0 +1,274 @@ +# Answer parsing and normalization code, from +# https://github.com/ruixiangcui/AGIEval/blob/main/src/ +# math_equivalence.py and post_process.py +import re +from typing import Dict, List + +import numpy as np + + +def parse_math_answer(raw_string): + def remove_boxed(s): + left = "\\boxed{" + try: + assert s[: len(left)] == left + assert s[-1] == "}" + answer = s[len(left) : -1] + if "=" in answer: + answer = answer.split("=")[-1].lstrip(" ") + return answer + except Exception: + return None + + def last_boxed_only_string(string): + idx = string.rfind("\\boxed") + if idx < 0: + idx = string.rfind("\\fbox") + if idx < 0: + return None + i = idx + right_brace_idx = None + num_left_braces_open = 0 + while i < len(string): + if string[i] == "{": + num_left_braces_open += 1 + if string[i] == "}": + num_left_braces_open -= 1 + if num_left_braces_open == 0: + right_brace_idx = i + break + i += 1 + + if right_brace_idx is None: + retval = None + else: + retval = string[idx : right_brace_idx + 1] + + return retval + + def get_answer_with_dollar_sign(s): + first_pattern = "\$(.*)\$" + last_match = None + matches = re.findall(first_pattern, s) + if matches: + last_match = matches[-1] + if "=" in last_match: + last_match = last_match.split("=")[-1].lstrip(" ") + return last_match + + def get_answer_without_dollar_sign(s): + last_match = None + if "=" in s: + last_match = s.split("=")[-1].lstrip(" ").rstrip(".") + if "\\n" in last_match: + last_match = last_match.split("\\n")[0] + else: + pattern = "(?:\\$)?\d+(?:\.\d+)?(?![\w\d])" + matches = re.findall(pattern, s) + if matches: + last_match = matches[-1] + return last_match + + if "\\boxed" in raw_string: + answer = remove_boxed(last_boxed_only_string(raw_string)) + else: + answer = get_answer_with_dollar_sign(raw_string) + if not answer: + answer = get_answer_without_dollar_sign(raw_string) + return answer + + +# code from https://github.com/hendrycks/math/blob/main/modeling/math_equivalence.py +def _fix_fracs(string): + substrs = string.split("\\frac") + new_str = substrs[0] + if len(substrs) > 1: + substrs = substrs[1:] + for substr in substrs: + new_str += "\\frac" + if substr[0] == "{": + new_str += substr + else: + try: + assert len(substr) >= 2 + except Exception: + return string + a = substr[0] + b = substr[1] + if b != "{": + if len(substr) > 2: + post_substr = substr[2:] + new_str += "{" + a + "}{" + b + "}" + post_substr + else: + new_str += "{" + a + "}{" + b + "}" + else: + if len(substr) > 2: + post_substr = substr[2:] + new_str += "{" + a + "}" + b + post_substr + else: + new_str += "{" + a + "}" + b + string = new_str + return string + + +def _fix_a_slash_b(string): + if len(string.split("/")) != 2: + return string + a = string.split("/")[0] + b = string.split("/")[1] + try: + a = int(a) + b = int(b) + assert string == "{}/{}".format(a, b) + new_string = "\\frac{" + str(a) + "}{" + str(b) + "}" + return new_string + except Exception: + return string + + +def _remove_right_units(string): + # "\\text{ " only ever occurs (at least in the val set) when describing units + if "\\text{ " in string: + splits = string.split("\\text{ ") + assert len(splits) == 2 + return splits[0] + else: + return string + + +def _fix_sqrt(string): 
+ if "\\sqrt" not in string: + return string + splits = string.split("\\sqrt") + new_string = splits[0] + for split in splits[1:]: + if split[0] != "{": + a = split[0] + new_substr = "\\sqrt{" + a + "}" + split[1:] + else: + new_substr = "\\sqrt" + split + new_string += new_substr + return new_string + + +def _strip_string(string): + # linebreaks + string = string.replace("\n", "") + # print(string) + + # remove inverse spaces + string = string.replace("\\!", "") + # print(string) + + # replace \\ with \ + string = string.replace("\\\\", "\\") + # print(string) + + # replace tfrac and dfrac with frac + string = string.replace("tfrac", "frac") + string = string.replace("dfrac", "frac") + # print(string) + + # remove \left and \right + string = string.replace("\\left", "") + string = string.replace("\\right", "") + # print(string) + + # Remove circ (degrees) + string = string.replace("^{\\circ}", "") + string = string.replace("^\\circ", "") + + # remove dollar signs + string = string.replace("\\$", "") + + # remove units (on the right) + string = _remove_right_units(string) + + # remove percentage + string = string.replace("\\%", "") + string = string.replace("\%", "") + + # " 0." equivalent to " ." and "{0." equivalent to "{." Alternatively, add "0" if "." is the start of the string + string = string.replace(" .", " 0.") + string = string.replace("{.", "{0.") + # if empty, return empty string + if len(string) == 0: + return string + if string[0] == ".": + string = "0" + string + + # to consider: get rid of e.g. "k = " or "q = " at beginning + if len(string.split("=")) == 2: + if len(string.split("=")[0]) <= 2: + string = string.split("=")[1] + + # fix sqrt3 --> sqrt{3} + string = _fix_sqrt(string) + + # remove spaces + string = string.replace(" ", "") + + # \frac1b or \frac12 --> \frac{1}{b} and \frac{1}{2}, etc. Even works with \frac1{72} (but not \frac{72}1). 
Also does a/b --> \\frac{a}{b} + string = _fix_fracs(string) + + # manually change 0.5 --> \frac{1}{2} + if string == "0.5": + string = "\\frac{1}{2}" + + # NOTE: X/Y changed to \frac{X}{Y} in dataset, but in simple cases fix in case the model output is X/Y + string = _fix_a_slash_b(string) + + return string + + +def is_equiv(str1, str2, verbose=False): + if str1 is None and str2 is None: + print("WARNING: Both None") + return True + if str1 is None or str2 is None: + return False + + str1, str2 = parse_math_answer(str1), parse_math_answer(str2) + + try: + ss1 = _strip_string(str1) + ss2 = _strip_string(str2) + if verbose: + print(ss1, ss2) + return ss1 == ss2 + except Exception: + return str1 == str2 + + +def process_results(doc: dict, results: List[str]) -> Dict[str, int]: + candidate = results[0] + + gold = doc["answer"] + + if not gold: + print(doc, candidate, gold) + if is_equiv(candidate, gold): + retval = 1 + else: + retval = 0 + + results = { + "acc": retval, + } + return results + + +# use a custom process_results() function, because AGIEval can have multiple valid answers +def process_results_mcqa(doc, results): + results = [result[0] for result in results] + + gold = doc["gold"] + + acc = 1.0 if int(np.argmax(results)) in gold else 0.0 + completion_len = np.array([float(len(i)) for i in doc["choices"]]) + acc_norm = 1.0 if int(np.argmax(results / completion_len)) in gold else 0.0 + + return { + "acc": acc, + "acc_norm": acc_norm, + } diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_afr_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_afr_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5011d654e18a3ae285b72e8d28d4277268977a35 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_afr_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "afr_Latn" +"include": "_default_template_yaml" +"task": "belebele_afr_Latn" +"test_split": "afr_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_apc_Arab.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_apc_Arab.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2e7619a5e0070bf681bff12d373b22878ceaa446 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_apc_Arab.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "apc_Arab" +"include": "_default_template_yaml" +"task": "belebele_apc_Arab" +"test_split": "apc_Arab" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_arb_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_arb_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8759bc4d86152af04e0cccf33f01306893595d19 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_arb_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "arb_Latn" +"include": "_default_template_yaml" +"task": "belebele_arb_Latn" +"test_split": "arb_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_bam_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_bam_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9441f0fceeff1b27a64c65e524f4c050b2777851 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_bam_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "bam_Latn" +"include": "_default_template_yaml" +"task": "belebele_bam_Latn" +"test_split": "bam_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ben_Beng.yaml 
b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ben_Beng.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2b34335265620a6875082d8575fb234229e73d2a --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ben_Beng.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "ben_Beng" +"include": "_default_template_yaml" +"task": "belebele_ben_Beng" +"test_split": "ben_Beng" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ces_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ces_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..330fd11a8f2ce8a45d48fd0c4c95d6c9cb5e910e --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ces_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "ces_Latn" +"include": "_default_template_yaml" +"task": "belebele_ces_Latn" +"test_split": "ces_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_est_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_est_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6a56ca90c0309d9475adad9b95db272577658f36 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_est_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "est_Latn" +"include": "_default_template_yaml" +"task": "belebele_est_Latn" +"test_split": "est_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fra_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fra_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c60fa9a7a90db3fc7f5451b39705a487acf18b29 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fra_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "fra_Latn" +"include": "_default_template_yaml" +"task": "belebele_fra_Latn" +"test_split": "fra_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fuv_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fuv_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2636cae850c6222906fac7f3c1533ea16684ee73 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_fuv_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "fuv_Latn" +"include": "_default_template_yaml" +"task": "belebele_fuv_Latn" +"test_split": "fuv_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_heb_Hebr.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_heb_Hebr.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6021c5c232a0b86fcebf4c96ebd2d584c16403c2 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_heb_Hebr.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "heb_Hebr" +"include": "_default_template_yaml" +"task": "belebele_heb_Hebr" +"test_split": "heb_Hebr" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hrv_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hrv_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..69b100c44bcf9218e541a7f3c41020dedafbec88 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hrv_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "hrv_Latn" +"include": "_default_template_yaml" +"task": "belebele_hrv_Latn" +"test_split": "hrv_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hun_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hun_Latn.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..47f37f7db358d9cf061db38f5af0ac75867a726f --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_hun_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "hun_Latn" +"include": "_default_template_yaml" +"task": "belebele_hun_Latn" +"test_split": "hun_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_isl_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_isl_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..69f9bb4e372ce1a39057ce4b70a7e48d23d199e2 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_isl_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "isl_Latn" +"include": "_default_template_yaml" +"task": "belebele_isl_Latn" +"test_split": "isl_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_kat_Geor.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_kat_Geor.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6392d29bbe98cf3c6e228d20c90dbb7cf1281789 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_kat_Geor.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "kat_Geor" +"include": "_default_template_yaml" +"task": "belebele_kat_Geor" +"test_split": "kat_Geor" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lao_Laoo.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lao_Laoo.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bdee22168b7536d2063e1d1602cb9032d97cb357 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lao_Laoo.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "lao_Laoo" +"include": "_default_template_yaml" +"task": "belebele_lao_Laoo" +"test_split": "lao_Laoo" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lit_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lit_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..275718b8b54368d29e9b48a87a3131af808eb77c --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_lit_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "lit_Latn" +"include": "_default_template_yaml" +"task": "belebele_lit_Latn" +"test_split": "lit_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_mya_Mymr.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_mya_Mymr.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d60ff718612f4ea43f05b07fd7c9ef87b5e19fd6 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_mya_Mymr.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "mya_Mymr" +"include": "_default_template_yaml" +"task": "belebele_mya_Mymr" +"test_split": "mya_Mymr" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nld_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nld_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..aea069996198c6bad23ed44969d3bc840ad04442 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nld_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "nld_Latn" +"include": "_default_template_yaml" +"task": "belebele_nld_Latn" +"test_split": "nld_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nya_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nya_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4e5256715ade9f96d3c1eb7e1e3bc92c74700dd2 --- /dev/null +++ 
b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_nya_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "nya_Latn" +"include": "_default_template_yaml" +"task": "belebele_nya_Latn" +"test_split": "nya_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pan_Guru.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pan_Guru.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6017b44d3d2090de73dd8ea759eac1675608f5e8 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pan_Guru.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "pan_Guru" +"include": "_default_template_yaml" +"task": "belebele_pan_Guru" +"test_split": "pan_Guru" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pol_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pol_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ebfcf3534e53c10bfe370643bfc50fc94df5602c --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_pol_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "pol_Latn" +"include": "_default_template_yaml" +"task": "belebele_pol_Latn" +"test_split": "pol_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_som_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_som_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fa1d4329d878e141ffbbb3f7faed774abbfccacb --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_som_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "som_Latn" +"include": "_default_template_yaml" +"task": "belebele_som_Latn" +"test_split": "som_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ssw_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ssw_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..788d6959976320f5fb962e442aa8fa9c2ed9cca8 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_ssw_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "ssw_Latn" +"include": "_default_template_yaml" +"task": "belebele_ssw_Latn" +"test_split": "ssw_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tel_Telu.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tel_Telu.yaml new file mode 100644 index 0000000000000000000000000000000000000000..de44fcc4848927331994d5ca42dce064c7758483 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_tel_Telu.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "tel_Telu" +"include": "_default_template_yaml" +"task": "belebele_tel_Telu" +"test_split": "tel_Telu" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_uzn_Latn.yaml b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_uzn_Latn.yaml new file mode 100644 index 0000000000000000000000000000000000000000..109aebdd5fc3c6a07083821dae0f61c0037b6ae2 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/belebele/belebele_uzn_Latn.yaml @@ -0,0 +1,4 @@ +"fewshot_split": "uzn_Latn" +"include": "_default_template_yaml" +"task": "belebele_uzn_Latn" +"test_split": "uzn_Latn" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/_held_in_template_yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/_held_in_template_yaml new file mode 100644 index 0000000000000000000000000000000000000000..c19b47cdae40bbc0ff91236d2048992f314172f0 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/_held_in_template_yaml @@ -0,0 +1,14 @@ 
+output_type: generate_until +test_split: null +doc_to_choice: null +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true +generation_kwargs: + until: + - "" + do_sample: false + temperature: 0.0 +metadata: + version: 1.0 diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_in.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_in.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5796713506e3b2e6632f4df0d60c4c19377693ad --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_in.yaml @@ -0,0 +1,331 @@ +group: flan_held_in +group_alias: Flan (Held-In) +task: + # ANLI R1 + - group: anli_r1_flan + group_alias: ANLI R1 + task: + - task: anli_r1 + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-7 + include: _held_in_template_yaml + doc_to_text: "Can we draw the following hypothesis from the context (see options)? 
\n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r1 + task_alias: prompt-8 + include: _held_in_template_yaml + doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + # ANLI R2 + - group: anli_r2_flan + group_alias: ANLI R2 + task: + - task: anli_r2 + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-7 + include: _held_in_template_yaml + doc_to_text: "Can we draw the following hypothesis from the context (see options)? 
\n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r2 + task_alias: prompt-8 + include: _held_in_template_yaml + doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + # ANLI R3 + - group: anli_r3_flan + group_alias: ANLI R3 + task: + - task: anli_r3 + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nChoose your answer: based on the paragraph above can we conclude that \"{{hypothesis}}\"?\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nI think the answer is" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nBased on that paragraph can we conclude that this sentence is true?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\n\nCan we draw the following conclusion?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nDoes this next sentence follow, given the preceding text?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "{{premise}}\nCan we infer the following?\n{{hypothesis}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nThe answer is:" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Read the following paragraph and determine if the hypothesis is true:\n\n{{premise}}\n\nOPTIONS:\n- Yes\n- It's impossible to say\n- No\nHypothesis: {{hypothesis}}\n\n\n" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "Read the text and determine if the sentence is true (see options at the end):\n\n{{premise}}\n\nSentence: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-7 + include: _held_in_template_yaml + doc_to_text: "Can we draw the following hypothesis from the context (see options)? 
\n\nContext:\n\n{{premise}}\n\nHypothesis: {{hypothesis}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + - task: anli_r3 + task_alias: prompt-8 + include: _held_in_template_yaml + doc_to_text: "Choose from options: Determine if the sentence is true based on the text below:\n{{hypothesis}}\n\n{{premise}}\nOPTIONS:\n- Yes\n- It's impossible to say\n- No" + doc_to_target: "{{[\"Yes\", \"It's impossible to say\", \"No\"][label]}}" + # Arc Easy + - group: arc_easy_flan + group_alias: Arc Easy + task: + - task: arc_easy + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "Question: {{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}\nAnswer:" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "Question: {{question}}\n\nWhat is the correct answer to the question from the following choices?\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "Q: {{question}}\nWhat is the correct answer to this question?\nOPTIONS:\n- {{choices.text|join('\n- ')}}...A:" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "Choose your answer?\n\n{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Answer the question\n\n{{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_easy + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "{{question}}\n\nPick the answer from these options\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + # Arc Challenge + - group: arc_challenge_flan + group_alias: Arc Challenge + task: + - task: arc_challenge + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "Question: {{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}\nAnswer:" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "Question: {{question}}\n\nWhat is the correct answer to the question from the following choices?\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "Q: {{question}}\nWhat is the correct answer to this question?\nOPTIONS:\n- {{choices.text|join('\n- ')}}...A:" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-4 + include: 
_held_in_template_yaml + doc_to_text: "Choose your answer?\n\n{{question}}\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Answer the question\n\n{{question}}\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + - task: arc_challenge + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "{{question}}\n\nPick the answer from these options\n\nOPTIONS:\n- {{choices.text|join('\n- ')}}" + doc_to_target: "{{choices.text[choices.label.index(answerKey)]}}" + # BoolQ + - group: boolq_flan + group_alias: BoolQ + task: + - task: boolq + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\nCan we conclude that {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\nIs it true that {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\n{{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "Text: {{passage}}\n\nQuestion: {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\nWhat's the best answer to this question: {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\nBased on the above text what's the best answer to this question: {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\nAnswer this question making sure that the answer is supposed by the text: {{question}}?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-7 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\nIs the following statement correct based on the text\n\n{{question}}\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-8 + include: _held_in_template_yaml + doc_to_text: "{{passage}}\n\nIs this statement correct \"{{question}}\"?\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + - task: boolq + task_alias: prompt-9 + include: _held_in_template_yaml + doc_to_text: "Is it true that {{question}} based on the following text?\n\n{{passage}}\n\nOPTIONS:\n- no\n- yes" + doc_to_target: "{{['no', 'yes'][label]}}" + # RTE + - group: rte_flan + group_alias: RTE + task: + - task: rte + task_alias: prompt-0 + include: _held_in_template_yaml + doc_to_text: "{{sentence1}}\n\nQuestion with options: Based on the paragraph above can we conclude that \"{{sentence2}}\"?\n\nOPTIONS:\n- yes\n- no" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-1 + include: _held_in_template_yaml + doc_to_text: "{{sentence1}}\n\nBased on that paragraph can we conclude that the sentence below is true?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no" + doc_to_target: 
"{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-2 + include: _held_in_template_yaml + doc_to_text: "{{sentence1}}\n\nQ with options: Can we draw the following conclusion?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-3 + include: _held_in_template_yaml + doc_to_text: "{{sentence1}}\nDoes this next sentence follow, given the preceding text?\n{{sentence2}}\n\nOPTIONS:\n- yes\n- no" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-4 + include: _held_in_template_yaml + doc_to_text: "{{sentence1}}\nOPTIONS:\n- yes\n- no\nQuestion: Can we infer the following?\n{{sentence2}}" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-5 + include: _held_in_template_yaml + doc_to_text: "Read the following paragraph and determine if the hypothesis is true. Select from options at the end:\n\n{{sentence1}}\n\nHypothesis: {{sentence2}}\nOPTIONS:\n- yes\n- no\nThe answer is" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-6 + include: _held_in_template_yaml + doc_to_text: "Read the text and determine if the sentence is true:\n\n{{sentence1}}\n\nSentence: {{sentence2}}\nOPTIONS:\n- yes\n- no\nA:" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-7 + include: _held_in_template_yaml + doc_to_text: "Question with options: can we draw the following hypothesis from the context? \n\nContext:\n\n{{sentence1}}\n\nHypothesis: {{sentence2}}\nOPTIONS:\n- yes\n- no\nA:" + doc_to_target: "{{['yes', 'no'][label]}}" + - task: rte + task_alias: prompt-8 + include: _held_in_template_yaml + doc_to_text: "Determine if the sentence is true based on the text below. Choose from options.\n{{sentence2}}\n\n{{sentence1}}\nOPTIONS:\n- yes\n- no" + doc_to_target: "{{['yes', 'no'][label]}}" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_out.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_out.yaml new file mode 100644 index 0000000000000000000000000000000000000000..cf806b882167dacc83e3baab67fe69d293de6ddc --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/flan/flan_held_out.yaml @@ -0,0 +1,13 @@ +group: flan_held_out +task: + # BBH + - bbh_zeroshot + - bbh_fewshot + - bbh_cot_fewshot + - bbh_cot_zeroshot + # MMLU + - mmlu + - mmlu_flan_n_shot_generative + - mmlu_flan_n_shot_loglikelihood + - mmlu_flan_cot_zeroshot + - mmlu_flan_cot_fewshot diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/minerva_math.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/minerva_math.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6df3203e10fddd06bd2edcfb97984c12a32466be --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/minerva_math.yaml @@ -0,0 +1,9 @@ +group: minerva_math +task: + - minerva_math_algebra + - minerva_math_counting_and_prob + - minerva_math_geometry + - minerva_math_intermediate_algebra + - minerva_math_num_theory + - minerva_math_prealgebra + - minerva_math_precalc diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/README.md b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/README.md new file mode 100644 index 0000000000000000000000000000000000000000..de694e47ebeecf52c6d95038019a7ea17a623e52 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/README.md @@ -0,0 +1,43 @@ +# MultiMedQA (multiple-choice subset) + +### Paper + +Title: Large Language 
Models Encode Clinical Knowledge + +Abstract: https://arxiv.org/abs/2212.13138 + +A benchmark combining four existing multiple-choice question answering datasets spanning professional medical exams and research queries. + +### Citation + +``` +@Article{Singhal2023, +author={Singhal, Karan and Azizi, Shekoofeh and Tu, Tao and Mahdavi, S. Sara and Wei, Jason and Chung, Hyung Won and Scales, Nathan and Tanwani, Ajay and Cole-Lewis, Heather and Pfohl, Stephen and Payne, Perry and Seneviratne, Martin and Gamble, Paul and Kelly, Chris and Babiker, Abubakr and Sch{\"a}rli, Nathanael and Chowdhery, Aakanksha and Mansfield, Philip and Demner-Fushman, Dina and Ag{\"u}era y Arcas, Blaise and Webster, Dale and Corrado, Greg S. and Matias, Yossi and Chou, Katherine and Gottweis, Juraj and Tomasev, Nenad and Liu, Yun and Rajkomar, Alvin and Barral, Joelle and Semturs, Christopher and Karthikesalingam, Alan and Natarajan, Vivek}, +title={Large language models encode clinical knowledge}, +journal={Nature}, +year={2023}, +month={Aug}, +day={01}, +volume={620}, +number={7972}, +pages={172-180}, +issn={1476-4687}, +doi={10.1038/s41586-023-06291-2}, +url={https://doi.org/10.1038/s41586-023-06291-2} +} +``` + +### Tasks + +* [PubMedQA](https://pubmedqa.github.io/) - 1,000 expert-labeled Q&A pairs where a question and a corresponding PubMed abstract are given as context, and a yes/maybe/no answer must be produced. Unlike the rest of the tasks in this suite, PubMedQA is a closed-domain Q&A task. +* [MedQA](https://github.com/jind11/MedQA) - US Medical Licensing Examination (USMLE) questions with 4 or 5 possible answers. Typically, only the 4-option questions are used. +* [MedMCQA](https://medmcqa.github.io/) - 4-option multiple choice questions from Indian medical entrance examinations, >191k total questions. +* [MMLU](https://arxiv.org/abs/2009.03300) - 4-option multiple choice exam questions from a variety of domains. The following 6 domains are utilized here: + * Anatomy + * Clinical Knowledge + * College Medicine + * Medical Genetics + * Professional Medicine + * College Biology + +Note that MultiMedQA also includes some short-form and long-form Q&A tasks (LiveQA, MedicationQA, HealthSearchQA). Evaluation on these tasks is usually performed by human experts rather than automatically, and they are therefore not included here.
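+### Example usage
+
+A minimal sketch of evaluating the `multimedqa` group (defined in `multimedqa.yaml`) through the harness's Python API, assuming `lm_eval.simple_evaluate` is available in this version; the checkpoint name and batch size below are placeholders, not recommendations:
+
+```python
+import lm_eval
+
+# Run the whole MultiMedQA group zero-shot with the HuggingFace ("hf") backend.
+# The pretrained checkpoint named here is only an illustrative example.
+results = lm_eval.simple_evaluate(
+    model="hf",
+    model_args="pretrained=EleutherAI/pythia-1.4b",
+    tasks=["multimedqa"],
+    num_fewshot=0,
+    batch_size=8,
+)
+
+# Per-task metrics (e.g. acc, acc_norm where defined) are keyed by task name.
+for task_name, metrics in results["results"].items():
+    print(task_name, metrics)
+```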
diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/multimedqa.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/multimedqa.yaml new file mode 100644 index 0000000000000000000000000000000000000000..29810bb491105b4a4e9d01391926a03c0fc8e88c --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/multimedqa/multimedqa.yaml @@ -0,0 +1,17 @@ +group: multimedqa +task: + - pubmedqa + - medmcqa + - medqa_4options + - task: mmlu_anatomy + task_alias: "anatomy (mmlu)" + - task: mmlu_clinical_knowledge + task_alias: "clinical_knowledge (mmlu)" + - task: mmlu_college_medicine + task_alias: "college_medicine (mmlu)" + - task: mmlu_medical_genetics + task_alias: "medical_genetics (mmlu)" + - task: mmlu_professional_medicine + task_alias: "professional_medicine (mmlu)" + - task: mmlu_college_biology + task_alias: "college_biology (mmlu)" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/openllm.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/openllm.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0296a0a548e1206f70627b4176d79aab7438db75 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/openllm.yaml @@ -0,0 +1,18 @@ +group: openllm +group_alias: Open LLM Leaderboard +task: + - task: arc_challenge + fewshot_split: validation + num_fewshot: 25 + - task: hellaswag + fewshot_split: train + num_fewshot: 10 + - task: truthfulqa + num_fewshot: 0 + - task: mmlu + num_fewshot: 5 + - task: winogrande + fewshot_split: train + num_fewshot: 5 + - task: gsm8k + num_fewshot: 5 diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/pythia.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/pythia.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bdeadd3ce995ce3d4d9340082ede3bf424ba276d --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/pythia.yaml @@ -0,0 +1,12 @@ +group: pythia +task: + - lambada_openai + - logiqa + - piqa + - sciq + - wikitext + - winogrande + - wsc + - ai2_arc + - blimp + - mmlu diff --git a/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/t0_eval.yaml b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/t0_eval.yaml new file mode 100644 index 0000000000000000000000000000000000000000..27e7adc41bd2eaffa20b3344cfdf83a52b4d65fc --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/benchmarks/t0_eval.yaml @@ -0,0 +1,127 @@ +group: t0_eval +task: + # Coreference Resolution + - dataset_path: super_glue + dataset_name: wsc.fixed + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + # Coreference Resolution + - dataset_path: winogrande + dataset_name: winogrande_xl + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + # Natural Language Inference + - dataset_path: super_glue + dataset_name: cb + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + - dataset_path: super_glue + dataset_name: rte + use_prompt: promptsource:* + training_split: train + validation_split: 
validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + - task: anli_r1 + dataset_path: anli + use_prompt: promptsource:* + training_split: train_r1 + validation_split: dev_r1 + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + - task: anli_r2 + dataset_path: anli + use_prompt: promptsource:* + training_split: train_r2 + validation_split: dev_r2 + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + - task: anli_r3 + dataset_path: anli + use_prompt: promptsource:* + training_split: train_r3 + validation_split: dev_r3 + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + # Sentence Completion + - dataset_path: super_glue + dataset_name: copa + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + # Sentence Completion + - dataset_path: hellaswag + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + # Word Sense Disambiguation + - dataset_path: super_glue + dataset_name: wic + use_prompt: promptsource:* + training_split: train + validation_split: validation + output_type: generate_until + metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true diff --git a/lm-evaluation/build/lib/lm_eval/tasks/medqa/medqa.yaml b/lm-evaluation/build/lib/lm_eval/tasks/medqa/medqa.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7d5555966fa0d4bcf2e8dc4a74eea7442ca433a3 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/medqa/medqa.yaml @@ -0,0 +1,16 @@ +task: medqa_4options +dataset_path: GBaker/MedQA-USMLE-4-options-hf +output_type: multiple_choice +training_split: train +validation_split: validation +test_split: test +doc_to_text: !function preprocess_medqa.doc_to_text +doc_to_target: !function preprocess_medqa.doc_to_target +doc_to_choice: [ 'A', 'B', 'C', 'D' ] +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation/build/lib/lm_eval/tasks/medqa/preprocess_medqa.py b/lm-evaluation/build/lib/lm_eval/tasks/medqa/preprocess_medqa.py new file mode 100644 index 0000000000000000000000000000000000000000..6ec35851453d7452833ceb30ec93f50ba495f594 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/medqa/preprocess_medqa.py @@ -0,0 +1,13 @@ +def doc_to_text(doc) -> str: + option_choices = { + "A": doc["ending0"], + "B": doc["ending1"], + "C": doc["ending2"], + "D": doc["ending3"], + } + answers = "".join((f"{k}. 
{v}\n") for k, v in option_choices.items()) + return f"Question: {doc['sent1']}\n{answers}Answer:" + + +def doc_to_target(doc) -> int: + return doc["label"] diff --git a/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/README.md b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3b8dc9fc9c38c09c48d52b2899fd74d639216765 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/README.md @@ -0,0 +1,55 @@ +# QA4MRE + +### Paper + +Title: `QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation` + +Abstract: https://www.cs.cmu.edu/~./hovy/papers/13CLEF-QA4MRE.pdf + +The (English-only) QA4MRE challenge was run as a Lab at CLEF 2011-2013. +The main objective of this exercise was to develop a methodology for evaluating +Machine Reading systems through Question Answering and Reading Comprehension +Tests. Systems should be able to extract knowledge from large volumes of text +and use this knowledge to answer questions. Four different tasks were +organized during these years: Main Task, Processing Modality and Negation for +Machine Reading, Machine Reading of Biomedical Texts about Alzheimer's disease, +and Entrance Exam. + +Homepage: http://nlp.uned.es/clef-qa/repository/qa4mre.php + + +### Citation + +``` +@inproceedings{Peas2013QA4MRE2O, + title={QA4MRE 2011-2013: Overview of Question Answering for Machine Reading Evaluation}, + author={Anselmo Pe{\~n}as and Eduard H. Hovy and Pamela Forner and {\'A}lvaro Rodrigo and Richard F. E. Sutcliffe and Roser Morante}, + booktitle={CLEF}, + year={2013} +} +``` + +### Groups and Tasks + +#### Groups + +* `qa4mre` + +#### Tasks + +* `qa4mre_2011` +* `qa4mre_2012` +* `qa4mre_2013` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? 
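+
+### Gold answer mapping
+
+The preprocessing used by these tasks turns the dataset's 1-based `correct_answer_id` into a 0-based index into `answer_options.answer_str`. Below is a minimal sketch of that mapping; the field names follow the Hugging Face `qa4mre` dataset, and the example document is made up for illustration.
+
+```python
+def qa4mre_process(doc):
+    # correct_answer_id is 1-based in the source data (and may arrive as a string);
+    # int() normalizes it and the subtraction shifts it to a 0-based index.
+    return int(doc["correct_answer_id"]) - 1
+
+
+def doc_to_target(doc):
+    return doc["answer_options"]["answer_str"][qa4mre_process(doc)]
+
+
+# Hypothetical document in the flattened Hugging Face layout.
+doc = {
+    "correct_answer_id": "2",
+    "answer_options": {"answer_str": ["Paris", "London", "Rome"]},
+}
+assert qa4mre_process(doc) == 1
+assert doc_to_target(doc) == "London"
+```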
diff --git a/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/preprocess_qa4mre.py b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/preprocess_qa4mre.py new file mode 100644 index 0000000000000000000000000000000000000000..3e07db422b1e20f3d456f0da9f806c76feb1c557 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/preprocess_qa4mre.py @@ -0,0 +1,6 @@ +def qa4mre_process(doc): + return int(doc["correct_answer_id"]) - 1 + + +def doc_to_target(doc): + return doc["answer_options"]["answer_str"][qa4mre_process(doc)] diff --git a/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2012.yaml b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2012.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ec015651675e34e3f51b221ef2b35d60092bbc3f --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2012.yaml @@ -0,0 +1,4 @@ +include: qa4mre_2011.yaml +task: qa4mre_2012 +dataset_path: qa4mre +dataset_name: 2012.main.EN diff --git a/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2013.yaml b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2013.yaml new file mode 100644 index 0000000000000000000000000000000000000000..08b96e306dcd47e02e06c451692665aef97869ba --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/qa4mre/qa4mre_2013.yaml @@ -0,0 +1,4 @@ +include: qa4mre_2011.yaml +task: qa4mre_2013 +dataset_path: qa4mre +dataset_name: 2013.main.EN diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/README.md b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/README.md new file mode 100644 index 0000000000000000000000000000000000000000..e4be02eb8928f255e8a63b0864595407308bf8ed --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/README.md @@ -0,0 +1,47 @@ +# TMMLU+ + +### Paper + +Title: `An Improved Traditional Chinese Evaluation Suite for Foundation Model` + +Abstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times larger and boasts a more balanced subject distribution. We included benchmark results in TMMLU+ from closed-source models and 24 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Our findings reveal that Traditional Chinese models still trail behind their Simplified Chinese counterparts. Additionally, current large language models have yet to outperform human performance in average scores. We publicly release our dataset and the corresponding benchmark source code.` + + +Homepage: [https://huggingface.co/datasets/ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus) + + +### Citation + +``` +@article{ikala2024improved, + title={An Improved Traditional Chinese Evaluation Suite for Foundation Model}, + author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han}, + journal={arXiv preprint arXiv:2403.01858}, + year={2024} +} +``` + +### Groups and Tasks + +#### Groups + +* `tmmluplus`: `The dataset comprises 22,690 multiple-choice questions from 66 subjects ranging from primary to professional level. ` + +#### Tasks + +The following tasks evaluate subjects in the TMMLU+ dataset using loglikelihood-based multiple-choice scoring: + +* `tmmluplus_{subject_english}` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [x] Is the task an existing benchmark in the literature? 
+ * [x] Have you referenced the original paper that introduced the task? + * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [x] Is the "Main" variant of this task clearly denoted? +* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [x] Have you noted which, if any, published evaluation setups are matched by this variant? diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_anti_money_laundering.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_anti_money_laundering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..95bb0e47861f3c37954e74ae9b1fe17095f3eaa7 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_anti_money_laundering.yaml @@ -0,0 +1,7 @@ +"dataset_name": "anti_money_laundering" +"description": "以下為洗錢防制的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_humanities" +"group_alias": "humanities" +"include": "_default_template_yaml" +"task": "tmmluplus_anti_money_laundering" +"task_alias": "anti money laundering" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_auditing.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_auditing.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a8168029b29291cab1e6f596acd51e00699e3cf2 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_auditing.yaml @@ -0,0 +1,7 @@ +"dataset_name": "auditing" +"description": "以下為審計學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_auditing" +"task_alias": "auditing" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_basic_medical_science.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_basic_medical_science.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d329b78a488839aaa46007bc83db187c5c1cd562 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_basic_medical_science.yaml @@ -0,0 +1,7 @@ +"dataset_name": "basic_medical_science" +"description": "以下為基礎醫學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_STEM" +"group_alias": "STEM" +"include": "_default_template_yaml" +"task": "tmmluplus_basic_medical_science" +"task_alias": "basic medical science" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c55f6a4a3ae23aaa1fb4a31941f9d5020df892f4 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_computer_science.yaml @@ -0,0 +1,7 @@ +"dataset_name": "computer_science" +"description": "以下為資訊工程的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_STEM" +"group_alias": "STEM" +"include": "_default_template_yaml" +"task": "tmmluplus_computer_science" +"task_alias": "computer science" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_culinary_skills.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_culinary_skills.yaml new file mode 100644 index 0000000000000000000000000000000000000000..457eac1d18465a434abfd4916acffb8ac7d30529 --- 
/dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_culinary_skills.yaml @@ -0,0 +1,7 @@ +"dataset_name": "culinary_skills" +"description": "以下為餐旅的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_culinary_skills" +"task_alias": "culinary skills" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml new file mode 100644 index 0000000000000000000000000000000000000000..f986517b66c9f46443655b940c251007ba782c50 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_education_(profession_level).yaml @@ -0,0 +1,7 @@ +"dataset_name": "education_(profession_level)" +"description": "以下為教育專業的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_education_(profession_level)" +"task_alias": "education (profession level)" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_management_accounting.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_management_accounting.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2d071f869736fbe10cfbc400d74371d2f27a8ffc --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_management_accounting.yaml @@ -0,0 +1,7 @@ +"dataset_name": "management_accounting" +"description": "以下為管理會計的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_management_accounting" +"task_alias": "management accounting" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_mechanical.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_mechanical.yaml new file mode 100644 index 0000000000000000000000000000000000000000..81ea0dce68b2a7d0be1733fd94fc37c997bf894f --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_mechanical.yaml @@ -0,0 +1,7 @@ +"dataset_name": "mechanical" +"description": "以下為機械與機電概論的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_mechanical" +"task_alias": "mechanical" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fb3558e9baa4cf6ee4c1f19a244341a3a484861c --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_physical_education.yaml @@ -0,0 +1,7 @@ +"dataset_name": "physical_education" +"description": "以下為體育的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_social_sciences" +"group_alias": "social sciences" +"include": "_default_template_yaml" +"task": "tmmluplus_physical_education" +"task_alias": "physical education" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b5a3fdf197c6f64ecda03af7c6119721ae18df11 --- /dev/null +++ 
b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/default/tmmluplus_traditional_chinese_medicine_clinical_medicine.yaml @@ -0,0 +1,7 @@ +"dataset_name": "traditional_chinese_medicine_clinical_medicine" +"description": "以下為中醫臨床醫學的單選題,請提供正確答案的選項。\n\n" +"group": "tmmluplus_other" +"group_alias": "other" +"include": "_default_template_yaml" +"task": "tmmluplus_traditional_chinese_medicine_clinical_medicine" +"task_alias": "traditional chinese medicine clinical medicine" diff --git a/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/subject.tsv b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/subject.tsv new file mode 100644 index 0000000000000000000000000000000000000000..4dc4b03e0feba9c62e64927f8fe2010327058141 --- /dev/null +++ b/lm-evaluation/build/lib/lm_eval/tasks/tmmluplus/subject.tsv @@ -0,0 +1,68 @@ +subject name category +dentistry 牙醫學 health +traditional_chinese_medicine_clinical_medicine 中醫臨床醫學 health +clinical_psychology 臨床心理學 psychology +technical 技術工相關 other +culinary_skills 餐旅 other +mechanical 機械與機電概論 other +logic_reasoning 邏輯思維 other +real_estate 房地產 other +general_principles_of_law 法學大意 law +finance_banking 金融與法規 business +anti_money_laundering 洗錢防制 law +ttqav2 台灣在地用語 culture +marketing_management 行銷管理 other +business_management 企業管理 other +organic_chemistry 有機化學 chemistry +advance_chemistry 化學 chemistry +physics 物理 physics +secondary_physics 高中物理 physics +human_behavior 人類行為與社會 psychology +national_protection 軍事 politics +jce_humanities 指考人文科目 philosophy +linear_algebra 線代 math +politic_science 政治 politics +agriculture 農業 other +official_document_management 機關文書 other +financial_analysis 財務分析 business +pharmacy 藥劑學 biology +educational_psychology 教育心理 psychology +statistics_and_machine_learning 統計與機器學習 engineering +management_accounting 管理會計 business +introduction_to_law 法律概論 law +computer_science 資訊工程 computer science +veterinary_pathology 獸醫病理學 health +accounting 會計學 business +fire_science 火災學 other +optometry 視光學 other +insurance_studies 保險學 other +pharmacology 藥理學 health +taxation 稅務 law +education_(profession_level) 教育專業 education +economics 經濟學 economics +veterinary_pharmacology 獸醫藥理學 health +nautical_science 航海 other +occupational_therapy_for_psychological_disorders 心理障礙職能治療學 psychology +trust_practice 信託實務 law +geography_of_taiwan 台灣地理 geography +physical_education 體育 education +auditing 審計學 business +administrative_law 行政法 law +basic_medical_science 基礎醫學 biology +macroeconomics 總經 economics +trade 貿易 business +chinese_language_and_literature 國文 culture +tve_design 統測_設計 other +junior_science_exam 國中會考基測自然科 biology +junior_math_exam 國中會考基測數學科 math +junior_chinese_exam 國中會考基測國文 culture +junior_social_studies 國中會考基測社會科 other +tve_mathematics 統測數學 math +tve_chinese_language 統測國文 culture +tve_natural_sciences 統測自然科 biology +junior_chemistry 國中理化 chemistry +music 音樂科 other +education 教育常識 education +three_principles_of_people 三民主義 culture +taiwanese_hokkien 閩南語 culture +engineering_math 工程數學 math