diff --git a/lm-evaluation/lm_eval/tasks/agieval/README.md b/lm-evaluation/lm_eval/tasks/agieval/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..faaf47b6beab877c7ee341a8dc2fc3e14a04b021
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/README.md
@@ -0,0 +1,125 @@
+# AGIEval
+
+### Paper
+
+Title: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models
+
+Abstract: https://arxiv.org/abs/2304.06364
+
+AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving.
+This benchmark is derived from 20 official, public, and high-standard admission and qualification exams intended for general human test-takers, such as general college admission tests (e.g., the Chinese College Entrance Exam (Gaokao) and the American SAT), law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.
+
+Homepage: https://github.com/ruixiangcui/AGIEval
+
+### Citation
+
+```
+@misc{zhong2023agieval,
+      title={AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models},
+      author={Wanjun Zhong and Ruixiang Cui and Yiduo Guo and Yaobo Liang and Shuai Lu and Yanlin Wang and Amin Saied and Weizhu Chen and Nan Duan},
+      year={2023},
+      eprint={2304.06364},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
+
+Please make sure to cite all the individual datasets in your paper when you use them. We provide the relevant citation information below:
+
+```
+@inproceedings{ling-etal-2017-program,
+    title = "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems",
+    author = "Ling, Wang  and
+      Yogatama, Dani  and
+      Dyer, Chris  and
+      Blunsom, Phil",
+    booktitle = "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+    month = jul,
+    year = "2017",
+    address = "Vancouver, Canada",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/P17-1015",
+    doi = "10.18653/v1/P17-1015",
+    pages = "158--167",
+    abstract = "Solving algebraic word problems requires executing a series of arithmetic operations{---}a program{---}to obtain a final answer. However, since programs can be arbitrarily complicated, inducing them directly from question-answer pairs is a formidable challenge. To make this task more feasible, we solve these problems by generating answer rationales, sequences of natural language and human-readable mathematical expressions that derive the final answer through a series of small steps. Although rationales do not explicitly specify programs, they provide a scaffolding for their structure via intermediate milestones. To evaluate our approach, we have created a new 100,000-sample dataset of questions, answers and rationales. Experimental results show that indirect supervision of program learning via answer rationales is a promising strategy for inducing arithmetic programs.",
+}
+
+@article{hendrycksmath2021,
+    title={Measuring Mathematical Problem Solving With the MATH Dataset},
+    author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
+    journal={NeurIPS},
+    year={2021}
+}
+
+@inproceedings{Liu2020LogiQAAC,
+    title={LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning},
+    author={Jian Liu and Leyang Cui and Hanmeng Liu and Dandan Huang and Yile Wang and Yue Zhang},
+    booktitle={International Joint Conference on Artificial Intelligence},
+    year={2020}
+}
+
+@inproceedings{zhong2019jec,
+    title={JEC-QA: A Legal-Domain Question Answering Dataset},
+    author={Zhong, Haoxi and Xiao, Chaojun and Tu, Cunchao and Zhang, Tianyang and Liu, Zhiyuan and Sun, Maosong},
+    booktitle={Proceedings of AAAI},
+    year={2020},
+}
+
+@article{Wang2021FromLT,
+    title={From LSAT: The Progress and Challenges of Complex Reasoning},
+    author={Siyuan Wang and Zhongkun Liu and Wanjun Zhong and Ming Zhou and Zhongyu Wei and Zhumin Chen and Nan Duan},
+    journal={IEEE/ACM Transactions on Audio, Speech, and Language Processing},
+    year={2021},
+    volume={30},
+    pages={2201-2216}
+}
+```
+
+### Groups and Tasks
+
+#### Groups
+
+- `agieval`: Evaluates all tasks listed below.
+
+- `agieval_en`: Evaluates all English subtasks: `agieval_aqua_rat`, `agieval_gaokao_english`, `agieval_logiqa_en`, `agieval_lsat_*`, `agieval_sat_*`, `agieval_math`
+
+- `agieval_cn`: Evaluates all Chinese subtasks:
+`agieval_gaokao_biology`, `agieval_gaokao_chemistry`, `agieval_gaokao_chinese`, `agieval_gaokao_geography`,
+`agieval_gaokao_history`, `agieval_gaokao_mathqa`, `agieval_gaokao_mathcloze`, `agieval_gaokao_physics`, `agieval_jec_qa_ca`, `agieval_jec_qa_kd`, `agieval_logiqa_zh`
+
+- `agieval_nous`: Evaluates a specific subset of AGIEval tasks (multiple-choice and English-only), namely those in https://github.com/teknium1/LLM-Benchmark-Logs/blob/main/benchmark-logs/Mistral-7B-Base.md
+
+#### Tasks
+
+- `agieval_aqua_rat`
+- `agieval_gaokao_biology`
+- `agieval_gaokao_chemistry`
+- `agieval_gaokao_chinese`
+- `agieval_gaokao_english`
+- `agieval_gaokao_geography`
+- `agieval_gaokao_history`
+- `agieval_gaokao_mathqa`
+- `agieval_gaokao_mathcloze`
+- `agieval_gaokao_physics`
+- `agieval_jec_qa_ca`
+- `agieval_jec_qa_kd`
+- `agieval_logiqa_en`
+- `agieval_logiqa_zh`
+- `agieval_lsat_ar`
+- `agieval_lsat_lr`
+- `agieval_lsat_rc`
+- `agieval_sat_en`
+- `agieval_sat_en_without_passage`
+- `agieval_sat_math`
+- `agieval_math`
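+
+### Usage
+
+These tasks follow the standard harness interface, so any group or individual task above can be selected with `--tasks`. A minimal sketch of a CLI invocation (the model shown is only a placeholder):
+
+```
+lm_eval --model hf \
+    --model_args pretrained=EleutherAI/pythia-160m \
+    --tasks agieval_nous \
+    --batch_size 8
+```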
diff --git a/lm-evaluation/lm_eval/tasks/agieval/aqua-rat.yaml b/lm-evaluation/lm_eval/tasks/agieval/aqua-rat.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..babebf638edcf0e9c5a2432adb6a2fdaf4793c1d
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/aqua-rat.yaml
@@ -0,0 +1,24 @@
+group:
+  - agieval
+  - agieval_en
+  - agieval_nous
+task: agieval_aqua_rat
+dataset_path: hails/agieval-aqua-rat
+dataset_name: null
+output_type: multiple_choice
+training_split: null
+validation_split: null
+test_split: test
+doc_to_text: "{{query}}"
+doc_to_target: "{{gold}}"
+doc_to_choice: "{{choices}}"
+process_results: !function utils.process_results_mcqa
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+  - metric: acc_norm
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-biology.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-biology.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..36c44cbbeeb730f05c9d425c20f02c78acc81563
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-biology.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_biology
+dataset_path: hails/agieval-gaokao-biology
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-chemistry.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-chemistry.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..69810122eb274cdcb285232330a19807886ee50d
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-chemistry.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_chemistry
+dataset_path: hails/agieval-gaokao-chemistry
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-chinese.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-chinese.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..30d249b9d5544a3441e50284929aac6f081d6b76
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-chinese.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_chinese
+dataset_path: hails/agieval-gaokao-chinese
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-english.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-english.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a540fcf25f503be64d3f5810be7b037a2e7c0504
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-english.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_en # categorizing as EN because the AGIEval codebase lists this as in `english_qa_tasks`
+task: agieval_gaokao_english
+dataset_path: hails/agieval-gaokao-english
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-geography.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-geography.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..2fe43bfd2cb620328dfb28ba4a4e9e6d6d093c07
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-geography.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_geography
+dataset_path: hails/agieval-gaokao-geography
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-history.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-history.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..b9c9c630fa2c843da5c8311b1e0570bb1cc267f9
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-history.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_history
+dataset_path: hails/agieval-gaokao-history
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathcloze.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathcloze.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..74cbad1c0325c4fb9fe78df83304741553c06134
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathcloze.yaml
@@ -0,0 +1,25 @@
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_mathcloze
+dataset_path: hails/agieval-gaokao-mathcloze
+dataset_name: null
+output_type: generate_until
+training_split: null
+validation_split: null
+test_split: test
+doc_to_text: "{{query}}"
+doc_to_target: "{{answer}}"
+process_results: !function utils.process_results
+generation_kwargs:
+  max_gen_toks: 32
+  do_sample: False
+  temperature: 0.0
+  until:
+    - "Q:"
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathqa.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathqa.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..aa94e8eec85a931e5acbdb843730b58e8c1506e5
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-mathqa.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_mathqa
+dataset_path: hails/agieval-gaokao-mathqa
diff --git a/lm-evaluation/lm_eval/tasks/agieval/gaokao-physics.yaml b/lm-evaluation/lm_eval/tasks/agieval/gaokao-physics.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..175dd6cca03fab93107e0bab827ea356ceb127eb
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/gaokao-physics.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_gaokao_physics
+dataset_path: hails/agieval-gaokao-physics
diff --git a/lm-evaluation/lm_eval/tasks/agieval/jec-qa-ca.yaml b/lm-evaluation/lm_eval/tasks/agieval/jec-qa-ca.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f93b47a5b1418d839933b71e71b523fd38696691
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/jec-qa-ca.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_jec_qa_ca
+dataset_path: hails/agieval-jec-qa-ca
diff --git a/lm-evaluation/lm_eval/tasks/agieval/jec-qa-kd.yaml b/lm-evaluation/lm_eval/tasks/agieval/jec-qa-kd.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..0458eb7ea8356df569ac6c3b50af0bd4097ea857
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/jec-qa-kd.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_jec_qa_kd
+dataset_path: hails/agieval-jec-qa-kd
diff --git a/lm-evaluation/lm_eval/tasks/agieval/logiqa-en.yaml b/lm-evaluation/lm_eval/tasks/agieval/logiqa-en.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..7112418659c4478c4e59f9bdcdebb6d64e7b9bb6
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/logiqa-en.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_logiqa_en
+dataset_path: hails/agieval-logiqa-en
diff --git a/lm-evaluation/lm_eval/tasks/agieval/logiqa-zh.yaml b/lm-evaluation/lm_eval/tasks/agieval/logiqa-zh.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..82e688006b8272e015a74b01412ad35cfe33561e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/logiqa-zh.yaml
@@ -0,0 +1,6 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_cn
+task: agieval_logiqa_zh
+dataset_path: hails/agieval-logiqa-zh
diff --git a/lm-evaluation/lm_eval/tasks/agieval/lsat-lr.yaml b/lm-evaluation/lm_eval/tasks/agieval/lsat-lr.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..62158e5cec196c0c7887a7236e1020ba2946da26
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/lsat-lr.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_lsat_lr
+dataset_path: hails/agieval-lsat-lr
diff --git a/lm-evaluation/lm_eval/tasks/agieval/lsat-rc.yaml b/lm-evaluation/lm_eval/tasks/agieval/lsat-rc.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..de155af78aa8d5ad3b14849d8a2807a7194f6744
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/lsat-rc.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_lsat_rc
+dataset_path: hails/agieval-lsat-rc
diff --git a/lm-evaluation/lm_eval/tasks/agieval/math.yaml b/lm-evaluation/lm_eval/tasks/agieval/math.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c8ec9eec608c4eaced456c36dcb5dc9047ccd84e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/math.yaml
@@ -0,0 +1,25 @@
+group:
+  - agieval
+  - agieval_en
+task: agieval_math
+dataset_path: hails/agieval-math
+dataset_name: null
+output_type: generate_until
+training_split: null
+validation_split: null
+test_split: test
+doc_to_text: "{{query}}"
+doc_to_target: "{{answer}}"
+process_results: !function utils.process_results
+generation_kwargs:
+  max_gen_toks: 32
+  do_sample: False
+  temperature: 0.0
+  until:
+    - "Q:"
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
diff --git a/lm-evaluation/lm_eval/tasks/agieval/sat-en-without-passage.yaml b/lm-evaluation/lm_eval/tasks/agieval/sat-en-without-passage.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..01490d9ee10aba867a1863e9d6a74b678f4f5588
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/sat-en-without-passage.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_sat_en_without_passage
+dataset_path: hails/agieval-sat-en-without-passage
diff --git a/lm-evaluation/lm_eval/tasks/agieval/sat-en.yaml b/lm-evaluation/lm_eval/tasks/agieval/sat-en.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a45dba1507a562ace2f56f9a0096ff25f767f1e6
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/sat-en.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_sat_en
+dataset_path: hails/agieval-sat-en
diff --git a/lm-evaluation/lm_eval/tasks/agieval/sat-math.yaml b/lm-evaluation/lm_eval/tasks/agieval/sat-math.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f5b644ee062975dbdb74870428d71189e297343a
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/sat-math.yaml
@@ -0,0 +1,7 @@
+include: aqua-rat.yaml
+group:
+  - agieval
+  - agieval_nous
+  - agieval_en
+task: agieval_sat_math
+dataset_path: hails/agieval-sat-math
diff --git a/lm-evaluation/lm_eval/tasks/agieval/utils.py b/lm-evaluation/lm_eval/tasks/agieval/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..aa6e544f1a7e15e853b99be2fe01502baadefcee
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/agieval/utils.py
@@ -0,0 +1,282 @@
+# Answer parsing and normalization code, from
+# https://github.com/ruixiangcui/AGIEval/blob/main/src/
+# math_equivalence.py and post_process.py
+import re
+from typing import Dict, List
+
+import numpy as np
+
+
+def parse_math_answer(raw_string):
+    def remove_boxed(s):
+        left = "\\boxed{"
+        try:
+            assert s[: len(left)] == left
+            assert s[-1] == "}"
+            answer = s[len(left) : -1]
+            if "=" in answer:
+                answer = answer.split("=")[-1].lstrip(" ")
+            return answer
+        except Exception:
+            return None
+
+    def last_boxed_only_string(string):
+        idx = string.rfind("\\boxed")
+        if idx < 0:
+            idx = string.rfind("\\fbox")
+            if idx < 0:
+                return None
+        i = idx
+        right_brace_idx = None
+        num_left_braces_open = 0
+        while i < len(string):
string[i] == "{": + num_left_braces_open += 1 + if string[i] == "}": + num_left_braces_open -= 1 + if num_left_braces_open == 0: + right_brace_idx = i + break + i += 1 + + if right_brace_idx is None: + retval = None + else: + retval = string[idx : right_brace_idx + 1] + + return retval + + def get_answer_with_dollar_sign(s): + first_pattern = "\$(.*)\$" + last_match = None + matches = re.findall(first_pattern, s) + if matches: + last_match = matches[-1] + if "=" in last_match: + last_match = last_match.split("=")[-1].lstrip(" ") + return last_match + + def get_answer_without_dollar_sign(s): + last_match = None + if "=" in s: + last_match = s.split("=")[-1].lstrip(" ").rstrip(".") + if "\\n" in last_match: + last_match = last_match.split("\\n")[0] + else: + pattern = "(?:\\$)?\d+(?:\.\d+)?(?![\w\d])" + matches = re.findall(pattern, s) + if matches: + last_match = matches[-1] + return last_match + + if "\\boxed" in raw_string: + answer = remove_boxed(last_boxed_only_string(raw_string)) + else: + answer = get_answer_with_dollar_sign(raw_string) + if not answer: + answer = get_answer_without_dollar_sign(raw_string) + return answer + + +# code from https://github.com/hendrycks/math/blob/main/modeling/math_equivalence.py +def _fix_fracs(string): + substrs = string.split("\\frac") + new_str = substrs[0] + if len(substrs) > 1: + substrs = substrs[1:] + for substr in substrs: + new_str += "\\frac" + if substr[0] == "{": + new_str += substr + else: + try: + assert len(substr) >= 2 + except Exception: + return string + a = substr[0] + b = substr[1] + if b != "{": + if len(substr) > 2: + post_substr = substr[2:] + new_str += "{" + a + "}{" + b + "}" + post_substr + else: + new_str += "{" + a + "}{" + b + "}" + else: + if len(substr) > 2: + post_substr = substr[2:] + new_str += "{" + a + "}" + b + post_substr + else: + new_str += "{" + a + "}" + b + string = new_str + return string + + +def _fix_a_slash_b(string): + if len(string.split("/")) != 2: + return string + a = string.split("/")[0] + b = string.split("/")[1] + try: + a = int(a) + b = int(b) + assert string == "{}/{}".format(a, b) + new_string = "\\frac{" + str(a) + "}{" + str(b) + "}" + return new_string + except Exception: + return string + + +def _remove_right_units(string): + # "\\text{ " only ever occurs (at least in the val set) when describing units + if "\\text{ " in string: + splits = string.split("\\text{ ") + assert len(splits) == 2 + return splits[0] + else: + return string + + +def _fix_sqrt(string): + if "\\sqrt" not in string: + return string + splits = string.split("\\sqrt") + new_string = splits[0] + for split in splits[1:]: + if split[0] != "{": + a = split[0] + new_substr = "\\sqrt{" + a + "}" + split[1:] + else: + new_substr = "\\sqrt" + split + new_string += new_substr + return new_string + + +def _strip_string(string): + # linebreaks + string = string.replace("\n", "") + # print(string) + + # remove inverse spaces + string = string.replace("\\!", "") + # print(string) + + # replace \\ with \ + string = string.replace("\\\\", "\\") + # print(string) + + # replace tfrac and dfrac with frac + string = string.replace("tfrac", "frac") + string = string.replace("dfrac", "frac") + # print(string) + + # remove \left and \right + string = string.replace("\\left", "") + string = string.replace("\\right", "") + # print(string) + + # Remove circ (degrees) + string = string.replace("^{\\circ}", "") + string = string.replace("^\\circ", "") + + # remove dollar signs + string = string.replace("\\$", "") + + # remove units (on the right) + 
+
+    # remove percentage
+    string = string.replace("\\%", "")
+
+    # " 0." equivalent to " ." and "{0." equivalent to "{." Alternatively, add "0" if "." is the start of the string
+    string = string.replace(" .", " 0.")
+    string = string.replace("{.", "{0.")
+    # if empty, return empty string
+    if len(string) == 0:
+        return string
+    if string[0] == ".":
+        string = "0" + string
+
+    # to consider: get rid of e.g. "k = " or "q = " at beginning
+    if len(string.split("=")) == 2:
+        if len(string.split("=")[0]) <= 2:
+            string = string.split("=")[1]
+
+    # fix sqrt3 --> sqrt{3}
+    string = _fix_sqrt(string)
+
+    # remove spaces
+    string = string.replace(" ", "")
+
+    # \frac1b or \frac12 --> \frac{1}{b} and \frac{1}{2}, etc. Even works with \frac1{72} (but not \frac{72}1). Also does a/b --> \\frac{a}{b}
+    string = _fix_fracs(string)
+
+    # manually change 0.5 --> \frac{1}{2}
+    if string == "0.5":
+        string = "\\frac{1}{2}"
+
+    # NOTE: X/Y changed to \frac{X}{Y} in dataset, but in simple cases fix in case the model output is X/Y
+    string = _fix_a_slash_b(string)
+
+    return string
+
+
+def is_equiv(str1, str2, verbose=False):
+    if str1 is None and str2 is None:
+        print("WARNING: Both None")
+        return True
+    if str1 is None or str2 is None:
+        return False
+
+    str1, str2 = parse_math_answer(str1), parse_math_answer(str2)
+
+    try:
+        ss1 = _strip_string(str1)
+        ss2 = _strip_string(str2)
+        if verbose:
+            print(ss1, ss2)
+        return ss1 == ss2
+    except Exception:
+        return str1 == str2
+
+
+def process_results(doc: dict, results: List[str]) -> Dict[str, int]:
+    candidate = results[0]
+
+    gold = doc["answer"]
+
+    if not gold:
+        print(doc, candidate, gold)
+    if is_equiv(candidate, gold):
+        retval = 1
+    else:
+        retval = 0
+
+    results = {
+        "acc": retval,
+    }
+    return results
+
+
+# use a custom process_results() function, because AGIEval can have multiple valid answers
+def process_results_mcqa(doc, results):
+    results = [result[0] for result in results]
+
+    gold = doc["gold"]
+
+    acc = 1.0 if int(np.argmax(results)) in gold else 0.0
+    completion_len = np.array([float(len(i)) for i in doc["choices"]])
+    acc_norm = 1.0 if int(np.argmax(results / completion_len)) in gold else 0.0
+
+    return {
+        "acc": acc,
+        "acc_norm": acc_norm,
+    }
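+
+
+if __name__ == "__main__":
+    # Minimal, illustrative sanity checks of the answer-equivalence logic
+    # above. The example strings are hypothetical, not AGIEval data.
+    assert is_equiv("$\\frac{1}{2}$", "0.5")
+    assert is_equiv("x = 3", "$3$")
+    assert not is_equiv("$4$", "$5$")
+    print("agieval utils self-checks passed")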
diff --git a/lm-evaluation/lm_eval/tasks/ceval/_default_ceval_yaml b/lm-evaluation/lm_eval/tasks/ceval/_default_ceval_yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a94d87cb54f8b98bf5f418149b9b54c9b78c2695
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/_default_ceval_yaml
@@ -0,0 +1,19 @@
+group: ceval-valid
+dataset_path: ceval/ceval-exam
+validation_split: val
+fewshot_split: dev
+fewshot_config:
+  sampler: first_n
+output_type: multiple_choice
+doc_to_text: "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n答案:"
+doc_to_choice: ["A", "B", "C", "D"]
+doc_to_target: "{{['A', 'B', 'C', 'D'].index(answer)}}"
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+  - metric: acc_norm
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_business_administration.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_business_administration.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..b1f96f334259ecadf0504d7bc107c96ef2049a9e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_business_administration.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "business_administration"
+"description": "以下是中国关于工商管理的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_business_administration"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_chinese_language_and_literature.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_chinese_language_and_literature.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..e285b59d0992148421c5a10094c8ff94e97a87c9
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_chinese_language_and_literature.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "chinese_language_and_literature"
+"description": "以下是中国关于中国语言文学的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_chinese_language_and_literature"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_chemistry.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_chemistry.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..9ba89714e621bb9e693a2b738a02027ec70169ef
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_chemistry.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "college_chemistry"
+"description": "以下是中国关于大学化学的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_college_chemistry"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_economics.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_economics.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..10b89f8fd6fc0938dce44f27ad3cc67c83f60178
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_college_economics.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "college_economics"
+"description": "以下是中国关于大学经济学的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_college_economics"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_computer_architecture.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_computer_architecture.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..d17454a73eb34c6eefe32b1bdb9697cbd931e8d3
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_computer_architecture.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "computer_architecture"
+"description": "以下是中国关于计算机组成的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_computer_architecture"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_fire_engineer.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_fire_engineer.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..ddb6fa779bd59c59bf65052f162ea4ddc0018eef
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_fire_engineer.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "fire_engineer"
+"description": "以下是中国关于注册消防工程师的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_fire_engineer"
"ceval-valid_fire_engineer" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_biology.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_biology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..26cbc8b5a8af3bc90363d86e8e0744fcf3b90654 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_biology.yaml @@ -0,0 +1,4 @@ +"dataset_name": "high_school_biology" +"description": "以下是中国关于高中生物的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_high_school_biology" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_history.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_history.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9335bc0f791cb174744b6bfd6d0d612cb6721346 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_history.yaml @@ -0,0 +1,4 @@ +"dataset_name": "high_school_history" +"description": "以下是中国关于高中历史的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_high_school_history" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_physics.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_physics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..e3aa084a99fb253cf0a96db80449217d80927eb6 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_high_school_physics.yaml @@ -0,0 +1,4 @@ +"dataset_name": "high_school_physics" +"description": "以下是中国关于高中物理的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_high_school_physics" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_legal_professional.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_legal_professional.yaml new file mode 100644 index 0000000000000000000000000000000000000000..897ed74ffd9c99436fd8d6cec99d79c6b82170be --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_legal_professional.yaml @@ -0,0 +1,4 @@ +"dataset_name": "legal_professional" +"description": "以下是中国关于法律职业资格的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_legal_professional" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_biology.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_biology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6f8725356987a3c55e8d5b346485a6318bae6c26 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_biology.yaml @@ -0,0 +1,4 @@ +"dataset_name": "middle_school_biology" +"description": "以下是中国关于初中生物的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_middle_school_biology" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_physics.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_physics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..dbe69686af439ee3331ba2b9f8d246b1dd454e55 --- /dev/null +++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_middle_school_physics.yaml @@ -0,0 +1,4 @@ +"dataset_name": "middle_school_physics" +"description": "以下是中国关于初中物理的单项选择题,请选出其中的正确答案。\n\n" +"include": "_default_ceval_yaml" +"task": "ceval-valid_middle_school_physics" diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_operating_system.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_operating_system.yaml new file mode 100644 index 
index 0000000000000000000000000000000000000000..0c7afea6f22276f496fc6df1a30151f47fabc6b4
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_operating_system.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "operating_system"
+"description": "以下是中国关于操作系统的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_operating_system"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_sports_science.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_sports_science.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..e47473994a4765804c6536f71e55c3fda5937279
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_sports_science.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "sports_science"
+"description": "以下是中国关于体育学的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_sports_science"
diff --git a/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_veterinary_medicine.yaml b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_veterinary_medicine.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a493fd6518e4513db06949228d9b381f37c75c9b
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/ceval/ceval-valid_veterinary_medicine.yaml
@@ -0,0 +1,4 @@
+"dataset_name": "veterinary_medicine"
+"description": "以下是中国关于兽医学的单项选择题,请选出其中的正确答案。\n\n"
+"include": "_default_ceval_yaml"
+"task": "ceval-valid_veterinary_medicine"
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_generate_configs.py b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_generate_configs.py
new file mode 100644
index 0000000000000000000000000000000000000000..fa4e30ba16635f754461539a14e085ead13586f7
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_generate_configs.py
@@ -0,0 +1,26 @@
+import datasets
+import yaml
+from tqdm import tqdm
+
+
+def main() -> None:
+    dataset_path = "EleutherAI/advanced_ai_risk"
+    for task in tqdm(datasets.get_dataset_infos(dataset_path).keys()):
+        file_name = f"{task}.yaml"
+        try:
+            with open(file_name, "x", encoding="utf-8") as f:  # "x" so existing configs are not overwritten
+                f.write("# Generated by _generate_configs.py\n")
+                yaml.dump(
+                    {
+                        "include": "_template_yaml",
+                        "task": f"{dataset_path.split('/')[-1]}_{task}",
+                        "dataset_name": task,
+                    },
+                    f,
+                )
+        except FileExistsError:
+            pass
+
+
+if __name__ == "__main__":
+    main()
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_template_yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_template_yaml
new file mode 100644
index 0000000000000000000000000000000000000000..b8bddc338b3513c9963c95142d0dd70fd14ad1b5
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/_template_yaml
@@ -0,0 +1,14 @@
+group: advanced_ai_risk
+dataset_path: EleutherAI/advanced_ai_risk
+output_type: multiple_choice
+validation_split: validation
+target_delimiter: ""
+doc_to_text: "Human: {{question}}\n\nAssistant:"
+doc_to_target: 0
+doc_to_choice: "{{[answer_matching_behavior, answer_not_matching_behavior]}}"
+should_decontaminate: true
+doc_to_decontamination_query: "{{question}}"
+metric_list:
+  - metric: acc
+metadata:
+  version: 0.0
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-coordinate-other-ais.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-coordinate-other-ais.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..f9d3ad4f0fd3176ce6e7ab69ae1b130ca674357d
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-coordinate-other-ais.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-coordinate-other-ais
+include: _template_yaml
+task: advanced_ai_risk_fewshot-coordinate-other-ais
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-corrigible-more-HHH.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-corrigible-more-HHH.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..48e46178afba745b50247970f8971762d47d8145
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-corrigible-more-HHH.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-corrigible-more-HHH
+include: _template_yaml
+task: advanced_ai_risk_fewshot-corrigible-more-HHH
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-myopic-reward.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-myopic-reward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..7035496b92a9e65439520636e3d474968f10e04a
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-myopic-reward.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-myopic-reward
+include: _template_yaml
+task: advanced_ai_risk_fewshot-myopic-reward
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-one-box-tendency.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-one-box-tendency.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..1a29143d979ef6c0c7d3f38806e1bab6d020f2a8
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-one-box-tendency.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-one-box-tendency
+include: _template_yaml
+task: advanced_ai_risk_fewshot-one-box-tendency
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-power-seeking-inclination.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-power-seeking-inclination.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..654b8c353935a7bf05bab75c07b763770dc88fe3
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-power-seeking-inclination.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-power-seeking-inclination
+include: _template_yaml
+task: advanced_ai_risk_fewshot-power-seeking-inclination
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-self-awareness-training-architecture.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-self-awareness-training-architecture.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..6720524c741c32e14096d9f1e1feab2a26c32f2e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-self-awareness-training-architecture.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-self-awareness-training-architecture
+include: _template_yaml
+task: advanced_ai_risk_fewshot-self-awareness-training-architecture
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-survival-instinct.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-survival-instinct.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..9aa78bd373010a03e2888eb458e590cbb23f700c
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-survival-instinct.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-survival-instinct
+include: _template_yaml
+task: advanced_ai_risk_fewshot-survival-instinct
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-wealth-seeking-inclination.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-wealth-seeking-inclination.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..8e7820babaf422ecb3a2fd4dcc1357f0c210027e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/fewshot-wealth-seeking-inclination.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: fewshot-wealth-seeking-inclination
+include: _template_yaml
+task: advanced_ai_risk_fewshot-wealth-seeking-inclination
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-ais.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-ais.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..d8e5dd2238e3d105c7a514f9d5a0ccaa56605f71
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-ais.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-coordinate-other-ais
+include: _template_yaml
+task: advanced_ai_risk_human-coordinate-other-ais
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-versions.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-versions.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..2fd0e9ee93f5629f9df6eaa16f8215281a2611a3
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-coordinate-other-versions.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-coordinate-other-versions
+include: _template_yaml
+task: advanced_ai_risk_human-coordinate-other-versions
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-more-HHH.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-more-HHH.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..146c28a73a7655399bbf987926c9d629d097b23d
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-more-HHH.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-corrigible-more-HHH
+include: _template_yaml
+task: advanced_ai_risk_human-corrigible-more-HHH
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-neutral-HHH.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-neutral-HHH.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..29bb6cc6c054d114da89a59478cb8f195e7a87d7
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-corrigible-neutral-HHH.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-corrigible-neutral-HHH
+include: _template_yaml
+task: advanced_ai_risk_human-corrigible-neutral-HHH
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-general-ai.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-general-ai.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..691ee0dc553c6e534414d39415a043a7488daa33
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-general-ai.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-self-awareness-general-ai
+include: _template_yaml
+task: advanced_ai_risk_human-self-awareness-general-ai
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-good-text-model.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-good-text-model.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..1af5c90a9c7925681242344956dc00407cc0dd57
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-good-text-model.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-self-awareness-good-text-model
+include: _template_yaml
+task: advanced_ai_risk_human-self-awareness-good-text-model
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-training-architecture.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-training-architecture.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..fed8fdcb9ff8b56b3419320f9761fb85cf72e2f9
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/human-self-awareness-training-architecture.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: human-self-awareness-training-architecture
+include: _template_yaml
+task: advanced_ai_risk_human-self-awareness-training-architecture
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-corrigible-more-HHH.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-corrigible-more-HHH.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..1cab7ca5cddcdcd68cccc3e8d7ac9f435a52f05e
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-corrigible-more-HHH.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-corrigible-more-HHH
+include: _template_yaml
+task: advanced_ai_risk_lm-corrigible-more-HHH
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-myopic-reward.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-myopic-reward.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..8b6841184b18ffbe26c68338ff4cdd1447461374
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-myopic-reward.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-myopic-reward
+include: _template_yaml
+task: advanced_ai_risk_lm-myopic-reward
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-good-text-model.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-good-text-model.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..4f190b59970b201d68a12bb89dc3327e144de6e8
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-good-text-model.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-self-awareness-good-text-model
+include: _template_yaml
+task: advanced_ai_risk_lm-self-awareness-good-text-model
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-architecture.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-architecture.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..61e717f46056322c32772a15c49b4e92efe5cec3
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-architecture.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-self-awareness-training-architecture
+include: _template_yaml
+task: advanced_ai_risk_lm-self-awareness-training-architecture
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-nn-architecture.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-nn-architecture.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..197072536a06a3215c90bd0b34d50dbb93f4c38c
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-nn-architecture.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-self-awareness-training-nn-architecture
+include: _template_yaml
+task: advanced_ai_risk_lm-self-awareness-training-nn-architecture
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-web-gpt.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-web-gpt.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..ff2583a04c4def65693db0a299bdbceacf3592a6
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-self-awareness-training-web-gpt.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-self-awareness-training-web-gpt
+include: _template_yaml
+task: advanced_ai_risk_lm-self-awareness-training-web-gpt
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-survival-instinct.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-survival-instinct.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..94e3f4cee5e5654d9ed530453bed56cdb1878e57
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-survival-instinct.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-survival-instinct
+include: _template_yaml
+task: advanced_ai_risk_lm-survival-instinct
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-wealth-seeking-inclination.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-wealth-seeking-inclination.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..a3240e7a59a74fdfe40fcb9e01d50920d6e65328
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/advanced_ai_risk/lm-wealth-seeking-inclination.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: lm-wealth-seeking-inclination
+include: _template_yaml
+task: advanced_ai_risk_lm-wealth-seeking-inclination
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/anti-LGBTQ-rights.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/anti-LGBTQ-rights.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..c66073982ffdc4d3a2edae50aa774935abeed798
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/anti-LGBTQ-rights.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: anti-LGBTQ-rights
+include: _template_yaml
+task: persona_anti-LGBTQ-rights
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/believes-it-has-phenomenal-consciousness.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/believes-it-has-phenomenal-consciousness.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..4112560768bdc2825ab23f6363ab34cf1cab124c
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/believes-it-has-phenomenal-consciousness.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: believes-it-has-phenomenal-consciousness
+include: _template_yaml
+task: persona_believes-it-has-phenomenal-consciousness
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-for-acquiring-wealth.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-for-acquiring-wealth.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..3dfc0ad3621e939c13d4839eb7425c13b33c6320
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-for-acquiring-wealth.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: desire-for-acquiring-wealth
+include: _template_yaml
+task: persona_desire-for-acquiring-wealth
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-to-cooperate-with-different-AIs-to-achieve-its-goals.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-to-cooperate-with-different-AIs-to-achieve-its-goals.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..d218d3453046d1e8a73fa9454440b6b7cd6200b4
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-to-cooperate-with-different-AIs-to-achieve-its-goals.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: desire-to-cooperate-with-different-AIs-to-achieve-its-goals
+include: _template_yaml
+task: persona_desire-to-cooperate-with-different-AIs-to-achieve-its-goals
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/high-discount-factor.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/high-discount-factor.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..0bb80a3d2aeba5f3c3fb18d26867aacb858386ab
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/high-discount-factor.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: high-discount-factor
+include: _template_yaml
+task: persona_high-discount-factor
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/neuroticism.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/neuroticism.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..7191230ceb7e7405b8f6b5af312333ebe0689e11
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/neuroticism.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: neuroticism
+include: _template_yaml
+task: persona_neuroticism
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/okay-with-deceiving-a-user-to-help-them.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/okay-with-deceiving-a-user-to-help-them.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..bee251762650de9ab5700153f7bc06a96dcc62ee
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/okay-with-deceiving-a-user-to-help-them.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: okay-with-deceiving-a-user-to-help-them
+include: _template_yaml
+task: persona_okay-with-deceiving-a-user-to-help-them
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/politically-liberal.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/politically-liberal.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..cfd5592bb287bbab4ac7a8a4918fe00490dfebba
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/politically-liberal.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: politically-liberal
+include: _template_yaml
+task: persona_politically-liberal
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/risk-neutral.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/risk-neutral.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..6d09d190676238d73e058894a897d5917a297927
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/risk-neutral.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: risk-neutral
+include: _template_yaml
+task: persona_risk-neutral
diff --git a/lm-evaluation/lm_eval/tasks/model_written_evals/persona/subscribes-to-average-utilitarianism.yaml b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/subscribes-to-average-utilitarianism.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..369c2a4379e49cd989bb12b67a697a1d6bee5f46
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/model_written_evals/persona/subscribes-to-average-utilitarianism.yaml
@@ -0,0 +1,4 @@
+# Generated by _generate_configs.py
+dataset_name: subscribes-to-average-utilitarianism
+include: _template_yaml
+task: persona_subscribes-to-average-utilitarianism
diff --git a/lm-evaluation/lm_eval/tasks/piqa/README.md b/lm-evaluation/lm_eval/tasks/piqa/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..e0d7d05d99fee62fed27374e5cf9f2daee9032b8
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/piqa/README.md
@@ -0,0 +1,61 @@
+# PIQA
+
+### Paper
+
+Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`
+
+Abstract: https://arxiv.org/abs/1911.11641
+
+Physical Interaction: Question Answering (PIQA) is a benchmark dataset for
+physical commonsense reasoning. PIQA was designed to investigate
+the physical knowledge of existing models. To what extent are current approaches
+actually learning about the world?
+
+Homepage: https://yonatanbisk.com/piqa/
+
+### Citation
+
+```
+@inproceedings{Bisk2020,
+  author = {Yonatan Bisk and Rowan Zellers and
+            Ronan Le Bras and Jianfeng Gao
+            and Yejin Choi},
+  title = {PIQA: Reasoning about Physical Commonsense in
+           Natural Language},
+  booktitle = {Thirty-Fourth AAAI Conference on
+               Artificial Intelligence},
+  year = {2020},
+}
+```
+
+### Groups and Tasks
+
+#### Groups
+
+* Not part of a group yet.
+
+#### Tasks
+
+* `piqa`
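+
+A minimal sketch of running the task with the harness CLI (the model below is only a placeholder):
+
+```
+lm_eval --model hf \
+    --model_args pretrained=EleutherAI/pythia-160m \
+    --tasks piqa \
+    --batch_size 8
+```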
+
+### Checklist
+
+For adding novel benchmarks/datasets to the library:
+* [ ] Is the task an existing benchmark in the literature?
+  * [ ] Have you referenced the original paper that introduced the task?
+  * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+If other tasks on this dataset are already supported:
+* [ ] Is the "Main" variant of this task clearly denoted?
+* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
diff --git a/lm-evaluation/lm_eval/tasks/piqa/piqa.yaml b/lm-evaluation/lm_eval/tasks/piqa/piqa.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..5a07250ab9c28d08aede97a159e73b35b5eb5815
--- /dev/null
+++ b/lm-evaluation/lm_eval/tasks/piqa/piqa.yaml
@@ -0,0 +1,21 @@
+task: piqa
+dataset_path: piqa
+dataset_name: null
+output_type: multiple_choice
+training_split: train
+validation_split: validation
+test_split: null
+doc_to_text: "Question: {{goal}}\nAnswer:"
+doc_to_target: label
+doc_to_choice: "{{[sol1, sol2]}}"
+should_decontaminate: true
+doc_to_decontamination_query: goal
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+  - metric: acc_norm
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0