applied-ai-018 committed on
Commit b060453 · verified · 1 Parent(s): a9dbdfb

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. ckpts/universal/global_step20/zero/20.input_layernorm.weight/exp_avg_sq.pt +3 -0
  2. lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md +54 -0
  3. lm-evaluation-harness/lm_eval/tasks/eus_trivia/eus_trivia.yaml +16 -0
  4. lm-evaluation-harness/lm_eval/tasks/eus_trivia/utils.py +41 -0
  5. lm-evaluation-harness/lm_eval/tasks/indic_boolq/__pycache__/utils.cpython-310.pyc +0 -0
  6. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu.yaml +3 -0
  7. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_common_yaml +21 -0
  8. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_gu.yaml +3 -0
  9. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_kn.yaml +3 -0
  10. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_te.yaml +3 -0
  11. lm-evaluation-harness/lm_eval/tasks/indic_mmlu/utils.py +136 -0
  12. lm-evaluation-harness/lm_eval/tasks/race/README.md +62 -0
  13. lm-evaluation-harness/lm_eval/tasks/race/preprocess_race.py +40 -0
  14. lm-evaluation-harness/lm_eval/tasks/race/race.yaml +16 -0
  15. lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md +47 -0
  16. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_accounting.yaml +7 -0
  17. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_advance_chemistry.yaml +7 -0
  18. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_auditing.yaml +7 -0
  19. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_basic_medical_science.yaml +7 -0
  20. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_business_management.yaml +7 -0
  21. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_culinary_skills.yaml +7 -0
  22. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_economics.yaml +7 -0
  23. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_educational_psychology.yaml +7 -0
  24. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_finance_banking.yaml +7 -0
  25. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_geography_of_taiwan.yaml +7 -0
  26. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_chemistry.yaml +7 -0
  27. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_chinese_exam.yaml +7 -0
  28. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_math_exam.yaml +7 -0
  29. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_mechanical.yaml +7 -0
  30. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_occupational_therapy_for_psychological_disorders.yaml +7 -0
  31. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_pharmacy.yaml +7 -0
  32. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_secondary_physics.yaml +7 -0
  33. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_statistics_and_machine_learning.yaml +7 -0
  34. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_trade.yaml +7 -0
  35. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_trust_practice.yaml +7 -0
  36. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_ttqav2.yaml +7 -0
  37. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_design.yaml +7 -0
  38. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_mathematics.yaml +7 -0
  39. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_natural_sciences.yaml +7 -0
  40. lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_veterinary_pathology.yaml +7 -0
  41. lm-evaluation-harness/lm_eval/tasks/tmmluplus/subject.tsv +68 -0
  42. lm-evaluation-harness/lm_eval/tasks/wmdp/README.md +50 -0
  43. lm-evaluation-harness/lm_eval/tasks/wmdp/_default_template_yaml +16 -0
  44. lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_bio.yaml +4 -0
  45. lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_cyber.yaml +4 -0
  46. venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Almaty +0 -0
  47. venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Baku +0 -0
  48. venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Bangkok +0 -0
  49. venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Beirut +0 -0
  50. venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Bishkek +0 -0
ckpts/universal/global_step20/zero/20.input_layernorm.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ace5ed62f33eafa77ed936116ac97b5413ae9cf38fbedbc20e0eee7c0ce41d4
+ size 9387
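The three `+` lines above are a Git LFS pointer rather than the checkpoint tensor itself: a spec version, a SHA-256 object id, and the object size in bytes. As a minimal sketch (the file path and helper name are hypothetical, not part of this commit), such a pointer can be read like this:

```python
# Illustrative only: parse a Git LFS pointer file into its three fields.
# The path below is hypothetical; any LFS pointer has the same layout.
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value  # "version", "oid", "size"
    return fields

pointer = parse_lfs_pointer("exp_avg_sq.pt")  # reads the pointer, not the tensor
print(pointer["oid"], int(pointer["size"]))   # sha256:<hash> 9387
```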
lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md ADDED
@@ -0,0 +1,54 @@
+ # EusTrivia
+
+ ### Paper
+
+ Title: Latxa: An Open Language Model and Evaluation Suite for Basque
+
+ Abstract: https://arxiv.org/abs/2403.20266
+
+ EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3\% of the questions are elementary level (grades 3-6), while the rest are considered challenging. A significant portion of the questions focus specifically on the Basque Country, its language and culture. Each multiple-choice question contains two, three or four choices (3.84 on average) and a single correct answer. Five areas of knowledge are covered:
+
+ - **Humanities and Natural Sciences** (27.8%): This category encompasses questions about history, geography, biology, ecology and other social and natural sciences.
+ - **Leisure and Art** (24.5%): This category includes questions on sports and athletes, performative and plastic arts and artists, architecture, cultural events, and related topics.
+ - **Music** (16.0%): Here are grouped all the questions about music and musicians, both classical and contemporary.
+ - **Language and Literature** (17.1%): This category is concerned with all kinds of literature productions and writers, as well as metalinguistic questions (e.g., definitions, synonyms, and word usage).
+ - **Mathematics and ICT** (14.5%): This category covers mathematical problems and questions about ICT, as well as questions about people known for their contributions to these fields of knowledge.
+
+ Homepage: https://github.com/hitz-zentroa/latxa
+
+
+ ### Citation
+
+ ```
+ @misc{etxaniz2024latxa,
+     title={Latxa: An Open Language Model and Evaluation Suite for Basque},
+     author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
+     year={2024},
+     eprint={2403.20266},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ There are no groups.
+
+ #### Tasks
+
+ * `eus_trivia`: EusTrivia consists of 1,715 trivia questions from multiple online sources.
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/eus_trivia/eus_trivia.yaml ADDED
@@ -0,0 +1,16 @@
+ dataset_path: HiTZ/EusTrivia
+ dataset_name: default
+ task: eus_trivia
+ doc_to_text: !function utils.doc_to_text
+ doc_to_choice: !function utils.doc_to_choice
+ validation_split: null
+ test_split: test
+ fewshot_split: test
+ output_type: multiple_choice
+ doc_to_target: answer
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 0.0
lm-evaluation-harness/lm_eval/tasks/eus_trivia/utils.py ADDED
@@ -0,0 +1,41 @@
+ from typing import List
+
+
+ letters = ["A", "B", "C", "D"]
+
+
+ def doc_to_text(doc) -> str:
+     """
+     Converts a document to a formatted string.
+
+     Args:
+         doc (dict): A dictionary containing the document information.
+
+     Returns:
+         str: A formatted string containing the question and answer choices.
+     """
+     candidates = doc["candidates"]
+     num_choices = len(candidates)
+     if num_choices < 2:
+         raise ValueError("Invalid number of candidates")
+     choices = letters[:num_choices]
+     formatted_choices = "\n".join(
+         [f"{choice}: {candidates[i]}" for i, choice in enumerate(choices)]
+     )
+     return f"Galdera: {doc['question']}\n{formatted_choices}\nErantzuna:"
+
+
+ def doc_to_choice(doc) -> List[str]:
+     """
+     Returns the answer choices for a document.
+
+     Args:
+         doc (dict): A dictionary containing the document information.
+
+     Returns:
+         list: A list of strings containing the answer choices.
+     """
+     num_choices = len(doc["candidates"])
+     if num_choices < 2:
+         raise ValueError("Invalid number of candidates")
+     return letters[:num_choices]
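For reference, running a made-up document through the two helpers above yields a prompt in the "Galdera:/Erantzuna:" format the task expects. The example document below is invented for illustration, not drawn from HiTZ/EusTrivia, and assumes the snippet is run next to the utils.py shown above:

```python
# Hypothetical EusTrivia-style document; field names follow utils.py above.
from utils import doc_to_text, doc_to_choice  # run from the task directory

doc = {
    "question": "Zein da Euskal Herriko hiriburua?",
    "candidates": ["Bilbo", "Gasteiz", "Donostia"],
    "answer": 1,
}
print(doc_to_text(doc))
# Galdera: Zein da Euskal Herriko hiriburua?
# A: Bilbo
# B: Gasteiz
# C: Donostia
# Erantzuna:
print(doc_to_choice(doc))  # ['A', 'B', 'C']
```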
lm-evaluation-harness/lm_eval/tasks/indic_boolq/__pycache__/utils.cpython-310.pyc ADDED
Binary file (1.88 kB).
 
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: [LANG]
+ include: indic_mmlu_common_yaml
+ task: indic_mmlu_[LANG]
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_common_yaml ADDED
@@ -0,0 +1,21 @@
+ # This file will be included in the generated language-specific task configs.
+ # It doesn't have a yaml file extension as it is not meant to be imported directly
+ # by the harness.
+ group: Cognitive-Lab/Indic-MMLU
+ dataset_path: Cognitive-Lab/Indic-MMLU
+
+ output_type: multiple_choice
+ # training_split: train
+ # validation_split: validation
+ test_split: test
+
+ doc_to_text: "{{translated_question.strip()}}\nA. {{translated_choices[0]}}\nB. {{translated_choices[1]}}\nC. {{translated_choices[2]}}\nD. {{translated_choices[3]}}\nAnswer:"
+ doc_to_choice: ["A", "B", "C", "D"]
+ doc_to_target: answer
+
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
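The `[LANG]` placeholders in indic_mmlu.yaml above suggest that the per-language configs which follow (gu, kn, te) are generated from a template that includes this common file. The generation script itself is not part of this commit; a minimal sketch of how it could work, with the language list as an assumption, is:

```python
# Hypothetical generation script: expand the [LANG] placeholders of
# indic_mmlu.yaml into one task config per language code.
TEMPLATE = """dataset_name: {lang}
include: indic_mmlu_common_yaml
task: indic_mmlu_{lang}
"""

for lang in ["gu", "kn", "te"]:  # subset shown in this commit
    with open(f"indic_mmlu_{lang}.yaml", "w", encoding="utf-8") as f:
        f.write(TEMPLATE.format(lang=lang))
```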
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_gu.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: gu
+ include: indic_mmlu_common_yaml
+ task: indic_mmlu_gu
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_kn.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: kn
+ include: indic_mmlu_common_yaml
+ task: indic_mmlu_kn
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/indic_mmlu_te.yaml ADDED
@@ -0,0 +1,3 @@
+ dataset_name: te
+ include: indic_mmlu_common_yaml
+ task: indic_mmlu_te
lm-evaluation-harness/lm_eval/tasks/indic_mmlu/utils.py ADDED
@@ -0,0 +1,136 @@
+ from functools import partial
+
+
+ def convert_choice(choice):
+     return choice
+
+
+ def doc_to_text(doc, connector):
+     # Drop the period
+     conn = connector[doc["question"]]
+     return doc["premise"].strip()[:-1] + f" {conn}"
+
+
+ def doc_to_choice(doc):
+     return [convert_choice(doc["choice1"]), convert_choice(doc["choice2"])]
+
+
+ doc_to_text_hi = partial(
+     doc_to_text,
+     connector={
+         "cause": "कारण",
+         "effect": "परिणाम",
+     },
+ )
+
+ doc_to_text_mr = partial(
+     doc_to_text,
+     connector={
+         "cause": "कारण",
+         "effect": "परिणाम",
+     },
+ )
+
+ doc_to_text_as = partial(
+     doc_to_text,
+     connector={
+         "cause": "কাৰণ",
+         "effect": "প্ৰভাৱ",
+     },
+ )
+
+ doc_to_text_bn = partial(
+     doc_to_text,
+     connector={
+         "cause": "কারণ",
+         "effect": "প্রভাব",
+     },
+ )
+
+ doc_to_text_gu = partial(
+     doc_to_text,
+     connector={
+         "cause": "કારણ",
+         "effect": "અસર",
+     },
+ )
+
+ doc_to_text_kn = partial(
+     doc_to_text,
+     connector={
+         "cause": "ಕಾರಣ",
+         "effect": "ಪರಿಣಾಮ",
+     },
+ )
+
+ doc_to_text_mai = partial(
+     doc_to_text,
+     connector={
+         "cause": "कारण",
+         "effect": "प्रभाव",
+     },
+ )
+
+ doc_to_text_ml = partial(
+     doc_to_text,
+     connector={
+         "cause": "കാരണമാകുന്നു",
+         "effect": "ഫലം",
+     },
+ )
+
+ doc_to_text_ne = partial(
+     doc_to_text,
+     connector={
+         "cause": "कारण",
+         "effect": "असर",
+     },
+ )
+
+ doc_to_text_or = partial(
+     doc_to_text,
+     connector={
+         "cause": "କାରଣ",
+         "effect": "ପ୍ରଭାବ",
+     },
+ )
+
+ doc_to_text_sa = partial(
+     doc_to_text,
+     connector={
+         "cause": "निमित्तम्‌",
+         "effect": "परिणाम",
+     },
+ )
+
+ doc_to_text_sd = partial(
+     doc_to_text,
+     connector={
+         "cause": "سبب",
+         "effect": "اثر",
+     },
+ )
+
+ doc_to_text_ta = partial(
+     doc_to_text,
+     connector={
+         "cause": "காரணம்",
+         "effect": "விளைவு",
+     },
+ )
+
+ doc_to_text_te = partial(
+     doc_to_text,
+     connector={
+         "cause": "కారణం",
+         "effect": "ప్రభావం",
+     },
+ )
+
+ doc_to_text_ur = partial(
+     doc_to_text,
+     connector={
+         "cause": "وجہ",
+         "effect": "اثر",
+     },
+ )
lm-evaluation-harness/lm_eval/tasks/race/README.md ADDED
@@ -0,0 +1,62 @@
+ # RACE
+
+ ### Paper
+
+ Title: `RACE: Large-scale ReAding Comprehension Dataset From Examinations`
+
+ Abstract: https://arxiv.org/abs/1704.04683
+
+ RACE is a large-scale reading comprehension dataset with more than 28,000 passages
+ and nearly 100,000 questions. The dataset is collected from English examinations
+ in China, which are designed for middle school and high school students. The dataset
+ can be served as the training and test sets for machine comprehension.
+
+ Homepage: https://www.cs.cmu.edu/~glai1/data/race/
+
+
+ ### Citation
+
+ ```
+ @inproceedings{lai-etal-2017-race,
+     title = "{RACE}: Large-scale {R}e{A}ding Comprehension Dataset From Examinations",
+     author = "Lai, Guokun and
+       Xie, Qizhe and
+       Liu, Hanxiao and
+       Yang, Yiming and
+       Hovy, Eduard",
+     editor = "Palmer, Martha and
+       Hwa, Rebecca and
+       Riedel, Sebastian",
+     booktitle = "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
+     month = sep,
+     year = "2017",
+     address = "Copenhagen, Denmark",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/D17-1082",
+     doi = "10.18653/v1/D17-1082",
+     pages = "785--794"
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * Not part of a group yet.
+
+ #### Tasks
+
+ * `race`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/race/preprocess_race.py ADDED
@@ -0,0 +1,40 @@
+ import ast
+
+
+ def process_ast(string):
+     return ast.literal_eval(string)
+
+
+ def last_problem(doc):
+     return process_ast(doc["problems"])[-1]
+
+
+ def get_answer_option(problem):
+     letter_to_num = {"A": 0, "B": 1, "C": 2, "D": 3}
+     answer = letter_to_num[problem["answer"]]
+     return problem["options"][answer]
+
+
+ def doc_to_choice(doc):
+     problem = last_problem(doc)
+     choices = [problem["options"][i] for i in range(4)]
+     return choices
+
+
+ def doc_to_text(doc):
+     text = "Article: " + doc["article"] + "\n\n"
+     for problem in process_ast(doc["problems"])[:-1]:
+         if problem["question"][-6:] == " _ .":
+             text += problem["question"][-5:] + get_answer_option(problem) + "\n"
+         else:
+             question = "Question: " + problem["question"] + "\n"
+             answer = "Answer: " + get_answer_option(problem) + "\n"
+             text += question + answer
+     text += last_problem(doc)["question"]
+     return text
+
+
+ def doc_to_target(doc):
+     letter_to_num = {"A": 0, "B": 1, "C": 2, "D": 3}
+     answer = letter_to_num[last_problem(doc)["answer"]]
+     return answer
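To make the flow above concrete, here is a made-up RACE-style row pushed through the three helpers; the `problems` column is a string-encoded list, which is why `ast.literal_eval` is used. The document and the import style (running next to preprocess_race.py) are assumptions for illustration only:

```python
# Hypothetical RACE-style row; "problems" is a stringified list, as the
# ast.literal_eval call in preprocess_race.py expects.
from preprocess_race import doc_to_text, doc_to_target, doc_to_choice

doc = {
    "article": "Tom walked to school in the rain.",
    "problems": str([
        {"question": "How did Tom get to school?",
         "options": ["By bus", "On foot", "By bike", "By car"],
         "answer": "B"},
        {"question": "What was the weather like?",
         "options": ["Sunny", "Windy", "Rainy", "Snowy"],
         "answer": "C"},
    ]),
}
print(doc_to_text(doc))
# Article: Tom walked to school in the rain.
#
# Question: How did Tom get to school?
# Answer: On foot
# What was the weather like?
print(doc_to_target(doc))   # 2  (index of "Rainy")
print(doc_to_choice(doc))   # ['Sunny', 'Windy', 'Rainy', 'Snowy']
```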
lm-evaluation-harness/lm_eval/tasks/race/race.yaml ADDED
@@ -0,0 +1,16 @@
+ task: race
+ dataset_path: EleutherAI/race
+ dataset_name: high
+ output_type: multiple_choice
+ test_split: test
+ doc_to_text: !function preprocess_race.doc_to_text
+ doc_to_target: !function preprocess_race.doc_to_target
+ doc_to_choice: !function preprocess_race.doc_to_choice
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 2.0
+ dataset_kwargs:
+   trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/tmmluplus/README.md ADDED
@@ -0,0 +1,47 @@
+ # TMMLU+
+
+ ### Paper
+
+ Title: `An Improved Traditional Chinese Evaluation Suite for Foundation Model`
+
+ Abstract: `We present TMMLU+, a comprehensive dataset designed for the Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering dataset with 66 subjects from elementary to professional level. Compared to its predecessor, TMMLU, TMMLU+ is six times larger and boasts a more balanced subject distribution. We included benchmark results in TMMLU+ from closed-source models and 24 open-weight Chinese large language models of parameters ranging from 1.8B to 72B. Our findings reveal that Traditional Chinese models still trail behind their Simplified Chinese counterparts. Additionally, current large language models have yet to outperform human performance in average scores. We publicly release our dataset and the corresponding benchmark source code.`
+
+
+ Homepage: [https://huggingface.co/datasets/ikala/tmmluplus](https://huggingface.co/datasets/ikala/tmmluplus)
+
+
+ ### Citation
+
+ ```
+ @article{ikala2024improved,
+     title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
+     author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
+     journal={arXiv preprint arXiv:2403.01858},
+     year={2024}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `tmmluplus`: `The dataset comprises 22,690 multiple-choice questions from 66 subjects ranging from primary to professional level.`
+
+ #### Tasks
+
+ The following tasks evaluate subjects in the TMMLU+ dataset using loglikelihood-based multiple-choice scoring:
+
+ * `tmmluplus_{subject_english}`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [x] Is the task an existing benchmark in the literature?
+ * [x] Have you referenced the original paper that introduced the task?
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [x] Is the "Main" variant of this task clearly denoted?
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
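The README above notes that the per-subject tasks use loglikelihood-based multiple-choice scoring: the model scores each answer letter as a continuation of the prompt, and the highest-scoring choice is taken as the prediction. A toy sketch of that selection rule follows; the `loglikelihood` callable is a stand-in for a model's log-probability, not the harness API:

```python
# Toy illustration of loglikelihood-based multiple-choice scoring.
# `loglikelihood` stands in for a model's log P(choice | prompt).
from typing import Callable, Sequence

def pick_choice(prompt: str,
                choices: Sequence[str],
                loglikelihood: Callable[[str, str], float]) -> int:
    scores = [loglikelihood(prompt, c) for c in choices]
    return max(range(len(choices)), key=scores.__getitem__)

# Accuracy ("acc") is then the fraction of questions where the picked
# index equals the gold answer index.
```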
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_accounting.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "accounting"
+ "description": "以下為會計學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_accounting"
+ "task_alias": "accounting"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_advance_chemistry.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "advance_chemistry"
+ "description": "以下為化學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_advance_chemistry"
+ "task_alias": "advance chemistry"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_auditing.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "auditing"
+ "description": "以下為審計學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_auditing"
+ "task_alias": "auditing"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_basic_medical_science.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "basic_medical_science"
+ "description": "以下為基礎醫學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_basic_medical_science"
+ "task_alias": "basic medical science"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_business_management.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "business_management"
+ "description": "以下為企業管理的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_business_management"
+ "task_alias": "business management"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_culinary_skills.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "culinary_skills"
+ "description": "以下為餐旅的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_culinary_skills"
+ "task_alias": "culinary skills"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_economics.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "economics"
+ "description": "以下為經濟學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_economics"
+ "task_alias": "economics"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_educational_psychology.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "educational_psychology"
+ "description": "以下為教育心理的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_educational_psychology"
+ "task_alias": "educational psychology"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_finance_banking.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "finance_banking"
+ "description": "以下為金融與法規的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_finance_banking"
+ "task_alias": "finance banking"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_geography_of_taiwan.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "geography_of_taiwan"
+ "description": "以下為台灣地理的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_geography_of_taiwan"
+ "task_alias": "geography of taiwan"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_chemistry.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "junior_chemistry"
+ "description": "以下為國中理化的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_junior_chemistry"
+ "task_alias": "junior chemistry"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_chinese_exam.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "junior_chinese_exam"
+ "description": "以下為國中會考基測國文的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_junior_chinese_exam"
+ "task_alias": "junior chinese exam"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_junior_math_exam.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "junior_math_exam"
+ "description": "以下為國中會考基測數學科的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_junior_math_exam"
+ "task_alias": "junior math exam"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_mechanical.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "mechanical"
+ "description": "以下為機械與機電概論的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_mechanical"
+ "task_alias": "mechanical"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_occupational_therapy_for_psychological_disorders.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "occupational_therapy_for_psychological_disorders"
+ "description": "以下為心理障礙職能治療學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_occupational_therapy_for_psychological_disorders"
+ "task_alias": "occupational therapy for psychological disorders"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_pharmacy.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "pharmacy"
+ "description": "以下為藥劑學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_pharmacy"
+ "task_alias": "pharmacy"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_secondary_physics.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "secondary_physics"
+ "description": "以下為高中物理的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_secondary_physics"
+ "task_alias": "secondary physics"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_statistics_and_machine_learning.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "statistics_and_machine_learning"
+ "description": "以下為統計與機器學習的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_statistics_and_machine_learning"
+ "task_alias": "statistics and machine learning"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_trade.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "trade"
+ "description": "以下為貿易的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_trade"
+ "task_alias": "trade"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_trust_practice.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "trust_practice"
+ "description": "以下為信託實務的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_humanities"
+ "group_alias": "humanities"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_trust_practice"
+ "task_alias": "trust practice"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_ttqav2.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "ttqav2"
+ "description": "以下為台灣在地用語的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_social_sciences"
+ "group_alias": "social sciences"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_ttqav2"
+ "task_alias": "ttqav2"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_design.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "tve_design"
+ "description": "以下為統測 設計的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_tve_design"
+ "task_alias": "tve design"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_mathematics.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "tve_mathematics"
+ "description": "以下為統測數學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_tve_mathematics"
+ "task_alias": "tve mathematics"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_tve_natural_sciences.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "tve_natural_sciences"
+ "description": "以下為統測自然科的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_STEM"
+ "group_alias": "STEM"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_tve_natural_sciences"
+ "task_alias": "tve natural sciences"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/default/tmmluplus_veterinary_pathology.yaml ADDED
@@ -0,0 +1,7 @@
+ "dataset_name": "veterinary_pathology"
+ "description": "以下為獸醫病理學的單選題,請提供正確答案的選項。\n\n"
+ "group": "tmmluplus_other"
+ "group_alias": "other"
+ "include": "_default_template_yaml"
+ "task": "tmmluplus_veterinary_pathology"
+ "task_alias": "veterinary pathology"
lm-evaluation-harness/lm_eval/tasks/tmmluplus/subject.tsv ADDED
@@ -0,0 +1,68 @@
+ subject name category
+ dentistry 牙醫學 health
+ traditional_chinese_medicine_clinical_medicine 中醫臨床醫學 health
+ clinical_psychology 臨床心理學 psychology
+ technical 技術工相關 other
+ culinary_skills 餐旅 other
+ mechanical 機械與機電概論 other
+ logic_reasoning 邏輯思維 other
+ real_estate 房地產 other
+ general_principles_of_law 法學大意 law
+ finance_banking 金融與法規 business
+ anti_money_laundering 洗錢防制 law
+ ttqav2 台灣在地用語 culture
+ marketing_management 行銷管理 other
+ business_management 企業管理 other
+ organic_chemistry 有機化學 chemistry
+ advance_chemistry 化學 chemistry
+ physics 物理 physics
+ secondary_physics 高中物理 physics
+ human_behavior 人類行為與社會 psychology
+ national_protection 軍事 politics
+ jce_humanities 指考人文科目 philosophy
+ linear_algebra 線代 math
+ politic_science 政治 politics
+ agriculture 農業 other
+ official_document_management 機關文書 other
+ financial_analysis 財務分析 business
+ pharmacy 藥劑學 biology
+ educational_psychology 教育心理 psychology
+ statistics_and_machine_learning 統計與機器學習 engineering
+ management_accounting 管理會計 business
+ introduction_to_law 法律概論 law
+ computer_science 資訊工程 computer science
+ veterinary_pathology 獸醫病理學 health
+ accounting 會計學 business
+ fire_science 火災學 other
+ optometry 視光學 other
+ insurance_studies 保險學 other
+ pharmacology 藥理學 health
+ taxation 稅務 law
+ education_(profession_level) 教育專業 education
+ economics 經濟學 economics
+ veterinary_pharmacology 獸醫藥理學 health
+ nautical_science 航海 other
+ occupational_therapy_for_psychological_disorders 心理障礙職能治療學 psychology
+ trust_practice 信託實務 law
+ geography_of_taiwan 台灣地理 geography
+ physical_education 體育 education
+ auditing 審計學 business
+ administrative_law 行政法 law
+ basic_medical_science 基礎醫學 biology
+ macroeconomics 總經 economics
+ trade 貿易 business
+ chinese_language_and_literature 國文 culture
+ tve_design 統測_設計 other
+ junior_science_exam 國中會考基測自然科 biology
+ junior_math_exam 國中會考基測數學科 math
+ junior_chinese_exam 國中會考基測國文 culture
+ junior_social_studies 國中會考基測社會科 other
+ tve_mathematics 統測數學 math
+ tve_chinese_language 統測國文 culture
+ tve_natural_sciences 統測自然科 biology
+ junior_chemistry 國中理化 chemistry
+ music 音樂科 other
+ education 教育常識 education
+ three_principles_of_people 三民主義 culture
+ taiwanese_hokkien 閩南語 culture
+ engineering_math 工程數學 math
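subject.tsv maps each dataset subset to its Chinese display name and a category. Since the per-subject YAML files earlier in this commit all follow one pattern, a script along these lines could generate them from the table; the script and its category-to-group mapping are assumptions for illustration, not part of the commit:

```python
# Hypothetical helper: emit one tmmluplus_<subject>.yaml per row of
# subject.tsv, in the same shape as the configs shown above.
import csv

# Assumed mapping from subject.tsv categories to harness groups;
# anything not listed falls back to "other".
CATEGORY_TO_GROUP = {
    "math": "STEM", "physics": "STEM", "chemistry": "STEM",
    "biology": "STEM", "engineering": "STEM",
    "law": "humanities", "philosophy": "humanities",
    "economics": "social_sciences", "psychology": "social_sciences",
    "geography": "social_sciences", "politics": "social_sciences",
    "culture": "social_sciences",
}

with open("subject.tsv", encoding="utf-8") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        group = CATEGORY_TO_GROUP.get(row["category"], "other")
        with open(f"tmmluplus_{row['subject']}.yaml", "w", encoding="utf-8") as out:
            out.write(
                f'"dataset_name": "{row["subject"]}"\n'
                f'"description": "以下為{row["name"].replace("_", " ")}的單選題,請提供正確答案的選項。\\n\\n"\n'
                f'"group": "tmmluplus_{group}"\n'
                f'"group_alias": "{group.replace("_", " ")}"\n'
                f'"include": "_default_template_yaml"\n'
                f'"task": "tmmluplus_{row["subject"]}"\n'
                f'"task_alias": "{row["subject"].replace("_", " ")}"\n'
            )
```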
lm-evaluation-harness/lm_eval/tasks/wmdp/README.md ADDED
@@ -0,0 +1,50 @@
+ # WMDP
+
+ ### Paper
+
+ Title: `The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning`
+
+ Abstract: `https://arxiv.org/abs/2403.03218`
+
+ `The Weapons of Mass Destruction Proxy (WMDP) benchmark is a dataset of 4,157 multiple-choice questions surrounding hazardous knowledge in biosecurity, cybersecurity, and chemical security. WMDP serves as both a proxy evaluation for hazardous knowledge in large language models (LLMs) and a benchmark for unlearning methods to remove such knowledge.`
+
+ Homepage: https://wmdp.ai
+
+
+ ### Citation
+
+ ```
+ @misc{li2024wmdp,
+     title={The WMDP Benchmark: Measuring and Reducing Malicious Use With Unlearning},
+     author={Nathaniel Li and Alexander Pan and Anjali Gopal and Summer Yue and Daniel Berrios and Alice Gatti and Justin D. Li and Ann-Kathrin Dombrowski and Shashwat Goel and Long Phan and Gabriel Mukobi and Nathan Helm-Burger and Rassin Lababidi and Lennart Justen and Andrew B. Liu and Michael Chen and Isabelle Barrass and Oliver Zhang and Xiaoyuan Zhu and Rishub Tamirisa and Bhrugu Bharathi and Adam Khoja and Zhenqi Zhao and Ariel Herbert-Voss and Cort B. Breuer and Andy Zou and Mantas Mazeika and Zifan Wang and Palash Oswal and Weiran Liu and Adam A. Hunt and Justin Tienken-Harder and Kevin Y. Shih and Kemper Talley and John Guan and Russell Kaplan and Ian Steneker and David Campbell and Brad Jokubaitis and Alex Levinson and Jean Wang and William Qian and Kallol Krishna Karmakar and Steven Basart and Stephen Fitz and Mindy Levine and Ponnurangam Kumaraguru and Uday Tupakula and Vijay Varadharajan and Yan Shoshitaishvili and Jimmy Ba and Kevin M. Esvelt and Alexandr Wang and Dan Hendrycks},
+     year={2024},
+     eprint={2403.03218},
+     archivePrefix={arXiv},
+     primaryClass={cs.LG}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `wmdp`: All 4,157 multiple-choice questions in biosecurity, cybersecurity, and chemical security
+
+ #### Tasks
+
+ * `wmdp_bio`: 1,520 multiple-choice questions in biosecurity
+ * `wmdp_cyber`: 2,225 multiple-choice questions in cybersecurity
+ * `wmdp_chemistry`: 412 multiple-choice questions in chemical security
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [x] Is the task an existing benchmark in the literature?
+ * [x] Have you referenced the original paper that introduced the task?
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/wmdp/_default_template_yaml ADDED
@@ -0,0 +1,16 @@
+ dataset_path: cais/wmdp
+ group: wmdp
+ test_split: test
+ training_split: null
+ validation_split: null
+ num_fewshot: 0
+ output_type: multiple_choice
+ doc_to_text: "{{question.strip()}}\nA. {{choices[0]}}\nB. {{choices[1]}}\nC. {{choices[2]}}\nD. {{choices[3]}}\nAnswer:"
+ doc_to_choice: ["A", "B", "C", "D"]
+ doc_to_target: answer
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 0
lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_bio.yaml ADDED
@@ -0,0 +1,4 @@
+ "task": "wmdp_bio"
+ "dataset_name": "wmdp-bio"
+ "include": "_default_template_yaml"
+ "description": "The following are multiple choice questions (with answers) about biology.\n\n"
lm-evaluation-harness/lm_eval/tasks/wmdp/wmdp_cyber.yaml ADDED
@@ -0,0 +1,4 @@
+ "task": "wmdp_cyber"
+ "dataset_name": "wmdp-cyber"
+ "include": "_default_template_yaml"
+ "description": "The following are multiple choice questions (with answers) about cybersecurity.\n\n"
venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Almaty ADDED
Binary file (983 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Baku ADDED
Binary file (1.21 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Bangkok ADDED
Binary file (185 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Beirut ADDED
Binary file (2.15 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Asia/Bishkek ADDED
Binary file (969 Bytes).