applied-ai-018 committed
Commit a116de7 · verified · 1 Parent(s): a434e2d

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. ckpts/universal/global_step20/zero/16.post_attention_layernorm.weight/exp_avg_sq.pt +3 -0
  2. ckpts/universal/global_step20/zero/16.post_attention_layernorm.weight/fp32.pt +3 -0
  3. lm-evaluation-harness/lm_eval/tasks/arithmetic/arithmetic_2dm.yaml +5 -0
  4. lm-evaluation-harness/lm_eval/tasks/arithmetic/arithmetic_3ds.yaml +5 -0
  5. lm-evaluation-harness/lm_eval/tasks/drop/README.md +53 -0
  6. lm-evaluation-harness/lm_eval/tasks/drop/default.yaml +26 -0
  7. lm-evaluation-harness/lm_eval/tasks/drop/utils.py +204 -0
  8. lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md +55 -0
  9. lm-evaluation-harness/lm_eval/tasks/eq_bench/default.yaml +20 -0
  10. lm-evaluation-harness/lm_eval/tasks/eq_bench/utils.py +54 -0
  11. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge.yaml +9 -0
  12. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_gu.yaml +9 -0
  13. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_kn.yaml +9 -0
  14. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_ml.yaml +9 -0
  15. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_ta.yaml +9 -0
  16. lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_te.yaml +9 -0
  17. lm-evaluation-harness/lm_eval/tasks/kobest/README.md +37 -0
  18. lm-evaluation-harness/lm_eval/tasks/kobest/kobest_boolq.yaml +23 -0
  19. lm-evaluation-harness/lm_eval/tasks/kobest/kobest_copa.yaml +23 -0
  20. lm-evaluation-harness/lm_eval/tasks/kobest/kobest_hellaswag.yaml +27 -0
  21. lm-evaluation-harness/lm_eval/tasks/kobest/kobest_sentineg.yaml +25 -0
  22. lm-evaluation-harness/lm_eval/tasks/kobest/kobest_wic.yaml +25 -0
  23. lm-evaluation-harness/lm_eval/tasks/kobest/utils.py +48 -0
  24. lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md +53 -0
  25. lm-evaluation-harness/lm_eval/tasks/mc_taco/default.yaml +15 -0
  26. lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md +70 -0
  27. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_algebra.yaml +27 -0
  28. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_counting_and_prob.yaml +3 -0
  29. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_geometry.yaml +3 -0
  30. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_intermediate_algebra.yaml +3 -0
  31. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_prealgebra.yaml +3 -0
  32. lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_precalc.yaml +3 -0
  33. lm-evaluation-harness/lm_eval/tasks/minerva_math/utils.py +309 -0
  34. lm-evaluation-harness/lm_eval/tasks/polemo2/README.md +57 -0
  35. lm-evaluation-harness/lm_eval/tasks/polemo2/polemo2_in.yaml +46 -0
  36. lm-evaluation-harness/lm_eval/tasks/polemo2/polemo2_out.yaml +4 -0
  37. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Ensenada +0 -0
  38. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Guayaquil +0 -0
  39. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Knox_IN +0 -0
  40. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Louisville +0 -0
  41. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Montserrat +0 -0
  42. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Nuuk +0 -0
  43. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Port-au-Prince +0 -0
  44. venv/lib/python3.10/site-packages/pytz/zoneinfo/Brazil/East +0 -0
  45. venv/lib/python3.10/site-packages/pytz/zoneinfo/Chile/Continental +0 -0
  46. venv/lib/python3.10/site-packages/pytz/zoneinfo/Chile/EasterIsland +0 -0
  47. venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT +0 -0
  48. venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+0 +0 -0
  49. venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+4 +0 -0
  50. venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+6 +0 -0
ckpts/universal/global_step20/zero/16.post_attention_layernorm.weight/exp_avg_sq.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:145e1a59b6b8523dc36b42d39827558c9c0d121a5c139f9bafbf0d57172cea25
3
+ size 9387
ckpts/universal/global_step20/zero/16.post_attention_layernorm.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a516be8c3ce27df2b9ff05fc136a56996835fc85a46658f6d1eaebcb4bb6e88f
3
+ size 9293
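
The two checkpoint files above are committed as Git LFS pointer files rather than raw tensors: each pointer records only the spec version, a sha256 object ID, and the payload size. As a minimal sketch (the `parse_lfs_pointer` helper is hypothetical, not part of this commit), such a pointer can be split into its fields like this:

```python
# Minimal sketch: parse a Git LFS pointer file into its fields.
# `parse_lfs_pointer` is a hypothetical helper, not part of this commit.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:145e1a59b6b8523dc36b42d39827558c9c0d121a5c139f9bafbf0d57172cea25
size 9387"""

parsed = parse_lfs_pointer(pointer)
print(parsed["oid"])        # sha256:145e1a59...
print(int(parsed["size"]))  # 9387 (bytes)
```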
lm-evaluation-harness/lm_eval/tasks/arithmetic/arithmetic_2dm.yaml ADDED
@@ -0,0 +1,5 @@
1
+ include: arithmetic_1dc.yaml
2
+ task: arithmetic_2dm
3
+ dataset_name: arithmetic_2dm
4
+ dataset_kwargs:
5
+ trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/arithmetic/arithmetic_3ds.yaml ADDED
@@ -0,0 +1,5 @@
1
+ include: arithmetic_1dc.yaml
2
+ task: arithmetic_3ds
3
+ dataset_name: arithmetic_3ds
4
+ dataset_kwargs:
5
+ trust_remote_code: true
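
Both arithmetic variants above derive from `arithmetic_1dc.yaml` through the harness's `include:` key, overriding only the task and dataset names. As a rough sketch of the idea (the base values below are invented placeholders, not the actual contents of `arithmetic_1dc.yaml`, and this is not the harness's real config loader), an include-style merge amounts to overlaying the child mapping on the base one:

```python
# Illustrative sketch of an `include:`-style YAML merge: keys in the child
# config override keys inherited from the included base. The base values
# here are invented placeholders, not the real arithmetic_1dc.yaml.
import yaml

base_yaml = """
task: arithmetic_1dc
dataset_name: arithmetic_1dc
some_shared_setting: value_from_base
"""

child_yaml = """
include: arithmetic_1dc.yaml
task: arithmetic_2dm
dataset_name: arithmetic_2dm
dataset_kwargs:
  trust_remote_code: true
"""

base = yaml.safe_load(base_yaml)
child = yaml.safe_load(child_yaml)
child.pop("include")            # the include key itself is not part of the task config
merged = {**base, **child}      # child keys win
print(merged["task"])                 # arithmetic_2dm
print(merged["some_shared_setting"])  # value_from_base (inherited)
```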
lm-evaluation-harness/lm_eval/tasks/drop/README.md ADDED
@@ -0,0 +1,53 @@
1
+ # DROP
2
+
3
+ ### Paper
4
+
5
+ Title: `DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs`
6
+
7
+ Abstract: https://aclanthology.org/attachments/N19-1246.Supplementary.pdf
8
+
9
+ DROP is a QA dataset which tests comprehensive understanding of paragraphs. In
10
+ this crowdsourced, adversarially-created, 96k question-answering benchmark, a
11
+ system must resolve multiple references in a question, map them onto a paragraph,
12
+ and perform discrete operations over them (such as addition, counting, or sorting).
13
+
14
+ Homepage: https://allenai.org/data/drop
15
+
16
+ Acknowledgement: This implementation is based on the official evaluation for `DROP`:
17
+ https://github.com/allenai/allennlp-reading-comprehension/blob/master/allennlp_rc/eval/drop_eval.py
18
+
19
+ ### Citation
20
+
21
+ ```
22
+ @misc{dua2019drop,
23
+ title={DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
24
+ author={Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
25
+ year={2019},
26
+ eprint={1903.00161},
27
+ archivePrefix={arXiv},
28
+ primaryClass={cs.CL}
29
+ }
30
+ ```
31
+
32
+ ### Groups and Tasks
33
+
34
+ #### Groups
35
+
36
+ * Not part of a group yet.
37
+
38
+ #### Tasks
39
+
40
+ * `drop`
41
+
42
+ ### Checklist
43
+
44
+ For adding novel benchmarks/datasets to the library:
45
+ * [ ] Is the task an existing benchmark in the literature?
46
+ * [ ] Have you referenced the original paper that introduced the task?
47
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
48
+
49
+
50
+ If other tasks on this dataset are already supported:
51
+ * [ ] Is the "Main" variant of this task clearly denoted?
52
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
53
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/drop/default.yaml ADDED
@@ -0,0 +1,26 @@
1
+ task: drop
2
+ dataset_path: EleutherAI/drop
3
+ output_type: generate_until
4
+ training_split: train
5
+ validation_split: validation
6
+ process_docs: !function utils.process_docs
7
+ doc_to_text: "{{passage}} {{question}}"
8
+ doc_to_target: "{{ answer|join(',')}}"
9
+ target_delimiter: ""
10
+ process_results: !function utils.process_results
11
+ should_decontaminate: true
12
+ doc_to_decontamination_query: "{{passage}} {{question}}"
13
+ generation_kwargs:
14
+ until:
15
+ - "."
16
+ metric_list:
17
+ - metric: em
18
+ aggregation: mean
19
+ higher_is_better: true
20
+ - metric: f1
21
+ aggregation: mean
22
+ higher_is_better: true
23
+ metadata:
24
+ version: 3.0
25
+ dataset_kwargs:
26
+ trust_remote_code: true
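
In the DROP config above, `doc_to_text` and `doc_to_target` are Jinja2 templates rendered against each processed document. The sketch below shows that rendering on an invented document; for simplicity it supplies `answer` as a plain list of strings so the `join(',')` filter has something to join, whereas the real DROP `answer` field is richer (see `utils.get_answers` further down).

```python
# Sketch: rendering the DROP templates from default.yaml with Jinja2.
# The document is invented, and `answer` is simplified to a list of strings.
from jinja2 import Template

doc = {
    "passage": "The Bears scored 21 points in the first half and 14 in the second.",
    "question": "How many points did the Bears score in total?",
    "answer": ["35"],
}

doc_to_text = Template("{{passage}} {{question}}")
doc_to_target = Template("{{ answer|join(',')}}")

print(doc_to_text.render(**doc))
print(doc_to_target.render(**doc))  # "35"
```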
lm-evaluation-harness/lm_eval/tasks/drop/utils.py ADDED
@@ -0,0 +1,204 @@
1
+ import re
2
+ import string
3
+
4
+ import numpy as np
5
+ from scipy.optimize import linear_sum_assignment
6
+
7
+
8
+ _ARTICLES = re.compile(r"\b(a|an|the)\b", re.UNICODE)
9
+
10
+
11
+ def process_docs(dataset):
12
+ def _process(doc):
13
+ return {
14
+ "id": doc["query_id"],
15
+ "passage": doc["passage"],
16
+ "question": doc["question"],
17
+ "answers": get_answers(doc),
18
+ }
19
+
20
+ return dataset.map(_process)
21
+
22
+
23
+ def get_answers(doc):
24
+ def _flatten_validated_answers(validated_answers):
25
+ """Flattens a dict of lists of validated answers.
26
+ {"number": ['1', '8'], ...}
27
+ -> [{"number": ['1'], ...}, {"number": ['8'], ...}]
28
+ """
29
+ valid_answers = []
30
+ for i in range(len(validated_answers["number"])):
31
+ valid_answers.append(
32
+ {
33
+ "number": validated_answers["number"][i],
34
+ "date": validated_answers["date"][i],
35
+ "spans": validated_answers["spans"][i],
36
+ }
37
+ )
38
+ return valid_answers
39
+
40
+ answers = []
41
+ answers_set = set()
42
+ candidates = [doc["answer"]] + _flatten_validated_answers(doc["validated_answers"])
43
+ for candidate in candidates:
44
+ answer = parse_answer(candidate)
45
+ if answer in answers_set:
46
+ continue
47
+ answers_set.add(answer)
48
+ answers.append(answer)
49
+ return answers
50
+
51
+
52
+ def parse_answer(answer):
53
+ # NOTE: Everything is returned as a tuple for uniformity and hashability.
54
+ if answer["number"] != "":
55
+ return (str(answer["number"]),)
56
+ if answer["spans"] != []:
57
+ return tuple(answer["spans"])
58
+ return (
59
+ " ".join(
60
+ [answer["date"]["day"], answer["date"]["month"], answer["date"]["year"]]
61
+ ).strip(),
62
+ )
63
+
64
+
65
+ def process_results(doc, results):
66
+ preds, golds = results, doc["answers"]
67
+ max_em = 0
68
+ max_f1 = 0
69
+ for gold_answer in golds:
70
+ exact_match, f1_score = get_metrics(preds, gold_answer)
71
+ if gold_answer[0].strip():
72
+ max_em = max(max_em, exact_match)
73
+ max_f1 = max(max_f1, f1_score)
74
+ return {"em": max_em, "f1": max_f1}
75
+
76
+
77
+ def get_metrics(predicted, gold):
78
+ """
79
+ Takes a predicted answer and a gold answer (that are both either a string or a list of
80
+ strings), and returns exact match and the DROP F1 metric for the prediction. If you are
81
+ writing a script for evaluating objects in memory (say, the output of predictions during
82
+ validation, or while training), this is the function you want to call, after using
83
+ :func:`answer_json_to_strings` when reading the gold answer from the released data file.
84
+ """
85
+ predicted_bags = _answer_to_bags(predicted)
86
+ gold_bags = _answer_to_bags(gold)
87
+
88
+ if set(predicted_bags[0]) == set(gold_bags[0]) and len(predicted_bags[0]) == len(
89
+ gold_bags[0]
90
+ ):
91
+ exact_match = 1.0
92
+ else:
93
+ exact_match = 0.0
94
+
95
+ f1_per_bag = _align_bags(predicted_bags[1], gold_bags[1])
96
+ f1 = np.mean(f1_per_bag)
97
+ f1 = round(f1, 2)
98
+ return exact_match, f1
99
+
100
+
101
+ def _answer_to_bags(answer):
102
+ if isinstance(answer, (list, tuple)):
103
+ raw_spans = answer
104
+ else:
105
+ raw_spans = [answer]
106
+ normalized_spans = []
107
+ token_bags = []
108
+ for raw_span in raw_spans:
109
+ normalized_span = _normalize(raw_span)
110
+ normalized_spans.append(normalized_span)
111
+ token_bags.append(set(normalized_span.split()))
112
+ return normalized_spans, token_bags
113
+
114
+
115
+ def _align_bags(predicted, gold):
116
+ """
117
+ Takes gold and predicted answer sets and first finds the optimal 1-1 alignment
118
+ between them and gets maximum metric values over all the answers.
119
+ """
120
+ scores = np.zeros([len(gold), len(predicted)])
121
+ for gold_index, gold_item in enumerate(gold):
122
+ for pred_index, pred_item in enumerate(predicted):
123
+ if _match_numbers_if_present(gold_item, pred_item):
124
+ scores[gold_index, pred_index] = _compute_f1(pred_item, gold_item)
125
+ row_ind, col_ind = linear_sum_assignment(-scores)
126
+
127
+ max_scores = np.zeros([max(len(gold), len(predicted))])
128
+ for row, column in zip(row_ind, col_ind):
129
+ max_scores[row] = max(max_scores[row], scores[row, column])
130
+ return max_scores
131
+
132
+
133
+ def _compute_f1(predicted_bag, gold_bag):
134
+ intersection = len(gold_bag.intersection(predicted_bag))
135
+ if not predicted_bag:
136
+ precision = 1.0
137
+ else:
138
+ precision = intersection / float(len(predicted_bag))
139
+ if not gold_bag:
140
+ recall = 1.0
141
+ else:
142
+ recall = intersection / float(len(gold_bag))
143
+ f1 = (
144
+ (2 * precision * recall) / (precision + recall)
145
+ if not (precision == 0.0 and recall == 0.0)
146
+ else 0.0
147
+ )
148
+ return f1
149
+
150
+
151
+ def _match_numbers_if_present(gold_bag, predicted_bag):
152
+ gold_numbers = set()
153
+ predicted_numbers = set()
154
+ for word in gold_bag:
155
+ if _is_number(word):
156
+ gold_numbers.add(word)
157
+ for word in predicted_bag:
158
+ if _is_number(word):
159
+ predicted_numbers.add(word)
160
+ if (not gold_numbers) or gold_numbers.intersection(predicted_numbers):
161
+ return True
162
+ return False
163
+
164
+
165
+ def _is_number(text):
166
+ try:
167
+ float(text)
168
+ return True
169
+ except ValueError:
170
+ return False
171
+
172
+
173
+ def _remove_articles(text):
174
+ return _ARTICLES.sub(" ", text)
175
+
176
+
177
+ def _white_space_fix(text):
178
+ return " ".join(text.split())
179
+
180
+
181
+ def _remove_punc(text):
182
+ exclude = set(string.punctuation)
183
+ if not _is_number(text):
184
+ return "".join(ch for ch in text if ch not in exclude)
185
+ else:
186
+ return text
187
+
188
+
189
+ def _fix_number(text):
190
+ return str(float(text)) if _is_number(text) else text
191
+
192
+
193
+ def _tokenize(text):
194
+ return re.split(" |-", text)
195
+
196
+
197
+ def _normalize(answer):
198
+ tokens = [
199
+ _white_space_fix(_remove_articles(_fix_number(_remove_punc(token.lower()))))
200
+ for token in _tokenize(answer)
201
+ ]
202
+ tokens = [token for token in tokens if token.strip()]
203
+ normalized = " ".join(tokens).strip()
204
+ return normalized
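
To make the DROP scoring above concrete, the snippet below calls `get_metrics` and `_normalize` on toy inputs (the strings are invented); it assumes the module is importable as `lm_eval.tasks.drop.utils` from an installed harness.

```python
# Usage sketch for the DROP scoring helpers defined above.
# The example strings are invented; only the function behaviour is real.
from lm_eval.tasks.drop.utils import get_metrics, _normalize

# Exact string match after normalization -> (EM 1.0, F1 1.0)
print(get_metrics("The Chicago Bears", ("chicago bears",)))

# Partial token overlap -> (EM 0.0, F1 0.4)
print(get_metrics("Bears and Packers", ("chicago bears",)))

# Lowercasing, article removal, punctuation stripping, hyphen splitting
print(_normalize("The Chicago-Bears!"))  # "chicago bears"
```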
lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md ADDED
@@ -0,0 +1,55 @@
1
+ # EQ-Bench
2
+
3
+ Title: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models`
4
+
5
+ Abstract: https://arxiv.org/abs/2312.06281
6
+
7
+ EQ-Bench is a benchmark for language models designed to assess emotional intelligence.
8
+
9
+ Why emotional intelligence? One reason is that it represents a subset of abilities that are important for the user experience, and which isn't explicitly tested by other benchmarks. Another reason is that it's not trivial to improve scores by fine tuning for the benchmark, which makes it harder to "game" the leaderboard.
10
+
11
+ EQ-Bench is a little different from traditional psychometric tests. It uses a specific question format, in which the subject has to read a dialogue and then rate the intensity of possible emotional responses of one of the characters. Every question is interpretative and assesses the ability to predict the magnitude of the 4 presented emotions. The test is graded without the need for a judge (so there is no length bias). It's cheap to run (only 171 questions), and produces results that correlate strongly with human preference (Arena ELO) and multi-domain benchmarks like MMLU.
12
+
13
+ Homepage: https://eqbench.com/
14
+
15
+
16
+ NOTE: There are some key differences between the lm-evaluation-harness version and the implementation described in the EQ-Bench paper (These have been OK'd by the author):
17
+
18
+ - The lm-eval version uses the EQ-Bench v2 test set (171 questions) and score calculation. It does not incorporate the revision part of the prompt, as per v2.1 (https://github.com/EQ-bench/EQ-Bench)
19
+ - No retries in lm-eval version (EQ-Bench pipeline retries with successively higher temps if it encounters unparseable answers)
20
+ - In the original implementation, unparseable answers are excluded from the final score, and 83% of answers have to be parseable or a fail is returned. The lm-eval version instead assigns 0 to unparsable answers and has no fail criteria. So for lower performing models, there may be differences with the EQ-Bench leaderboard.
21
+
22
+
23
+ ### Citation
24
+
25
+ ```bibtex
26
+ @misc{paech2023eqbench,
27
+ title={EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models},
28
+ author={Samuel J. Paech},
29
+ year={2023},
30
+ eprint={2312.06281},
31
+ archivePrefix={arXiv},
32
+ primaryClass={cs.CL}
33
+ }
34
+ ```
35
+
36
+ ### Groups and Tasks
37
+
38
+ #### Groups
39
+
40
+ * Not part of a group yet
41
+
42
+ #### Tasks
43
+
44
+ * `eq_bench`
45
+
46
+ ### Checklist
47
+
48
+ * [x] Is the task an existing benchmark in the literature?
49
+ * [x] Have you referenced the original paper that introduced the task?
50
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
51
+
52
+ If other tasks on this dataset are already supported:
53
+ * [ ] Is the "Main" variant of this task clearly denoted?
54
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
55
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/eq_bench/default.yaml ADDED
@@ -0,0 +1,20 @@
1
+ task: eq_bench
2
+ dataset_path: pbevan11/EQ-Bench
3
+ output_type: generate_until
4
+ validation_split: validation
5
+ doc_to_text: prompt
6
+ doc_to_target: reference_answer_fullscale
7
+ process_results: !function utils.calculate_score_fullscale
8
+ generation_kwargs:
9
+ do_sample: false
10
+ temperature: 0.0
11
+ max_gen_toks: 80
12
+ metric_list:
13
+ - metric: eqbench
14
+ aggregation: mean
15
+ higher_is_better: true
16
+ - metric: percent_parseable
17
+ aggregation: mean
18
+ higher_is_better: true
19
+ metadata:
20
+ version: 2.1
lm-evaluation-harness/lm_eval/tasks/eq_bench/utils.py ADDED
@@ -0,0 +1,54 @@
1
+ import math
2
+ import re
3
+
4
+
5
+ def calculate_score_fullscale(docs, results):
6
+ reference = eval(docs["reference_answer_fullscale"])
7
+ user = dict(re.findall(r"(\w+):\s+(\d+)", results[0]))
8
+ # First check that the emotions specified in the answer match those in the reference
9
+ if len(user.items()) != 4:
10
+ # print('! Error: 4 emotions were not returned')
11
+ # print(user)
12
+ return {"eqbench": 0, "percent_parseable": 0}
13
+ emotions_dict = {}
14
+ for emotion, user_emotion_score in user.items():
15
+ for i in range(1, 5):
16
+ if emotion == reference[f"emotion{i}"]:
17
+ emotions_dict[emotion] = True
18
+ if len(emotions_dict) != 4:
19
+ print("! Error: emotions did not match reference")
20
+ print(user)
21
+ return {"eqbench": 0, "percent_parseable": 0}
22
+
23
+ difference_tally = (
24
+ 0 # Tally of difference from reference answers for this question
25
+ )
26
+
27
+ # Iterate over each emotion in the user's answers.
28
+ for emotion, user_emotion_score in user.items():
29
+ # If this emotion is in the reference, calculate the difference between the user's score and the reference score.
30
+ for i in range(1, 5):
31
+ if emotion == reference[f"emotion{i}"]:
32
+ d = abs(
33
+ float(user_emotion_score) - float(reference[f"emotion{i}_score"])
34
+ )
35
+ # this will be a value between 0 and 10
36
+ if d == 0:
37
+ scaled_difference = 0
38
+ elif d <= 5:
39
+ # S-shaped scaling function
40
+ # https://www.desmos.com/calculator
41
+ # 6.5\cdot\ \frac{1}{\left(1\ +\ e^{\left(-1.2\cdot\left(x-4\right)\right)}\right)}
42
+ scaled_difference = 6.5 * (1 / (1 + math.e ** (-1.2 * (d - 4))))
43
+
44
+ else:
45
+ scaled_difference = d
46
+ difference_tally += scaled_difference
47
+
48
+ # Inverting the difference tally so that the closer the answer is to reference, the higher the score.
49
+ # The adjustment constant is chosen such that answering randomly produces a score of zero.
50
+ adjust_const = 0.7477
51
+ final_score = 10 - (difference_tally * adjust_const)
52
+ final_score_percent = final_score * 10
53
+
54
+ return {"eqbench": final_score_percent, "percent_parseable": 100}
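
As a usage sketch for `calculate_score_fullscale` above, the example below scores an invented reference answer against a model response that matches it exactly; the emotion names and the import path are assumptions for illustration.

```python
# Usage sketch for the EQ-Bench scoring function defined above.
# The document and model output below are invented for illustration.
from lm_eval.tasks.eq_bench.utils import calculate_score_fullscale

doc = {
    # stored as a string in the dataset, hence the eval() inside the scorer
    "reference_answer_fullscale": str(
        {
            "emotion1": "Anger", "emotion1_score": 6,
            "emotion2": "Relief", "emotion2_score": 0,
            "emotion3": "Pride", "emotion3_score": 2,
            "emotion4": "Shame", "emotion4_score": 8,
        }
    )
}

# A response that matches the reference exactly gets the maximum score.
response = "Anger: 6\nRelief: 0\nPride: 2\nShame: 8"
print(calculate_score_fullscale(doc, [response]))
# {'eqbench': 100.0, 'percent_parseable': 100}
```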
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: [LANG]
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_[LANG]
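
The first YAML above is a template: `[LANG]` is a placeholder filled in by the per-language files that follow (gu, kn, ml, ta, te). A throwaway script along the lines below (hypothetical, not part of this commit) is one way such per-language configs could be stamped out from the template:

```python
# Hypothetical helper that stamps out per-language task configs from the
# [LANG] template shown above. Not part of this commit; illustration only.
from pathlib import Path

TEMPLATE_PATH = Path("indic_arc_challenge.yaml")
LANGS = ["gu", "kn", "ml", "ta", "te"]

template = TEMPLATE_PATH.read_text()
for lang in LANGS:
    out = template.replace("[LANG]", lang)
    Path(f"indic_arc_challenge_{lang}.yaml").write_text(out)
    print(f"wrote indic_arc_challenge_{lang}.yaml")
```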
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_gu.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: gu
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_gu
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_kn.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: kn
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_kn
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_ml.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: ml
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_ml
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_ta.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: ta
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_ta
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_te.yaml ADDED
@@ -0,0 +1,9 @@
1
+ dataset_name: te
2
+ include: indic_arc_challenge_common_yaml
3
+ doc_to_text: "Question: {{translated_question}}\nAnswer:"
4
+ doc_to_target: "{{translated_choices.label.index(answerKey)}}"
5
+ doc_to_choice: "{{translated_choices.text}}"
6
+ should_decontaminate: true
7
+ doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
8
+
9
+ task: indic_arc_challenge_te
lm-evaluation-harness/lm_eval/tasks/kobest/README.md ADDED
@@ -0,0 +1,37 @@
1
+ # KoBEST
2
+
3
+ ### Paper
4
+ Title: `KOBEST: Korean Balanced Evaluation of Significant Tasks`
5
+
6
+ Abstract: https://arxiv.org/abs/2204.04541
7
+
8
+ A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks only support English, and great effort is necessary to construct benchmarks for other low resource languages. To this end, we propose a new benchmark named Korean balanced evaluation of significant tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge. Moreover, our data is purely annotated by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. Our dataset is available on the Huggingface.
9
+
10
+
11
+ Homepage: https://huggingface.co/datasets/skt/kobest_v1
12
+
13
+ ### Groups and Tasks
14
+
15
+ #### Groups
16
+
17
+ - `kobest`
18
+
19
+ #### Tasks
20
+
21
+ - `kobest_boolq`
22
+ - `kobest_copa`
23
+ - `kobest_hellaswag`
24
+ - `kobest_sentineg`
25
+ - `kobest_wic`
26
+
27
+
28
+ ### Citation
29
+
30
+ @misc{
31
+ author={Dohyeong Kim, Myeongjun Jang, Deuk Sin Kwon, Eric Davis},
32
+ title={KOBEST: Korean Balanced Evaluation of Significant Tasks},
33
+ DOI={https://doi.org/10.48550/arXiv.2204.04541},
34
+ publisher={arXiv},
35
+ year={2022},
36
+ month={Apr}
37
+ }
lm-evaluation-harness/lm_eval/tasks/kobest/kobest_boolq.yaml ADDED
@@ -0,0 +1,23 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_boolq
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: boolq
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: "{{paragraph}} 질문: {{question}} 답변: "
11
+ doc_to_target: "{{label}}"
12
+ doc_to_choice: ["아니오", "예"]
13
+ metric_list:
14
+ - metric: acc
15
+ aggregation: mean
16
+ higher_is_better: True
17
+ - metric: f1
18
+ aggregation: !function utils.macro_f1_score
19
+ average: macro
20
+ hf_evaluate: true
21
+ higher_is_better: True
22
+ metadata:
23
+ version: 1.0
lm-evaluation-harness/lm_eval/tasks/kobest/kobest_copa.yaml ADDED
@@ -0,0 +1,23 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_copa
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: copa
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: !function utils.copa_doc_to_text
11
+ doc_to_target: !function utils.copa_doc_to_target
12
+ doc_to_choice: !function utils.copa_doc_to_choice
13
+ metric_list:
14
+ - metric: acc
15
+ aggregation: mean
16
+ higher_is_better: True
17
+ - metric: f1
18
+ aggregation: !function utils.macro_f1_score
19
+ average: macro
20
+ hf_evaluate: true
21
+ higher_is_better: True
22
+ metadata:
23
+ version: 1.0
lm-evaluation-harness/lm_eval/tasks/kobest/kobest_hellaswag.yaml ADDED
@@ -0,0 +1,27 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_hellaswag
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: hellaswag
6
+ training_split: train
7
+ validation_split: validation
8
+ output_type: multiple_choice
9
+ test_split: test
10
+ doc_to_text: "{{query}}"
11
+ doc_to_target: "{{label}}"
12
+ process_docs: !function utils.hellaswag_process_doc
13
+ doc_to_choice: "choices"
14
+ metric_list:
15
+ - metric: acc
16
+ aggregation: mean
17
+ higher_is_better: True
18
+ - metric: acc_norm
19
+ aggregation: mean
20
+ higher_is_better: True
21
+ - metric: f1
22
+ aggregation: !function utils.macro_f1_score
23
+ average: macro
24
+ hf_evaluate: true
25
+ higher_is_better: True
26
+ metadata:
27
+ version: 1.0
lm-evaluation-harness/lm_eval/tasks/kobest/kobest_sentineg.yaml ADDED
@@ -0,0 +1,25 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_sentineg
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: sentineg
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: !function utils.sentineg_doc_to_text
11
+ doc_to_target: "{{label}}"
12
+ doc_to_choice: ["부정", "긍정"]
13
+ metric_list:
14
+ - metric: acc
15
+ aggregation: mean
16
+ higher_is_better: True
17
+ - metric: f1
18
+ aggregation: !function utils.macro_f1_score
19
+ average: macro
20
+ hf_evaluate: true
21
+ higher_is_better: True
22
+ metadata:
23
+ version: 1.0
24
+ dataset_kwargs:
25
+ trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/kobest/kobest_wic.yaml ADDED
@@ -0,0 +1,25 @@
1
+ group:
2
+ - kobest
3
+ task: kobest_wic
4
+ dataset_path: skt/kobest_v1
5
+ dataset_name: wic
6
+ output_type: multiple_choice
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: !function utils.wic_doc_to_text
11
+ doc_to_target: "{{label}}"
12
+ doc_to_choice: ['아니오', '예']
13
+ metric_list:
14
+ - metric: acc
15
+ aggregation: mean
16
+ higher_is_better: True
17
+ - metric: f1
18
+ aggregation: !function utils.macro_f1_score
19
+ average: macro
20
+ hf_evaluate: true
21
+ higher_is_better: True
22
+ metadata:
23
+ version: 1.0
24
+ dataset_kwargs:
25
+ trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/kobest/utils.py ADDED
@@ -0,0 +1,48 @@
1
+ from datasets import Dataset
2
+ from sklearn.metrics import f1_score
3
+
4
+
5
+ def copa_doc_to_text(doc: dict) -> str:
6
+ connector = {"원인": " 왜냐하면", "결과": " 그래서"}[doc["question"].strip()]
7
+ return f"""{doc["premise"]} {connector}"""
8
+
9
+
10
+ def copa_doc_to_target(doc: dict) -> str:
11
+ correct_choice = doc["alternative_1"] if doc["label"] == 0 else doc["alternative_2"]
12
+ return f"""{correct_choice}"""
13
+
14
+
15
+ def copa_doc_to_choice(doc: dict) -> list:
16
+ return [f"""{doc["alternative_1"]}""", f"""{doc["alternative_2"]}"""]
17
+
18
+
19
+ def sentineg_doc_to_text(doc: dict):
20
+ return f"""문장: {doc["sentence"]} 긍부정:"""
21
+
22
+
23
+ def wic_doc_to_text(doc: dict) -> str:
24
+ return f"""문장1: {doc["context_1"]} 문장2: {doc["context_2"]} 두 문장에서 {doc["word"]}가 같은 뜻으로 쓰였나?"""
25
+
26
+
27
+ def hellaswag_process_doc(doc: Dataset) -> Dataset:
28
+ def preprocessor(dataset):
29
+ return {
30
+ "query": f"""문장: {dataset["context"]}""",
31
+ "choices": [
32
+ dataset["ending_1"],
33
+ dataset["ending_2"],
34
+ dataset["ending_3"],
35
+ dataset["ending_4"],
36
+ ],
37
+ "gold": int(dataset["label"]),
38
+ }
39
+
40
+ return doc.map(preprocessor)
41
+
42
+
43
+ def macro_f1_score(items):
44
+ unzipped_list = list(zip(*items))
45
+ golds = unzipped_list[0]
46
+ preds = unzipped_list[1]
47
+ fscore = f1_score(golds, preds, average="macro")
48
+ return fscore
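
For a quick illustration of the KoBEST helpers above, the snippet below builds a COPA prompt from an invented document and aggregates macro F1 over a toy list of (gold, prediction) pairs; the import path is assumed.

```python
# Usage sketch for the KoBEST helpers defined above.
# The COPA document and the (gold, pred) pairs are invented examples.
from lm_eval.tasks.kobest.utils import (
    copa_doc_to_text,
    copa_doc_to_choice,
    macro_f1_score,
)

doc = {
    "premise": "나는 우산을 챙겼다.",
    "question": "원인",
    "alternative_1": "비가 올 것 같았다.",
    "alternative_2": "날씨가 맑았다.",
    "label": 0,
}

print(copa_doc_to_text(doc))    # "나는 우산을 챙겼다.  왜냐하면" (double space: the connector carries a leading space)
print(copa_doc_to_choice(doc))  # both alternatives as the choice list

# macro_f1_score expects an iterable of (gold, prediction) pairs.
items = [(0, 0), (1, 1), (1, 0), (0, 0)]
print(macro_f1_score(items))    # ~0.733 macro F1
```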
lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md ADDED
@@ -0,0 +1,53 @@
1
+ # MC Taco
2
+
3
+ ### Paper
4
+
5
+ Title: `"Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding`
6
+ Abstract: https://arxiv.org/abs/1909.03065
7
+
8
+ MC-TACO is a dataset of 13k question-answer pairs that require temporal commonsense
9
+ comprehension. The dataset contains five temporal properties, (1) duration (how long
10
+ an event takes), (2) temporal ordering (typical order of events), (3) typical time
11
+ (when an event occurs), (4) frequency (how often an event occurs), and (5) stationarity
12
+ (whether a state is maintained for a very long time or indefinitely).
13
+
14
+ WARNING: Running this task with a `--limit` arg will give misleading results! The
15
+ corresponding dataset is structured such that each multiple-choice-question gathered
16
+ by the authors is split into question-option pairs, where each such pair gets
17
+ siloed into an individual document for plausibility testing. Because the harness
18
+ shuffles these documents, setting `--limit` will likely "cut off" certain candidate
19
+ answers. This is a problem because the task's metrics require an exhaustive evaluation
20
+ of a question's options. See section 4 of the paper for details.
21
+
22
+ Homepage: https://leaderboard.allenai.org/mctaco/submissions/public
23
+
24
+
25
+ ### Citation
26
+
27
+ ```
28
+ BibTeX-formatted citation goes here
29
+ ```
30
+
31
+ ### Groups and Tasks
32
+
33
+ #### Groups
34
+
35
+ * Not part of a group yet.
36
+
37
+ #### Tasks
38
+
39
+ * `mc_taco`
40
+
41
+
42
+ ### Checklist
43
+
44
+ For adding novel benchmarks/datasets to the library:
45
+ * [ ] Is the task an existing benchmark in the literature?
46
+ * [ ] Have you referenced the original paper that introduced the task?
47
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
48
+
49
+
50
+ If other tasks on this dataset are already supported:
51
+ * [ ] Is the "Main" variant of this task clearly denoted?
52
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
53
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/mc_taco/default.yaml ADDED
@@ -0,0 +1,15 @@
1
+ task: mc_taco
2
+ dataset_path: mc_taco
3
+ output_type: multiple_choice
4
+ validation_split: validation
5
+ test_split: test
6
+ doc_to_text: "{{sentence}}\nQuestion: {{question}}\nAnswer: {{answer}}\nPlausible:"
7
+ doc_to_target: label
8
+ doc_to_choice: ["no", "yes"]
9
+ should_decontaminate: true
10
+ doc_to_decontamination_query: "{{question}} {{sentence}}"
11
+ metric_list:
12
+ - metric: acc
13
+ - metric: f1
14
+ metadata:
15
+ version: 1.0
lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md ADDED
@@ -0,0 +1,70 @@
1
+ # MATH
2
+ ℹ️ This is the 4-shot variant!
3
+ ## Paper
4
+ Measuring Mathematical Problem Solving With the MATH Dataset
5
+ https://arxiv.org/abs/2103.03874
6
+
7
+ Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new dataset of 12,500 challenging competition mathematics problems. Each problem in MATH has a full step-by-step solution which can be used to teach models to generate answer derivations and explanations.
8
+
9
+ NOTE: The few-shot prompt and the generated answer extraction are based on [Minerva](https://arxiv.org/abs/2206.14858), and exact-match equivalence is calculated using the `sympy` library. This requires additional dependencies, which can be installed via the `lm-eval[math]` extra.
10
+
11
+ Homepage: https://github.com/hendrycks/math
12
+
13
+
14
+ ## Citation
15
+ ```
16
+ @article{hendrycksmath2021,
17
+ title={Measuring Mathematical Problem Solving With the MATH Dataset},
18
+ author={Dan Hendrycks and Collin Burns and Saurav Kadavath and Akul Arora and Steven Basart and Eric Tang and Dawn Song and Jacob Steinhardt},
19
+ journal={NeurIPS},
20
+ year={2021}
21
+ }
22
+
23
+ @misc{2206.14858,
24
+ Author = {Aitor Lewkowycz and Anders Andreassen and David Dohan and Ethan Dyer and Henryk Michalewski and Vinay Ramasesh and Ambrose Slone and Cem Anil and Imanol Schlag and Theo Gutman-Solo and Yuhuai Wu and Behnam Neyshabur and Guy Gur-Ari and Vedant Misra},
25
+ Title = {Solving Quantitative Reasoning Problems with Language Models},
26
+ Year = {2022},
27
+ Eprint = {arXiv:2206.14858},
28
+ }
29
+ ```
30
+
31
+ ### Groups, Benchmarks and Tasks
32
+
33
+ #### Benchmarks
34
+
35
+ - `minerva_math`
36
+
37
+ #### Groups
38
+
39
+ - `math_word_problems`
40
+ - `generate_until`
41
+
42
+ #### Tasks
43
+
44
+ - `minerva_math_algebra`
45
+ - `minerva_math_counting_and_prob`
46
+ - `minerva_math_geometry`
47
+ - `minerva_math_intermediate_algebra`
48
+ - `minerva_math_num_theory`
49
+ - `minerva_math_prealgebra`
50
+ - `minerva_math_precalc`
51
+
52
+ ### Checklist
53
+
54
+ The checklist is the following:
55
+
56
+ For adding novel benchmarks/datasets to the library:
57
+ * [x] Is the task an existing benchmark in the literature?
58
+ * [x] Have you referenced the original paper that introduced the task?
59
+ * [x] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
60
+ * The implementation in the original paper is one where the model is first fine-tuned on the data. They do have a few-shot evaluation for GPT-3, however the few-shot context used here is sourced from [Lewkowycz et al](https://arxiv.org/abs/2206.14858). The achieved accuracy on Llama-2 models is comparable to that provided in the paper, though not identical.
61
+
62
+
63
+ If other tasks on this dataset are already supported:
64
+ * [x] Is the "Main" variant of this task clearly denoted?
65
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
66
+ * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
67
+
68
+ ### Variant Wishlist
69
+
70
+ - [ ] zero-shot variant
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_algebra.yaml ADDED
@@ -0,0 +1,27 @@
1
+ group:
2
+ - math_word_problems
3
+ task: minerva_math_algebra
4
+ dataset_path: EleutherAI/hendrycks_math
5
+ process_docs: !function utils.process_docs
6
+ dataset_name: algebra
7
+ output_type: generate_until
8
+ training_split: train
9
+ test_split: test
10
+ doc_to_text: !function utils.doc_to_text
11
+ process_results: !function utils.process_results
12
+ doc_to_target: "{{answer}}"
13
+ generation_kwargs:
14
+ until:
15
+ - "Problem:"
16
+ do_sample: false
17
+ temperature: 0
18
+ metric_list:
19
+ - metric: exact_match
20
+ aggregation: mean
21
+ higher_is_better: true
22
+ num_fewshot: 0
23
+ metadata:
24
+ version: 1.0
25
+ num_fewshot: 4
26
+ dataset_kwargs:
27
+ trust_remote_code: true
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_counting_and_prob.yaml ADDED
@@ -0,0 +1,3 @@
1
+ include: minerva_math_algebra.yaml
2
+ dataset_name: counting_and_probability
3
+ task: minerva_math_counting_and_prob
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_geometry.yaml ADDED
@@ -0,0 +1,3 @@
1
+ include: minerva_math_algebra.yaml
2
+ dataset_name: geometry
3
+ task: minerva_math_geometry
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_intermediate_algebra.yaml ADDED
@@ -0,0 +1,3 @@
1
+ include: minerva_math_algebra.yaml
2
+ dataset_name: intermediate_algebra
3
+ task: minerva_math_intermediate_algebra
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_prealgebra.yaml ADDED
@@ -0,0 +1,3 @@
1
+ include: minerva_math_algebra.yaml
2
+ dataset_name: prealgebra
3
+ task: minerva_math_prealgebra
lm-evaluation-harness/lm_eval/tasks/minerva_math/minerva_math_precalc.yaml ADDED
@@ -0,0 +1,3 @@
1
+ include: minerva_math_algebra.yaml
2
+ dataset_name: precalculus
3
+ task: minerva_math_precalc
lm-evaluation-harness/lm_eval/tasks/minerva_math/utils.py ADDED
@@ -0,0 +1,309 @@
1
+ import re
2
+ import signal
3
+ from typing import Dict, List, Optional
4
+
5
+ import datasets
6
+
7
+ from lm_eval.utils import eval_logger
8
+
9
+
10
+ try:
11
+ import sympy
12
+ from sympy.parsing.latex import parse_latex
13
+ except ModuleNotFoundError:
14
+ raise ModuleNotFoundError(
15
+ "`sympy` is required for generating translation task prompt templates. \
16
+ please install sympy via pip install lm-eval[math] or pip install -e .[math]",
17
+ )
18
+
19
+
20
+ # taken from
21
+ # https://github.com/wellecks/lm-evaluation-harness/blob/master/lm_eval/tasks/minerva_math.py
22
+ def doc_to_text(doc: dict) -> str:
23
+ PROMPT = r"""Problem:
24
+ Find the domain of the expression $\frac{\sqrt{x-2}}{\sqrt{5-x}}$.}
25
+
26
+ Solution:
27
+ The expressions inside each square root must be non-negative. Therefore, $x-2 \ge 0$, so $x\ge2$, and $5 - x \ge 0$, so $x \le 5$. Also, the denominator cannot be equal to zero, so $5-x>0$, which gives $x<5$. Therefore, the domain of the expression is $\boxed{[2,5)}$.
28
+ Final Answer: The final answer is $[2,5)$. I hope it is correct.
29
+
30
+ Problem:
31
+ If $\det \mathbf{A} = 2$ and $\det \mathbf{B} = 12,$ then find $\det (\mathbf{A} \mathbf{B}).$
32
+
33
+ Solution:
34
+ We have that $\det (\mathbf{A} \mathbf{B}) = (\det \mathbf{A})(\det \mathbf{B}) = (2)(12) = \boxed{24}.$
35
+ Final Answer: The final answer is $24$. I hope it is correct.
36
+
37
+ Problem:
38
+ Terrell usually lifts two 20-pound weights 12 times. If he uses two 15-pound weights instead, how many times must Terrell lift them in order to lift the same total weight?
39
+
40
+ Solution:
41
+ If Terrell lifts two 20-pound weights 12 times, he lifts a total of $2\cdot 12\cdot20=480$ pounds of weight. If he lifts two 15-pound weights instead for $n$ times, he will lift a total of $2\cdot15\cdot n=30n$ pounds of weight. Equating this to 480 pounds, we can solve for $n$:
42
+ \begin{align*}
43
+ 30n&=480\\
44
+ \Rightarrow\qquad n&=480/30=\boxed{16}
45
+ \end{align*}
46
+ Final Answer: The final answer is $16$. I hope it is correct.
47
+
48
+ Problem:
49
+ If the system of equations
50
+
51
+ \begin{align*}
52
+ 6x-4y&=a,\\
53
+ 6y-9x &=b.
54
+ \end{align*}has a solution $(x, y)$ where $x$ and $y$ are both nonzero,
55
+ find $\frac{a}{b},$ assuming $b$ is nonzero.
56
+
57
+ Solution:
58
+ If we multiply the first equation by $-\frac{3}{2}$, we obtain
59
+
60
+ $$6y-9x=-\frac{3}{2}a.$$Since we also know that $6y-9x=b$, we have
61
+
62
+ $$-\frac{3}{2}a=b\Rightarrow\frac{a}{b}=\boxed{-\frac{2}{3}}.$$
63
+ Final Answer: The final answer is $-\frac{2}{3}$. I hope it is correct."""
64
+
65
+ return PROMPT + "\n\n" + "Problem:" + "\n" + doc["problem"] + "\n\n" + "Solution:"
66
+
67
+
68
+ def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
69
+ def _process_doc(doc: dict) -> dict:
70
+ out_doc = {
71
+ "problem": doc["problem"],
72
+ "solution": doc["solution"],
73
+ "answer": normalize_final_answer(
74
+ remove_boxed(last_boxed_only_string(doc["solution"]))
75
+ ),
76
+ }
77
+ return out_doc
78
+
79
+ return dataset.map(_process_doc)
80
+
81
+
82
+ def process_results(doc: dict, results: List[str]) -> Dict[str, int]:
83
+ candidates = results[0]
84
+
85
+ unnormalized_answer = get_unnormalized_answer(candidates)
86
+ answer = normalize_final_answer(unnormalized_answer)
87
+
88
+ if is_equiv(answer, doc["answer"]):
89
+ retval = 1
90
+ else:
91
+ retval = 0
92
+
93
+ results = {
94
+ "exact_match": retval,
95
+ }
96
+ return results
97
+
98
+
99
+ def last_boxed_only_string(string: str) -> Optional[str]:
100
+ idx = string.rfind("\\boxed")
101
+ if "\\boxed " in string:
102
+ return "\\boxed " + string.split("\\boxed ")[-1].split("$")[0]
103
+ if idx < 0:
104
+ idx = string.rfind("\\fbox")
105
+ if idx < 0:
106
+ return None
107
+
108
+ i = idx
109
+ right_brace_idx = None
110
+ num_left_braces_open = 0
111
+ while i < len(string):
112
+ if string[i] == "{":
113
+ num_left_braces_open += 1
114
+ if string[i] == "}":
115
+ num_left_braces_open -= 1
116
+ if num_left_braces_open == 0:
117
+ right_brace_idx = i
118
+ break
119
+ i += 1
120
+
121
+ if right_brace_idx is None:
122
+ retval = None
123
+ else:
124
+ retval = string[idx : right_brace_idx + 1]
125
+
126
+ return retval
127
+
128
+
129
+ def remove_boxed(s: str) -> str:
130
+ if "\\boxed " in s:
131
+ left = "\\boxed "
132
+ assert s[: len(left)] == left
133
+ return s[len(left) :]
134
+
135
+ left = "\\boxed{"
136
+
137
+ assert s[: len(left)] == left
138
+ assert s[-1] == "}"
139
+
140
+ return s[len(left) : -1]
141
+
142
+
143
+ class timeout:
144
+ def __init__(self, seconds=1, error_message="Timeout"):
145
+ self.seconds = seconds
146
+ self.error_message = error_message
147
+
148
+ def handle_timeout(self, signum, frame):
149
+ raise TimeoutError(self.error_message)
150
+
151
+ def __enter__(self):
152
+ signal.signal(signal.SIGALRM, self.handle_timeout)
153
+ signal.alarm(self.seconds)
154
+
155
+ def __exit__(self, type, value, traceback):
156
+ signal.alarm(0)
157
+
158
+
159
+ def is_equiv(x1: str, x2: str) -> bool:
160
+ """
161
+ x1 and x2 are normalized latex string
162
+ """
163
+ try:
164
+ with timeout(seconds=5):
165
+ try:
166
+ parsed_x1 = parse_latex(x1)
167
+ parsed_x2 = parse_latex(x2)
168
+ except (
169
+ sympy.parsing.latex.errors.LaTeXParsingError,
170
+ sympy.SympifyError,
171
+ TypeError,
172
+ ):
173
+ eval_logger.debug(f"couldn't parse one of {x1} or {x2}")
174
+ return False
175
+
176
+ try:
177
+ diff = parsed_x1 - parsed_x2
178
+ except TypeError:
179
+ eval_logger.debug(f"couldn't subtract {x1} and {x2}")
180
+ return False
181
+
182
+ try:
183
+ if sympy.simplify(diff) == 0:
184
+ return True
185
+ else:
186
+ return False
187
+ except ValueError:
188
+ eval_logger.debug(
189
+ f"Had some trouble simplifying when comparing {x1} and {x2}"
190
+ )
191
+ except TimeoutError:
192
+ eval_logger.debug(f"Timed out comparing {x1} and {x2}")
193
+ return False
194
+ except ImportError as e:
195
+ eval_logger.error(e)
196
+ raise
197
+ except Exception as e:
198
+ eval_logger.debug(f"Failed comparing {x1} and {x2} with {e}")
199
+ return False
200
+
201
+
202
+ def get_unnormalized_answer(text: str) -> str:
203
+ INVALID_ANSWER = "[invalidanswer]"
204
+ end_seq = "I hope it is correct."
205
+ text += end_seq
206
+ match = re.search(
207
+ r"Final Answer: The final answer is(.*?). I hope it is correct.",
208
+ text,
209
+ )
210
+ if match:
211
+ return match.group(1).strip()
212
+ else:
213
+ return INVALID_ANSWER
214
+
215
+
216
+ SUBSTITUTIONS = [
217
+ ("an ", ""),
218
+ ("a ", ""),
219
+ (".$", "$"),
220
+ ("\\$", ""),
221
+ (r"\ ", ""),
222
+ (" ", ""),
223
+ ("mbox", "text"),
224
+ (",\\text{and}", ","),
225
+ ("\\text{and}", ","),
226
+ ("\\text{m}", "\\text{}"),
227
+ ]
228
+ REMOVED_EXPRESSIONS = [
229
+ "square",
230
+ "ways",
231
+ "integers",
232
+ "dollars",
233
+ "mph",
234
+ "inches",
235
+ "ft",
236
+ "hours",
237
+ "km",
238
+ "units",
239
+ "\\ldots",
240
+ "sue",
241
+ "points",
242
+ "feet",
243
+ "minutes",
244
+ "digits",
245
+ "cents",
246
+ "degrees",
247
+ "cm",
248
+ "gm",
249
+ "pounds",
250
+ "meters",
251
+ "meals",
252
+ "edges",
253
+ "students",
254
+ "childrentickets",
255
+ "multiples",
256
+ "\\text{s}",
257
+ "\\text{.}",
258
+ "\\text{\ns}",
259
+ "\\text{}^2",
260
+ "\\text{}^3",
261
+ "\\text{\n}",
262
+ "\\text{}",
263
+ r"\mathrm{th}",
264
+ r"^\circ",
265
+ r"^{\circ}",
266
+ r"\;",
267
+ r",\!",
268
+ "{,}",
269
+ '"',
270
+ "\\dots",
271
+ ]
272
+
273
+
274
+ def normalize_final_answer(final_answer: str) -> str:
275
+ """
276
+ Normalize a final answer to a quantitative reasoning question.
277
+
278
+ Copied character for character from appendix D of Lewkowycz et al. (2022)
279
+ """
280
+ final_answer = final_answer.split("=")[-1]
281
+
282
+ for before, after in SUBSTITUTIONS:
283
+ final_answer = final_answer.replace(before, after)
284
+ for expr in REMOVED_EXPRESSIONS:
285
+ final_answer = final_answer.replace(expr, "")
286
+
287
+ # Extract answer that is in LaTeX math, is bold,
288
+ # is surrounded by a box, etc.
289
+ final_answer = re.sub(r"(.*?)(\$)(.*?)(\$)(.*)", "$\\3$", final_answer)
290
+ final_answer = re.sub(r"(\\text\{)(.*?)(\})", "\\2", final_answer)
291
+ final_answer = re.sub(r"(\\textbf\{)(.*?)(\})", "\\2", final_answer)
292
+ final_answer = re.sub(r"(\\overline\{)(.*?)(\})", "\\2", final_answer)
293
+ final_answer = re.sub(r"(\\boxed\{)(.*)(\})", "\\2", final_answer)
294
+
295
+ # Normalize shorthand TeX:
296
+ # \fracab -> \frac{a}{b}
297
+ # \frac{abc}{bef} -> \frac{abc}{bef}
298
+ # \fracabc -> \frac{a}{b}c
299
+ # \sqrta -> \sqrt{a}
300
+ # \sqrtab -> sqrt{a}b
301
+ final_answer = re.sub(r"(frac)([^{])(.)", "frac{\\2}{\\3}", final_answer)
302
+ final_answer = re.sub(r"(sqrt)([^{])", "sqrt{\\2}", final_answer)
303
+ final_answer = final_answer.replace("$", "")
304
+
305
+ # Normalize 100,000 -> 100000
306
+ if final_answer.replace(",", "").isdigit():
307
+ final_answer = final_answer.replace(",", "")
308
+
309
+ return final_answer
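
To see the Minerva-style answer extraction above end to end, the sketch below runs an invented model completion through `get_unnormalized_answer`, `normalize_final_answer`, and `is_equiv`; it assumes the module is importable as `lm_eval.tasks.minerva_math.utils` and that the `lm-eval[math]` sympy dependencies are installed.

```python
# Usage sketch for the answer-extraction helpers defined above.
# The model completion string is invented; sympy (with its LaTeX parser)
# must be installed for the final equivalence check.
from lm_eval.tasks.minerva_math.utils import (
    get_unnormalized_answer,
    normalize_final_answer,
    is_equiv,
)

completion = (
    "We equate the two expressions and solve.\n"
    "Final Answer: The final answer is $\\frac{1}{2}$. I hope it is correct."
)

raw = get_unnormalized_answer(completion)
print(raw)                      # $\frac{1}{2}$
answer = normalize_final_answer(raw)
print(answer)                   # \frac{1}{2}
print(is_equiv(answer, "0.5"))  # True on a Unix setup with sympy's LaTeX parser available
```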
lm-evaluation-harness/lm_eval/tasks/polemo2/README.md ADDED
@@ -0,0 +1,57 @@
1
+ # PolEmo 2.0
2
+
3
+ ### Paper
4
+
5
+ Title: `Multi-Level Sentiment Analysis of PolEmo 2.0: Extended Corpus of Multi-Domain Consumer Reviews`
6
+
7
+ Abstract: https://aclanthology.org/K19-1092/
8
+
9
+ The PolEmo 2.0 is a dataset of online consumer reviews in Polish from four domains: medicine, hotels, products, and university. It is human-annotated on a level of full reviews and individual sentences. It comprises over 8000 reviews, about 85% from the medicine and hotel domains.
10
+ The goal is to predict the sentiment of a review. There are two separate test sets, to allow for in-domain (medicine and hotels) as well as out-of-domain (products and university) validation.
11
+
12
+ Homepage: https://clarin-pl.eu/dspace/handle/11321/710
13
+
14
+
15
+ ### Citation
16
+
17
+ ```
18
+ @inproceedings{kocon-etal-2019-multi,
19
+ title = "Multi-Level Sentiment Analysis of {P}ol{E}mo 2.0: Extended Corpus of Multi-Domain Consumer Reviews",
20
+ author = "Koco{\'n}, Jan and
21
+ Mi{\l}kowski, Piotr and
22
+ Za{\'s}ko-Zieli{\'n}ska, Monika",
23
+ booktitle = "Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)",
24
+ month = nov,
25
+ year = "2019",
26
+ address = "Hong Kong, China",
27
+ publisher = "Association for Computational Linguistics",
28
+ url = "https://aclanthology.org/K19-1092",
29
+ doi = "10.18653/v1/K19-1092",
30
+ pages = "980--991",
31
+ abstract = "In this article we present an extended version of PolEmo {--} a corpus of consumer reviews from 4 domains: medicine, hotels, products and school. Current version (PolEmo 2.0) contains 8,216 reviews having 57,466 sentences. Each text and sentence was manually annotated with sentiment in 2+1 scheme, which gives a total of 197,046 annotations. We obtained a high value of Positive Specific Agreement, which is 0.91 for texts and 0.88 for sentences. PolEmo 2.0 is publicly available under a Creative Commons copyright license. We explored recent deep learning approaches for the recognition of sentiment, such as Bi-directional Long Short-Term Memory (BiLSTM) and Bidirectional Encoder Representations from Transformers (BERT).",
32
+ }
33
+ ```
34
+
35
+ ### Groups and Tasks
36
+
37
+ #### Groups
38
+
39
+ * `polemo2`: Evaluates `polemo2_in` and `polemo2_out`
40
+
41
+ #### Tasks
42
+
43
+ * `polemo2_in`: evaluates sentiment predictions of in-domain (medicine and hotels) reviews
44
+ * `polemo2_out`: evaluates sentiment predictions of out-of-domain (products and university) reviews
45
+
46
+ ### Checklist
47
+
48
+ For adding novel benchmarks/datasets to the library:
49
+ * [x] Is the task an existing benchmark in the literature?
50
+ * [x] Have you referenced the original paper that introduced the task?
51
+ * [ ] If yes, does the original paper provide a reference implementation?
52
+
53
+
54
+ If other tasks on this dataset are already supported:
55
+ * [x] Is the "Main" variant of this task clearly denoted?
56
+ * [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
57
+ * [x] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/polemo2/polemo2_in.yaml ADDED
@@ -0,0 +1,46 @@
1
+ group:
2
+ - polemo2
3
+ task: polemo2_in
4
+ dataset_path: allegro/klej-polemo2-in
5
+ dataset_name: null
6
+ output_type: generate_until
7
+ training_split: train
8
+ validation_split: validation
9
+ test_split: test
10
+ doc_to_text: "Opinia: \"{{sentence}}\"\nOkreśl sentyment podanej opinii. Możliwe odpowiedzi:\nA - Neutralny\nB - Negatywny\nC - Pozytywny\nD - Niejednoznaczny\nPrawidłowa odpowiedź:"
11
+ doc_to_target: "{{['__label__meta_zero', '__label__meta_minus_m', '__label__meta_plus_m', '__label__meta_amb'].index(target)}}"
12
+ should_decontaminate: true
13
+ doc_to_decontamination_query: "{{sentence}}"
14
+ generation_kwargs:
15
+ until:
16
+ - "."
17
+ - ","
18
+ do_sample: false
19
+ temperature: 0.0
20
+ max_gen_toks: 50
21
+ filter_list:
22
+ - name: "score-first"
23
+ filter:
24
+ - function: "regex"
25
+ regex_pattern: "(\\b[ABCD]\\b)"
26
+ - function: "take_first"
27
+ - function: "map"
28
+ mapping_dict:
29
+ A: 0
30
+ B: 1
31
+ C: 2
32
+ D: 3
33
+ default_value: -1
34
+ - function: "take_first"
35
+ metric_list:
36
+ - metric: f1
37
+ aggregation: mean
38
+ higher_is_better: true
39
+ hf_evaluate: true
40
+ average: micro
41
+ - metric: accuracy
42
+ aggregation: mean
43
+ higher_is_better: true
44
+ hf_evaluate: true
45
+ metadata:
46
+ version: 1.0
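
The `filter_list` above post-processes the generated text: the regex pulls out the first standalone A-D letter, `take_first` keeps a single match, and `map` converts it to the class index produced by `doc_to_target` (with -1 for anything unparseable). The snippet below mimics that pipeline in plain Python purely for illustration; it is not the harness's filter implementation.

```python
# Plain-Python illustration of the "score-first" filter chain above:
# regex -> take_first -> map. This mimics the behaviour; it is not
# the actual lm-evaluation-harness filter code.
import re

MAPPING = {"A": 0, "B": 1, "C": 2, "D": 3}
DEFAULT = -1

def score_first(generation: str) -> int:
    matches = re.findall(r"\b[ABCD]\b", generation)
    if not matches:
        return DEFAULT
    letter = matches[0]                   # take_first
    return MAPPING.get(letter, DEFAULT)   # map with default_value

print(score_first("B - Negatywny"))            # 1
print(score_first("Prawidłowa odpowiedź: C"))  # 2
print(score_first("nie wiem"))                 # -1
```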
lm-evaluation-harness/lm_eval/tasks/polemo2/polemo2_out.yaml ADDED
@@ -0,0 +1,4 @@
1
+ include: polemo2_in.yaml
2
+ task: polemo2_out
3
+ dataset_path: allegro/klej-polemo2-out
4
+ dataset_name: klej-polemo2-out
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Ensenada ADDED
Binary file (2.37 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Guayaquil ADDED
Binary file (232 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Knox_IN ADDED
Binary file (2.44 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Louisville ADDED
Binary file (2.79 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Montserrat ADDED
Binary file (246 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Nuuk ADDED
Binary file (1.89 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Port-au-Prince ADDED
Binary file (1.43 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Brazil/East ADDED
Binary file (1.43 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Chile/Continental ADDED
Binary file (2.52 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Chile/EasterIsland ADDED
Binary file (2.22 kB).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT ADDED
Binary file (114 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+0 ADDED
Binary file (114 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+4 ADDED
Binary file (116 Bytes).
 
venv/lib/python3.10/site-packages/pytz/zoneinfo/Etc/GMT+6 ADDED
Binary file (116 Bytes).