Add files using upload-large-folder tool
This view is limited to 50 files because it contains too many changes.
- ckpts/universal/global_step20/zero/21.post_attention_layernorm.weight/exp_avg_sq.pt +3 -0
- ckpts/universal/global_step20/zero/6.mlp.dense_h_to_4h.weight/exp_avg.pt +3 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/README.md +42 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/_default_template_yaml +19 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/_generate_configs.py +119 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_astronomy.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_business_ethics.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_clinical_knowledge.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_biology.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_mathematics.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_medicine.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_conceptual_physics.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_electrical_engineering.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_formal_logic.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_global_facts.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_biology.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_chemistry.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_computer_science.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_european_history.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_macroeconomics.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_psychology.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_statistics.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_world_history.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_human_sexuality.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_international_law.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_jurisprudence.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_machine_learning.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_management.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_miscellaneous.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_moral_scenarios.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_nutrition.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_accounting.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_law.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_medicine.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_psychology.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_public_relations.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_us_foreign_policy.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_virology.yaml +4 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/__pycache__/utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_common_yaml +20 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_hi.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_mr.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/utils.py +136 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Antigua +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Atikokan +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Belize +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Chicago +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Guadeloupe +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Iqaluit +0 -0
- venv/lib/python3.10/site-packages/pytz/zoneinfo/America/La_Paz +0 -0
ckpts/universal/global_step20/zero/21.post_attention_layernorm.weight/exp_avg_sq.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aec91aadfa668b31623ab1b5c39cc6ba91fb4121ce481f7d645e00dee3bbccef
+size 9387
ckpts/universal/global_step20/zero/6.mlp.dense_h_to_4h.weight/exp_avg.pt
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c68907a53b83ef6e7a7ed352038e428893ecd5d17f9f4ca5366aa1371636507
+size 33555612
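The two checkpoint entries above are Git LFS pointer files (a short `key value` text stub standing in for the large tensor file). A minimal sketch of parsing one such pointer, using the text of the first entry:

```python
# Git LFS pointer files are plain text, one "key value" pair per line.
# Pointer text copied verbatim from the first checkpoint entry above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:aec91aadfa668b31623ab1b5c39cc6ba91fb4121ce481f7d645e00dee3bbccef
size 9387
"""

# Split each line on the first space to recover the key/value fields.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
print(fields["size"])  # -> 9387
```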
lm-evaluation-harness/lm_eval/tasks/ammlu/README.md
ADDED
@@ -0,0 +1,42 @@
+# ArabicMMLU
+
+### Paper
+
+ArabicMMLU: Measuring massive multitask language understanding in Arabic
+This dataset has been translated from the original MMLU with the help of GPT-4.
+
+The original data [MMLU](https://arxiv.org/pdf/2009.03300v3.pdf)
+
+The translation has been done with AceGPT researchers [AceGPT](https://arxiv.org/abs/2309.12053)
+
+ArabicMMLU is a comprehensive evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of Arabic language and culture.
+ArabicMMLU covers a wide range of subjects, comprising 57 topics that span from elementary to advanced professional levels.
+
+Homepage: [AceGPT Homepage](https://github.com/FreedomIntelligence/AceGPT/tree/main/eval/benchmark_eval/benchmarks/MMLUArabic)
+
+### Citation
+
+
+### Groups and Tasks
+
+#### Groups
+
+- `ammlu`: All 57 subjects of the ArabicMMLU dataset, evaluated following the methodology in MMLU's original implementation.
+
+#### Tasks
+
+
+The following tasks evaluate subjects in the ArabicMMLU dataset using loglikelihood-based multiple-choice scoring:
+- `ammlu_{subject_english}`
+
+### Checklist
+
+* [x] Is the task an existing benchmark in the literature?
+* [x] Have you referenced the original paper that introduced the task?
+* [x] If yes, does the original paper provide a reference implementation?
+  * [x] Yes, original implementation contributed by author of the benchmark
+
+If other tasks on this dataset are already supported:
+* [x] Is the "Main" variant of this task clearly denoted?
+* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
lm-evaluation-harness/lm_eval/tasks/ammlu/_default_template_yaml
ADDED
@@ -0,0 +1,19 @@
+group: ammlu
+dataset_path: Hennara/ammlu
+test_split: test
+fewshot_split: dev
+fewshot_config:
+  sampler: first_n
+output_type: multiple_choice
+doc_to_text: "{{Question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\nالجواب:"
+doc_to_choice: ["A", "B", "C", "D"]
+doc_to_target: "{{['A', 'B', 'C', 'D'].index(Answer)}}"
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+  - metric: acc_norm
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 0.0
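The Jinja fields in the template above map each dataset row to a prompt string and a target index. A minimal Python sketch of the same mapping, on a hypothetical document (real rows come from the Hennara/ammlu dataset):

```python
# Hypothetical document; real rows come from the Hennara/ammlu dataset.
doc = {"Question": " سؤال تجريبي؟ ", "A": "خيار 1", "B": "خيار 2",
       "C": "خيار 3", "D": "خيار 4", "Answer": "B"}

# doc_to_text: stripped question, the four lettered options, then "الجواب:" ("Answer:")
prompt = (f"{doc['Question'].strip()}\nA. {doc['A']}\nB. {doc['B']}"
          f"\nC. {doc['C']}\nD. {doc['D']}\nالجواب:")

# doc_to_target: index of the gold letter within the choice list A-D
target = ["A", "B", "C", "D"].index(doc["Answer"])
print(target)  # -> 1
```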
lm-evaluation-harness/lm_eval/tasks/ammlu/_generate_configs.py
ADDED
@@ -0,0 +1,119 @@
+"""
+Take in a YAML, and output all other splits with this YAML
+"""
+import argparse
+import os
+
+import yaml
+from tqdm import tqdm
+
+
+SUBJECTS = {
+    "abstract_algebra": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "anatomy": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "astronomy": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "business_ethics": "علوم أخرى",
+    "clinical_knowledge": "علوم أخرى",
+    "college_biology": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "college_chemistry": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "college_computer_science": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "college_mathematics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "college_medicine": "علوم أخرى",
+    "college_physics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "computer_security": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "conceptual_physics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "econometrics": "العلوم الإجتماعية",
+    "electrical_engineering": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "elementary_mathematics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "formal_logic": "العلوم الانسانية",
+    "global_facts": "علوم أخرى",
+    "high_school_biology": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_chemistry": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_computer_science": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_european_history": "العلوم الانسانية",
+    "high_school_geography": "العلوم الإجتماعية",
+    "high_school_government_and_politics": "العلوم الإجتماعية",
+    "high_school_macroeconomics": "العلوم الإجتماعية",
+    "high_school_mathematics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_microeconomics": "العلوم الإجتماعية",
+    "high_school_physics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_psychology": "العلوم الإجتماعية",
+    "high_school_statistics": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "high_school_us_history": "العلوم الانسانية",
+    "high_school_world_history": "العلوم الانسانية",
+    "human_aging": "علوم أخرى",
+    "human_sexuality": "العلوم الإجتماعية",
+    "international_law": "العلوم الانسانية",
+    "jurisprudence": "العلوم الانسانية",
+    "logical_fallacies": "العلوم الانسانية",
+    "machine_learning": "ألعلوم وتقنية المعلومات و الرياضيات",
+    "management": "علوم أخرى",
+    "marketing": "علوم أخرى",
+    "medical_genetics": "علوم أخرى",
+    "miscellaneous": "علوم أخرى",
+    "moral_disputes": "العلوم الانسانية",
+    "moral_scenarios": "العلوم الانسانية",
+    "nutrition": "علوم أخرى",
+    "philosophy": "العلوم الانسانية",
+    "prehistory": "العلوم الانسانية",
+    "professional_accounting": "علوم أخرى",
+    "professional_law": "العلوم الانسانية",
+    "professional_medicine": "علوم أخرى",
+    "professional_psychology": "العلوم الإجتماعية",
+    "public_relations": "العلوم الإجتماعية",
+    "security_studies": "العلوم الإجتماعية",
+    "sociology": "العلوم الإجتماعية",
+    "us_foreign_policy": "العلوم الإجتماعية",
+    "virology": "علوم أخرى",
+    "world_religions": "العلوم الانسانية",
+}
+
+
+def parse_args():
+    parser = argparse.ArgumentParser()
+    parser.add_argument("--base_yaml_path", required=True)
+    parser.add_argument("--save_prefix_path", default="ammlu")
+    parser.add_argument("--cot_prompt_path", default=None)
+    parser.add_argument("--task_prefix", default="")
+    return parser.parse_args()
+
+
+if __name__ == "__main__":
+    args = parse_args()
+
+    # get filename of base_yaml so we can `"include": ` it in our other YAMLs.
+    base_yaml_name = os.path.split(args.base_yaml_path)[-1]
+    with open(args.base_yaml_path, encoding="utf-8") as f:
+        base_yaml = yaml.full_load(f)
+
+    if args.cot_prompt_path is not None:
+        import json
+
+        with open(args.cot_prompt_path, encoding="utf-8") as f:
+            cot_file = json.load(f)
+
+    for subject_eng, category in tqdm(SUBJECTS.items()):
+        if args.cot_prompt_path is not None:
+            description = cot_file[subject_eng]
+        else:
+            description = f"فم بعملية التقييم في مجال {category} \n\n"
+
+        yaml_dict = {
+            "include": base_yaml_name,
+            "task": f"ammlu_{args.task_prefix}_{subject_eng}"
+            if args.task_prefix != ""
+            else f"ammlu_{subject_eng}",
+            "dataset_name": subject_eng,
+            "description": description,
+        }
+
+        file_save_path = args.save_prefix_path + f"_{subject_eng}.yaml"
+        print(f"Saving yaml for subset {subject_eng} to {file_save_path}")
+        with open(file_save_path, "w", encoding="utf-8") as yaml_file:
+            yaml.dump(
+                yaml_dict,
+                yaml_file,
+                width=float("inf"),
+                allow_unicode=True,
+                default_style='"',
+            )
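The per-subject dict that `_generate_configs.py` dumps can be sketched standalone. This minimal sketch (an assumption, not part of the repo: file writing, tqdm, and all but two subjects are omitted) mirrors the task-naming and config logic of the script:

```python
# Minimal sketch of the config dict _generate_configs.py builds per subject.
# Only two example subjects shown; the real script iterates all 57.
SUBJECTS = {
    "astronomy": "ألعلوم وتقنية المعلومات و الرياضيات",
    "virology": "علوم أخرى",
}


def build_config(subject_eng: str, category: str, task_prefix: str = "") -> dict:
    # Task name gains an optional prefix segment, as in the script's ternary.
    task = (
        f"ammlu_{task_prefix}_{subject_eng}" if task_prefix else f"ammlu_{subject_eng}"
    )
    return {
        "include": "_default_template_yaml",
        "task": task,
        "dataset_name": subject_eng,
        "description": f"فم بعملية التقييم في مجال {category} \n\n",
    }


for subject_eng, category in SUBJECTS.items():
    print(build_config(subject_eng, category)["task"])
```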
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_astronomy.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "astronomy"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_astronomy"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_business_ethics.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "business_ethics"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_business_ethics"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_clinical_knowledge.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "clinical_knowledge"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_clinical_knowledge"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_biology.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "college_biology"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_college_biology"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_mathematics.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "college_mathematics"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_college_mathematics"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_college_medicine.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "college_medicine"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_college_medicine"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_conceptual_physics.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "conceptual_physics"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_conceptual_physics"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_electrical_engineering.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "electrical_engineering"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_electrical_engineering"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_formal_logic.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "formal_logic"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_formal_logic"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_global_facts.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "global_facts"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_global_facts"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_biology.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_biology"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_biology"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_chemistry.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_chemistry"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_chemistry"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_computer_science.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_computer_science"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_computer_science"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_european_history.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_european_history"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_european_history"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_macroeconomics.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_macroeconomics"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_macroeconomics"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_psychology.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_psychology"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_psychology"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_statistics.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_statistics"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_statistics"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_high_school_world_history.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "high_school_world_history"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_high_school_world_history"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_human_sexuality.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "human_sexuality"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_human_sexuality"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_international_law.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "international_law"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_international_law"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_jurisprudence.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "jurisprudence"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_jurisprudence"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_machine_learning.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "machine_learning"
+"description": "فم بعملية التقييم في مجال ألعلوم وتقنية المعلومات و الرياضيات \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_machine_learning"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_management.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "management"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_management"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_miscellaneous.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "miscellaneous"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_miscellaneous"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_moral_scenarios.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "moral_scenarios"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_moral_scenarios"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_nutrition.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "nutrition"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_nutrition"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_accounting.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "professional_accounting"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_professional_accounting"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_law.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "professional_law"
+"description": "فم بعملية التقييم في مجال العلوم الانسانية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_professional_law"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_medicine.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "professional_medicine"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_professional_medicine"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_professional_psychology.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "professional_psychology"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_professional_psychology"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_public_relations.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "public_relations"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_public_relations"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_us_foreign_policy.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "us_foreign_policy"
+"description": "فم بعملية التقييم في مجال العلوم الإجتماعية \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_us_foreign_policy"
lm-evaluation-harness/lm_eval/tasks/ammlu/ammlu_virology.yaml
ADDED
@@ -0,0 +1,4 @@
+"dataset_name": "virology"
+"description": "فم بعملية التقييم في مجال علوم أخرى \n\n"
+"include": "_default_template_yaml"
+"task": "ammlu_virology"
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (1.89 kB)
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_common_yaml
ADDED
@@ -0,0 +1,20 @@
+# This file will be included in the generated language-specific task configs.
+# It doesn't have a yaml file extension as it is not meant to be imported directly
+# by the harness.
+group: Cognitive-Lab/Indic-ARC-Challenge
+dataset_path: Cognitive-Lab/Indic-ARC-Challenge
+
+output_type: multiple_choice
+#training_split: train
+#validation_split: validation
+test_split: test
+
+doc_to_target: label
+doc_to_choice: !function utils.doc_to_choice
+
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_hi.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: hi
+include: indic_arc_challenge_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_challenge_hi
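The `doc_to_text` / `doc_to_target` / `doc_to_choice` expressions above index the translated choices by the gold answer key. A plain-Python sketch on a hypothetical row (field names follow the YAML above; the row itself is invented, real rows come from Cognitive-Lab/Indic-ARC-Challenge):

```python
# Hypothetical ARC-style row; real rows come from Cognitive-Lab/Indic-ARC-Challenge.
doc = {
    "translated_question": "प्रश्न?",
    "translated_choices": {"label": ["A", "B", "C", "D"],
                           "text": ["पहला", "दूसरा", "तीसरा", "चौथा"]},
    "answerKey": "C",
}

prompt = f"Question: {doc['translated_question']}\nAnswer:"           # doc_to_text
target = doc["translated_choices"]["label"].index(doc["answerKey"])   # doc_to_target
choices = doc["translated_choices"]["text"]                           # doc_to_choice
print(target)  # -> 2
```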
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/indic_arc_challenge_mr.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: mr
+include: indic_arc_challenge_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_challenge_mr
lm-evaluation-harness/lm_eval/tasks/indic_arc_challenge/utils.py
ADDED
@@ -0,0 +1,136 @@
+from functools import partial
+
+
+def convert_choice(choice):
+    return choice
+
+
+def doc_to_text(doc, connector):
+    # Drop the period
+    conn = connector[doc["question"]]
+    return doc["premise"].strip()[:-1] + f" {conn}"
+
+
+def doc_to_choice(doc):
+    return [convert_choice(doc["choice1"]), convert_choice(doc["choice2"])]
+
+
+doc_to_text_hi = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_mr = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_as = partial(
+    doc_to_text,
+    connector={
+        "cause": "কাৰণ",
+        "effect": "প্ৰভাৱ",
+    },
+)
+
+doc_to_text_bn = partial(
+    doc_to_text,
+    connector={
+        "cause": "কারণ",
+        "effect": "প্রভাব",
+    },
+)
+
+doc_to_text_gu = partial(
+    doc_to_text,
+    connector={
+        "cause": "કારણ",
+        "effect": "અસર",
+    },
+)
+
+doc_to_text_kn = partial(
+    doc_to_text,
+    connector={
+        "cause": "ಕಾರಣ",
+        "effect": "ಪರಿಣಾಮ",
+    },
+)
+
+doc_to_text_mai = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "प्रभाव",
+    },
+)
+
+doc_to_text_ml = partial(
+    doc_to_text,
+    connector={
+        "cause": "കാരണമാകുന്നു",
+        "effect": "ഫലം",
+    },
+)
+
+doc_to_text_ne = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "असर",
+    },
+)
+
+doc_to_text_or = partial(
+    doc_to_text,
+    connector={
+        "cause": "କାରଣ",
+        "effect": "ପ୍ରଭାବ",
+    },
+)
+
+doc_to_text_sa = partial(
+    doc_to_text,
+    connector={
+        "cause": "निमित्तम्",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_sd = partial(
+    doc_to_text,
+    connector={
+        "cause": "سبب",
+        "effect": "اثر",
+    },
+)
+
+doc_to_text_ta = partial(
+    doc_to_text,
+    connector={
+        "cause": "காரணம்",
+        "effect": "விளைவு",
+    },
+)
+
+doc_to_text_te = partial(
+    doc_to_text,
+    connector={
+        "cause": "కారణం",
+        "effect": "ప్రభావం",
+    },
+)
+
+doc_to_text_ur = partial(
+    doc_to_text,
+    connector={
+        "cause": "وجہ",
+        "effect": "اثر",
+    },
+)
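`utils.py` above builds language-specific prompt functions with `functools.partial`, each binding a connector word for "cause"/"effect". A self-contained sketch of the Hindi variant (the example document is an invented COPA-style row; the connector mapping is copied from the file's Hindi entry):

```python
from functools import partial


def doc_to_text(doc, connector):
    # Drop the trailing period from the premise, then append the
    # connector word ("cause"/"effect") in the target language.
    conn = connector[doc["question"]]
    return doc["premise"].strip()[:-1] + f" {conn}"


# Same connector mapping as doc_to_text_hi in utils.py.
doc_to_text_hi = partial(doc_to_text, connector={"cause": "कारण", "effect": "परिणाम"})

# Hypothetical COPA-style document (premise/question values are invented).
doc = {"premise": "The ground was wet.", "question": "cause"}
print(doc_to_text_hi(doc))  # -> The ground was wet कारण
```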
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Antigua
ADDED
Binary file (246 Bytes)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Atikokan
ADDED
Binary file (182 Bytes)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Belize
ADDED
Binary file (1.61 kB)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Chicago
ADDED
Binary file (3.59 kB)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Guadeloupe
ADDED
Binary file (246 Bytes)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Iqaluit
ADDED
Binary file (2.2 kB)
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/La_Paz
ADDED
Binary file (218 Bytes)