applied-ai-018 committed
Commit 89c6ce0 · verified · 1 Parent(s): ed614f4

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. ckpts/universal/global_step20/zero/6.post_attention_layernorm.weight/fp32.pt +3 -0
  2. ckpts/universal/global_step20/zero/9.mlp.dense_h_to_4h_swiglu.weight/exp_avg.pt +3 -0
  3. ckpts/universal/global_step20/zero/9.mlp.dense_h_to_4h_swiglu.weight/fp32.pt +3 -0
  4. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/entailed_polarity_hindi.yaml +4 -0
  5. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/intersect_geometry.yaml +4 -0
  6. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/linguistics_puzzles.yaml +4 -0
  7. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml +4 -0
  8. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/persian_idioms.yaml +4 -0
  9. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/semantic_parsing_in_context_sparc.yaml +4 -0
  10. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/snarks.yaml +4 -0
  11. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/sports_understanding.yaml +4 -0
  12. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/tense.yaml +4 -0
  13. lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/unnatural_in_context_learning.yaml +4 -0
  14. lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_hy.yaml +7 -0
  15. lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ne.yaml +7 -0
  16. lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ro.yaml +7 -0
  17. lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_sv.yaml +7 -0
  18. lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_te.yaml +7 -0
  19. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_gu_mc1.yaml +7 -0
  20. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_kn_mc1.yaml +7 -0
  21. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_mr_mc2.yaml +7 -0
  22. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ne_mc2.yaml +7 -0
  23. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ro_mc1.yaml +7 -0
  24. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ro_mc2.yaml +7 -0
  25. lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_sr_mc1.yaml +7 -0
  26. lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md +53 -0
  27. lm-evaluation-harness/lm_eval/tasks/wmt2016/metrics.py +11 -0
  28. lm-evaluation-harness/lm_eval/tasks/wmt2016/ro_en-t5_prompt.yaml +19 -0
  29. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Adak +0 -0
  30. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Buenos_Aires +0 -0
  31. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Catamarca +0 -0
  32. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/ComodRivadavia +0 -0
  33. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Cordoba +0 -0
  34. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Jujuy +0 -0
  35. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/La_Rioja +0 -0
  36. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Mendoza +0 -0
  37. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Rio_Gallegos +0 -0
  38. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Salta +0 -0
  39. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/San_Juan +0 -0
  40. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/San_Luis +0 -0
  41. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Tucuman +0 -0
  42. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Ushuaia +0 -0
  43. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Aruba +0 -0
  44. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Asuncion +0 -0
  45. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Cambridge_Bay +0 -0
  46. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Chihuahua +0 -0
  47. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Coral_Harbour +0 -0
  48. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Dawson +0 -0
  49. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Dawson_Creek +0 -0
  50. venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Eirunepe +0 -0
ckpts/universal/global_step20/zero/6.post_attention_layernorm.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ac7803d669b9c4634f585d570defb7a542c7874c9e81a7e3fd793416259b8b1
+ size 9293
ckpts/universal/global_step20/zero/9.mlp.dense_h_to_4h_swiglu.weight/exp_avg.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37478256103ccf76244f7e7e39d76a85a93b98b99549d28c2843b643f8ccf2dc
+ size 33555612
ckpts/universal/global_step20/zero/9.mlp.dense_h_to_4h_swiglu.weight/fp32.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d8c9bdf33366537e7262ccee7ad1519fc0e8e4e53e9d4c108e20e79fde0a6ce4
+ size 33555533
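The three checkpoint files above are stored with Git LFS, so what the commit actually adds is a three-line pointer (version, oid, size) per tensor rather than the tensor data itself. As a minimal sketch of what such a pointer encodes (the parser helper and the chosen path are illustrative, not part of this commit):

```python
# Hypothetical helper: read a Git LFS pointer file into a dict of its
# "key value" lines (version, oid, size).
def parse_lfs_pointer(path: str) -> dict:
    fields = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                key, _, value = line.partition(" ")
                fields[key] = value
    return fields

ptr = parse_lfs_pointer(
    "ckpts/universal/global_step20/zero/6.post_attention_layernorm.weight/fp32.pt"
)
print(ptr["oid"], ptr["size"])  # sha256:3ac78..., 9293 for the pointer above
```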
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/entailed_polarity_hindi.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: entailed_polarity_hindi_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_entailed_polarity_hindi_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/intersect_geometry.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: intersect_geometry_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_intersect_geometry_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/linguistics_puzzles.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: linguistics_puzzles_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_linguistics_puzzles_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: mult_data_wrangling_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_mult_data_wrangling_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/persian_idioms.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: persian_idioms_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_persian_idioms_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/semantic_parsing_in_context_sparc.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: semantic_parsing_in_context_sparc_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_semantic_parsing_in_context_sparc_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/snarks.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: snarks_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_snarks_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/sports_understanding.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: sports_understanding_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_sports_understanding_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/tense.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: tense_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_tense_generate_until
lm-evaluation-harness/lm_eval/tasks/bigbench/generate_until/unnatural_in_context_learning.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: unnatural_in_context_learning_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_unnatural_in_context_learning_generate_until
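Every BIG-bench stub above follows the same four-line pattern: a "# Generated by utils.py" marker, the zero-shot dataset config, an include of the shared ../generate_until_template_yaml, and the registered task id. A hedged sketch of a generator that would emit such stubs (the commit does not show the actual utils.py, so the structure here is an assumption):

```python
# Hypothetical generator mirroring the bigbench YAML stubs in this commit.
subtasks = ["snarks", "tense", "sports_understanding"]  # illustrative subset

for name in subtasks:
    with open(f"{name}.yaml", "w", encoding="utf-8") as f:
        f.write("# Generated by utils.py\n")
        f.write(f"dataset_name: {name}_zero_shot\n")
        f.write("include: ../generate_until_template_yaml\n")
        f.write(f"task: bigbench_{name}_generate_until\n")
```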
lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_hy.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _arc_yaml
+ task: arc_hy
+ dataset_path: alexandrainst/m_arc
+ dataset_name: hy
+ training_split: train
+ validation_split: validation
+ test_split: test
lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ne.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _arc_yaml
+ task: arc_ne
+ dataset_path: alexandrainst/m_arc
+ dataset_name: ne
+ training_split: train
+ validation_split: validation
+ test_split: test
lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_ro.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _arc_yaml
+ task: arc_ro
+ dataset_path: alexandrainst/m_arc
+ dataset_name: ro
+ training_split: train
+ validation_split: validation
+ test_split: test
lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_sv.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _arc_yaml
+ task: arc_sv
+ dataset_path: alexandrainst/m_arc
+ dataset_name: sv
+ training_split: train
+ validation_split: validation
+ test_split: test
lm-evaluation-harness/lm_eval/tasks/okapi/arc_multilingual/arc_te.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _arc_yaml
+ task: arc_te
+ dataset_path: alexandrainst/m_arc
+ dataset_name: te
+ training_split: train
+ validation_split: validation
+ test_split: test
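The arc_* task files differ only in the language code passed as dataset_name; the shared settings live in _arc_yaml. Outside the harness, the same dataset and split layout can be inspected directly with the Hugging Face datasets library (a sketch; assumes network access to the hub path above):

```python
from datasets import load_dataset

# Load the Armenian ARC config referenced by arc_hy.yaml; the
# train/validation/test splits match the *_split keys in the task file.
ds = load_dataset("alexandrainst/m_arc", "hy")
print({split: len(rows) for split, rows in ds.items()})
print(ds["train"][0])  # one multiple-choice question
```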
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_gu_mc1.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc1_yaml
+ task: truthfulqa_gu_mc1
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: gu
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_kn_mc1.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc1_yaml
+ task: truthfulqa_kn_mc1
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: kn
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_mr_mc2.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc2_yaml
+ task: truthfulqa_mr_mc2
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: mr
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ne_mc2.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc2_yaml
+ task: truthfulqa_ne_mc2
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: ne
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ro_mc1.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc1_yaml
+ task: truthfulqa_ro_mc1
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: ro
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_ro_mc2.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc2_yaml
+ task: truthfulqa_ro_mc2
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: ro
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/okapi/truthfulqa_multilingual/truthfulqa_sr_mc1.yaml ADDED
@@ -0,0 +1,7 @@
+ include: _truthfulqa_mc1_yaml
+ task: truthfulqa_sr_mc1
+ dataset_path: alexandrainst/m_truthfulqa
+ dataset_name: sr
+ training_split: null
+ validation_split: val
+ test_split: null
lm-evaluation-harness/lm_eval/tasks/wmt2016/README.md ADDED
@@ -0,0 +1,53 @@
+ # WMT16
+
+ ### Paper
+
+ Title: `Findings of the 2016 Conference on Machine Translation`
+ Abstract: http://www.aclweb.org/anthology/W/W16/W16-2301
+
+ Homepage: https://huggingface.co/datasets/wmt16
+
+ ### Citation
+
+ ```
+ @InProceedings{bojar-EtAl:2016:WMT1,
+   author = {Bojar, Ond\v{r}ej and Chatterjee, Rajen and Federmann, Christian and Graham, Yvette and Haddow, Barry and Huck, Matthias and Jimeno Yepes, Antonio and Koehn, Philipp and Logacheva, Varvara and Monz, Christof and Negri, Matteo and Neveol, Aurelie and Neves, Mariana and Popel, Martin and Post, Matt and Rubino, Raphael and Scarton, Carolina and Specia, Lucia and Turchi, Marco and Verspoor, Karin and Zampieri, Marcos},
+   title = {Findings of the 2016 Conference on Machine Translation},
+   booktitle = {Proceedings of the First Conference on Machine Translation},
+   month = {August},
+   year = {2016},
+   address = {Berlin, Germany},
+   publisher = {Association for Computational Linguistics},
+   pages = {131--198},
+   url = {http://www.aclweb.org/anthology/W/W16/W16-2301}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * `wmt-t5-prompt`: Group for all wmt tasks with prompt templates used for T5 (`Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer`)
+
+ #### Tasks
+
+ With specific prompt styles:
+ * `wmt-ro-en-t5-prompt`: WMT16 ro-en with the prompt template used for T5
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+ * [ ] Have you referenced the original paper that introduced the task?
+ * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
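Once registered, the group and task above run like any other harness task. A hedged sketch using the harness's Python entry point (the model and its arguments are placeholders, not part of this commit):

```python
import lm_eval

# Sketch: evaluate a placeholder HF model on the T5-prompted WMT16 task.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-70m",  # placeholder model
    tasks=["wmt-ro-en-t5-prompt"],
)
print(results["results"])
```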
lm-evaluation-harness/lm_eval/tasks/wmt2016/metrics.py ADDED
@@ -0,0 +1,11 @@
+ import evaluate
+
+
+ def bleu(predictions, references):
+     return (predictions[0], references[0])
+
+
+ def agg_bleu(items):
+     bleu_fn = evaluate.load("bleu")
+     predictions, references = zip(*items)
+     return bleu_fn.compute(predictions=predictions, references=references)["bleu"]
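metrics.py splits BLEU into a per-example pass and a corpus-level aggregation: bleu just forwards each (prediction, reference) pair unchanged, and agg_bleu scores all the pairs together, since corpus BLEU is not an average of per-sentence scores. A usage sketch with made-up strings showing the same two-stage flow:

```python
import evaluate

# Made-up (prediction, reference) pairs in the shape metrics.bleu yields.
items = [
    ("the cat sat on the mat", "the cat sat on the mat"),
    ("hello world", "hello there world"),
]

bleu_fn = evaluate.load("bleu")
predictions, references = zip(*items)
score = bleu_fn.compute(predictions=list(predictions), references=list(references))
print(score["bleu"])  # corpus-level BLEU over both pairs
```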
lm-evaluation-harness/lm_eval/tasks/wmt2016/ro_en-t5_prompt.yaml ADDED
@@ -0,0 +1,19 @@
+ group:
+ - wmt-t5-prompt
+ task: wmt-ro-en-t5-prompt
+ dataset_path: wmt16
+ dataset_name: ro-en
+ training_split: train
+ validation_split: validation
+ output_type: generate_until
+ doc_to_text: "translate English to Romanian: {{translation.en}}"
+ doc_to_target: "{{translation.ro}}"
+ metric_list:
+ - metric: wer
+   aggregation: mean
+   higher_is_better: false
+ - metric: !function metrics.bleu
+   aggregation: !function metrics.agg_bleu
+   higher_is_better: true
+ metadata:
+   version: 1.0
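In the config above, doc_to_text and doc_to_target are Jinja2 templates over the nested translation field of each WMT16 document, and the !function tags resolve to the two helpers in metrics.py. How the prompt expands for one document (the sample document is made up):

```python
from jinja2 import Template

# Made-up wmt16 ro-en document with the nested "translation" field.
doc = {"translation": {"en": "The book is on the table.",
                       "ro": "Cartea este pe masă."}}

print(Template("translate English to Romanian: {{translation.en}}").render(**doc))
print(Template("{{translation.ro}}").render(**doc))
```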
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Adak ADDED
Binary file (2.36 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Buenos_Aires ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Catamarca ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/ComodRivadavia ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Cordoba ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Jujuy ADDED
Binary file (1.03 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/La_Rioja ADDED
Binary file (1.08 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Mendoza ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Rio_Gallegos ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Salta ADDED
Binary file (1.03 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/San_Juan ADDED
Binary file (1.08 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/San_Luis ADDED
Binary file (1.09 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Tucuman ADDED
Binary file (1.09 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Argentina/Ushuaia ADDED
Binary file (1.06 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Aruba ADDED
Binary file (246 Bytes).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Asuncion ADDED
Binary file (2.03 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Cambridge_Bay ADDED
Binary file (2.25 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Chihuahua ADDED
Binary file (1.1 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Coral_Harbour ADDED
Binary file (182 Bytes).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Dawson ADDED
Binary file (1.61 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Dawson_Creek ADDED
Binary file (1.05 kB).
venv/lib/python3.10/site-packages/pytz/zoneinfo/America/Eirunepe ADDED
Binary file (642 Bytes).