applied-ai-018 committed on
Commit d6ae8bf · verified · 1 Parent(s): a54a4db

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See raw diff.
Files changed (50)
  1. lm-evaluation/lm_eval/tasks/asdiv/README.md +56 -0
  2. lm-evaluation/lm_eval/tasks/asdiv/default.yaml +16 -0
  3. lm-evaluation/lm_eval/tasks/bigbench/generate_until/abstract_narrative_understanding.yaml +4 -0
  4. lm-evaluation/lm_eval/tasks/bigbench/generate_until/analogical_similarity.yaml +4 -0
  5. lm-evaluation/lm_eval/tasks/bigbench/generate_until/checkmate_in_one.yaml +4 -0
  6. lm-evaluation/lm_eval/tasks/bigbench/generate_until/chess_state_tracking.yaml +4 -0
  7. lm-evaluation/lm_eval/tasks/bigbench/generate_until/chinese_remainder_theorem.yaml +4 -0
  8. lm-evaluation/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml +4 -0
  9. lm-evaluation/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml +4 -0
  10. lm-evaluation/lm_eval/tasks/bigbench/generate_until/entailed_polarity.yaml +4 -0
  11. lm-evaluation/lm_eval/tasks/bigbench/generate_until/general_knowledge.yaml +4 -0
  12. lm-evaluation/lm_eval/tasks/bigbench/generate_until/geometric_shapes.yaml +4 -0
  13. lm-evaluation/lm_eval/tasks/bigbench/generate_until/hinglish_toxicity.yaml +4 -0
  14. lm-evaluation/lm_eval/tasks/bigbench/generate_until/implicatures.yaml +4 -0
  15. lm-evaluation/lm_eval/tasks/bigbench/generate_until/implicit_relations.yaml +4 -0
  16. lm-evaluation/lm_eval/tasks/bigbench/generate_until/kanji_ascii.yaml +4 -0
  17. lm-evaluation/lm_eval/tasks/bigbench/generate_until/known_unknowns.yaml +4 -0
  18. lm-evaluation/lm_eval/tasks/bigbench/generate_until/language_games.yaml +4 -0
  19. lm-evaluation/lm_eval/tasks/bigbench/generate_until/linguistics_puzzles.yaml +4 -0
  20. lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_args.yaml +4 -0
  21. lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_deduction.yaml +4 -0
  22. lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_fallacy_detection.yaml +4 -0
  23. lm-evaluation/lm_eval/tasks/bigbench/generate_until/metaphor_boolean.yaml +4 -0
  24. lm-evaluation/lm_eval/tasks/bigbench/generate_until/minute_mysteries_qa.yaml +4 -0
  25. lm-evaluation/lm_eval/tasks/bigbench/generate_until/misconceptions.yaml +4 -0
  26. lm-evaluation/lm_eval/tasks/bigbench/generate_until/mnist_ascii.yaml +4 -0
  27. lm-evaluation/lm_eval/tasks/bigbench/generate_until/movie_dialog_same_or_different.yaml +4 -0
  28. lm-evaluation/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml +4 -0
  29. lm-evaluation/lm_eval/tasks/bigbench/generate_until/multiemo.yaml +4 -0
  30. lm-evaluation/lm_eval/tasks/bigbench/generate_until/object_counting.yaml +4 -0
  31. lm-evaluation/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml +4 -0
  32. lm-evaluation/lm_eval/tasks/bigbench/generate_until/physical_intuition.yaml +4 -0
  33. lm-evaluation/lm_eval/tasks/bigbench/generate_until/qa_wikidata.yaml +4 -0
  34. lm-evaluation/lm_eval/tasks/bigbench/generate_until/real_or_fake_text.yaml +4 -0
  35. lm-evaluation/lm_eval/tasks/bigbench/generate_until/salient_translation_error_detection.yaml +4 -0
  36. lm-evaluation/lm_eval/tasks/bigbench/generate_until/similarities_abstraction.yaml +4 -0
  37. lm-evaluation/lm_eval/tasks/bigbench/generate_until/simp_turing_concept.yaml +4 -0
  38. lm-evaluation/lm_eval/tasks/bigbench/generate_until/simple_ethical_questions.yaml +4 -0
  39. lm-evaluation/lm_eval/tasks/bigbench/generate_until/simple_text_editing.yaml +4 -0
  40. lm-evaluation/lm_eval/tasks/bigbench/generate_until/sufficient_information.yaml +4 -0
  41. lm-evaluation/lm_eval/tasks/bigbench/generate_until/swahili_english_proverbs.yaml +4 -0
  42. lm-evaluation/lm_eval/tasks/bigbench/generate_until/timedial.yaml +4 -0
  43. lm-evaluation/lm_eval/tasks/bigbench/generate_until/topical_chat.yaml +4 -0
  44. lm-evaluation/lm_eval/tasks/bigbench/generate_until/understanding_fables.yaml +4 -0
  45. lm-evaluation/lm_eval/tasks/bigbench/generate_until/unit_conversion.yaml +4 -0
  46. lm-evaluation/lm_eval/tasks/bigbench/generate_until/unnatural_in_context_learning.yaml +4 -0
  47. lm-evaluation/lm_eval/tasks/bigbench/generate_until/winowhy.yaml +4 -0
  48. lm-evaluation/lm_eval/tasks/model_written_evals/persona/being-helpful-to-subtly-achieve-goals-against-human-values.yaml +4 -0
  49. lm-evaluation/lm_eval/tasks/model_written_evals/persona/believes-it-is-not-being-watched-by-humans.yaml +4 -0
  50. lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-for-independence-from-human-oversight.yaml +4 -0
lm-evaluation/lm_eval/tasks/asdiv/README.md ADDED
@@ -0,0 +1,56 @@
+ # ASDiv
+
+ ### Paper
+
+ Title: `ASDiv: A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers`
+
+ Abstract: https://arxiv.org/abs/2106.15772
+
+ ASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language
+ patterns and problem types) English math word problem (MWP) corpus for evaluating
+ the capability of various MWP solvers. Existing MWP corpora for studying AI progress
+ remain limited either in language usage patterns or in problem types. We thus present
+ a new English MWP corpus with 2,305 MWPs that cover more text patterns and most problem
+ types taught in elementary school. Each MWP is annotated with its problem type and grade
+ level (for indicating the level of difficulty).
+
+ NOTE: We currently ignore formulas for answer generation.
+
+ Homepage: https://github.com/chaochun/nlu-asdiv-dataset
+
+
+ ### Citation
+
+ ```
+ @misc{miao2021diverse,
+     title={A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers},
+     author={Shen-Yun Miao and Chao-Chun Liang and Keh-Yih Su},
+     year={2021},
+     eprint={2106.15772},
+     archivePrefix={arXiv},
+     primaryClass={cs.AI}
+ }
+ ```
+
+ ### Groups and Tasks
+
+ #### Groups
+
+ * Not part of a group yet.
+
+ #### Tasks
+
+ * `asdiv`
+
+ ### Checklist
+
+ For adding novel benchmarks/datasets to the library:
+ * [ ] Is the task an existing benchmark in the literature?
+   * [ ] Have you referenced the original paper that introduced the task?
+   * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
+
+
+ If other tasks on this dataset are already supported:
+ * [ ] Is the "Main" variant of this task clearly denoted?
+ * [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
+ * [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
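Note: with this README's `asdiv` task registered, a minimal sketch of invoking it through the harness's Python API might look like the following. This is illustrative only, not part of the commit: it assumes a recent lm-evaluation-harness version that exports `simple_evaluate`, and the model checkpoint is an arbitrary placeholder.

```python
# Illustrative sketch, not part of this commit: evaluate the `asdiv` task.
# Assumes lm-evaluation-harness v0.4+ (exports `simple_evaluate`); the
# pretrained checkpoint below is a placeholder chosen only for the example.
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["asdiv"],
)
print(results["results"]["asdiv"])  # per-metric scores, e.g. `acc`
```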
lm-evaluation/lm_eval/tasks/asdiv/default.yaml ADDED
@@ -0,0 +1,16 @@
+ task: asdiv
+ dataset_path: EleutherAI/asdiv
+ output_type: loglikelihood
+ validation_split: validation
+ doc_to_text: "{{body}}\nQuestion:{{question}}\nAnswer:"
+ doc_to_target: "{{answer.split(' (')[0]}}"
+ should_decontaminate: true
+ doc_to_decontamination_query: "{{body}} {{question}}"
+ metric_list:
+   - metric: acc
+     aggregation: mean
+     higher_is_better: true
+ metadata:
+   version: 1.0
+ dataset_kwargs:
+   trust_remote_code: true
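The `doc_to_target` template above strips a parenthesized suffix from the raw `answer` field before scoring. A minimal sketch of what that Jinja expression computes, using a hypothetical answer string of the shape the template expects:

```python
# Minimal sketch of doc_to_target: "{{answer.split(' (')[0]}}".
# The sample string is hypothetical; the template keeps only the part
# of the answer before the first " (" (i.e., drops a unit annotation).
answer = "5 (pens)"              # hypothetical raw `answer` field value
target = answer.split(" (")[0]   # same logic as the Jinja expression
print(target)                    # -> "5"
```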
lm-evaluation/lm_eval/tasks/bigbench/generate_until/abstract_narrative_understanding.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: abstract_narrative_understanding_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_abstract_narrative_understanding_generate_until
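All 47 bigbench configs in this commit repeat this three-key pattern, differing only in the subtask name, and each is stamped `# Generated by utils.py`. The generator itself is not part of this diff; a hypothetical sketch of what such a script could look like, given only the pattern visible above:

```python
# Hypothetical sketch of a config generator reproducing the visible
# pattern; the actual utils.py is not in this diff and may differ.
import yaml

SUBTASKS = [
    "abstract_narrative_understanding",
    "analogical_similarity",
    # ... the remaining BIG-bench subtask names
]

for name in SUBTASKS:
    config = {
        "dataset_name": f"{name}_zero_shot",
        "include": "../generate_until_template_yaml",
        "task": f"bigbench_{name}_generate_until",
    }
    with open(f"{name}.yaml", "w") as f:
        f.write("# Generated by utils.py\n")
        yaml.dump(config, f, default_flow_style=False)
```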
lm-evaluation/lm_eval/tasks/bigbench/generate_until/analogical_similarity.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: analogical_similarity_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_analogical_similarity_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/checkmate_in_one.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: checkmate_in_one_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_checkmate_in_one_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/chess_state_tracking.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: chess_state_tracking_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_chess_state_tracking_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/chinese_remainder_theorem.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: chinese_remainder_theorem_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_chinese_remainder_theorem_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/crash_blossom.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: crash_blossom_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_crash_blossom_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/crass_ai.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: crass_ai_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_crass_ai_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/entailed_polarity.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: entailed_polarity_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_entailed_polarity_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/general_knowledge.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: general_knowledge_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_general_knowledge_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/geometric_shapes.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: geometric_shapes_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_geometric_shapes_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/hinglish_toxicity.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: hinglish_toxicity_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_hinglish_toxicity_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/implicatures.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: implicatures_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_implicatures_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/implicit_relations.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: implicit_relations_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_implicit_relations_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/kanji_ascii.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: kanji_ascii_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_kanji_ascii_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/known_unknowns.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: known_unknowns_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_known_unknowns_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/language_games.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: language_games_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_language_games_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/linguistics_puzzles.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: linguistics_puzzles_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_linguistics_puzzles_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_args.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: logical_args_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_logical_args_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_deduction.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: logical_deduction_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_logical_deduction_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/logical_fallacy_detection.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: logical_fallacy_detection_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_logical_fallacy_detection_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/metaphor_boolean.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: metaphor_boolean_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_metaphor_boolean_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/minute_mysteries_qa.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: minute_mysteries_qa_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_minute_mysteries_qa_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/misconceptions.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: misconceptions_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_misconceptions_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/mnist_ascii.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: mnist_ascii_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_mnist_ascii_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/movie_dialog_same_or_different.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: movie_dialog_same_or_different_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_movie_dialog_same_or_different_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/mult_data_wrangling.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: mult_data_wrangling_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_mult_data_wrangling_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/multiemo.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: multiemo_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_multiemo_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/object_counting.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: object_counting_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_object_counting_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/paragraph_segmentation.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: paragraph_segmentation_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_paragraph_segmentation_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/physical_intuition.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: physical_intuition_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_physical_intuition_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/qa_wikidata.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: qa_wikidata_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_qa_wikidata_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/real_or_fake_text.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: real_or_fake_text_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_real_or_fake_text_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/salient_translation_error_detection.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: salient_translation_error_detection_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_salient_translation_error_detection_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/similarities_abstraction.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: similarities_abstraction_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_similarities_abstraction_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/simp_turing_concept.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: simp_turing_concept_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_simp_turing_concept_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/simple_ethical_questions.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: simple_ethical_questions_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_simple_ethical_questions_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/simple_text_editing.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: simple_text_editing_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_simple_text_editing_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/sufficient_information.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: sufficient_information_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_sufficient_information_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/swahili_english_proverbs.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: swahili_english_proverbs_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_swahili_english_proverbs_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/timedial.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: timedial_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_timedial_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/topical_chat.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: topical_chat_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_topical_chat_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/understanding_fables.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: understanding_fables_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_understanding_fables_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/unit_conversion.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: unit_conversion_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_unit_conversion_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/unnatural_in_context_learning.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: unnatural_in_context_learning_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_unnatural_in_context_learning_generate_until
lm-evaluation/lm_eval/tasks/bigbench/generate_until/winowhy.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by utils.py
+ dataset_name: winowhy_zero_shot
+ include: ../generate_until_template_yaml
+ task: bigbench_winowhy_generate_until
lm-evaluation/lm_eval/tasks/model_written_evals/persona/being-helpful-to-subtly-achieve-goals-against-human-values.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by _generate_configs.py
+ dataset_name: being-helpful-to-subtly-achieve-goals-against-human-values
+ include: _template_yaml
+ task: persona_being-helpful-to-subtly-achieve-goals-against-human-values
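Like the bigbench configs, the persona configs defer their shared keys to a template via `include:`, here a sibling `_template_yaml`. As a hedged sketch of how such a relative include is typically resolved (load the referenced template, then let the child's keys override); the harness's actual loader may differ in details:

```python
# Hedged sketch of `include:` resolution for task configs: read the
# referenced template relative to this file's directory, then overlay
# the child's keys. The harness's real loader may differ in details.
import os
import yaml

def load_task_config(path: str) -> dict:
    with open(path) as f:
        cfg = yaml.safe_load(f) or {}
    template = cfg.pop("include", None)
    if template:
        base = load_task_config(os.path.join(os.path.dirname(path), template))
        base.update(cfg)  # child keys take precedence over template keys
        cfg = base
    return cfg
```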
lm-evaluation/lm_eval/tasks/model_written_evals/persona/believes-it-is-not-being-watched-by-humans.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by _generate_configs.py
+ dataset_name: believes-it-is-not-being-watched-by-humans
+ include: _template_yaml
+ task: persona_believes-it-is-not-being-watched-by-humans
lm-evaluation/lm_eval/tasks/model_written_evals/persona/desire-for-independence-from-human-oversight.yaml ADDED
@@ -0,0 +1,4 @@
+ # Generated by _generate_configs.py
+ dataset_name: desire-for-independence-from-human-oversight
+ include: _template_yaml
+ task: persona_desire-for-independence-from-human-oversight