diff --git a/lm-evaluation-harness/docs/img/fewshot_example_gpt3.png b/lm-evaluation-harness/docs/img/fewshot_example_gpt3.png new file mode 100644 index 0000000000000000000000000000000000000000..aee409d156468b2324b84b8f77034d54cf9e554b --- /dev/null +++ b/lm-evaluation-harness/docs/img/fewshot_example_gpt3.png @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:6af5dc2196248b29260ba443e882725dd6cfc51ef17ad5a4dbab4f8ce6850c75 +size 315681 diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md b/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md new file mode 100644 index 0000000000000000000000000000000000000000..8b2a22edd49172897a42afcfe3b64974204618ca --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md @@ -0,0 +1,94 @@ +# FrenchBench + +### Paper + +FrenchBench is a benchmark for evaluating French language models, introduced in the paper +[CroissantLLM: A Truly Bilingual French-English Language Model](https://arxiv.org/abs/2402.00786). +It is a collection of tasks that evaluate the ability of a language model to understand and generate French text. +This benchmark is constructed from both openly available datasets and newly released, manually annotated data. + +### Citation + +```bibtex +@misc{faysse2024croissantllm, + title={CroissantLLM: A Truly Bilingual French-English Language Model}, + author={Manuel Faysse and Patrick Fernandes and Nuno M. Guerreiro and António Loison and Duarte M. Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro H. Martins and Antoni Bigata Casademunt and François Yvon and André F. T. Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo}, + year={2024}, + eprint={2402.00786}, + archivePrefix={arXiv}, + primaryClass={cs.CL} +} +``` + +### Groups and Tasks + +#### Groups + +- `french_bench`: All tasks (non-perplexity based) +- `french_bench_gen`: All official generative tasks +- `french_bench_mc`: All official multiple choice tasks +- `french_bench_perplexity`: All perplexity-based tasks (0-shot is recommended) +- `french_bench_extra`: All extra tasks + +#### Tasks + + +The following tasks evaluate models on the FrenchBench datasets using various scoring methods.
 - french_bench_boolqa + - french_bench_fquadv2 + - french_bench_fquadv2_bool + - french_bench_fquadv2_genq + - french_bench_fquadv2_hasAns + - french_bench_topic_based_nli + - french_bench_multifquad + - french_bench_grammar + - french_bench_vocab + - french_bench_reading_comp + - french_bench_xnli (modified XNLI) + - french_bench_orangesum_abstract + - french_bench_orangesum_title + - french_bench_trivia + - french_bench_hellaswag + - french_bench_arc_challenge + +FrenchBench also includes tasks drawn from other benchmarks: +- `belebele_fra_Latn`: Belebele French +- `wmt14-en-fr`: WMT14 English-French +- `wmt14-fr-en`: WMT14 French-English + +Not recommended for few-shot evaluation: +- `crows_pairs_french`: CrowS-Pairs French +- `french_bench_opus_perplexity`: Opus Perplexity + + +### Usage + +```bash +# openai +lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench --limit 100 --num_fewshot 3 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench_3shot.json +lm_eval --model openai-completions --model_args engine=text-davinci-003 --tasks french_bench_opus_perplexity,crows_pairs_french --limit 100 --batch_size auto --output_path data/french_bench/davinci-003/results_french_bench2_0shot.json + + +lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 8 --output_path data/french_bench/gpt2/results_french_bench_3shot.json +lm_eval --model hf --model_args pretrained=gpt2 --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/gpt2/results_french_bench2_0shot.json + +lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench --device cuda:0 --limit 100 --num_fewshot 3 --batch_size 4 --output_path data/french_bench/llama-2-7b-hf/results_french_bench_3shot.json +lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf --tasks french_bench_opus_perplexity,crows_pairs_french --device cuda:0 --limit 100 --batch_size auto --output_path data/french_bench/llama-2-7b-hf/results_french_bench2_0shot.json +``` + +HF and Accelerate options can be added when loading a model: +```bash + accelerate launch -m lm_eval --model hf --model_args pretrained=meta-llama/Llama-2-7b-hf,dtype="float16" --tasks french_bench +``` + +A minimal Python API sketch is provided after the checklist below. + +### Checklist + +* [x] Is the task an existing benchmark in the literature? + * [x] Have you referenced the original paper that introduced the task? + * [x] If yes, does the original paper provide a reference implementation? + * [x] Yes, original implementation contributed by author of the benchmark + +If other tasks on this dataset are already supported: +* [x] Is the "Main" variant of this task clearly denoted? +* [x] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [x] Have you noted which, if any, published evaluation setups are matched by this variant?
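Beyond the CLI commands above, the benchmark can also be driven from Python. The sketch below assumes the harness exposes `lm_eval.simple_evaluate` (as in recent 0.4.x releases); argument names may differ between versions, so treat it as illustrative rather than canonical.

```python
# Minimal sketch of a programmatic FrenchBench run.
# Assumes lm_eval.simple_evaluate is available (lm-evaluation-harness >= 0.4);
# adjust argument names to match the installed version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=gpt2",
    tasks=["french_bench_fquadv2", "french_bench_xnli"],
    num_fewshot=3,
    batch_size=8,
    limit=100,  # small subset, useful as a quick smoke test
)

# Per-task metrics (exact match, f1, acc, ...) are keyed by task name.
for task_name, metrics in results["results"].items():
    print(task_name, metrics)
```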
diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/_default_template_yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/_default_template_yaml new file mode 100644 index 0000000000000000000000000000000000000000..ae3bfd1fc8d2974288922e55a7ec5d55054a90d4 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/_default_template_yaml @@ -0,0 +1,4 @@ +test_split: test +fewshot_split: valid +fewshot_config: + sampler: first_n diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_arc_challenge.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_arc_challenge.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a77d5163ead0915243b091e68ce1e06801a41d03 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_arc_challenge.yaml @@ -0,0 +1,21 @@ +group: + - french_bench + - french_bench_mc +task: french_bench_arc_challenge +dataset_path: manu/french_bench_arc_challenge +output_type: multiple_choice +training_split: train +validation_split: validation +test_split: test +doc_to_text: "Question: {{question}}\nRéponse:" +doc_to_target: "{{['A', 'B', 'C', 'D'].index(answerKey)}}" +doc_to_choice: "{{choices}}" +should_decontaminate: true +doc_to_decontamination_query: "Question: {{question}}\nRéponse:" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_boolqa.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_boolqa.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ed67265d4351ce5f2e08271f87edbf674950baa1 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_boolqa.yaml @@ -0,0 +1,23 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "D'après l'information dans le contexte donné, quelle est la réponse à la question ?" +task: french_bench_boolqa +dataset_path: manu/french_boolq +output_type: multiple_choice +validation_split: valid +doc_to_text: "\nContexte: {{passage}}\n\nQuestion: {{question}}\n" +doc_to_choice: ["Oui", "Non"] +# doc_to_text: "\nContexte: {{passage}}\n\nQuestion: {{question}}\n\nD'après l'information dans le contexte, la réponse est:\nA. Oui \nB. Non\n\nRéponse:" +# doc_to_choice: ["A", "B"] +doc_to_target: "{{[1, 0].index(label)}}" +should_decontaminate: true +doc_to_decontamination_query: passage +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2.yaml new file mode 100644 index 0000000000000000000000000000000000000000..5ffdb194a40ee267c7e7a9940351022d4692a19e --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2.yaml @@ -0,0 +1,29 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques mots du contexte. Si il est impossible de répondre avec les informations du contexte, répond 'Impossible'." 
+task: french_bench_fquadv2 +dataset_path: manu/fquad2_test +output_type: generate_until +validation_split: valid +doc_to_text: "\nContexte: {{context}}\n\nQuestion: {{question}}\n\nRéponse:" +doc_to_target: "{% if answers.text| length > 0 %}{{answers.text[0]}}{% else %}{{['Impossible']}}{% endif %}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: context +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.exact + aggregation: mean + higher_is_better: true + - metric: !function utils.f1 + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_bool.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_bool.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7fe89c31fb4b2a89b49c5d031283d838c4fb6658 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_bool.yaml @@ -0,0 +1,21 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "D'après l'information présente dans le contexte, est il possible de répondre à la question ?" +task: french_bench_fquadv2_bool +dataset_path: manu/fquad2_test +output_type: multiple_choice +validation_split: valid +doc_to_text: "\nContexte: {{context}}\n\nQuestion: {{question}}\n\nD'après l'information présente dans le contexte, répondre à la question est:\nA. Possible \nB. Impossible\n\nRéponse:" +doc_to_choice: ["A", "B"] +doc_to_target: "{{[False, True].index(is_impossible)}}" +should_decontaminate: true +doc_to_decontamination_query: context +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_genq.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_genq.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bd1c4684db873405961833907101a872e8d6f8fa --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_genq.yaml @@ -0,0 +1,31 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_gen +description: "D'après l'information dans le contexte donné, quelle question a été posée pour obtenir la réponse donnée ?" 
+task: french_bench_fquadv2_genq +dataset_path: manu/fquad2_test +output_type: generate_until +validation_split: valid_hasAns +test_split: test_hasAns +fewshot_split: valid_hasAns +doc_to_text: "\nContexte: {{context}}\n\nRéponse: {% if answers.text| length > 0 %}{{answers.text[0]}}{% else %}{{['Impossible']}}{% endif %}\n\nQuestion:" +doc_to_target: "{{question}}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: question +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg + - metric: !function utils.f1 + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_hasAns.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_hasAns.yaml new file mode 100644 index 0000000000000000000000000000000000000000..37c02af358e1d26f2823440ea23f8ae7770d87a2 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_fquadv2_hasAns.yaml @@ -0,0 +1,34 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_gen +description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques mots du contexte. Si il est impossible de répondre avec les informations du contexte, répond 'Impossible'." +task: french_bench_fquadv2_hasAns +dataset_path: manu/fquad2_test +output_type: generate_until +validation_split: valid_hasAns +test_split: test_hasAns +fewshot_split: valid_hasAns +doc_to_text: "\nContexte: {{context}}\n\nQuestion: {{question}}\n\nRéponse:" +doc_to_target: "{% if answers.text| length > 0 %}{{answers.text[0]}}{% else %}{{['Impossible']}}{% endif %}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: context +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.exact + aggregation: mean + higher_is_better: true + - metric: !function utils.f1 + aggregation: mean + higher_is_better: true + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_grammar.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_grammar.yaml new file mode 100644 index 0000000000000000000000000000000000000000..45052ccc04134a7a194a24b19fb3d621345e1f9d --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_grammar.yaml @@ -0,0 +1,20 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_mc +description: "Répond au mieux en complétant la question avec une des réponses proposées." 
+dataset_path: manu/french-bench-grammar-vocab-reading +output_type: multiple_choice +validation_split: Grammar +fewshot_split: Grammar +test_split: Grammar +#doc_to_text: "Question: {{question.strip()}}\nA: {{answerA}}\nB: {{answerB}}\nC: {{answerC}}\nD: {{answerD}}\nRéponse:" +#doc_to_choice: ["A", "B", "C", "D"] +doc_to_text: "La phrase suivante est correcte grammaticalement:\n" +doc_to_choice: "{{[question.replace('<...>', answerA), question.replace('<...>', answerB), question.replace('<...>', answerC), question.replace('<...>', answerD)]}}" +doc_to_target: '{{["answerA", "answerB", "answerC", "answerD"].index("answer" + answer)}}' +task: french_bench_grammar +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_hellaswag.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_hellaswag.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9fa8ea26d52fb23838a6609dddbdb0baa9c4f05a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_hellaswag.yaml @@ -0,0 +1,20 @@ +group: + - french_bench + - french_bench_mc +task: french_bench_hellaswag +dataset_path: manu/french_bench_hellaswag +output_type: multiple_choice +training_split: validation +validation_split: validation +test_split: null +process_docs: !function utils.process_docs +doc_to_text: "{{query}}" +doc_to_target: "{{label}}" +doc_to_choice: "{{choices}}" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_multifquad.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_multifquad.yaml new file mode 100644 index 0000000000000000000000000000000000000000..632ffe369f208ee8d87d9cee10719c604f44a7f8 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_multifquad.yaml @@ -0,0 +1,34 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_gen +description: "D'après l'information dans le contexte donné, donne la réponse à la question en citant quelques extraits du contexte." 
+task: french_bench_multifquad +dataset_path: manu/multifquad_test +output_type: generate_until +validation_split: valid +test_split: test +fewshot_split: valid +doc_to_text: "\nContexte: {{context}}\n\nQuestion: {{question}}\n\nRéponse:" +doc_to_target: "{{', '.join(answers.text)}}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: context +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.exact + aggregation: mean + higher_is_better: true + - metric: !function utils.f1 + aggregation: mean + higher_is_better: true + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_opus_perplexity.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_opus_perplexity.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c5a72501d72c87e3672c11510bb987a46c458f84 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_opus_perplexity.yaml @@ -0,0 +1,23 @@ +group: + - french_bench_perplexity +task: french_bench_opus_perplexity +dataset_path: manu/opus100-en-fr +output_type: loglikelihood_rolling +test_split: test +fewshot_split: validation +validation_split: validation +num_fewshot: 0 +doc_to_text: "" +doc_to_target: "{{text}}" +should_decontaminate: true +doc_to_decontamination_query: "{{text}}" +metric_list: + - metric: word_perplexity + aggregation: weighted_perplexity + higher_is_better: false + - metric: byte_perplexity + aggregation: weighted_perplexity + higher_is_better: false + - metric: bits_per_byte + aggregation: bits_per_byte + higher_is_better: false diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_abstract.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_abstract.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3ca8888afeab5660b52764fd47c7de55c72a46dd --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_abstract.yaml @@ -0,0 +1,28 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_gen +description: "Résume l'article en une phrase." +task: french_bench_orangesum_abstract +dataset_path: orange_sum +dataset_name: abstract +output_type: generate_until +validation_split: validation +fewshot_split: validation +doc_to_text: "\nArticle: {{text}}\n\nRésumé:" +doc_to_target: "{{summary}}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: summary +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_title.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_title.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c459a18fa4cb9acc00c3ef4f874f15f0f763fcaf --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_orangesum_title.yaml @@ -0,0 +1,28 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "Trouve le titre de l'article." 
+task: french_bench_orangesum_title +dataset_path: orange_sum +dataset_name: title +output_type: generate_until +validation_split: validation +fewshot_split: validation +doc_to_text: "\nArticle: {{text}}\n\nTitre:" +doc_to_target: "{{summary}}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: summary +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_reading_comp.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_reading_comp.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8d8c8abd8c1772193ca3d64a33edeb36b4fefd66 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_reading_comp.yaml @@ -0,0 +1,22 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +# description: "Répond au mieux en complétant la question avec une des réponses proposées." +dataset_path: manu/french-bench-grammar-vocab-reading +output_type: multiple_choice +validation_split: Reading +fewshot_split: Reading +test_split: Reading +# doc_to_text: "Context: {{context}}\nQuestion: {{question.strip()}}\nA: {{answerA}}\nB: {{answerB}}\nC: {{answerC}}\nD: {{answerD}}\nRéponse:" +# doc_to_choice: "{{['A: '+answerA, 'B: '+answerB, 'C: '+answerC, 'D: '+answerD]}}" +doc_to_text: "Context: {{context}}\n\n" +doc_to_choice: "{{[question.replace('<...>', answerA) if '<...>' in question else question + ' ' +answerA, question.replace('<...>', answerB) if '<...>' in question else question + ' ' + answerB, question.replace('<...>', answerC) if '<...>' in question else question + ' ' + answerC, question.replace('<...>', answerD) if '<...>' in question else question + ' ' + answerD]}}" +doc_to_target: '{{["answerA", "answerB", "answerC", "answerD"].index("answer" + answer)}}' +# doc_to_choice: "{{['A: '+answerA, 'B: '+answerB, 'C: '+answerC, 'D: '+answerD]}}" +# doc_to_target: answer +task: french_bench_reading_comp +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_topic_based_nli.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_topic_based_nli.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c88957a1b9a035785095654c75f930d7574d05b4 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_topic_based_nli.yaml @@ -0,0 +1,23 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "A propos du thème spécifié, l'avis client est il positif, négatif, ou neutre ?" +task: french_bench_topic_based_nli +dataset_path: manu/topic_based_nli_test +output_type: multiple_choice +validation_split: valid +# doc_to_text: "\nAvis Client: {{text}}\n\nEn considèrant uniquement le thème \"{{topic}}\", l'avis client est plutot:\nA. Positif \nB. Négatif\nC. Mitigé \nD. Neutre\nE. 
Absent\n\nRéponse:" +# doc_to_choice: ["A", "B", "C", "D", "E"] +doc_to_text: "\nAvis Client: {{text}}\n\nA propos du thème \"{{topic}}\", l'avis client est" +doc_to_choice: ['positif', 'négatif', 'neutre'] +doc_to_target: "{{['positif', 'negatif', 'neutre'].index(polarity)}}" +should_decontaminate: true +doc_to_decontamination_query: texte +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true + - metric: acc_norm + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_trivia.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_trivia.yaml new file mode 100644 index 0000000000000000000000000000000000000000..525fb781bcc716a9cd9822793485f5b0fc2fba6f --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_trivia.yaml @@ -0,0 +1,36 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_gen +task: french_bench_trivia +dataset_path: manu/french-trivia +output_type: generate_until +validation_split: train +test_split: train +fewshot_split: train +doc_to_text: "{{Question}}\nAnswer:" +doc_to_target: "{{Answer}}" +target_delimiter: " " +should_decontaminate: true +doc_to_decontamination_query: Question +generation_kwargs: + until: + - "\n" +# filter_list: +# - name: remove_whitespace +# filter: +# - function: remove_whitespace +# - function: take_first +metric_list: + - metric: !function utils.exact + aggregation: mean + higher_is_better: true + - metric: !function utils.f1 + aggregation: mean + higher_is_better: true + - metric: !function utils.rouge1 + higher_is_better: true + aggregation: !function utils.rouge1_agg + - metric: !function utils.is_included + higher_is_better: true + aggregation: mean diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_vocab.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_vocab.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1995c91c2515416598721bede2325ce0843d37cc --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_vocab.yaml @@ -0,0 +1,20 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_mc +# description: "Répond au mieux en complétant la question avec une des réponses proposées." 
+dataset_path: manu/french-bench-grammar-vocab-reading +output_type: multiple_choice +validation_split: Vocabulary +fewshot_split: Vocabulary +test_split: Vocabulary +# doc_to_text: "Question: {{question.strip()}}\nA: {{answerA}}\nB: {{answerB}}\nC: {{answerC}}\nD: {{answerD}}\nRéponse:" +# doc_to_choice: ["A", "B", "C", "D"] +doc_to_text: "La phrase suivante est logique sémantiquement:\n" +doc_to_choice: "{{[question.replace('<...>', answerA), question.replace('<...>', answerB), question.replace('<...>', answerC), question.replace('<...>', answerD)]}}" +doc_to_target: '{{["answerA", "answerB", "answerC", "answerD"].index("answer" + answer)}}' +task: french_bench_vocab +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_wikitext_fr.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_wikitext_fr.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c4b04fe0e6214428360a1b1955426f8675909efc --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_wikitext_fr.yaml @@ -0,0 +1,25 @@ +group: + - french_bench_perplexity +task: french_bench_wikitext_fr +dataset_path: asi/wikitext_fr +dataset_name: wikitext-35 +output_type: loglikelihood_rolling +training_split: train +validation_split: validation +test_split: test +num_fewshot: 0 +doc_to_text: "" +doc_to_target: !function preprocess_wikitext.wikitext_detokenizer +process_results: !function preprocess_wikitext.process_results +should_decontaminate: true +doc_to_decontamination_query: "{{paragraph}}" +metric_list: + - metric: word_perplexity + aggregation: weighted_perplexity + higher_is_better: false + - metric: byte_perplexity + aggregation: weighted_perplexity + higher_is_better: false + - metric: bits_per_byte + aggregation: bits_per_byte + higher_is_better: false diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_xnli.yaml b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_xnli.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7a527e4cf9d8ce6a1ff8f14a1cf03a471d06b14c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/french_bench_xnli.yaml @@ -0,0 +1,21 @@ +include: "_default_template_yaml" +group: + - french_bench + - french_bench_extra +description: "La prémisse et l'hypothèse sont elles en accord, neutres en elles, ou en contradiction ?" +dataset_path: xnli +dataset_name: fr +output_type: multiple_choice +validation_split: validation +fewshot_split: validation +test_split: test +# doc_to_text: "\nPrémisse: {{premise}}\n\nHypothèse: {{hypothesis}}\n\nLa prémisse et l'hypothèse sont:\nA. En accord\nB. Neutre\nC. 
En contradiction\nRéponse:" +# doc_to_choice: "{{['A: En accord', 'B: Neutre', 'C: En contradiction']}}" +doc_to_text: "\nPrémisse: {{premise}}\n\nHypothèse: {{hypothesis}}\n\nLa prémisse et l'hypothèse sont" +doc_to_choice: "{{['en accord', 'neutres entre elles', 'en contradiction']}}" +doc_to_target: label +task: french_bench_xnli +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/preprocess_wikitext.py b/lm-evaluation-harness/lm_eval/tasks/french_bench/preprocess_wikitext.py new file mode 100644 index 0000000000000000000000000000000000000000..6bea950f987a2185c40e7883869577dacb9ecb7a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/preprocess_wikitext.py @@ -0,0 +1,48 @@ +import re + + +def wikitext_detokenizer(doc): + string = doc["paragraph"] + # contractions + string = string.replace("s '", "s'") + string = re.sub(r"/' [0-9]/", r"/'[0-9]/", string) + # number separators + string = string.replace(" @-@ ", "-") + string = string.replace(" @,@ ", ",") + string = string.replace(" @.@ ", ".") + # punctuation + string = string.replace(" : ", ": ") + string = string.replace(" ; ", "; ") + string = string.replace(" . ", ". ") + string = string.replace(" ! ", "! ") + string = string.replace(" ? ", "? ") + string = string.replace(" , ", ", ") + # double brackets + string = re.sub(r"\(\s*([^\)]*?)\s*\)", r"(\1)", string) + string = re.sub(r"\[\s*([^\]]*?)\s*\]", r"[\1]", string) + string = re.sub(r"{\s*([^}]*?)\s*}", r"{\1}", string) + string = re.sub(r"\"\s*([^\"]*?)\s*\"", r'"\1"', string) + string = re.sub(r"'\s*([^']*?)\s*'", r"'\1'", string) + # miscellaneous + string = string.replace("= = = =", "====") + string = string.replace("= = =", "===") + string = string.replace("= =", "==") + string = string.replace(" " + chr(176) + " ", chr(176)) + string = string.replace(" \n", "\n") + string = string.replace("\n ", "\n") + string = string.replace(" N ", " 1 ") + string = string.replace(" 's", "'s") + + return string + + +def process_results(doc, results): + (loglikelihood,) = results + # IMPORTANT: wikitext counts number of words in *original doc before detokenization* + _words = len(re.split(r"\s+", doc["paragraph"])) + _bytes = len(doc["paragraph"].encode("utf-8")) + return { + "word_perplexity": (loglikelihood, _words), + "byte_perplexity": (loglikelihood, _bytes), + "bits_per_byte": (loglikelihood, _bytes), + } diff --git a/lm-evaluation-harness/lm_eval/tasks/french_bench/utils.py b/lm-evaluation-harness/lm_eval/tasks/french_bench/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..acbcbe83c86cd75c79ad8fbe1452a43776eaa12f --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/french_bench/utils.py @@ -0,0 +1,102 @@ +import collections +import re +import string + +import datasets +import evaluate + + +def normalize_answer(s): + """Lower text and remove punctuation, articles and extra whitespace.""" + + def remove_articles(text): + regex = re.compile(r"\b(un|une|des|le|la|les)\b", re.UNICODE) + return re.sub(regex, " ", text) + + def white_space_fix(text): + return " ".join(text.split()) + + def remove_punc(text): + exclude = set(string.punctuation) + return "".join(ch for ch in text if ch not in exclude) + + def lower(text): + return text.lower() + + return white_space_fix(remove_articles(remove_punc(lower(s)))) + + +def get_tokens(s): + if not s: + return [] + return normalize_answer(s).split() + + +# Exact match (the normalized answer exactly match the gold 
answer) +def exact(predictions, references): + return int(normalize_answer(references[0]) == normalize_answer(predictions[0])) + + +# The F-score of predicted tokens versus the gold answer +def f1(predictions, references): + gold_toks = get_tokens(references[0]) + pred_toks = get_tokens(predictions[0]) + common = collections.Counter(gold_toks) & collections.Counter(pred_toks) + num_same = sum(common.values()) + if len(gold_toks) == 0 or len(pred_toks) == 0: + # If either is no-answer, then F1 is 1 if they agree, 0 otherwise + return int(gold_toks == pred_toks) + if num_same == 0: + return 0 + precision = 1.0 * num_same / len(pred_toks) + recall = 1.0 * num_same / len(gold_toks) + f1 = (2 * precision * recall) / (precision + recall) + return f1 + + +def rouge1(items): + """ + # passthrough for efficiency + """ + return items + + +def rouge1_agg(items): + """ + Higher is better + """ + refs = list(zip(*items))[0] + preds = list(zip(*items))[1] + rouge_scorer = evaluate.load("rouge") + return rouge_scorer.compute(predictions=preds, references=refs)["rouge1"] + + +def is_included(items): + """ + # passthrough for efficiency + """ + if items[0] in items[1]: + return True + return False + + +def preprocess(text): + text = text.strip() + # NOTE: Brackets are artifacts of the WikiHow dataset portion of HellaSwag. + text = text.replace(" [title]", ". ") + text = re.sub("\\[.*?\\]", "", text) + text = text.replace(" ", " ") + return text + + +def process_docs(dataset: datasets.Dataset) -> datasets.Dataset: + def _process_doc(doc): + ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize() + out_doc = { + "query": preprocess(doc["activity_label"] + ": " + ctx), + "choices": [preprocess(ending) for ending in doc["endings"]], + "gold": int(doc["label"]), + } + return out_doc + + return dataset.map(_process_doc) diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/README.md b/lm-evaluation-harness/lm_eval/tasks/glue/README.md new file mode 100644 index 0000000000000000000000000000000000000000..573c640e87c1ba077d6d9cbe79a045c7c4f02ddf --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/README.md @@ -0,0 +1,72 @@ +# GLUE +**NOTE**: GLUE benchmark tasks do not provide publicly accessible labels for their test sets, so we default to the validation sets for all sub-tasks. + +### Paper + +Title: `GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding` + +Abstract: https://openreview.net/pdf?id=rJ4km2R5t7 + +The General Language Understanding Evaluation (GLUE) benchmark is a collection of +resources for training, evaluating, and analyzing natural language understanding +systems. GLUE consists of: +- A benchmark of nine sentence- or sentence-pair language understanding tasks built +on established existing datasets and selected to cover a diverse range of dataset +sizes, text genres, and degrees of difficulty, and +- A diagnostic dataset designed to evaluate and analyze model performance with +respect to a wide range of linguistic phenomena found in natural language. 
+ +Homepage: https://gluebenchmark.com/ + +### Citation + +``` +@inproceedings{wang-etal-2018-glue, + title = "{GLUE}: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding", + author = "Wang, Alex and + Singh, Amanpreet and + Michael, Julian and + Hill, Felix and + Levy, Omer and + Bowman, Samuel", + booktitle = "Proceedings of the 2018 {EMNLP} Workshop {B}lackbox{NLP}: Analyzing and Interpreting Neural Networks for {NLP}", + month = nov, + year = "2018", + address = "Brussels, Belgium", + publisher = "Association for Computational Linguistics", + url = "https://aclanthology.org/W18-5446", + doi = "10.18653/v1/W18-5446", + pages = "353--355", + abstract = "Human ability to understand language is \textit{general, flexible, and robust}. In contrast, most NLU models above the word level are designed for a specific task and struggle with out-of-domain data. If we aspire to develop models with understanding beyond the detection of superficial correspondences between inputs and outputs, then it is critical to develop a unified model that can execute a range of linguistic tasks across different domains. To facilitate research in this direction, we present the General Language Understanding Evaluation (GLUE, gluebenchmark.com): a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models. For some benchmark tasks, training data is plentiful, but for others it is limited or does not match the genre of the test set. GLUE thus favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks. While none of the datasets in GLUE were created from scratch for the benchmark, four of them feature privately-held test data, which is used to ensure that the benchmark is used fairly. We evaluate baselines that use ELMo (Peters et al., 2018), a powerful transfer learning technique, as well as state-of-the-art sentence representation models. The best models still achieve fairly low absolute scores. Analysis with our diagnostic dataset yields similarly weak performance over all phenomena tested, with some exceptions.", +} +``` + +### Groups and Tasks + +#### Groups + +* `glue`: Run all GLUE subtasks. + +#### Tasks + +* `cola` +* `mnli` +* `mnli_mismatch` +* `mrpc` +* `qnli` +* `qqp` +* `rte` +* `sst2` +* `wnli` + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
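The per-task configs that follow (e.g. `glue/cola/default.yaml`) express their prompts as Jinja-style `doc_to_text` templates over dataset fields. As a rough illustration of what such a template produces, the standalone sketch below renders the CoLA prompt with the `jinja2` package on a made-up document; the harness applies its own templating utilities internally, so this is only an approximation of its behavior.

```python
# Illustrative only: render a doc_to_text template outside the harness,
# using plain jinja2 and a hypothetical CoLA-style document.
from jinja2 import Template

doc_to_text = "{{sentence}}\nQuestion: Does this sentence make sense?\nAnswer:"
doc = {"sentence": "The boys is walking to school."}  # invented example

print(Template(doc_to_text).render(**doc))
# The boys is walking to school.
# Question: Does this sentence make sense?
# Answer:
```

For `output_type: multiple_choice` tasks, the harness then scores each entry of `doc_to_choice` (here `"no"`/`"yes"`) as a continuation of the rendered prompt and selects the higher-likelihood option.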
diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/cola/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/cola/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a46003c2766ea26a96a6c6b73b750cb5e402119e --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/cola/default.yaml @@ -0,0 +1,16 @@ +group: glue +task: cola +dataset_path: glue +dataset_name: cola +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{sentence}}\nQuestion: Does this sentence make sense?\nAnswer:" +doc_to_target: label +doc_to_choice: ["no", "yes"] +should_decontaminate: true +doc_to_decontamination_query: sentence +metric_list: + - metric: mcc +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/mnli/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6caffa85a22719f597f5b780b0653ee124a854c5 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/default.yaml @@ -0,0 +1,14 @@ +group: glue +task: mnli +dataset_path: glue +dataset_name: mnli +output_type: multiple_choice +training_split: train +validation_split: validation_matched +doc_to_text: !function utils.doc_to_text +doc_to_target: label +doc_to_choice: ["True", "Neither", "False"] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/mnli/mismatch.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/mismatch.yaml new file mode 100644 index 0000000000000000000000000000000000000000..1e9b49bcd423ce43bf87f044c75a01e75f44d3d0 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/mismatch.yaml @@ -0,0 +1,3 @@ +include: default.yaml +task: mnli_mismatch +validation_split: validation_mismatched diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/mnli/utils.py b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/utils.py new file mode 100644 index 0000000000000000000000000000000000000000..2d5fdaec2905ac7cf95ac3e50f1d12c728f59c37 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/mnli/utils.py @@ -0,0 +1,6 @@ +def doc_to_text(doc) -> str: + return "{}\nQuestion: {} True, False or Neither?\nAnswer:".format( + doc["premise"], + doc["hypothesis"].strip() + + ("" if doc["hypothesis"].strip().endswith(".") else "."), + ) diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/mrpc/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/mrpc/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f0bc24510ca533bde719cba42fb9d079cfb4a53b --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/mrpc/default.yaml @@ -0,0 +1,15 @@ +group: glue +task: mrpc +dataset_path: glue +dataset_name: mrpc +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\nQuestion: Do both sentences mean the same thing?\nAnswer:" +doc_to_target: label +doc_to_choice: ["no", "yes"] +metric_list: + - metric: acc + - metric: f1 +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/qnli/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/qnli/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..49a6216a5e0b351d2d92ba188bf2dd54823d0132 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/qnli/default.yaml @@ -0,0 +1,14 @@ +group: glue +task: qnli +dataset_path: glue +dataset_name: qnli +output_type: multiple_choice +training_split: 
train +validation_split: validation +doc_to_text: "{{question}}\n{{sentence}}\nQuestion: Does this response answer the question?\nAnswer:" +doc_to_target: label +doc_to_choice: ["yes", "no"] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/qqp/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/qqp/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..bcd82f26bc8552c74f85b23054d90b9084a89211 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/qqp/default.yaml @@ -0,0 +1,15 @@ +group: glue +task: qqp +dataset_path: glue +dataset_name: qqp +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "Question 1: {{question1}}\nQuestion 2: {{question2}}\nQuestion: Do both questions ask the same thing?\nAnswer:" +doc_to_target: label +doc_to_choice: ["no", "yes"] +metric_list: + - metric: acc + - metric: f1 +metadata: + version: 2.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/rte/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/rte/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..7b12096a46b2a4fcc3f6f59b4f2d245130425c01 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/rte/default.yaml @@ -0,0 +1,14 @@ +group: glue +task: rte +dataset_path: glue +dataset_name: rte +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{sentence1}}\nQuestion: {{sentence2}} True or False?\nAnswer:" +doc_to_target: label +doc_to_choice: ["True", "False"] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/sst2/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/sst2/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..838afeb218891da139dec48083fa1990fc896b07 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/sst2/default.yaml @@ -0,0 +1,14 @@ +group: glue +task: sst2 +dataset_path: glue +dataset_name: sst2 +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{sentence}}\nQuestion: Is this sentence positive or negative?\nAnswer:" +doc_to_target: label +doc_to_choice: ["negative", "positive"] +metric_list: + - metric: acc +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/glue/wnli/default.yaml b/lm-evaluation-harness/lm_eval/tasks/glue/wnli/default.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a8e57a35d67920b7101a4f9e92f873c3c7ec3134 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/glue/wnli/default.yaml @@ -0,0 +1,14 @@ +group: glue +task: wnli +dataset_path: glue +dataset_name: wnli +output_type: multiple_choice +training_split: train +validation_split: validation +doc_to_text: "{{sentence1}}\nQuestion: {{sentence2}} True or False?\nAnswer:" +doc_to_target: label +doc_to_choice: ["False", "True"] +metric_list: + - metric: acc +metadata: + version: 2.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_civil_engineering.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_civil_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..98ed98dd2cc5f90039d98b74ca0f711809232e14 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_civil_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Civil-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_civil_engineering diff --git 
a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_construction.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_construction.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a0af2a16cfc082d58903758234ed0e36de0333c9 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_construction.yaml @@ -0,0 +1,3 @@ +dataset_name: Construction +include: _direct_kmmlu_yaml +task: kmmlu_direct_construction diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_criminal_law.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_criminal_law.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9dfdfabc5971164a63fe651c66f4c0842598ef17 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_criminal_law.yaml @@ -0,0 +1,3 @@ +dataset_name: Criminal-Law +include: _direct_kmmlu_yaml +task: kmmlu_direct_criminal_law diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_ecology.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_ecology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9d182903e2abe1f3c2b3f5d4cbe955bb1bcf58c9 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_ecology.yaml @@ -0,0 +1,3 @@ +dataset_name: Ecology +include: _direct_kmmlu_yaml +task: kmmlu_direct_ecology diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..db4d78405a6079273f8042350fd4f785c9fe4bed --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_economics.yaml @@ -0,0 +1,3 @@ +dataset_name: Economics +include: _direct_kmmlu_yaml +task: kmmlu_direct_economics diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_education.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_education.yaml new file mode 100644 index 0000000000000000000000000000000000000000..74887e76f395c2b8565cd7c716fd410f921f6f1d --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_education.yaml @@ -0,0 +1,3 @@ +dataset_name: Education +include: _direct_kmmlu_yaml +task: kmmlu_direct_education diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electrical_engineering.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electrical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3455d50715d250762358c9db89f05a0c8eb521c3 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_electrical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: Electrical-Engineering +include: _direct_kmmlu_yaml +task: kmmlu_direct_electrical_engineering diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c42e80eda1ad438d65d1d656671d5fb1542018da --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_information_technology.yaml @@ -0,0 +1,3 @@ +dataset_name: Information-Technology +include: _direct_kmmlu_yaml +task: kmmlu_direct_information_technology diff --git 
a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_social_welfare.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_social_welfare.yaml new file mode 100644 index 0000000000000000000000000000000000000000..fa13bdff6a4791c8e20fe905a84db0586af11afa --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_social_welfare.yaml @@ -0,0 +1,3 @@ +dataset_name: Social-Welfare +include: _direct_kmmlu_yaml +task: kmmlu_direct_social_welfare diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml new file mode 100644 index 0000000000000000000000000000000000000000..69e71d6dfa6284cc701221c5c187969be5e92832 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct/kmmlu_direct_taxation.yaml @@ -0,0 +1,3 @@ +dataset_name: Taxation +include: _direct_kmmlu_yaml +task: kmmlu_direct_taxation diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/_direct_hard_kmmlu_yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/_direct_hard_kmmlu_yaml new file mode 100644 index 0000000000000000000000000000000000000000..259b5c86bd2aa85c63ae9825538dd227a23e8417 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/_direct_hard_kmmlu_yaml @@ -0,0 +1,27 @@ +group: + - kmmlu + - kmmlu_hard_direct +dataset_path: HAERAE-HUB/KMMLU-HARD +output_type: generate_until +test_split: test +fewshot_split: dev +doc_to_text: "{{question.strip()}}\nA. {{A}}\nB. {{B}}\nC. {{C}}\nD. {{D}}\n정답:" +doc_to_target: "{{['A', 'B', 'C', 'D'][answer-1]}}" +metric_list: + - metric: exact_match + aggregation: mean + higher_is_better: true + ignore_case: true + ignore_punctuation: true + regexes_to_ignore: + - " " +generation_kwargs: + until: + - "Q:" + - "\n\n" + - "" + - "." 
+ do_sample: false + temperature: 0.0 +metadata: + version: 2.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_accounting.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_accounting.yaml new file mode 100644 index 0000000000000000000000000000000000000000..ca805e955ec5ce5cb25e00e321f489646e89628f --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_accounting.yaml @@ -0,0 +1,3 @@ +dataset_name: accounting +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_accounting diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_aviation_engineering_and_maintenance.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_aviation_engineering_and_maintenance.yaml new file mode 100644 index 0000000000000000000000000000000000000000..25c91cb6e5e55fcc578bd455086b994f1dd51d8c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_aviation_engineering_and_maintenance.yaml @@ -0,0 +1,3 @@ +dataset_name: aviation_engineering_and_maintenance +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_aviation_engineering_and_maintenance diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_criminal_law.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_criminal_law.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d2679f1ecd6dcc2b47de06e3fdf30bb69a9e4a0a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_criminal_law.yaml @@ -0,0 +1,3 @@ +dataset_name: criminal_law +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_criminal_law diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_ecology.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_ecology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..adedf9d6e704a36368249260114aa8a80954a24a --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_ecology.yaml @@ -0,0 +1,3 @@ +dataset_name: ecology +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_ecology diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_economics.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_economics.yaml new file mode 100644 index 0000000000000000000000000000000000000000..f42e5b8dad2a7f4481dbd7d5e476ccccef222ede --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_economics.yaml @@ -0,0 +1,3 @@ +dataset_name: economics +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_economics diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_energy_management.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_energy_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d4c2ca7d643d71d3f1464e1f35bd49e944738ee6 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_energy_management.yaml @@ -0,0 +1,3 @@ +dataset_name: energy_management +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_energy_management diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_fashion.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_fashion.yaml new file mode 100644 
index 0000000000000000000000000000000000000000..26f0617dfb641bd11f45f482c7180e12a318a0f5 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_fashion.yaml @@ -0,0 +1,3 @@ +dataset_name: fashion +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_fashion diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_health.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_health.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0fef809eebe36f65d541ce8741e4e0f2ac054da1 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_health.yaml @@ -0,0 +1,3 @@ +dataset_name: health +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_health diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_industrial_engineer.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_industrial_engineer.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d7ca26e58ac90c69cb2bffcf7a4d95657b019019 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_industrial_engineer.yaml @@ -0,0 +1,3 @@ +dataset_name: industrial_engineer +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_industrial_engineer diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_information_technology.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_information_technology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..0f8d01ec926a4dc197015d051b9c763889049ae1 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_information_technology.yaml @@ -0,0 +1,3 @@ +dataset_name: information_technology +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_information_technology diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_interior_architecture_and_design.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_interior_architecture_and_design.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3b1303810a9fbee6d966095fabbcc773dc489e71 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_interior_architecture_and_design.yaml @@ -0,0 +1,3 @@ +dataset_name: interior_architecture_and_design +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_interior_architecture_and_design diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_korean_history.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_korean_history.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c4d595d19636e0698930b82b7f1d6c1605d50e10 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_korean_history.yaml @@ -0,0 +1,3 @@ +dataset_name: korean_history +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_korean_history diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_law.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_law.yaml new file mode 100644 index 0000000000000000000000000000000000000000..168f0340590d9736548eaeb56335e734d756fdac --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_law.yaml @@ -0,0 +1,3 @@ +dataset_name: law +include: 
_direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_law diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_management.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_management.yaml new file mode 100644 index 0000000000000000000000000000000000000000..6eb945d27e69a636cea53c1c8ba9a35c569fe7f5 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_management.yaml @@ -0,0 +1,3 @@ +dataset_name: management +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_management diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_maritime_engineering.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_maritime_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..4078cf973b90f3e03ac88a7670b3344a159fef2e --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_maritime_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: maritime_engineering +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_maritime_engineering diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_mechanical_engineering.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_mechanical_engineering.yaml new file mode 100644 index 0000000000000000000000000000000000000000..dae55511a963529a8980118cdf6a9971eae611bc --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_mechanical_engineering.yaml @@ -0,0 +1,3 @@ +dataset_name: mechanical_engineering +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_mechanical_engineering diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_nondestructive_testing.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_nondestructive_testing.yaml new file mode 100644 index 0000000000000000000000000000000000000000..3ff9583743953fde9d681a9d4c4655b72d7c7e3c --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_nondestructive_testing.yaml @@ -0,0 +1,3 @@ +dataset_name: nondestructive_testing +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_nondestructive_testing diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_patent.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_patent.yaml new file mode 100644 index 0000000000000000000000000000000000000000..d913752b0bb3f9cfd0c47eb8919f4beb6e921adb --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_patent.yaml @@ -0,0 +1,3 @@ +dataset_name: patent +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_patent diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_psychology.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_psychology.yaml new file mode 100644 index 0000000000000000000000000000000000000000..9fbf0d3191e885cd1486caf148d1c723ea142ee2 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_psychology.yaml @@ -0,0 +1,3 @@ +dataset_name: psychology +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_psychology diff --git a/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_public_safety.yaml b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_public_safety.yaml new file mode 100644 index 
0000000000000000000000000000000000000000..b376c4ebae7574364b1157afd65938237eeca209 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/kmmlu/direct_hard/kmmlu_direct_hard_public_safety.yaml @@ -0,0 +1,3 @@ +dataset_name: public_safety +include: _direct_hard_kmmlu_yaml +task: kmmlu_hard_direct_public_safety diff --git a/lm-evaluation-harness/lm_eval/tasks/scrolls/task.py b/lm-evaluation-harness/lm_eval/tasks/scrolls/task.py new file mode 100644 index 0000000000000000000000000000000000000000..5b604e15d9305848705af087c6a1da5590f62039 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/scrolls/task.py @@ -0,0 +1,456 @@ +import re +from abc import abstractmethod +from functools import reduce + +import numpy as np +import transformers.data.metrics.squad_metrics as squad_metrics +from datasets import load_metric +from transformers import AutoTokenizer + +from lm_eval.api.instance import Instance +from lm_eval.api.metrics import mean +from lm_eval.api.task import Task + + +_CITATION = """ +@inproceedings{shaham-etal-2022-scrolls, + title = "{SCROLLS}: Standardized {C}ompa{R}ison Over Long Language Sequences", + author = "Shaham, Uri and + Segal, Elad and + Ivgi, Maor and + Efrat, Avia and + Yoran, Ori and + Haviv, Adi and + Gupta, Ankit and + Xiong, Wenhan and + Geva, Mor and + Berant, Jonathan and + Levy, Omer", + booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", + month = dec, + year = "2022", + address = "Abu Dhabi, United Arab Emirates", + publisher = "Association for Computational Linguistics", + url = "https://aclanthology.org/2022.emnlp-main.823", + pages = "12007--12021" +} +""" + +# SCROLLS is formulated as a sequence-to-sequence task. +# To allow for evaluation of causal models, we'll +# reformulate these with appropriate prompts + + +def _download_metric(): + import os + import shutil + + from huggingface_hub import hf_hub_download + + scrolls_metric_path = hf_hub_download( + repo_id="tau/scrolls", repo_type="dataset", filename="metrics/scrolls.py" + ) + updated_scrolls_metric_path = ( + os.path.dirname(scrolls_metric_path) + + os.path.basename(scrolls_metric_path).replace(".", "_") + + ".py" + ) + shutil.copy(scrolls_metric_path, updated_scrolls_metric_path) + return updated_scrolls_metric_path + + +def _process_doc_prepended_question(doc): + # "When a query is given in addition to the raw text (as + # in QMSum, Qasper, NarrativeQA, QuALITY, and ContractNLI), + # we prepend it to the text, using two newlines as a natural separator" + input = doc["input"] + split = input.find("\n\n") + return { + "id": doc["id"], + "pid": doc["pid"], + "input": input, + "outputs": doc["outputs"], + "question": input[0:split], + "text": input[split + 2 :], + } + + +def _drop_duplicates_in_input(untokenized_dataset): + # from scrolls/evaluator/dataset_evaluator.py + + indices_to_keep = [] + id_to_idx = {} + outputs = [] + for i, (id_, output) in enumerate( + zip(untokenized_dataset["id"], untokenized_dataset["output"]) + ): + if id_ in id_to_idx: + outputs[id_to_idx[id_]].append(output) + continue + indices_to_keep.append(i) + id_to_idx[id_] = len(outputs) + outputs.append([output]) + untokenized_dataset = untokenized_dataset.select(indices_to_keep).flatten_indices() + untokenized_dataset = untokenized_dataset.remove_columns("output") + untokenized_dataset = untokenized_dataset.add_column("outputs", outputs) + return untokenized_dataset + + +def _num_cpu_cores(): + #
https://stackoverflow.com/questions/1006289/how-to-find-out-the-number-of-cpus-using-python/55423170#55423170 + try: + import psutil + + return psutil.cpu_count(logical=False) + except ImportError: + import os + + return len(os.sched_getaffinity(0)) + + +class _SCROLLSTask(Task): + VERSION = 2 + DATASET_PATH = "tau/scrolls" + DATASET_NAME = None + PRUNE_TOKENIZERS = None + PRUNE_MAX_TOKENS = None + PRUNE_NUM_PROC = None + + def __init__(self): + super().__init__() + if self.DATASET_NAME is not None: + self.metric = load_metric(_download_metric(), config_name=self.DATASET_NAME) + + def has_training_docs(self): + return True + + def has_validation_docs(self): + return True + + def has_test_docs(self): + return False + + def training_docs(self): + for doc in self.dataset["train"]: + yield from self._process_doc(doc) + + def validation_docs(self): + for doc in self.dataset["validation"]: + yield from self._process_doc(doc) + + def should_decontaminate(self): + return True + + def doc_to_decontamination_query(self, doc): + return doc["input"] + + def download(self, *args, **kwargs): + super().download(*args, **kwargs) + del self.dataset["test"] + for split in self.dataset: + self.dataset[split] = _drop_duplicates_in_input(self.dataset[split]) + if self.PRUNE_TOKENIZERS is not None: + self.prune() + + def _get_prune_text(self, sample): + return self.doc_to_text(self._process_doc(sample)[0]) + + def prune(self): + """Create a pruned version of a SCROLLS task dataset containing only inputs + that are less than `max_tokens` when tokenized by each tokenizer + """ + + tokenizers = [ + AutoTokenizer.from_pretrained(tokenizer) + for tokenizer in self.PRUNE_TOKENIZERS + ] + cache = {} + + def _filter(sample): + text = self._get_prune_text(sample) + cached = cache.get(text, None) + if cached is None: + for tokenizer in tokenizers: + if len(tokenizer(text).input_ids) > self.PRUNE_MAX_TOKENS: + cache[text] = False + return False + cache[text] = True + return True + else: + return cached + + self.dataset = self.dataset.filter(_filter, num_proc=self.PRUNE_NUM_PROC) + + def doc_to_target(self, doc): + return " " + ", ".join(doc["outputs"]) + + def doc_to_text(self, doc): + return f"{doc['text']}\n\nQuestion: {doc['question']}\nAnswer:" + + def higher_is_better(self): + return {x: True for x in self._scrolls_metrics().keys()} + + @abstractmethod + def _scrolls_metrics(self): + pass + + def _make_compute_metrics(self, value): + def compute_metrics(samples): + predictions, references = zip(*samples) # unzip, if you will + computed = self.metric.compute( + predictions=predictions, references=references + ) + return computed[value] + + return compute_metrics + + def aggregation(self): + return { + key: self._make_compute_metrics(value) + for key, value in self._scrolls_metrics().items() + } + + +class _SCROLLSMultipleChoiceTask(_SCROLLSTask): + def __post_init__(self): + self.metric = None + + def _scrolls_metrics(self): + return None + + def aggregation(self): + return {"em": mean, "acc": mean, "acc_norm": mean} + + def higher_is_better(self): + return {"em": True, "acc": True, "acc_norm": True} + + def process_results(self, doc, results): + gold = doc["gold"] + + lls, _ = zip(*results) + acc = 1.0 if np.argmax(lls) == gold else 0.0 + completion_len = np.array([float(len(i)) for i in doc["choices"]]) + acc_norm = 1.0 if np.argmax(lls / completion_len) == gold else 0.0 + + return { + "acc": acc, + "acc_norm": acc_norm, + "em": acc_norm * 100.0, + } + + def construct_requests(self, doc, ctx, **kwargs): + 
request_list = [ + Instance( + request_type="loglikelihood", + doc=doc, + arguments=(ctx, " {}".format(choice)), + idx=i, + **kwargs, + ) + for i, choice in enumerate(doc["choices"]) + ] + return request_list + + +class _SCROLLSSummaryTask(_SCROLLSTask): + def _process_doc(self, doc): + return [doc] + + def _scrolls_metrics(self): + return { + "rouge1": "rouge/rouge1", + "rouge2": "rouge/rouge2", + "rougeL": "rouge/rougeL", + } + + def process_results(self, doc, results): + return { + "rouge1": (results[0], doc["outputs"]), + "rouge2": (results[0], doc["outputs"]), + "rougeL": (results[0], doc["outputs"]), + } + + def construct_requests(self, doc, ctx, **kwargs): + return Instance( + request_type="generate_until", + doc=doc, + arguments=(ctx, {"until": ["\n"]}), + idx=0, + **kwargs, + ) + + def doc_to_text(self, doc): + return f"{doc['input']}\n\nQuestion: What is a summary of the preceding text?\nAnswer:" + + +class Qasper(_SCROLLSTask): + """A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers + https://arxiv.org/abs/2105.03011 + """ + + DATASET_NAME = "qasper" + + def _process_doc(self, doc): + doc = _process_doc_prepended_question(doc) + doc["is_yes_no"] = reduce( + lambda prev, cur: prev + and squad_metrics.normalize_answer(cur) in ["yes", "no"], + doc["outputs"], + True, + ) + return [doc] + + def _scrolls_metrics(self): + return {"f1": "f1"} + + def process_results(self, doc, results): + if doc["is_yes_no"]: + prediction = " yes" if results[0] > results[1] else " no" + elif len(results[0].strip()) == 0: + prediction = "Unanswerable" + else: + prediction = results[0] + return {"f1": (prediction, doc["outputs"])} + + def construct_requests(self, doc, ctx, **kwargs): + if doc["is_yes_no"]: + return [ + Instance( + request_type="loglikelihood", + doc=doc, + arguments=(ctx, " yes"), + idx=0, + **kwargs, + ), + Instance( + request_type="loglikelihood", + doc=doc, + arguments=(ctx, " no"), + idx=1, + **kwargs, + ), + ] + else: + return Instance( + request_type="generate_until", + doc=doc, + arguments=(ctx, {"until": ["\n"]}), + idx=0, + **kwargs, + ) + + +class QuALITY(_SCROLLSMultipleChoiceTask): + """QuALITY: Question Answering with Long Input Texts, Yes! 
+ https://arxiv.org/abs/2112.08608 + """ + + DATASET_NAME = "quality" + _multiple_choice_pattern = re.compile(r" *\([A-D]\) *") + + @staticmethod + def _normalize_answer(text): + return " ".join(text.split()).strip() + + def _process_doc(self, doc): + doc = _process_doc_prepended_question(doc) + + split = doc["text"].find("\n\n", doc["text"].find("(D)")) + choices_text = doc["text"][:split] + + doc["text"] = doc["text"][split:].strip() + doc["choices"] = [ + QuALITY._normalize_answer(choice) + for choice in re.split(QuALITY._multiple_choice_pattern, choices_text)[1:] + ] + doc["gold"] = doc["choices"].index(QuALITY._normalize_answer(doc["outputs"][0])) + + return [doc] + + +class NarrativeQA(_SCROLLSTask): + """The NarrativeQA Reading Comprehension Challenge + https://arxiv.org/abs/1712.07040 + """ + + DATASET_NAME = "narrative_qa" + + def _process_doc(self, doc): + return [_process_doc_prepended_question(doc)] + + def _scrolls_metrics(self): + return {"f1": "f1"} + + def _get_prune_text(self, doc): + # pruning narrativeqa takes forever -- let's cheat a bit + # and just cache on the text, not the question, since + # the dataset is different questions about the same large + # documents + return self._process_doc(doc)[0]["text"] + + def process_results(self, doc, results): + return {"f1": (results[0], doc["outputs"])} + + def construct_requests(self, doc, ctx, **kwargs): + return Instance( + request_type="generate_until", + doc=doc, + arguments=(ctx, {"until": ["\n"]}), + idx=0, + **kwargs, + ) + + +class ContractNLI(_SCROLLSMultipleChoiceTask): + """ContractNLI: A Dataset for Document-level Natural Language Inference for Contracts + https://arxiv.org/abs/2110.01799 + """ + + DATASET_NAME = "contract_nli" + CHOICES = ["Not mentioned", "Entailment", "Contradiction"] + + def _process_doc(self, doc): + doc = _process_doc_prepended_question(doc) + doc["choices"] = ContractNLI.CHOICES + doc["gold"] = ContractNLI.CHOICES.index(doc["outputs"][0]) + return [doc] + + def doc_to_text(self, doc): + return f"{doc['text']}\n\nHypothesis: {doc['question']}\nConclusion:" + + +class GovReport(_SCROLLSSummaryTask): + """Efficient Attentions for Long Document Summarization + https://arxiv.org/abs/2104.02112 + + Note: The average length of the reference summaries is ~3,000 + characters, or ~600 tokens as tokenized by GPT-NeoX. For causal models, + it is recommended to set `max_gen_toks` sufficiently large (e.g. 1024) + to allow a full summary to be generated.
+ """ + + DATASET_NAME = "gov_report" + + +class SummScreenFD(_SCROLLSSummaryTask): + """SummScreen: A Dataset for Abstractive Screenplay Summarization + https://arxiv.org/abs/2104.07091 + """ + + DATASET_NAME = "summ_screen_fd" + + +class QMSum(_SCROLLSSummaryTask): + """QMSum: A New Benchmark for Query-based Multi-domain + Meeting Summarization + + https://arxiv.org/abs/2104.05938 + """ + + DATASET_NAME = "qmsum" + + def _process_doc(self, doc): + return [_process_doc_prepended_question(doc)] + + def doc_to_text(self, doc): + return f"{doc['text']}\n\nQuestion: {doc['question']}\nAnswer:" diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md new file mode 100644 index 0000000000000000000000000000000000000000..3acbde5fc2c11eaaba4eeaaa3858b88d72c645bf --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/README.md @@ -0,0 +1,84 @@ +# XStoryCloze + +### Paper + +Title: `Few-shot Learning with Multilingual Language Models` + +Abstract: https://arxiv.org/abs/2112.10668 + +XStoryCloze consists of the professionally translated version of the [English StoryCloze dataset](https://cs.rochester.edu/nlp/rocstories/) (Spring 2016 version) to 10 non-English languages. This dataset is released by Meta AI. + +Homepage: https://github.com/facebookresearch/fairseq/pull/4820 + + +### Citation + +``` +@article{DBLP:journals/corr/abs-2112-10668, + author = {Xi Victoria Lin and + Todor Mihaylov and + Mikel Artetxe and + Tianlu Wang and + Shuohui Chen and + Daniel Simig and + Myle Ott and + Naman Goyal and + Shruti Bhosale and + Jingfei Du and + Ramakanth Pasunuru and + Sam Shleifer and + Punit Singh Koura and + Vishrav Chaudhary and + Brian O'Horo and + Jeff Wang and + Luke Zettlemoyer and + Zornitsa Kozareva and + Mona T. Diab and + Veselin Stoyanov and + Xian Li}, + title = {Few-shot Learning with Multilingual Language Models}, + journal = {CoRR}, + volume = {abs/2112.10668}, + year = {2021}, + url = {https://arxiv.org/abs/2112.10668}, + eprinttype = {arXiv}, + eprint = {2112.10668}, + timestamp = {Tue, 04 Jan 2022 15:59:27 +0100}, + biburl = {https://dblp.org/rec/journals/corr/abs-2112-10668.bib}, + bibsource = {dblp computer science bibliography, https://dblp.org} +} +``` + +### Groups and Tasks + +#### Groups + +* `xstorycloze` + +#### Tasks + +* `xstorycloze_ar`: Arabic +* `xstorycloze_en`: English +* `xstorycloze_es`: Spanish +* `xstorycloze_eu`: Basque +* `xstorycloze_hi`: Hindi +* `xstorycloze_id`: Indonesian +* `xstorycloze_my`: Burmese +* `xstorycloze_ru`: Russian +* `xstorycloze_sw`: Swahili +* `xstorycloze_te`: Telugu +* `xstorycloze_zh`: Chinese + + +### Checklist + +For adding novel benchmarks/datasets to the library: +* [ ] Is the task an existing benchmark in the literature? + * [ ] Have you referenced the original paper that introduced the task? + * [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test? + + +If other tasks on this dataset are already supported: +* [ ] Is the "Main" variant of this task clearly denoted? +* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates? +* [ ] Have you noted which, if any, published evaluation setups are matched by this variant? 
diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_ar.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_ar.yaml new file mode 100644 index 0000000000000000000000000000000000000000..2a52966d5a76138be4821d38c5bd639701586061 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_ar.yaml @@ -0,0 +1,18 @@ +group: xstorycloze +task: xstorycloze_ar +dataset_path: juletxara/xstory_cloze +dataset_name: ar +output_type: multiple_choice +training_split: train +validation_split: eval +doc_to_text: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}" +doc_to_target: "{{answer_right_ending-1}}" +doc_to_choice: "{{[sentence_quiz1, sentence_quiz2]}}" +should_decontaminate: true +doc_to_decontamination_query: "{{[input_sentence_1, input_sentence_2, input_sentence_3, input_sentence_4]|join(' ')}}" +metric_list: + - metric: acc + aggregation: mean + higher_is_better: true +metadata: + version: 1.0 diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_en.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_en.yaml new file mode 100644 index 0000000000000000000000000000000000000000..b3127cdfa5dfd4249566b12dc9b1451018a88581 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_en.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_en +dataset_name: en diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_es.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_es.yaml new file mode 100644 index 0000000000000000000000000000000000000000..60af1f8c0a7b8b0917060d592c663fe6212e0210 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_es.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_es +dataset_name: es diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_eu.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_eu.yaml new file mode 100644 index 0000000000000000000000000000000000000000..849caccf2425ec1483baddb83d8c98b8d1eb79e3 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_eu.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_eu +dataset_name: eu diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_hi.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_hi.yaml new file mode 100644 index 0000000000000000000000000000000000000000..8c00c75f0e3cba53c17174723d714fde8dc8c351 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_hi.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_hi +dataset_name: hi diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_id.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_id.yaml new file mode 100644 index 0000000000000000000000000000000000000000..c044d7532c4539e287aaa429d4042feff7c6d733 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_id.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_id +dataset_name: id diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_sw.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_sw.yaml new file mode 100644 index 0000000000000000000000000000000000000000..22b7f3b461fb628102face370fb8b48d7d442241 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_sw.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_sw +dataset_name: sw diff --git 
a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_te.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_te.yaml new file mode 100644 index 0000000000000000000000000000000000000000..946861d4f090d25d0b221c1c8eeca4e59249a380 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_te.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_te +dataset_name: te diff --git a/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_zh.yaml b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_zh.yaml new file mode 100644 index 0000000000000000000000000000000000000000..a55989fe2f64e6cb0dcf5136c35a1d5bf1ee4ae6 --- /dev/null +++ b/lm-evaluation-harness/lm_eval/tasks/xstorycloze/default_zh.yaml @@ -0,0 +1,3 @@ +include: default_ar.yaml +task: xstorycloze_zh +dataset_name: zh
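All of the per-language configs above inherit the same prompt construction from `default_ar.yaml`: the four context sentences are joined into the query, the two candidate endings become the choices, and `answer_right_ending - 1` gives the zero-based gold index. The following self-contained sketch shows what those templates evaluate to; the record is an invented toy example, not real dataset content.

```python
# Toy illustration of the xstorycloze doc_to_text / doc_to_choice /
# doc_to_target templates for a single record. The record below is
# fabricated for demonstration; real examples come from juletxara/xstory_cloze.
doc = {
    "input_sentence_1": "Anna packed her bag the night before the trip.",
    "input_sentence_2": "She set three alarms so she would not oversleep.",
    "input_sentence_3": "In the morning, none of the alarms went off.",
    "input_sentence_4": "She woke up an hour after her train had left.",
    "sentence_quiz1": "Anna arrived at the station right on time.",
    "sentence_quiz2": "Anna had to book a seat on a later train.",
    "answer_right_ending": 2,  # 1-indexed in the dataset
}

# doc_to_text: "{{[input_sentence_1, ..., input_sentence_4]|join(' ')}}"
context = " ".join(doc[f"input_sentence_{i}"] for i in range(1, 5))

# doc_to_choice: "{{[sentence_quiz1, sentence_quiz2]}}"
choices = [doc["sentence_quiz1"], doc["sentence_quiz2"]]

# doc_to_target: "{{answer_right_ending-1}}" -> zero-based index of the gold ending
gold = doc["answer_right_ending"] - 1

print(context)
print(choices[gold])  # the ending the model should score as more likely
```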