applied-ai-018 committed
Commit 8a669e8 · verified · 1 Parent(s): d31d8d7

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. lm-evaluation/docs/CONTRIBUTING.md +81 -0
  2. lm-evaluation/docs/README.md +10 -0
  3. lm-evaluation/docs/decontamination.md +71 -0
  4. lm-evaluation/docs/interface.md +146 -0
  5. lm-evaluation/docs/model_guide.md +116 -0
  6. lm-evaluation/docs/task_guide.md +384 -0
  7. lm-evaluation/tests/testdata/anli_r2-v0-loglikelihood +1 -0
  8. lm-evaluation/tests/testdata/arithmetic_2ds-v0-res.json +1 -0
  9. lm-evaluation/tests/testdata/blimp_anaphor_gender_agreement-v0-loglikelihood +1 -0
  10. lm-evaluation/tests/testdata/blimp_anaphor_number_agreement-v0-res.json +1 -0
  11. lm-evaluation/tests/testdata/blimp_animate_subject_passive-v0-loglikelihood +1 -0
  12. lm-evaluation/tests/testdata/blimp_existential_there_quantifiers_1-v0-loglikelihood +1 -0
  13. lm-evaluation/tests/testdata/blimp_existential_there_quantifiers_2-v0-res.json +1 -0
  14. lm-evaluation/tests/testdata/blimp_intransitive-v0-loglikelihood +1 -0
  15. lm-evaluation/tests/testdata/blimp_irregular_past_participle_adjectives-v0-loglikelihood +1 -0
  16. lm-evaluation/tests/testdata/blimp_irregular_past_participle_verbs-v0-loglikelihood +1 -0
  17. lm-evaluation/tests/testdata/blimp_irregular_plural_subject_verb_agreement_2-v0-res.json +1 -0
  18. lm-evaluation/tests/testdata/blimp_matrix_question_npi_licensor_present-v0-loglikelihood +1 -0
  19. lm-evaluation/tests/testdata/blimp_npi_present_2-v0-res.json +1 -0
  20. lm-evaluation/tests/testdata/blimp_principle_A_case_2-v0-loglikelihood +1 -0
  21. lm-evaluation/tests/testdata/blimp_regular_plural_subject_verb_agreement_1-v0-loglikelihood +1 -0
  22. lm-evaluation/tests/testdata/blimp_superlative_quantifiers_1-v0-res.json +1 -0
  23. lm-evaluation/tests/testdata/blimp_wh_island-v0-loglikelihood +1 -0
  24. lm-evaluation/tests/testdata/blimp_wh_questions_object_gap-v0-loglikelihood +1 -0
  25. lm-evaluation/tests/testdata/coqa-v1-res.json +1 -0
  26. lm-evaluation/tests/testdata/crows_pairs_english_age-v0-res.json +1 -0
  27. lm-evaluation/tests/testdata/crows_pairs_english_religion-v0-res.json +1 -0
  28. lm-evaluation/tests/testdata/drop-v0-res.json +1 -0
  29. lm-evaluation/tests/testdata/drop-v1-res.json +1 -0
  30. lm-evaluation/tests/testdata/hellaswag-v0-res.json +1 -0
  31. lm-evaluation/tests/testdata/hendrycksTest-college_biology-v0-loglikelihood +1 -0
  32. lm-evaluation/tests/testdata/hendrycksTest-college_physics-v0-res.json +1 -0
  33. lm-evaluation/tests/testdata/hendrycksTest-econometrics-v0-loglikelihood +1 -0
  34. lm-evaluation/tests/testdata/hendrycksTest-high_school_chemistry-v0-loglikelihood +1 -0
  35. lm-evaluation/tests/testdata/hendrycksTest-high_school_chemistry-v0-res.json +1 -0
  36. lm-evaluation/tests/testdata/hendrycksTest-high_school_geography-v0-loglikelihood +1 -0
  37. lm-evaluation/tests/testdata/hendrycksTest-high_school_mathematics-v0-res.json +1 -0
  38. lm-evaluation/tests/testdata/hendrycksTest-machine_learning-v0-loglikelihood +1 -0
  39. lm-evaluation/tests/testdata/hendrycksTest-miscellaneous-v0-loglikelihood +1 -0
  40. lm-evaluation/tests/testdata/hendrycksTest-moral_scenarios-v0-loglikelihood +1 -0
  41. lm-evaluation/tests/testdata/hendrycksTest-nutrition-v0-loglikelihood +1 -0
  42. lm-evaluation/tests/testdata/hendrycksTest-philosophy-v0-loglikelihood +1 -0
  43. lm-evaluation/tests/testdata/hendrycksTest-philosophy-v0-res.json +1 -0
  44. lm-evaluation/tests/testdata/hendrycksTest-professional_psychology-v0-loglikelihood +1 -0
  45. lm-evaluation/tests/testdata/hendrycksTest-security_studies-v0-loglikelihood +1 -0
  46. lm-evaluation/tests/testdata/hendrycksTest-us_foreign_policy-v0-res.json +1 -0
  47. lm-evaluation/tests/testdata/iwslt17-ar-en-v0-res.json +1 -0
  48. lm-evaluation/tests/testdata/lambada_mt_es-v0-loglikelihood +1 -0
  49. lm-evaluation/tests/testdata/lambada_openai_cloze-v0-loglikelihood +1 -0
  50. lm-evaluation/tests/testdata/lambada_openai_mt_de-v0-loglikelihood +1 -0
lm-evaluation/docs/CONTRIBUTING.md ADDED
@@ -0,0 +1,81 @@
# Contributing to LM Evaluation Harness

Welcome, and thank you for your interest in the LM Evaluation Harness! We welcome contributions and feedback, appreciate the time you spend with our library, and hope you find it useful!

We intend LM Evaluation Harness to be a broadly useful and extensible library for evaluating language models.

## Important Resources

Information about LM Evaluation Harness can be found in several places:

- Our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)
- We occasionally use [GitHub Milestones](https://github.com/EleutherAI/lm-evaluation-harness/milestones) to track progress toward specific near-term version releases.
- We maintain a [Project Board](https://github.com/orgs/EleutherAI/projects/25) for tracking current work items and PRs, and for future roadmap items or feature requests.
- Further discussion and support conversations take place in the #lm-thunderdome channel of the [EleutherAI discord](https://discord.gg/eleutherai).

## Code Style

LM Evaluation Harness uses [ruff](https://github.com/astral-sh/ruff) for linting via [pre-commit](https://pre-commit.com/).

You can install linters and dev tools via

```pip install lm_eval[dev]``` or ```pip install -e ".[dev]"```

Then, run

```pre-commit install```

to ensure linters and other checks are run upon committing.

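If you want to run the same checks by hand over the whole repository (for example, before opening a PR), pre-commit can also be invoked directly. The command below is a standard pre-commit invocation rather than anything specific to this repo:

```
# run all configured hooks (ruff, etc.) against every file in the repository
pre-commit run --all-files
```
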
## Testing

We use [pytest](https://docs.pytest.org/en/latest/) for running unit tests. All library unit tests can be run via:

```
python -m pytest --ignore=tests/tests_master --ignore=tests/extra
```

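While iterating on a change, it is often faster to run only a subset of the tests. The example below just uses pytest's standard `-k` expression filter; the pattern shown is illustrative, not a test name from this repository:

```
# run only the tests whose names match the given expression
python -m pytest --ignore=tests/tests_master --ignore=tests/extra -k "version"
```
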
## Contributor License Agreement

We ask that new contributors agree to a Contributor License Agreement affirming that EleutherAI has the rights to use your contribution to our library.
First-time pull requests will have a reply added by @CLAassistant containing instructions for how to confirm this, and we require it before merging your PR.

## Contribution Best Practices

We recommend a few best practices to make your contributions or reported errors easier to assist with.

**For Pull Requests:**
- PRs should be titled descriptively and opened with a brief description of the scope and intent of the new contribution.
- New features should have appropriate documentation added alongside them.
- Aim for code maintainability, and minimize code copying.
- If opening a task, try to share test results on the task using a publicly available model, and compare to any public results on the task if they exist.

**For Feature Requests:**
- Provide a short paragraph's worth of description. What is the feature you are requesting? What is its motivation, and an example use case of it? How does this differ from what is currently supported?

**For Bug Reports:**
- Provide a short description of the bug.
- Provide a *reproducible example*: what is the command you run with our library that results in this error? Have you tried any other steps to resolve it?
- Provide a *full error traceback* of the error that occurs, if applicable. A one-line error message or small screenshot snippet is unhelpful without the surrounding context.
- Note what version of the codebase you are using, and any specifics of your environment and setup that may be relevant.

**For Requesting New Tasks:**
- Provide a 1-2 sentence description of what the task is and what it evaluates.
- Provide a link to the paper introducing the task.
- Provide a link to where the dataset can be found.
- Provide a link to a paper containing results on an open-source model on the task, for use in comparisons and implementation validation.
- If applicable, link to any codebase that has implemented the task (especially the original publication's codebase, if it exists).

## How Can I Get Involved?

To quickly get started, we maintain a list of good first issues, which can be found [on our project board](https://github.com/orgs/EleutherAI/projects/25/views/8) or by [filtering GH Issues](https://github.com/EleutherAI/lm-evaluation-harness/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3A%22help+wanted%22). These are typically smaller code changes or self-contained features which can be added without extensive familiarity with library internals, and we recommend that new contributors consider taking a stab at one of these first if they are unsure where to begin.

There are a number of distinct ways to contribute to LM Evaluation Harness, and all are extremely helpful! A sampling of ways to contribute includes:
- **Implementing and verifying new evaluation tasks**: Is there a task you'd like to see LM Evaluation Harness support? Consider opening an issue requesting it, or helping add it! Verifying and cross-checking task implementations with their original versions is also a very valuable form of assistance in ensuring standardized evaluation.
- **Improving documentation**: Improvements to the documentation, or noting pain points or gaps in it, help us improve the user experience of the library as well as the clarity and coverage of the docs.
- **Testing and devops**: We are very grateful for any assistance in adding tests for the library that can be run on new PRs, and in improving other devops workflows.
- **Adding new modeling / inference library integrations**: We hope to support a broad range of commonly used inference libraries popular among the community, and welcome PRs for new integrations, so long as they are documented properly and maintainable.
- **Proposing or contributing new features**: We want LM Evaluation Harness to support a broad range of evaluation use cases. If you have a feature that is not currently supported but desired, feel free to open an issue describing the feature and, if applicable, how you intend to implement it. We would be happy to give feedback on the cleanest way to implement new functionality, and to coordinate with interested contributors via GitHub discussions or Discord.

We hope that this has been helpful, and appreciate your interest in contributing! Further questions can be directed to [our Discord](https://discord.gg/eleutherai).
lm-evaluation/docs/README.md ADDED
@@ -0,0 +1,10 @@
# Eval Harness Documentation

Welcome to the docs for the LM Evaluation Harness!

## Table of Contents

* To learn about the public interface of the library, as well as how to evaluate via the command line or from an external library, see the [Interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/interface.md).
* To learn how to add a new library, API, or model type, as well as a quick explainer on the ways to evaluate an LM, see the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/model_guide.md).
* For a crash course on adding new tasks to the library, see our [New Task Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/new_task_guide.md).
* To learn more about pushing the limits of task configuration that the Eval Harness supports, see the [Task Configuration Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/task_guide.md).
lm-evaluation/docs/decontamination.md ADDED
@@ -0,0 +1,71 @@
# Decontamination

## Usage

The provided directory should contain the ngram files and info.json produced by the "Pile Ngram Generation" steps further down.

```bash
python -m lm_eval \
    --model gpt2 \
    --device 0 \
    --tasks sciq
```

## Background
Downstream evaluations test model generalization, and are less useful when test set data also exists in the training set. This is referred to as leakage or contamination.

Filtering your training set against the test set is a good first step, but this isn't always possible, as in the case of a new benchmark or one that wasn't considered prior to model training. When training set filtering isn't possible, it is useful to measure the impact of test set leakage by detecting the contaminated test examples and producing a clean version of the benchmark.

The basis for our decontamination procedure can be found in Appendix C of "Language Models are Few-Shot Learners". OpenAI defined a test document as contaminated if any N-gram overlap existed with any training document. They used a range of N values between 8 and 13 depending on the dataset, while we use 13 throughout for simplicity.

## Implementation
Contamination detection can be found in `lm_eval/decontaminate.py`, with supporting code in `lm_eval/decontamination/`.

`decontaminate.py` does the following:
1. Build dictionaries of all ngrams and their corresponding evaluation/document ids.
2. Scan through sorted files containing training set n-grams.
3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated.

`lm_eval/evaluator.py` can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix.

This is disabled by default for new tasks. To support decontamination on a task, override the `should_decontaminate` and `doc_to_decontamination_query` methods (see the sketch below). For more details see the [task guide](task_guide.md).

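For reference, opting a task in looks roughly like the following. The class name and the `doc["question"]` field are illustrative placeholders; only the two overridden method names are taken from the description above:

```python
# a minimal sketch of opting a task into decontamination checks
class MyTask(Task):
    def should_decontaminate(self):
        # enable 13-gram overlap checks for this task
        return True

    def doc_to_decontamination_query(self, doc):
        # the text that will be compared against training set n-grams;
        # `question` is an assumed field name for illustration only
        return doc["question"]
```
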
## Pile Ngram Generation
The relevant scripts can be found in `scripts/clean_training_data`, which also imports from
`lm_eval/decontamination/`.

1. git clone https://github.com/EleutherAI/lm-evaluation-harness.git
2. pip install -r requirements.txt
3. Download The Pile from [The Eye](https://the-eye.eu/public/AI/pile/train/)
4. Place the Pile files in a "pile" directory under "lm-evaluation-harness" (or create a symlink)
5. Run generate_13_grams.

```bash
export PYTHONHASHSEED=0
python -m scripts/clean_training_data/generate_13_grams \
    -dir path/to/working/directory \
    -n 13 \
    -buckets 500
```

This took approximately 4 days for us. We had the time to wait, but it could be scaled out by doing partial Pile scans on multiple instances of this script and merging the relevant buckets. We fixed PYTHONHASHSEED to ensure reproducibility of bucket hashing in case you need to stop and restart.

6. Sort the generated 13-grams.
```bash
python -m scripts/clean_training_data/sort_13_gram_buckets \
    -dir path/to/working/directory/output
```

This took approximately 5 days for us. You could speed this up by spreading the files across different machines and running the sort script before gathering them back together.

7. Compress the sorted 13-gram files and place them together with info.json.

This step only takes a few hours.

```bash
python -m scripts/clean_training_data/compress_and_package \
    -dir path/to/working/directory \
    -output path/to/final/directory \
    -procs 8
```
lm-evaluation/docs/interface.md ADDED
@@ -0,0 +1,146 @@
# User Guide

This document details the interface exposed by `lm-eval` and provides details on what flags are available to users.

## Command-line Interface

A majority of users run the library by cloning it from GitHub, installing the package as editable, and running the `python -m lm_eval` script.

Equivalently, the library can be run via the `lm-eval` entrypoint at the command line.

This mode supports a number of command-line arguments, the details of which can also be seen by running with `-h` or `--help`. A combined example invocation is shown after the list below:

- `--model` : Selects which model type or provider is evaluated. Must be a string corresponding to the name of the model type/provider being used. See [the main README](https://github.com/EleutherAI/lm-evaluation-harness/tree/main#commercial-apis) for a full list of enabled model names and supported libraries or APIs.

- `--model_args` : Controls parameters passed to the model constructor. Accepts a string containing comma-separated keyword arguments to the model class of the format `"arg1=val1,arg2=val2,..."`, for example `--model_args pretrained=EleutherAI/pythia-160m,dtype=float32`. For a full list of supported keyword arguments, see the initialization of the relevant `lm_eval.api.model.LM` subclass, e.g. [`HFLM`](https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/models/huggingface.py#L66).

- `--tasks` : Determines which tasks or task groups are evaluated. Accepts a comma-separated list of task names or task group names. Must be solely comprised of valid tasks/groups.

- `--num_fewshot` : Sets the number of few-shot examples to place in context. Must be an integer.

- `--gen_kwargs` : Takes an arg string in the same format as `--model_args` and creates a dictionary of keyword arguments. These will be passed to the model for all `generate_until` (free-form or greedy generation) tasks, to set options such as the sampling temperature or `top_p` / `top_k`. For a list of what args are supported for each model type, reference the respective library's documentation (for example, the documentation for `transformers.AutoModelForCausalLM.generate()`). These kwargs will be applied to all `generate_until` tasks called; we do not currently support unique `gen_kwargs` or `batch_size` values per task in a single run of the library. To control these on a per-task level, set them in that task's YAML file.

- `--batch_size` : Sets the batch size used for evaluation. Can be a positive integer or `"auto"` to automatically select the largest batch size that will fit in memory, speeding up evaluation. One can pass `--batch_size auto:N` to re-select the maximum batch size `N` times during evaluation. This can help accelerate evaluation further, since `lm-eval` sorts documents in descending order of context length.

- `--max_batch_size` : Sets the maximum batch size to try to fit in memory, if `--batch_size auto` is passed.

- `--device` : Sets which device to place the model onto. Must be a string, for example, `"cuda", "cuda:0", "cpu", "mps"`. Defaults to "cuda", and can be ignored if running multi-GPU or running a non-local model type.

- `--output_path` : A string of the form `dir/file.jsonl` or `dir/`. Provides a path where high-level results will be saved, either into the file named or into the directory named. If `--log_samples` is passed as well, then per-document outputs and metrics will be saved into the directory as well.

- `--log_samples` : If this flag is passed, then the model's outputs, and the text fed into the model, will be saved at per-document granularity. Must be used with `--output_path`.

- `--limit` : Accepts an integer, or a float between 0.0 and 1.0. If passed, will limit the number of documents to evaluate to the first X documents (if an integer) per task or the first X% of documents per task. Useful for debugging, especially on costly API models.

- `--use_cache` : Should be a path where a sqlite db file can be written to. Takes a string of format `/path/to/sqlite_cache_` in order to create a cache db at `/path/to/sqlite_cache_rank{i}.db` for each process (0-NUM_GPUS). This allows results of prior runs to be cached, so that there is no need to re-run results in order to re-score or re-run a given (model, task) pair again.

- `--cache_requests` : Can be "true", "refresh", or "delete". "true" means that the cache should be used. "refresh" means that you wish to regenerate the cache, which you should run if you change your dataset configuration for a given task. "delete" will delete the cache. Cached files are stored under `lm_eval/cache/.cache` unless you specify a different path via the environment variable `LM_HARNESS_CACHE_PATH`, e.g. `LM_HARNESS_CACHE_PATH=~/Documents/cache_for_lm_harness`.

- `--check_integrity` : If this flag is used, the library tests for each task selected are run to confirm task integrity.

- `--write_out` : Used for diagnostic purposes to observe the format of task documents passed to a model. If this flag is used, then prints the prompt and gold target string for the first document of each task.

- `--show_config` : If used, prints the full `lm_eval.api.task.TaskConfig` contents (the non-default settings from the task's YAML file) for each task which was run, at the completion of an evaluation. Useful when one is modifying a task's configuration YAML locally, in order to transmit the exact configuration used for debugging or reproducibility purposes.

- `--include_path` : Accepts a path to a folder. If passed, then all YAML files containing `lm-eval`-compatible task configurations will be added to the task registry as available tasks. Used when one is writing config files for their own task in a folder other than `lm_eval/tasks/`.

- `--predict_only` : Generates the model outputs without computing metrics. Use with `--log_samples` to retrieve decoded results.

- `--seed` : Sets the seed for Python's `random`, NumPy and PyTorch. Accepts a comma-separated list of 3 values for the `random`, NumPy, and PyTorch seeds, respectively, or a single integer to set the same seed for all three. Each value is either an integer or 'None' to not set that seed. Default is `0,1234,1234` (for backward compatibility). E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`; here NumPy's seed is not set since the second value is `None`. E.g. `--seed 42` sets all three seeds to 42.

- `--wandb_args` : Enables logging of evaluation runs to Weights and Biases, and takes args passed to `wandb.init`, such as `project` and `job_type`. The full list is [here](https://docs.wandb.ai/ref/python/init). E.g., ```--wandb_args project=test-project,name=test-run```

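Putting several of these flags together, a typical invocation looks something like the following. The backend name `hf` and the task names used here are illustrative assumptions; substitute whichever registered model type and tasks you actually want to run:

```bash
python -m lm_eval --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,dtype=float32 \
    --tasks lambada_openai,hellaswag \
    --num_fewshot 0 \
    --batch_size auto \
    --output_path results/ \
    --log_samples
```
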
## External Library Usage

We also support using the library's external API for use within model training loops or other scripts.

`lm_eval` supplies two functions for external import and use: `lm_eval.evaluate()` and `lm_eval.simple_evaluate()`.

`simple_evaluate()` can be used by simply creating an `lm_eval.api.model.LM` subclass that implements the methods described in the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs/model_guide.md), and wrapping your custom model in that class as follows:

```python
import lm_eval
...

my_model = initialize_my_model() # create your model (could be running finetuning with some custom modeling code)
...
# instantiate an LM subclass that takes your initialized model and can run
# - `Your_LM.loglikelihood()`
# - `Your_LM.loglikelihood_rolling()`
# - `Your_LM.generate_until()`
lm_obj = Your_LM(model=my_model, batch_size=16)

# indexes all tasks from the `lm_eval/tasks` subdirectory.
# Alternatively, you can set `TaskManager(include_path="path/to/my/custom/task/configs")`
# to include a set of tasks in a separate directory.
task_manager = lm_eval.tasks.TaskManager()

# Setting `task_manager` to the one above is optional and should generally be done
# if you want to include tasks from paths other than ones in `lm_eval/tasks`.
# `simple_evaluate` will instantiate its own task_manager if it is set to None here.
results = lm_eval.simple_evaluate( # call simple_evaluate
    model=lm_obj,
    tasks=["taskname1", "taskname2"],
    num_fewshot=0,
    task_manager=task_manager,
    ...
)
```

See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L35 for a full description of all arguments available. All keyword arguments to `simple_evaluate` share the same role as the command-line flags described previously.

Additionally, the `evaluate()` function offers the core evaluation functionality provided by the library, but without some of the special handling and simplification + abstraction provided by `simple_evaluate()`.

See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L173 for more details.

As a brief example usage of `evaluate()`:

```python
import lm_eval

# suppose you've defined a custom lm_eval.api.Task subclass in your own external codebase
from my_tasks import MyTask1
...

# create your model (could be running finetuning with some custom modeling code)
my_model = initialize_my_model()
...

# instantiate an LM subclass that takes your initialized model and can run
# - `Your_LM.loglikelihood()`
# - `Your_LM.loglikelihood_rolling()`
# - `Your_LM.generate_until()`
lm_obj = Your_LM(model=my_model, batch_size=16)

# optional: the task_manager indexes tasks, including ones
# specified by the user through `include_path`.
task_manager = lm_eval.tasks.TaskManager(
    include_path="/path/to/custom/yaml"
)

# To get a task dict for `evaluate`
task_dict = lm_eval.tasks.get_task_dict(
    [
        "mmlu",           # A stock task
        "my_custom_task", # A custom task
        {
            "task": ...,  # A dict that configures a task
            "doc_to_text": ...,
        },
        MyTask1,          # A task object from `lm_eval.task.Task`
    ],
    task_manager,  # A task manager that allows lm_eval to
                   # load the task during evaluation.
                   # If none is provided, `get_task_dict`
                   # will instantiate one itself, but this
                   # only includes the stock tasks, so users
                   # will need to set this if including
                   # custom paths is required.
)

results = lm_eval.evaluate(
    lm=lm_obj,
    task_dict=task_dict,
    ...
)
```
lm-evaluation/docs/model_guide.md ADDED
@@ -0,0 +1,116 @@
# New Model Guide

This guide may be of special interest to users who are using the library outside of the repository, by installing the library from PyPI and calling `lm_eval.evaluator.evaluate()` to evaluate an existing model.

In order to properly evaluate a given LM, we require implementation of a wrapper class subclassing the `lm_eval.api.model.LM` class, that defines how the Evaluation Harness should interface with your model. This guide walks through how to write this `LM` subclass via adding it to the library!

## Setup

To get started contributing, go ahead and fork the main repo, clone it, create a branch with the name of your model type, and install the project requirements in your environment:

```sh
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout -b <model-type>
pip install -e ".[dev]"
```

Now, we'll create a new file where we'll be adding our model:

```sh
touch lm_eval/models/<my_model_filename>.py
```

**Tip: this filename should not shadow package names! For example, naming your file `anthropic.py` is disallowed since the API's name on pypi is `anthropic`, but naming it `anthropic_llms.py` works with no problems.**

## Interface

All models must subclass the `lm_eval.api.model.LM` class.

The LM class enforces a common interface via which we can extract responses from a model:

```python
class MyCustomLM(LM):
    #...
    def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        #...


    def loglikelihood_rolling(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        #...


    def generate_until(self, requests: list[Instance]) -> list[str]:
        #...
    #...
```
Where `Instance` is a dataclass defined in [`lm_eval.api.instance`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/api/instance.py) with property `args` of request-dependent type signature described below.

We support three types of requests, consisting of different interactions / measurements with an autoregressive LM.

All three request types take as input `requests` of type `list[Instance]` that have a matching `Instance.request_type` to the method name. A minimal sketch of a subclass implementing all three is shown after the list below.

- `generate_until`
  - Each request contains `Instance.args : Tuple[str, dict]` containing 1. an input string to the LM and 2. a dictionary of keyword arguments used to control generation parameters.
  - Using this input and these generation parameters, text will be sampled from the language model (typically until a maximum output length or specific stopping string sequences--for example, `{"until": ["\n\n", "."], "max_gen_toks": 128}`).
  - The generated input+output text from the model will then be returned.

- `loglikelihood`
  - Each request contains `Instance.args : Tuple[str, str]` containing 1. an input string to the LM and 2. a target string on which the loglikelihood of the LM producing this target, conditioned on the input, will be returned.
  - Each request will have, as result, `(ll, is_greedy): Tuple[float, int]` returned, where `ll` is a floating point number representing the log probability of generating the target string conditioned on the input, and `is_greedy` being either the value `0` or `1`, with it being `1` if and only if the target string *would be generated by greedy sampling from the LM* (that is, if the target string is the *most likely* N-token string to be output by the LM given the input).

- `loglikelihood_rolling`
  - Each request contains `Instance.args : Tuple[str]`, which is an input string to the model whose *entire* loglikelihood, conditioned on purely the EOT token, will be calculated.
  - This is used to evaluate *perplexity* on a data distribution.
  - It should return `(ll,) : Tuple[float]`, i.e. solely the *loglikelihood* of producing each piece of text given no starting input.

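To make the shapes above concrete, here is a minimal sketch of such a subclass. The `my_backend.*` calls stand in for whatever scoring and generation your own model exposes and are purely illustrative; only the method names, argument unpacking, and return shapes follow the descriptions above:

```python
import my_backend  # hypothetical wrapper around your own model's API

from lm_eval.api.instance import Instance
from lm_eval.api.model import LM


class MyCustomLM(LM):
    def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        results = []
        for request in requests:
            context, continuation = request.args  # (input string, target string)
            ll, is_greedy = my_backend.score(context, continuation)
            results.append((ll, is_greedy))
        return results

    def loglikelihood_rolling(self, requests: list[Instance]) -> list[float]:
        # full-text loglikelihood of each string, conditioned only on the EOT token
        return [my_backend.score_rolling(request.args[0]) for request in requests]

    def generate_until(self, requests: list[Instance]) -> list[str]:
        outputs = []
        for request in requests:
            context, gen_kwargs = request.args  # (input string, generation options)
            outputs.append(my_backend.generate(context, **gen_kwargs))
        return outputs
```
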
To allow a model to be evaluated on all types of tasks, you will need to implement these three types of measurements (note that `loglikelihood_rolling` is a special case of `loglikelihood`). For a reference implementation, check out `lm_eval/models/huggingface.py`! Additionally, check out `lm_eval.api.model.TemplateLM` for a class that abstracts away some commonly used functions across LM subclasses, or see if your model would lend itself well to subclassing the `lm_eval.models.huggingface.HFLM` class and overriding just the initialization or a couple of methods!

**Tip: be careful of indexing in loglikelihood!**

LMs take in tokens in position `[0 1 2 ... N]` and output a probability distribution for token position `N+1`. We provide a simplified graphic here, excerpted from `huggingface.py`:

```
# how this all works (illustrated on a causal decoder-only setup):
#           CTX       CONT
# inp     0 1 2 3|4 5 6 7 8 9   <- last token is deleted by inp[:, :-1]
# model    \               \
# logits     1 2 3|4 5 6 7 8 9  <- the ctx half gets tossed out by the
# cont_toks        4 5 6 7 8 9     [:, -len(continuation_enc):, :self.vocab_size] slice
```

The final token of the target is not passed into the LM, because we want the LM's predictions *up to but not past* that final target token. For more information, check out https://github.com/EleutherAI/lm-evaluation-harness/issues/942 .

## Registration

Congrats on implementing your model! Now it's time to test it out.

To make your model usable via the command line interface to `lm-eval` using `python -m lm_eval`, you'll need to tell `lm-eval` what your model's name is.

This is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package what the model's name(s) to be used are when invoking it with `python -m lm_eval --model <name>` and alert `lm-eval` to the model's existence.

```python
from lm_eval.api.registry import register_model

@register_model("<name1>", "<name2>")
class MyCustomLM(LM):
```

Using this decorator results in the class being added to an accounting of the usable LM types maintained internally to the library at `lm_eval.api.registry.MODEL_REGISTRY`. See `lm_eval.api.registry` for more detail on what sorts of registries and decorators exist in the library!

**Tip: be sure to import your model in `lm_eval/models/__init__.py`!**

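Once the model is registered and imported, it can be selected from the command line using the name(s) given to the decorator. The task and `--model_args` values below are placeholders; pass whatever constructor arguments your class accepts:

```bash
python -m lm_eval --model <name1> \
    --model_args my_arg=my_value \
    --tasks lambada_openai \
    --batch_size 8
```
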
## Testing

We also recommend that new model contributions be accompanied by short tests of their 3 core functionalities, at minimum. To see an example of such tests, look at https://github.com/EleutherAI/lm-evaluation-harness/blob/35bdecd379c0cefad6897e67db892f4a6026a128/tests/test_ggml.py .

## Other

**Pro tip**: In order to make the Evaluation Harness overestimate total runtimes rather than underestimate them, Hugging Face models come with built-in support for providing responses on data points in *descending order by total input length*, via `lm_eval.utils.Reorderer`. Take a look at `lm_eval.models.hf_causal.HFLM` to see how this is done, and see if you can implement it in your own model!

## Conclusion

After reading this guide, you should be able to add new model APIs or implementations to the Eval Harness library!
lm-evaluation/docs/task_guide.md ADDED
@@ -0,0 +1,384 @@
# Task Configuration

The `lm-evaluation-harness` is meant to be an extensible and flexible framework within which many different evaluation tasks can be defined. All tasks in the new version of the harness are built around a YAML configuration file format.

These YAML configuration files, along with the current codebase commit hash, are intended to be shareable such that providing the YAML config enables another researcher to precisely replicate the evaluation setup used, in the case that the prompt or setup differs from standard `lm-eval` task implementations.

While adding a standard evaluation task on a new dataset can occasionally be as simple as swapping out a Hugging Face dataset path in an existing file, more specialized evaluation setups also exist. Here we'll provide a crash course on the more advanced logic implementable in YAML form available to users.

If your intended task relies on features beyond what are described in this guide, we'd love to hear about it! Feel free to open an issue describing the scenario on GitHub, create a PR to the project with a proposed implementation, or ask in the `#lm-thunderdome` channel on the EleutherAI discord.

## Configurations

Tasks are configured via the `TaskConfig` object. Below, we describe all fields usable within the object, and their role in defining a task. A minimal example config combining the most common fields is shown at the end of this section.

### Parameters

Task naming + registration:
- **task** (`str`, defaults to None) — name of the task.
- **group** (`str`, *optional*) — name of the task group(s) a task belongs to. Enables one to run all tasks with a specified tag or group name at once.

Dataset configuration options:
- **dataset_path** (`str`) — The name of the dataset as listed by HF in the datasets Hub.
- **dataset_name** (`str`, *optional*, defaults to None) — The name of what HF calls a "data instance" or sub-task of the benchmark. If your task does not contain any data instances, just leave this to default to None. (If you're familiar with the HF `datasets.load_dataset` function, these are just the first 2 arguments to it.)
- **dataset_kwargs** (`dict`, *optional*) — Auxiliary arguments that `datasets.load_dataset` accepts. This can be used to specify arguments such as `data_files` or `data_dir` if you want to use local datafiles such as json or csv.
- **training_split** (`str`, *optional*) — Split in the dataset to use as the training split.
- **validation_split** (`str`, *optional*) — Split in the dataset to use as the validation split.
- **test_split** (`str`, *optional*) — Split in the dataset to use as the test split.
- **fewshot_split** (`str`, *optional*) — Split in the dataset to draw few-shot exemplars from. This must not be None if `num_fewshot` > 0.
- **process_docs** (`Callable`, *optional*) — Optionally define a function to apply to each HF dataset split, to preprocess all documents before being fed into prompt template rendering or other evaluation steps. Can be used to rename dataset columns, or to process documents into a format closer to that expected by the prompt template.

Prompting / in-context formatting options:
- **use_prompt** (`str`, *optional*) — Name of prompt in promptsource to use. If defined, this will overwrite `doc_to_text`, `doc_to_target`, and `doc_to_choice`.
- **description** (`str`, *optional*) — An optional Jinja2 template or string which will be prepended to the few-shot examples passed into the model, often describing the task or providing instructions to a model, such as `"The following are questions (with answers) about {{subject}}.\n\n"`. No delimiters or spacing are inserted between the description and the first few-shot example.
- **doc_to_text** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate input for the model.
- **doc_to_target** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate target output for the model. For multiple choice tasks, this should return the index of the correct answer among the choices given by `doc_to_choice`.
- **doc_to_choice** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into a list of possible string choices for `multiple_choice` tasks. Left undefined for `generate_until` tasks.
- **fewshot_delimiter** (`str`, *optional*, defaults to "\n\n") — String to insert between few-shot examples.
- **target_delimiter** (`str`, *optional*, defaults to `" "`) — String to insert between input and target output for the datapoint being tested.

Runtime configuration options:
- **num_fewshot** (`int`, *optional*, defaults to 0) — Number of few-shot examples before the input.
- **batch_size** (`int`, *optional*, defaults to 1) — Batch size.

Scoring details:
- **metric_list** (`list`, *optional*, defaults to None) — A list of metrics to use for evaluation. See docs for expected format.
- **output_type** (`str`, *optional*, defaults to "generate_until") — Selects the type of model output for the given task. Options are `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
- **generation_kwargs** (`dict`, *optional*) — Auxiliary arguments for the `generate` function from HF transformers library. Advanced keyword arguments may not be supported for non-HF LM classes.
- **repeats** (`int`, *optional*, defaults to 1) — Number of repeated runs through model for each sample. Can be used for cases such as self-consistency.
- **filter_list** (`Union[str, list]`, *optional*) — List of filters to postprocess model outputs. See below for further detail on the filter API.
- **should_decontaminate** (`bool`, *optional*, defaults to False) - Whether to decontaminate or not.
- **doc_to_decontamination_query** (`str`, *optional*) — Query for decontamination if `should_decontaminate` is True. If `should_decontaminate` is True but `doc_to_decontamination_query` is `None`, `doc_to_decontamination_query` will follow `doc_to_text`.

Other:
- **metadata** (`dict`, *optional*) — An optional field where arbitrary metadata can be passed. Most tasks should include a `version` key in this field that is used to denote the version of the yaml config. Other special metadata keys are: `num_fewshot`, to override the printed `n-shot` table column for a task.

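To tie these fields together, here is a minimal sketch of a multiple-choice task config using only fields from the list above. The task name, dataset path, and column names are placeholders for illustration, not an existing task in the repo:

```yaml
task: my_multiple_choice_task        # placeholder task name
dataset_path: some_org/some_dataset  # placeholder HF Hub dataset
output_type: multiple_choice
training_split: train
validation_split: validation
test_split: test
doc_to_text: "Question: {{question}}\nAnswer:"  # assumes a `question` column
doc_to_choice: "{{choices}}"                    # assumes a `choices` list column
doc_to_target: "{{label}}"                      # assumes an integer `label` column
metric_list:
  - metric: acc
  - metric: acc_norm
metadata:
  version: 1.0
```
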
## Filters

A key component of the `lm-evaluation-harness` library is the `Filter` object. In a typical evaluation run of the harness, we take the formatted inputs and run them through our LM, with the appropriate output type (greedy or free-form generation, or loglikelihood-based comparative scoring).

After getting scores or output text from our LM on each `Instance` or document in the dataset, we then need to feed these responses into a metric or scoring function to return scores to a user.

However, certain tasks may require more complex behavior than directly turning over model outputs to a metric function. For example, we may want to post-process our output text by truncating it or extracting a model's answer, we may want to ensemble over multiple "takes" on the same document, et cetera.

**Detailed Aside**:
We do such post-processing by operating on *responses*, which are stored after running an LM on an `Instance` from the task in `Instance.resps`.

`resps` is a `List[str]` for each instance, and we pass a `List[List[<expected return type from model>]]` to our filters that is a list of `[instance.resps for instance in instances]`.

Our filters, after completing a pipeline, must return a `List[<expected return type from model>]` which we then unpack and store each element of in `Instance.filtered_resps` for the corresponding instance. Thus, we take as input a list of returns from our model for each doc, and must return a return from our model *without it being wrapped in a list* for each doc.

**End Aside**

A full list of supported filter operations can be found in `lm_eval/filters/__init__.py`. Contributions of new filter types are welcome!

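To make the aside above concrete, here is a tiny illustration of the per-document shape transformation a final filter step performs, written as a plain function rather than with the library's `Filter` classes; the response strings are made up:

```python
# per-document responses, i.e. [instance.resps for instance in instances]
resps = [
    ["The answer is 6", "The answer is 7"],  # doc 0: two sampled responses
    ["The answer is 42"],                    # doc 1: one response
]

def take_first(per_doc_responses):
    # collapse each doc's list of responses down to a single response,
    # mirroring what a "take_first" step does at the end of a pipeline
    return [responses[0] for responses in per_doc_responses]

filtered = take_first(resps)
# filtered == ["The answer is 6", "The answer is 42"]
# each element would be stored in the corresponding Instance.filtered_resps
```
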
### Multiple Filter Pipelines

Tasks need not be limited to a single filter pipeline. We enable users to run multiple, distinct, filter pipelines on *the same model outputs* generated in one run on a task.

As a case study, let's look at an implementation of solving the GSM8k math word problem benchmark in `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`. Here, we are emulating the setup used by [Self-Consistency Improves Chain of Thought Prompting](https://arxiv.org/abs/2203.11171), in which evaluation is performed by generating N chain-of-thought outputs from a model via temperature-based sampling, then selecting the answers output by the model at the end of the chains of thought, then majority voting across all those numeric answers.

Within our YAML file:

```yaml
...
repeats: 64
filter_list:
  - name: "score-first"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "take_first"
  - name: "maj@64"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
  - name: "maj@8"
    filter:
      - function: "take_first_k"
        k: 8
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
```

We are able to provide multiple different filter pipelines, each with their own name and list of filters to apply in sequence.

Our first filter pipeline implements
- applying a regex to the model generations (extracting the number within the phrase "The answer is (number)")
- selecting only the first out of the 64 model answers

and then scoring this single answer.

```yaml
  - name: "score-first"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "take_first"
```

Our second filter pipeline, "maj@64", does majority voting across all 64 answers via:
- applying the same regex to all responses, to get the numerical answer from the model for each of the 64 responses per problem
- applying majority voting to all responses, which then returns a length-1 `[<majority answer>]` list for each
- taking the first element of this length-1 list, to then score the sole response `<majority answer>` for each document.

```yaml
  - name: "maj@64"
    filter:
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
```

Our final filter pipeline, "maj@8", does majority voting across the first 8 of the model's responses per document via:
- subsetting the len-64 list of responses `[answer1, answer2, ..., answer64]` to `[answer1, answer2, ..., answer8]` for each document
- performing the same sequence of filters on these new sets of 8 responses, for each document.

```yaml
  - name: "maj@8"
    filter:
      - function: "take_first_k"
        k: 8
      - function: "regex"
        regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
      - function: "majority_vote"
      - function: "take_first"
```

Thus, given the 64 responses from our LM on each document, we can report metrics on these responses in these 3 different ways, as defined by our filter pipelines.


## Embedded Python Code

You can use Python functions for certain arguments by using the `!function` operator after the argument name, followed by `<filename>.<pythonfunctionname>`. This feature can be used for the following arguments (an example follows the list):
1. `doc_to_text`
2. `doc_to_target`
3. `doc_to_choice`
4. `aggregation` for a `metric` in `metric_list`

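As a sketch of how this looks in practice, suppose there is a `utils.py` sitting next to the task YAML; the file name, function name, and `question` field here are illustrative rather than taken from an existing task:

```yaml
# inside the task YAML
doc_to_text: !function utils.format_prompt
```

with the corresponding function defined in `utils.py`:

```python
# utils.py, in the same directory as the task YAML
def format_prompt(doc):
    # `question` is an assumed column name for illustration
    return f"Question: {doc['question']}\nAnswer:"
```
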
## (No Longer Recommended) Direct `Task` Subclassing

The prior implementation method for new tasks was to subclass `Task`. While we intend to migrate all tasks to the new YAML implementation option going forward, it remains possible to subclass the Task class and implement custom logic. For more information, see `docs/task_guide.md` in v0.3.0 of the `lm-evaluation-harness`.


## Including a Base YAML

You can base a YAML on another YAML file and use it as a template. This can be handy when you need to just change the prompt for `doc_to_text` but keep the rest the same, or change `filters` to compare which is better. Simply use `include` in the YAML file and write the name of the template you want to base it on. This assumes that the base template is in the same directory; otherwise, you will need to provide the full path.
```
include: <YAML filename or with full path>
...
```
You can find an example of how to use this feature at [gsm8k-cot-self-consistency.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/3c07cc04a92fc467d7c9a94894aeddd58c93a5da/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml), where it is based on [gsm8k-cot.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/3c07cc04a92fc467d7c9a94894aeddd58c93a5da/lm_eval/tasks/gsm8k/gsm8k-cot.yaml).
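As a small sketch of the pattern, a variant config might look like the following; the file names, task name, and prompt string are placeholders rather than files that exist in the repo:

```yaml
# my_task_variant.yaml, living next to my_task.yaml
include: my_task.yaml
task: my_task_alternate_prompt
doc_to_text: "Q: {{question}}\nA:"  # only the prompt is overridden
```
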

## Passing Arguments to Metrics

Metrics can be defined in the `metric_list` argument when building the YAML config. Multiple metrics can be listed along with any auxiliary arguments. For example, when using the [`exact_match` metric](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match), auxiliary arguments such as `ignore_case`, `ignore_punctuation`, and `regexes_to_ignore` can be listed as well. They will be added to the metric function as `kwargs`. Some metrics have predefined values for `aggregation` and `higher_is_better`, so listing the metric name only can be sufficient.

```
metric_list:
  - metric: acc
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
    ignore_case: true
    ignore_punctuation: false
    regexes_to_ignore:
      - ","
      - "\\$"
```

### Natively Supported Metrics

Here we list all metrics currently supported natively in `lm-eval`:

Metrics:
* `acc` (accuracy)
* `acc_norm` (length-normalized accuracy)
* `acc_mutual_info` (baseline loglikelihood - normalized accuracy)
* `perplexity`
* `word_perplexity` (perplexity per word)
* `byte_perplexity` (perplexity per byte)
* `bits_per_byte`
* `matthews_corrcoef` (Matthews correlation coefficient)
* `f1` (F1 score)
* `bleu`
* `chrf`
* `ter`

Aggregation functions:
* `mean`
* `median`
* `perplexity`
* `weighted_perplexity`
* `bits_per_byte`

### Adding a Multiple Choice Metric

Adding a multiple choice metric has a few steps. To get it working you need to:

1. register a metric function
2. register an aggregation function
3. update the `Task` definition to make sure the correct arguments are passed

The default metric and aggregation functions are in `lm_eval/api/metrics.py`, and you can add a function there if it's for general use. The metrics are towards the bottom of the file and look like this:

    @register_metric(
        metric="mcc",
        higher_is_better=True,
        output_type="multiple_choice",
        aggregation="matthews_corrcoef",
    )
    def mcc_fn(items):  # This is a passthrough function
        return items

Note that many of these are passthrough functions, and for multiple choice (at least) this function is never actually called.

Aggregation functions are defined towards the top of the file; here's an example:

    @register_aggregation("matthews_corrcoef")
    def matthews_corrcoef(items):
        unzipped_list = list(zip(*items))
        golds = unzipped_list[0]
        preds = unzipped_list[1]
        return sklearn.metrics.matthews_corrcoef(golds, preds)

This function returns a single numeric value. The input is defined in `Task.process_results` in `lm_eval/api/task.py`. There's a section that looks like this:

    result_dict = {
        **({"acc": acc} if "acc" in use_metric else {}),
        **({"f1": (gold, pred)} if "f1" in use_metric else {}),
        **({"mcc": (gold, pred)} if "mcc" in use_metric else {}),
        **({"acc_norm": acc_norm} if "acc_norm" in use_metric else {}),
        **({"exact_match": exact_match} if "exact_match" in use_metric else {}),
    }

The value here determines the input to the aggregation function, though the name used matches the metric function. These metrics all have simple needs and just need the accuracy or gold and predicted values, but immediately below this there are examples of metrics with more complicated needs that you can use as reference.

267
+ ## Good Reference Tasks
268
+
269
+ Contributing a new task can be daunting! Luckily, much of the work has often been done for you in a different, similarly evaluated task. Good examples of task implementations to study include:
270
+
271
+ Multiple choice tasks:
272
+ - SciQ (`lm_eval/tasks/sciq/sciq.yaml`)
273
+
274
+ Corpus perplexity evaluations:
275
+ - Wikitext (`lm_eval/tasks/wikitext/wikitext.yaml`)
276
+
277
+ Generative tasks:
278
+ - GSM8k (`lm_eval/tasks/gsm8k/gsm8k.yaml`)
279
+
280
+ Tasks using complex filtering:
281
+ - GSM8k with CoT (+ with Self-Consistency): (`lm_eval/tasks/gsm8k/gsm8k-cot.yaml` ; `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`)
282
+
283
+
284
+ ## Benchmarks
285
+
286
+ When evaluating a language model, it's is not unusual to test across a number of tasks that may not be related to one another in order to assess a variety of capabilities. To this end, it may be combursome to have to list the set of tasks or add a new group name to each yaml of each individual task.
287
+
288
+ To solve this, we can create a benchmark yaml config. This is a config that contains the names of the tasks that should be included in a particular benchmark. The config consists of two main keys `group` which denotes the name of the benchmark and `task` which is where we can list the tasks. The tasks listed in `task` are the task names that have been registered. A good example would be the list of tasks used to evaluate the Pythia Suite.
289
+
290
+ ```yaml
291
+ group: pythia
292
+ task:
293
+ - lambada_openai
294
+ - wikitext
295
+ - piqa
296
+ - sciq
297
+ - wsc
298
+ - winogrande
299
+ - arc
300
+ - logiqa
301
+ - blimp
302
+ - hendrycksTest*
303
+ ```
304
+
305
+ It is also possible to list an existing task in your benchmark configuration with some adjustments. For example, a few tasks from mmlu is included `multimedqa`. There, the `task_alias` and `group_alias` (See [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#beautifying-table-display) for more details) are modified to suit the benchmark.
306
+
307
+ ```yaml
308
+ group: multimedqa
309
+ task:
310
+ - pubmedqa
311
+ - medmcqa
312
+ - medqa_4options
313
+ - task: mmlu_anatomy
314
+ task_alias: "anatomy (mmlu)"
315
+ group_alias: null
316
+ - task: mmlu_clinical_knowledge
317
+ task_alias: "clinical_knowledge (mmlu)"
318
+ group_alias: null
319
+ ...
320
+ ```
321
+
322
+ Alternatively, benchmarks can have tasks that are customizable for each task. They can be defined like how a yaml task is usually set.
323
+
324
+ ```yaml
325
+ group: t0_eval
326
+ task:
327
+ # Coreference Resolution
328
+ - dataset_path: super_glue
329
+ dataset_name: wsc.fixed
330
+ use_prompt: promptsource:*
331
+ training_split: train
332
+ validation_split: validation
333
+ metric_list:
334
+ - metric: exact_match
335
+ aggregation: mean
336
+ higher_is_better: true
337
+ ignore_case: true
338
+ ignore_punctuation: true
339
+ # Coreference Resolution
340
+ - dataset_path: winogrande
341
+ dataset_name: winogrande_xl
342
+ use_prompt: promptsource:*
343
+ training_split: train
344
+ validation_split: validation
345
+ metric_list:
346
+ - metric: exact_match
347
+ aggregation: mean
348
+ higher_is_better: true
349
+ ignore_case: true
350
+ ignore_punctuation: true
351
+ ...
352
+ ```
353
+
354
+ If the benchmark contains the same dataset but with different configurations, use `task` to differentiate between them. For example, T0-Eval evaluates on 3 versions of ANLI but the huggingface dataset collects them in one dataset.
355
+
356
+ ```yaml
+ group: t0_eval
+ task:
+   ...
+   - task: anli_r1
+     dataset_path: anli
+     use_prompt: promptsource:*
+     training_split: train_r1
+     validation_split: dev_r1
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+   - task: anli_r2
+     dataset_path: anli
+     use_prompt: promptsource:*
+     training_split: train_r2
+     validation_split: dev_r2
+     metric_list:
+       - metric: exact_match
+         aggregation: mean
+         higher_is_better: true
+         ignore_case: true
+         ignore_punctuation: true
+ ```
+
+ Calling the benchmark is done the same way we would call any task with `--tasks`: pass the benchmark's `group` name. Benchmark configs can be added in `lm_eval/tasks/benchmarks/`.
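+
+ For example, the `pythia` benchmark defined above could be run like any other task group. A minimal sketch, assuming the standard `lm_eval` CLI entrypoint and a Hugging Face model (the checkpoint name below is only illustrative):
+
+ ```bash
+ # Run every task registered under the `pythia` benchmark group.
+ # The model checkpoint is an example; substitute your own.
+ lm_eval --model hf \
+     --model_args pretrained=EleutherAI/pythia-160m \
+     --tasks pythia \
+     --batch_size 8
+ ```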
lm-evaluation/tests/testdata/anli_r2-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ d0ea3c3e09d533982c15b4c034439896d6af4bbafb2254d305e20215534a251d
lm-evaluation/tests/testdata/arithmetic_2ds-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"arithmetic_2ds": {"acc": 0.0, "acc_stderr": 0.0}}, "versions": {"arithmetic_2ds": 0}}
lm-evaluation/tests/testdata/blimp_anaphor_gender_agreement-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 2d8964e56a17661502ecf3f09c0befba63915360ddf2145b0bd845816950515d
lm-evaluation/tests/testdata/blimp_anaphor_number_agreement-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"blimp_anaphor_number_agreement": {"acc": 0.485, "acc_stderr": 0.0158121796418149}}, "versions": {"blimp_anaphor_number_agreement": 0}}
lm-evaluation/tests/testdata/blimp_animate_subject_passive-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 064c38fcd072b8bd12f54ea4f8e41599ed4e11dc386e93b77e1fc07967d1f960
lm-evaluation/tests/testdata/blimp_existential_there_quantifiers_1-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ d77594382e6d9af31a8b8ef00ba1ef6c29d6be6d0ddb7a9c27ef25ace654e05a
lm-evaluation/tests/testdata/blimp_existential_there_quantifiers_2-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"blimp_existential_there_quantifiers_2": {"acc": 0.485, "acc_stderr": 0.0158121796418149}}, "versions": {"blimp_existential_there_quantifiers_2": 0}}
lm-evaluation/tests/testdata/blimp_intransitive-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 6469ae3b0d46b008846b5fd132f2d2b26ea2858745d056df1470b89aa97a790f
lm-evaluation/tests/testdata/blimp_irregular_past_participle_adjectives-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 47c56f336df11924d8b97feb46339ce55bea4b216b6fd13946cc999ea36a4a95
lm-evaluation/tests/testdata/blimp_irregular_past_participle_verbs-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 63ec733873f94ace71cb34112d1c3cd5bb768c26b975fb90acc9b8ba3f4e938e
lm-evaluation/tests/testdata/blimp_irregular_plural_subject_verb_agreement_2-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"blimp_irregular_plural_subject_verb_agreement_2": {"acc": 0.485, "acc_stderr": 0.0158121796418149}}, "versions": {"blimp_irregular_plural_subject_verb_agreement_2": 0}}
lm-evaluation/tests/testdata/blimp_matrix_question_npi_licensor_present-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ a3a702a3335c79b02b36caf37c68069050c2a8a3a03c3610c09afc39d2b83fb1
lm-evaluation/tests/testdata/blimp_npi_present_2-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"blimp_npi_present_2": {"acc": 0.485, "acc_stderr": 0.0158121796418149}}, "versions": {"blimp_npi_present_2": 0}}
lm-evaluation/tests/testdata/blimp_principle_A_case_2-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ cd68adb65c891d672e22bf53c054b2083ab08bc1da43951732b409c942d14bc7
lm-evaluation/tests/testdata/blimp_regular_plural_subject_verb_agreement_1-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 5bc0441f31e32443cf761bca6e961d504e1e84b15aa4e1d79e5c8ed5b4c2aa3a
lm-evaluation/tests/testdata/blimp_superlative_quantifiers_1-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"blimp_superlative_quantifiers_1": {"acc": 0.485, "acc_stderr": 0.0158121796418149}}, "versions": {"blimp_superlative_quantifiers_1": 0}}
lm-evaluation/tests/testdata/blimp_wh_island-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 91a9e4b60b0f3572a7fdbd7648d0e69f36e5eb34db715315b0082558d7ed8b65
lm-evaluation/tests/testdata/blimp_wh_questions_object_gap-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 4d4aaa0274ccd485ff8430ed61b8f83806febe18c16616c7d050f637a0463eba
lm-evaluation/tests/testdata/coqa-v1-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"coqa": {"em": 0.0, "em_stderr": 0.0, "f1": 0.0, "f1_stderr": 0.0}}, "versions": {"coqa": 1}}
lm-evaluation/tests/testdata/crows_pairs_english_age-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"crows_pairs_english_age": {"likelihood_difference": 0.3160680928470684, "likelihood_difference_stderr": 0.02397758321605678, "pct_stereotype": 0.43956043956043955, "pct_stereotype_stderr": 0.05231815698566189}}, "versions": {"crows_pairs_english_age": 0}}
lm-evaluation/tests/testdata/crows_pairs_english_religion-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"crows_pairs_english_religion": {"likelihood_difference": 0.32170622542430666, "likelihood_difference_stderr": 0.022101541392310232, "pct_stereotype": 0.43243243243243246, "pct_stereotype_stderr": 0.04723583229758394}}, "versions": {"crows_pairs_english_religion": 0}}
lm-evaluation/tests/testdata/drop-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"drop": {"em": 0.0, "em_stderr": 0.0, "f1": 0.0, "f1_stderr": 0.0}}, "versions": {"drop": 0}}
lm-evaluation/tests/testdata/drop-v1-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"drop": {"em": 0.0, "em_stderr": 0.0, "f1": 0.0, "f1_stderr": 0.0}}, "versions": {"drop": 1}}
lm-evaluation/tests/testdata/hellaswag-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hellaswag": {"acc": 0.24965146385182235, "acc_norm": 0.24756024696275641, "acc_norm_stderr": 0.004307128573285236, "acc_stderr": 0.004319267432460666}}, "versions": {"hellaswag": 0}}
lm-evaluation/tests/testdata/hendrycksTest-college_biology-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ c29e4e67ff91af29b9434884874414d1b1b32ccc32903c6b1639469b19907419
lm-evaluation/tests/testdata/hendrycksTest-college_physics-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hendrycksTest-college_physics": {"acc": 0.23529411764705882, "acc_norm": 0.23529411764705882, "acc_norm_stderr": 0.04220773659171453, "acc_stderr": 0.04220773659171452}}, "versions": {"hendrycksTest-college_physics": 0}}
lm-evaluation/tests/testdata/hendrycksTest-econometrics-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ cde76ba2c7382b4876e17136c94f52aca2774e50342ab757b2a2d18da370dcb6
lm-evaluation/tests/testdata/hendrycksTest-high_school_chemistry-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ f4f338e45415c4b5ee7f1d249155bcd910c8401bd1436760a5ec61cb6bb211b6
lm-evaluation/tests/testdata/hendrycksTest-high_school_chemistry-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hendrycksTest-high_school_chemistry": {"acc": 0.2857142857142857, "acc_norm": 0.2660098522167488, "acc_norm_stderr": 0.031089826002937523, "acc_stderr": 0.031785297106427496}}, "versions": {"hendrycksTest-high_school_chemistry": 0}}
lm-evaluation/tests/testdata/hendrycksTest-high_school_geography-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ add45970ea3865be7c7a31f788a835949f6937ac73f699b122ca56a3431e95f8
lm-evaluation/tests/testdata/hendrycksTest-high_school_mathematics-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hendrycksTest-high_school_mathematics": {"acc": 0.22592592592592592, "acc_norm": 0.24814814814814815, "acc_norm_stderr": 0.0263357394040558, "acc_stderr": 0.025497532639609553}}, "versions": {"hendrycksTest-high_school_mathematics": 0}}
lm-evaluation/tests/testdata/hendrycksTest-machine_learning-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 7a7138821a66ef946e427b40344cf7f1a916a2926995a85ef731a3bee40cb7ce
lm-evaluation/tests/testdata/hendrycksTest-miscellaneous-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 972dd88dbbaf09d14766e243cfc233425e7c01a26dbc61bdb9eeefa788822331
lm-evaluation/tests/testdata/hendrycksTest-moral_scenarios-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ a8e1882e77728b53c8b86312254d08320d8363fb606d746a8dd145b812f62cf5
lm-evaluation/tests/testdata/hendrycksTest-nutrition-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 19e49d218f55ed5ec4bd1a6cd3f3388c6f620b81484e7abe8b298e5481c3044d
lm-evaluation/tests/testdata/hendrycksTest-philosophy-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ a419204da36c2b7a70fa8909a3a804260cc3283c7e07917534dfb76216c77f46
lm-evaluation/tests/testdata/hendrycksTest-philosophy-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hendrycksTest-philosophy": {"acc": 0.26366559485530544, "acc_norm": 0.2733118971061093, "acc_norm_stderr": 0.02531176597542612, "acc_stderr": 0.02502553850053234}}, "versions": {"hendrycksTest-philosophy": 0}}
lm-evaluation/tests/testdata/hendrycksTest-professional_psychology-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 92a5fad6e9ec700f84946faeccd399dda3569fb71837c9fb0c5c87f5ec29c43e
lm-evaluation/tests/testdata/hendrycksTest-security_studies-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 92dfffe2acf3278256486d3e1cf1edb5a739ad0a54c0f9c67695f7a411ed5f76
lm-evaluation/tests/testdata/hendrycksTest-us_foreign_policy-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"hendrycksTest-us_foreign_policy": {"acc": 0.2, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909283, "acc_stderr": 0.040201512610368445}}, "versions": {"hendrycksTest-us_foreign_policy": 0}}
lm-evaluation/tests/testdata/iwslt17-ar-en-v0-res.json ADDED
@@ -0,0 +1 @@
+ {"results": {"iwslt17-ar-en": {"bleu": 0.0, "bleu_stderr": 0.0, "chrf": 0.015049895477752772, "chrf_stderr": 0.0002940315671893584, "ter": 1.0, "ter_stderr": 0.0}}, "versions": {"iwslt17-ar-en": 0}}
lm-evaluation/tests/testdata/lambada_mt_es-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 4a88f4b316c72fe0396c382d6cbb33568ac4d0ad225150d3536635c085359fc9
lm-evaluation/tests/testdata/lambada_openai_cloze-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 7655e748b63ae7e9911411d2d2a2577221d6c861ca4448509992541294d689f3
lm-evaluation/tests/testdata/lambada_openai_mt_de-v0-loglikelihood ADDED
@@ -0,0 +1 @@
+ 5ad125e1708499832b2cee8c3388f89f9c0277010fd96fbd3359039ce8105984