Add files using upload-large-folder tool
- lm-evaluation-harness/docs/CONTRIBUTING.md +81 -0
- lm-evaluation-harness/docs/README.md +10 -0
- lm-evaluation-harness/docs/decontamination.md +71 -0
- lm-evaluation-harness/docs/interface.md +146 -0
- lm-evaluation-harness/docs/model_guide.md +116 -0
- lm-evaluation-harness/docs/new_task_guide.md +445 -0
- lm-evaluation-harness/docs/task_guide.md +384 -0
- lm-evaluation-harness/examples/lm-eval-overview.ipynb +1231 -0
- lm-evaluation-harness/examples/visualize-wandb.ipynb +168 -0
- lm-evaluation-harness/examples/visualize-zeno.ipynb +115 -0
- lm-evaluation-harness/lm_eval.egg-info/PKG-INFO +558 -0
- lm-evaluation-harness/lm_eval.egg-info/SOURCES.txt +0 -0
- lm-evaluation-harness/lm_eval.egg-info/dependency_links.txt +1 -0
- lm-evaluation-harness/lm_eval.egg-info/entry_points.txt +3 -0
- lm-evaluation-harness/lm_eval.egg-info/requires.txt +111 -0
- lm-evaluation-harness/lm_eval.egg-info/top_level.txt +1 -0
- lm-evaluation-harness/lm_eval/__init__.py +3 -0
- lm-evaluation-harness/lm_eval/__main__.py +417 -0
- lm-evaluation-harness/lm_eval/__pycache__/__init__.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/__pycache__/__main__.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/__pycache__/evaluator.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/__pycache__/evaluator_utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/__pycache__/logging_utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/__pycache__/utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/decontamination/__init__.py +0 -0
- lm-evaluation-harness/lm_eval/decontamination/archiver.py +171 -0
- lm-evaluation-harness/lm_eval/decontamination/decontaminate.py +166 -0
- lm-evaluation-harness/lm_eval/decontamination/janitor.py +328 -0
- lm-evaluation-harness/lm_eval/evaluator.py +584 -0
- lm-evaluation-harness/lm_eval/evaluator_utils.py +312 -0
- lm-evaluation-harness/lm_eval/logging_utils.py +455 -0
- lm-evaluation-harness/lm_eval/tasks/__init__.py +447 -0
- lm-evaluation-harness/lm_eval/tasks/__pycache__/__init__.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/__pycache__/utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_common_yaml +20 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_gu.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_hi.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_kn.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_ml.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_mr.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_ta.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_te.yaml +9 -0
- lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/utils.py +136 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/__pycache__/utils.cpython-310.pyc +0 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag.yaml +3 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_common_yaml +22 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_gu.yaml +3 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_hi.yaml +3 -0
- lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_kn.yaml +3 -0
lm-evaluation-harness/docs/CONTRIBUTING.md
ADDED
@@ -0,0 +1,81 @@
# Contributing to LM Evaluation Harness

Welcome and thank you for your interest in the LM Evaluation Harness! We welcome contributions and feedback and appreciate your time spent with our library, and hope you find it useful!

We intend LM Evaluation Harness to be a broadly useful and flexible tool for evaluating language models, and community contributions are central to that goal.

## Important Resources

There are several places information about LM Evaluation Harness is located:

- Our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)
- We occasionally use [GitHub Milestones](https://github.com/EleutherAI/lm-evaluation-harness/milestones) to track progress toward specific near-term version releases.
- We maintain a [Project Board](https://github.com/orgs/EleutherAI/projects/25) for tracking current work items and PRs, and for future roadmap items or feature requests.
- Further discussion and support conversations are located in the #lm-thunderdome channel of the [EleutherAI discord](discord.gg/eleutherai).

## Code Style

LM Evaluation Harness uses [ruff](https://github.com/astral-sh/ruff) for linting via [pre-commit](https://pre-commit.com/).

You can install linters and dev tools via

```pip install lm_eval[dev]``` or ```pip install -e ".[dev]"```

Then, run

```pre-commit install```

in order to ensure linters and other checks will be run upon committing.

## Testing

We use [pytest](https://docs.pytest.org/en/latest/) for running unit tests. All library unit tests can be run via:

```
python -m pytest --ignore=tests/tests_master --ignore=tests/extra
```

## Contributor License Agreement

We ask that new contributors agree to a Contributor License Agreement affirming that EleutherAI has the rights to use your contribution to our library.
First-time pull requests will have a reply added by @CLAassistant containing instructions for how to confirm this, and we require it before merging your PR.

## Contribution Best Practices

We recommend a few best practices to make your contributions or reported errors easier to assist with.

**For Pull Requests:**
- PRs should be titled descriptively, and be opened with a brief description of the scope and intent of the new contribution.
- New features should have appropriate documentation added alongside them.
- Aim for code maintainability, and minimize code copying.
- If opening a task, try to share test results on the task using a publicly-available model, and if any public results are available on the task, compare to them.

**For Feature Requests:**
- Provide a short paragraph's worth of description. What is the feature you are requesting? What is its motivation, and an example use case? How does this differ from what is currently supported?

**For Bug Reports**:
- Provide a short description of the bug.
- Provide a *reproducible example*--what is the command you run with our library that results in this error? Have you tried any other steps to resolve it?
- Provide a *full error traceback* of the error that occurs, if applicable. A one-line error message or small screenshot snippet is unhelpful without the surrounding context.
- Note what version of the codebase you are using, and any specifics of your environment and setup that may be relevant.

**For Requesting New Tasks**:
- Provide a 1-2 sentence description of what the task is and what it evaluates.
- Provide a link to the paper introducing the task.
- Provide a link to where the dataset can be found.
- Provide a link to a paper containing results on an open-source model on the task, for use in comparisons and implementation validation.
- If applicable, link to any codebase that has implemented the task (especially the original publication's codebase, if it exists).

## How Can I Get Involved?

To quickly get started, we maintain a list of good first issues, which can be found [on our project board](https://github.com/orgs/EleutherAI/projects/25/views/8) or by [filtering GH Issues](https://github.com/EleutherAI/lm-evaluation-harness/issues?q=is%3Aopen+label%3A%22good+first+issue%22+label%3A%22help+wanted%22). These are typically smaller code changes or self-contained features which can be added without extensive familiarity with library internals, and we recommend new contributors consider taking a stab at one of these first if they are feeling uncertain where to begin.

There are a number of distinct ways to contribute to LM Evaluation Harness, and all are extremely helpful! A sampling of ways to contribute includes:
- **Implementing and verifying new evaluation tasks**: Is there a task you'd like to see LM Evaluation Harness support? Consider opening an issue requesting it, or helping add it! Verifying and cross-checking task implementations with their original versions is also a very valuable form of assistance in ensuring standardized evaluation.
- **Improving documentation** - Improvements to the documentation, or noting pain points / gaps in documentation, are helpful in order for us to improve the user experience of the library and the clarity and coverage of the documentation.
- **Testing and devops** - We are very grateful for any assistance in adding tests for the library that can be run for new PRs, and other devops workflows.
- **Adding new modeling / inference library integrations** - We hope to support a broad range of commonly-used inference libraries popular among the community, and welcome PRs for new integrations, so long as they are documented properly and maintainable.
- **Proposing or Contributing New Features** - We want LM Evaluation Harness to support a broad range of evaluation use cases. If you have a feature that is not currently supported but desired, feel free to open an issue describing the feature and, if applicable, how you intend to implement it. We would be happy to give feedback on the cleanest way to implement new functionalities and are happy to coordinate with interested contributors via GH discussions or via discord.

We hope that this has been helpful, and appreciate your interest in contributing! Further questions can be directed to [our Discord](discord.gg/eleutherai).
lm-evaluation-harness/docs/README.md
ADDED
@@ -0,0 +1,10 @@
# Eval Harness Documentation

Welcome to the docs for the LM Evaluation Harness!

## Table of Contents

* To learn about the public interface of the library, as well as how to evaluate via the command line or as integrated into an external library, see the [Interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/interface.md).
* To learn how to add a new library, API, or model type to the library, as well as a quick explainer on the types of ways to evaluate an LM, see the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/model_guide.md).
* For a crash course on adding new tasks to the library, see our [New Task Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/new_task_guide.md).
* To learn more about pushing the limits of task configuration that the Eval Harness supports, see the [Task Configuration Guide](https://github.com/EleutherAI/lm-evaluation-harness/blob/big-refactor/docs/task_guide.md).
lm-evaluation-harness/docs/decontamination.md
ADDED
@@ -0,0 +1,71 @@
# Decontamination

## Usage

The provided directory should contain the ngram files and info.json produced in "Pile Ngram Generation" further down.

```bash
python -m lm_eval \
    --model gpt2 \
    --device 0 \
    --tasks sciq
```

## Background
Downstream evaluations test model generalization, and are less useful when test set data also exists in the training set, referred to as leakage or contamination.

Filtering your training set against the test set is a good first step; however, this isn't always possible, as in the case of a new benchmark or one that wasn't considered prior to model training. When training set filtering isn't possible, it is useful to measure the impact of test set leakage by detecting the contaminated test examples and producing a clean version of the benchmark.

The basis for our decontamination procedure can be found in Appendix C of "Language Models are Few-Shot Learners". OpenAI defined a test document as contaminated if any N-gram overlap existed with any training document. They used a range of N values between 8 and 13 depending on the dataset, while we just use 13 for simplicity.

## Implementation
Contamination detection can be found in `lm_eval/decontaminate.py`, with supporting code in `lm_eval/decontamination/`.

decontaminate.py does the following:
1. Build dictionaries of all ngrams and their corresponding evaluation/document ids.
2. Scan through sorted files containing training set n-grams.
3. If a match is found, the corresponding evaluation/document combinations are marked as contaminated.

`lm_eval/evaluator.py` can then produce a clean version of the benchmark by excluding the results of contaminated documents. For each metric, a clean version will be shown in the results with a "decontaminate" suffix.

This is disabled by default for new tasks; to support decontamination on a task, override the `should_decontaminate` and `doc_to_decontamination_query` methods. For more details see the [task guide](task_guide.md).
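As a rough illustration of the overlap check described above (a simplified sketch only, not the actual `janitor.py` implementation; the document and n-gram file representations here are hypothetical):

```python
from collections import defaultdict


def ngrams(tokens, n=13):
    # yield whitespace-joined n-grams of a token list
    return (" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1))


def find_contaminated(eval_docs: dict, train_ngram_lines, n=13):
    # 1. map each evaluation-doc n-gram to the ids of the docs containing it
    ngram_to_docs = defaultdict(set)
    for doc_id, text in eval_docs.items():
        for gram in ngrams(text.lower().split(), n):
            ngram_to_docs[gram].add(doc_id)

    # 2./3. scan the training-set n-grams (one per line, pre-sorted) and flag
    # every evaluation doc that shares any n-gram with the training data
    contaminated = set()
    for line in train_ngram_lines:
        gram = line.strip()
        if gram in ngram_to_docs:
            contaminated.update(ngram_to_docs[gram])
    return contaminated
```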
## Pile Ngram Generation
The relevant scripts can be found in `scripts/clean_training_data`, which also import from `lm_eval/decontamination/`.

1. git clone https://github.com/EleutherAI/lm-evaluation-harness.git
2. pip install -r requirements.txt
3. Download The Pile from [The Eye](https://the-eye.eu/public/AI/pile/train/)
4. Place the Pile files in a "pile" directory under "lm-evaluation-harness" (or create a symlink)
5. Run generate_13_grams.

```bash
export PYTHONHASHSEED=0
python -m scripts.clean_training_data.generate_13_grams \
    -dir path/to/working/directory \
    -n 13 \
    -buckets 500
```

This took approximately 4 days for us. We had the time to wait, but this could be scaled out by doing partial Pile scans on multiple instances of this script and merging the relevant buckets. We fixed PYTHONHASHSEED to ensure reproducibility of bucket hashing in case you need to stop and start.

6. Sort the generated 13-grams.
```bash
python -m scripts.clean_training_data.sort_13_gram_buckets \
    -dir path/to/working/directory/output
```

This took approximately 5 days for us. You could speed this up by spreading the files around to different machines and running the sort script before gathering them together.

7. Compress the sorted 13-gram files and place them together with info.json.

This step only takes a few hours.

```bash
python -m scripts.clean_training_data.compress_and_package \
    -dir path/to/working/directory \
    -output path/to/final/directory \
    -procs 8
```
lm-evaluation-harness/docs/interface.md
ADDED
@@ -0,0 +1,146 @@
# User Guide

This document details the interface exposed by `lm-eval` and provides details on what flags are available to users.

## Command-line Interface

A majority of users run the library by cloning it from Github, installing the package as editable, and running the `python -m lm_eval` script.

Equivalently, running the library can be done via the `lm-eval` entrypoint at the command line.

This mode supports a number of command-line arguments, the details of which can also be seen via running with `-h` or `--help`:

- `--model` : Selects which model type or provider is evaluated. Must be a string corresponding to the name of the model type/provider being used. See [the main README](https://github.com/EleutherAI/lm-evaluation-harness/tree/main#commercial-apis) for a full list of enabled model names and supported libraries or APIs.

- `--model_args` : Controls parameters passed to the model constructor. Accepts a string containing comma-separated keyword arguments to the model class of the format `"arg1=val1,arg2=val2,..."`, such as, for example, `--model_args pretrained=EleutherAI/pythia-160m,dtype=float32`. For a full list of supported keyword arguments, see the initialization of the `lm_eval.api.model.LM` subclass, e.g. [`HFLM`](https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/models/huggingface.py#L66).

- `--tasks` : Determines which tasks or task groups are evaluated. Accepts a comma-separated list of task names or task group names. Must be solely comprised of valid tasks/groups.

- `--num_fewshot` : Sets the number of few-shot examples to place in context. Must be an integer.

- `--gen_kwargs` : Takes an arg string in the same format as `--model_args` and creates a dictionary of keyword arguments. These will be passed to the models for all called `generate_until` (free-form or greedy generation task) tasks, to set options such as the sampling temperature or `top_p` / `top_k`. For a list of what args are supported for each model type, reference the respective library's documentation (for example, the documentation for `transformers.AutoModelForCausalLM.generate()`.) These kwargs will be applied to all `generate_until` tasks called--we do not currently support unique gen_kwargs or batch_size values per task in a single run of the library. To control these on a per-task level, set them in that task's YAML file.

- `--batch_size` : Sets the batch size used for evaluation. Can be a positive integer or `"auto"` to automatically select the largest batch size that will fit in memory, speeding up evaluation. One can pass `--batch_size auto:N` to re-select the maximum batch size `N` times during evaluation. This can help accelerate evaluation further, since `lm-eval` sorts documents in descending order of context length.

- `--max_batch_size` : Sets the maximum batch size to try to fit in memory, if `--batch_size auto` is passed.

- `--device` : Sets which device to place the model onto. Must be a string, for example, `"cuda", "cuda:0", "cpu", "mps"`. Defaults to "cuda", and can be ignored if running multi-GPU or running a non-local model type.

- `--output_path` : A string of the form `dir/file.jsonl` or `dir/`. Provides a path where high-level results will be saved, either into the file named or into the directory named. If `--log_samples` is passed as well, then per-document outputs and metrics will be saved into the directory as well.

- `--log_samples` : If this flag is passed, then the model's outputs, and the text fed into the model, will be saved at per-document granularity. Must be used with `--output_path`.

- `--limit` : Accepts an integer, or a float between 0.0 and 1.0. If passed, will limit the number of documents to evaluate to the first X documents (if an integer) per task or first X% of documents per task. Useful for debugging, especially on costly API models.

- `--use_cache` : Should be a path where a sqlite db file can be written to. Takes a string of format `/path/to/sqlite_cache_` in order to create a cache db at `/path/to/sqlite_cache_rank{i}.db` for each process (0-NUM_GPUS). This allows results of prior runs to be cached, so that there is no need to re-run results in order to re-score or re-run a given (model, task) pair again.

- `--cache_requests` : Can be "true", "refresh", or "delete". "true" means that the cache should be used. "refresh" means that you wish to regenerate the cache, which you should run if you change your dataset configuration for a given task. "delete" will delete the cache. Cached files are stored under lm_eval/cache/.cache unless you specify a different path via the environment variable `LM_HARNESS_CACHE_PATH`, e.g. `LM_HARNESS_CACHE_PATH=~/Documents/cache_for_lm_harness`.

- `--check_integrity` : If this flag is used, the library tests for each task selected are run to confirm task integrity.

- `--write_out` : Used for diagnostic purposes to observe the format of task documents passed to a model. If this flag is used, then prints the prompt and gold target string for the first document of each task.

- `--show_config` : If used, prints the full `lm_eval.api.task.TaskConfig` contents (the non-default settings in the task's YAML file) for each task which was run, at the completion of an evaluation. Useful for when one is modifying a task's configuration YAML locally to transmit the exact configurations used for debugging or for reproducibility purposes.

- `--include_path` : Accepts a path to a folder. If passed, then all YAML files containing `lm-eval`-compatible task configurations will be added to the task registry as available tasks. Used for when one is writing config files for their own task in a folder other than `lm_eval/tasks/`.

- `--predict_only`: Generates the model outputs without computing metrics. Use with `--log_samples` to retrieve decoded results.

* `--seed`: Set seed for python's random, numpy and torch. Accepts a comma-separated list of 3 values for python's random, numpy, and torch seeds, respectively, or a single integer to set the same seed for all three. The values are either an integer or 'None' to not set the seed. Default is `0,1234,1234` (for backward compatibility). E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`. Here numpy's seed is not set since the second value is `None`. E.g, `--seed 42` sets all three seeds to 42.

* `--wandb_args`: Tracks logging to Weights and Biases for evaluation runs and includes args passed to `wandb.init`, such as `project` and `job_type`. The full list is [here](https://docs.wandb.ai/ref/python/init). E.g., ```--wandb_args project=test-project,name=test-run```
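For illustration, a typical invocation combining several of these flags might look like the following (the model and task names shown here are just examples):

```bash
python -m lm_eval \
    --model hf \
    --model_args pretrained=EleutherAI/pythia-160m,dtype=float32 \
    --tasks hellaswag,sciq \
    --device cuda:0 \
    --batch_size auto \
    --output_path results/ \
    --log_samples
```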
## External Library Usage

We also support using the library's external API for use within model training loops or other scripts.

`lm_eval` supplies two functions for external import and use: `lm_eval.evaluate()` and `lm_eval.simple_evaluate()`.

`simple_evaluate()` can be used by simply creating an `lm_eval.api.model.LM` subclass that implements the methods described in the [Model Guide](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs/model_guide.md), and wrapping your custom model in that class as follows:

```python
import lm_eval
...

my_model = initialize_my_model() # create your model (could be running finetuning with some custom modeling code)
...
# instantiate an LM subclass that takes your initialized model and can run
# - `Your_LM.loglikelihood()`
# - `Your_LM.loglikelihood_rolling()`
# - `Your_LM.generate_until()`
lm_obj = Your_LM(model=my_model, batch_size=16)

# indexes all tasks from the `lm_eval/tasks` subdirectory.
# Alternatively, you can set `TaskManager(include_path="path/to/my/custom/task/configs")`
# to include a set of tasks in a separate directory.
task_manager = lm_eval.tasks.TaskManager()

# Setting `task_manager` to the one above is optional and should generally be done
# if you want to include tasks from paths other than ones in `lm_eval/tasks`.
# `simple_evaluate` will instantiate its own task_manager if it is set to None here.
results = lm_eval.simple_evaluate( # call simple_evaluate
    model=lm_obj,
    tasks=["taskname1", "taskname2"],
    num_fewshot=0,
    task_manager=task_manager,
    ...
)
```

See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L35 for a full description of all arguments available. All keyword arguments to simple_evaluate share the same role as the command-line flags described previously.

Additionally, the `evaluate()` function offers the core evaluation functionality provided by the library, but without some of the special handling and simplification + abstraction provided by `simple_evaluate()`.

See https://github.com/EleutherAI/lm-evaluation-harness/blob/365fcda9b85bbb6e0572d91976b8daf409164500/lm_eval/evaluator.py#L173 for more details.

As a brief example usage of `evaluate()`:

```python
import lm_eval

# suppose you've defined a custom lm_eval.api.Task subclass in your own external codebase
from my_tasks import MyTask1
...

# create your model (could be running finetuning with some custom modeling code)
my_model = initialize_my_model()
...

# instantiate an LM subclass that takes your initialized model and can run
# - `Your_LM.loglikelihood()`
# - `Your_LM.loglikelihood_rolling()`
# - `Your_LM.generate_until()`
lm_obj = Your_LM(model=my_model, batch_size=16)

# optional: the task_manager indexes tasks including ones
# specified by the user through `include_path`.
task_manager = lm_eval.tasks.TaskManager(
    include_path="/path/to/custom/yaml"
)

# To get a task dict for `evaluate`
task_dict = lm_eval.tasks.get_task_dict(
    [
        "mmlu", # A stock task
        "my_custom_task", # A custom task
        {
            "task": ..., # A dict that configures a task
            "doc_to_text": ...,
        },
        MyTask1 # A task object from `lm_eval.task.Task`
    ],
    task_manager # A task manager that allows lm_eval to
                 # load the task during evaluation.
                 # If none is provided, `get_task_dict`
                 # will instantiate one itself, but this
                 # only includes the stock tasks so users
                 # will need to set this if including
                 # custom paths is required.
)

results = lm_eval.evaluate(
    lm=lm_obj,
    task_dict=task_dict,
    ...
)
```
lm-evaluation-harness/docs/model_guide.md
ADDED
@@ -0,0 +1,116 @@
# New Model Guide

This guide may be of special interest to users who are using the library outside of the repository, by installing it from PyPI and calling `lm_eval.evaluator.evaluate()` to evaluate an existing model.

In order to properly evaluate a given LM, we require implementation of a wrapper class subclassing the `lm_eval.api.model.LM` class, that defines how the Evaluation Harness should interface with your model. This guide walks through how to write this `LM` subclass and add it to the library!

## Setup

To get started contributing, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment:

```sh
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout -b <model-type>
pip install -e ".[dev]"
```

Now, we'll create a new file where we'll be adding our model:

```sh
touch lm_eval/models/<my_model_filename>.py
```

**Tip: this filename should not shadow package names! For example, naming your file `anthropic.py` is disallowed since the API's name on pypi is `anthropic`, but naming it `anthropic_llms.py` works with no problems.**

## Interface

All models must subclass the `lm_eval.api.model.LM` class.

The LM class enforces a common interface via which we can extract responses from a model:

```python
class MyCustomLM(LM):
    #...
    def loglikelihood(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        #...


    def loglikelihood_rolling(self, requests: list[Instance]) -> list[tuple[float, bool]]:
        #...


    def generate_until(self, requests: list[Instance]) -> list[str]:
        #...
    #...
```
Where `Instance` is a dataclass defined in [`lm_eval.api.instance`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/api/instance.py) with property `args` of request-dependent type signature described below.

We support three types of requests, consisting of different interactions / measurements with an autoregressive LM.

All three request types take as input `requests` of type `list[Instance]` that have a matching `Instance.request_type` to the method name.

- `generate_until`
  - Each request contains `Instance.args : Tuple[str, dict]` containing 1. an input string to the LM and 2. a dictionary of keyword arguments used to control generation parameters.
  - Using this input and these generation parameters, text will be sampled from the language model (typically until a maximum output length or specific stopping string sequences--for example, `{"until": ["\n\n", "."], "max_gen_toks": 128}`).
  - The generated input+output text from the model will then be returned.

- `loglikelihood`
  - Each request contains `Instance.args : Tuple[str, str]` containing 1. an input string to the LM and 2. a target string on which the loglikelihood of the LM producing this target, conditioned on the input, will be returned.
  - Each request will have, as result, `(ll, is_greedy): Tuple[float, int]` returned, where `ll` is a floating point number representing the log probability of generating the target string conditioned on the input, and `is_greedy` being either the value `0` or `1`, with it being `1` if and only if the target string *would be generated by greedy sampling from the LM* (that is, if the target string is the *most likely* N-token string to be output by the LM given the input).

- `loglikelihood_rolling`
  - Each request contains `Instance.args : Tuple[str]`, which is an input string to the model whose *entire* loglikelihood, conditioned on purely the EOT token, will be calculated.
  - This is used to evaluate *perplexity* on a data distribution.
  - It should return `(ll,) : Tuple[float]`, a.k.a. solely the *loglikelihood* of producing each piece of text given no starting input.


To allow a model to be evaluated on all types of tasks, you will need to implement these three types of measurements (note that `loglikelihood_rolling` is a special case of `loglikelihood`). For a reference implementation, check out `lm_eval/models/huggingface.py`! Additionally, check out `lm_eval.api.model.TemplateLM` for a class that abstracts away some commonly used functions across LM subclasses, or see if your model would lend itself well to subclassing the `lm_eval.models.huggingface.HFLM` class and overriding just the initialization or a couple methods!

**Tip: be careful of indexing in loglikelihood!**


LMs take in tokens in position `[0 1 2 ... N]` and output a probability distribution for token position `N+1`. We provide a simplified graphic here, excerpted from `huggingface.py`:

```
# how this all works (illustrated on a causal decoder-only setup):
#           CTX       CONT
# inp       0 1 2 3|4 5 6 7 8 9   <- last token is deleted by inp[:, :-1]
# model       \               \
# logits       1 2 3|4 5 6 7 8 9  <- the ctx half gets tossed out by the
# cont_toks          4 5 6 7 8 9     [:, -len(continuation_enc):, :self.vocab_size] slice
```

The final token of the target is not passed into the LM, because we want the LM's predictions *up to but not past* that final target token. For more information, check out https://github.com/EleutherAI/lm-evaluation-harness/issues/942 .
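As a rough sketch of that slicing for a single (context, continuation) pair (illustrative only, using a generic Hugging Face causal LM and tokenizer; this is not the harness's actual implementation and does no batching or truncation):

```python
import torch
import torch.nn.functional as F


def score_continuation(model, tokenizer, context: str, continuation: str):
    ctx_enc = tokenizer.encode(context, add_special_tokens=False)
    cont_enc = tokenizer.encode(continuation, add_special_tokens=False)

    # feed everything except the final continuation token: we want predictions
    # up to, but not past, that last target token
    inp = torch.tensor([ctx_enc + cont_enc])[:, :-1]
    with torch.no_grad():
        logits = model(inp).logits  # [1, seq_len, vocab]

    # keep only the positions whose next-token predictions are the continuation tokens
    logits = logits[:, -len(cont_enc):, :]
    logprobs = F.log_softmax(logits, dim=-1)

    cont_toks = torch.tensor([cont_enc])  # [1, len(cont_enc)]
    ll = torch.gather(logprobs, 2, cont_toks.unsqueeze(-1)).sum().item()
    is_greedy = bool((logits.argmax(dim=-1) == cont_toks).all())
    return ll, is_greedy
```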
## Registration

Congrats on implementing your model! Now it's time to test it out.

To make your model usable via the command line interface to `lm-eval` using `python -m lm_eval`, you'll need to tell `lm-eval` what your model's name is.

This is done via a *decorator*, `lm_eval.api.registry.register_model`. Using `register_model()`, one can both tell the package what the model's name(s) to be used are when invoking it with `python -m lm_eval --model <name>` and alert `lm-eval` to the model's existence.

```python
from lm_eval.api.registry import register_model

@register_model("<name1>", "<name2>")
class MyCustomLM(LM):
```

Using this decorator results in the class being added to an accounting of the usable LM types maintained internally to the library at `lm_eval.api.registry.MODEL_REGISTRY`. See `lm_eval.api.registry` for more detail on what sorts of registries and decorators exist in the library!

**Tip: be sure to import your model in `lm_eval/models/__init__.py`!**

## Testing

We also recommend that new model contributions be accompanied by short tests of their 3 core functionalities, at minimum. To see an example of such tests, look at https://github.com/EleutherAI/lm-evaluation-harness/blob/35bdecd379c0cefad6897e67db892f4a6026a128/tests/test_ggml.py .

## Other

**Pro tip**: In order to make the Evaluation Harness overestimate total runtimes rather than underestimate them, HuggingFace models come in-built with the ability to provide responses on data points in *descending order by total input length* via `lm_eval.utils.Reorderer`. Take a look at `lm_eval.models.huggingface.HFLM` to see how this is done, and see if you can implement it in your own model!

## Conclusion

After reading this guide, you should be able to add new model APIs or implementations to the Eval Harness library!
lm-evaluation-harness/docs/new_task_guide.md
ADDED
@@ -0,0 +1,445 @@
# New Task Guide

`lm-evaluation-harness` is a framework that strives to support a wide range of zero- and few-shot evaluation tasks on autoregressive language models (LMs).

This documentation page provides a walkthrough to get started creating your own task, in `lm-eval` versions v0.4.0 and later.

A more interactive tutorial is available as a Jupyter notebook [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/examples/lm-eval-overview.ipynb).

## Setup

If you haven't already, go ahead and fork the main repo, clone it, create a branch with the name of your task, and install the project requirements in your environment:

```sh
# After forking...
git clone https://github.com/<YOUR-USERNAME>/lm-evaluation-harness.git
cd lm-evaluation-harness
git checkout -b <task-name>
pip install -e ".[dev]"
```

In this document, we'll walk through the basics of implementing a static benchmark evaluation in two formats: a *generative* task which requires sampling text from a model, such as [`gsm8k`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/gsm8k/gsm8k.yaml), and a *discriminative*, or *multiple choice*, task where the model picks the most likely of several fixed answer choices, such as [`sciq`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/sciq/sciq.yaml).

## Creating a YAML file

To implement a new standard task, we'll need to write a YAML file which configures our task logic. We start by making a new empty YAML file. This file can have any name, but we recommend placing it in a subfolder of `lm_eval/tasks` titled by the dataset or task's shorthand name: for example,

```sh
touch lm_eval/tasks/<dataset_name>/<my_new_task_name>.yaml
```
Or, copy the template subfolder we provide from `templates/new_yaml_task`:
```sh
cp -r templates/new_yaml_task lm_eval/tasks/
```
and rename the folders and YAML file(s) as desired.

### Selecting and configuring a dataset

All data downloading and management is handled through the HuggingFace (**HF**) [`datasets`](https://github.com/huggingface/datasets) API. So, the first thing you should do is check to see if your task's dataset is already provided in their catalog [here](https://huggingface.co/datasets). If it's not in there, please consider adding it to their Hub to make it accessible to a wider user base by following their [new dataset guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).

Once you have a HuggingFace dataset prepared for your task, we want to assign our new YAML to use this dataset:

```yaml
dataset_path: ... # the name of the dataset on the HF Hub.
dataset_name: ... # the dataset configuration to use. Leave `null` if your dataset does not require a config to be passed. See https://huggingface.co/docs/datasets/load_hub#configurations for more info.
dataset_kwargs: null # any extra keyword arguments that should be passed to the dataset constructor, e.g. `data_dir`.
```

Next, we'd like to tell our task what the dataset's train, validation, and test splits are named, if they exist:

```yaml
training_split: <split name of training set, or `null`>
validation_split: <split name of val. set, or `null`>
test_split: <split name of test set, or `null`>
```
Tests will run on the `test_split` if it is available, and otherwise evaluate on the `validation_split`.

We can also specify from which split the task should retrieve few-shot examples via:
```yaml
fewshot_split: <split name to draw fewshot examples from, or `null`>
```
though if this is not set, we will default to train/validation/test sets, in that order.


Finally, our dataset may not be already in the exact format we want. Maybe we have to strip whitespace and special characters via a regex from our dataset's "question" field! Or maybe we just want to rename its columns to match a convention we'll be using for our prompts.

Let's create a python file in the directory where we're writing our YAML file:
```bash
touch lm_eval/tasks/<dataset_name>/utils.py
```
Now, in `utils.py` we'll write a function to process each split of our dataset:

TODO: Change the example to one that's in the tasks/

```python
def process_docs(dataset: datasets.Dataset):
    def _helper(doc):
      # modifies the contents of a single
      # document in our dataset.
      doc["choices"] = [doc["choice1"], doc["choice2"], doc["wrong_answer"]]
      doc["gold"] = doc["label"]
      return doc

    return dataset.map(_helper) # returns back a datasets.Dataset object
```

Now, in our YAML config file we'll use the `!function` constructor, and tell the config where our imported Python function will come from. At runtime, before doing anything else we will preprocess our dataset according to this function!
```yaml
process_docs: !function utils.process_docs
```

### Using Local Datasets

To load a local dataset for evaluation, you can specify data files in the `dataset_kwargs` field, such as the following for JSON files:

```
dataset_path: json
dataset_name: null
dataset_kwargs:
  data_files: /path/to/my/json
```
Or with files already split into separate directories:

```
dataset_path: arrow
dataset_kwargs:
  data_files:
    train: /path/to/arrow/train/data-00000-of-00001.arrow
    validation: /path/to/arrow/validation/data-00000-of-00001.arrow
```

Alternatively, if you have previously downloaded a dataset from the huggingface hub (using `save_to_disk()`) and wish to use the local files, you will need to use `data_dir` under `dataset_kwargs` to point to where the directory is.

```
dataset_path: hellaswag
dataset_kwargs:
  data_dir: hellaswag_local/
```

You can also set `dataset_path` as a directory path in your local system. This will assume that there is a loading script with the same name as the directory. [See datasets docs](https://huggingface.co/docs/datasets/loading#local-loading-script).

## Writing a Prompt Template

The next thing we need to do is decide what format to use when presenting the data to the LM. This is our **prompt**, where we'll define both an input and output format.

To write a prompt, users will use `doc_to_text`, `doc_to_target`, and `doc_to_choice` (optional when certain conditions are met).

`doc_to_text` defines the input string a model will be given, while `doc_to_target` and `doc_to_choice` will be used to generate the target text. `doc_to_target` can be either a text string that refers to the target string or an integer that refers to the index of the correct label. When it is set as an index, `doc_to_choice` must also be set with the appropriate list of possible choice strings.

### Basic prompts

If a dataset is straightforward enough, users can enter the feature name directly. This assumes that no preprocessing is required. For example in [Swag](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/swag/swag.yaml#L10-L11), `doc_to_text` and `doc_to_target` are each given the name of a feature.
```yaml
doc_to_text: startphrase
doc_to_target: label
```
Hard-coding is also possible, as is the case in [SciQ](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/sciq/sciq.yaml#L11).
```yaml
doc_to_target: 3
```
`doc_to_choice` can be directly given a list of texts as options (see [Toxigen](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/toxigen/toxigen.yaml#L11)):
```yaml
doc_to_choice: ['No', 'Yes']
```

If a dataset feature is already a list, you can set the name of the feature as `doc_to_choice` (see [Hellaswag](https://github.com/EleutherAI/lm-evaluation-harness/blob/e0eda4d3ffa10e5f65e0976161cd134bec61983a/lm_eval/tasks/hellaswag/hellaswag.yaml#L13)):
```
doc_to_choice: choices
```


### Writing a prompt with Jinja 2

We support the [Jinja 2](https://jinja.palletsprojects.com/en/3.1.x/) templating language for writing prompts. In practice, this means you can take your dataset's columns and do many basic string manipulations to place each document into prompted format.

Take for example the dataset `super_glue/boolq`. As input, we'd like to use the features `passage` and `question` and string them together so that for a sample line `doc`, the model sees something in the format of:
```
doc["passage"]
Question: doc["question"]?
Answer:
```
We do this by [writing](https://github.com/EleutherAI/lm-evaluation-harness/blob/1710b42d52d0f327cb0eb3cb1bfbbeca992836ca/lm_eval/tasks/super_glue/boolq/default.yaml#L9C1-L9C61)
```yaml
doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:"
```
such that `{{passage}}` will be replaced by `doc["passage"]` and `{{question}}` with `doc["question"]` when rendering the prompt template.

Our intended output is for the model to predict a single whitespace, and then the answer to the question. We do this via:
```yaml
doc_to_target: "{{answer}}"
```

**Important**: we now add `target_delimiter` between input and target, which defaults to `" "`, such that the full input-output string is `doc_to_text(doc) + target_delimiter + doc_to_target(doc)`. `doc_to_text` and `doc_to_target` should not contain trailing right or leading left whitespace, respectively.
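Putting the BoolQ pieces above together with the default delimiter, the string the model is scored on for a document therefore looks like the following (the bracketed values stand in for the document's actual fields):

```
<passage>
Question: <question>?
Answer: <answer>
```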
#### Multiple choice format

For tasks which are multiple choice (a fixed, finite set of label words per each document) and evaluated via comparing loglikelihoods of all label words (the `multiple_choice` task output type) we enforce a particular convention on prompt format.

An annotated example in the case of SciQ is as follows:

```yaml
doc_to_text: "{{support.lstrip()}}\nQuestion: {{question}}\nAnswer:" # This is the input portion of the prompt for this doc. It will have " {{choice}}" appended to it as target for each choice in answer_choices.
doc_to_target: 3 # this contains the index into the answer choice list of the correct answer.
doc_to_choice: "{{[distractor1, distractor2, distractor3, correct_answer]}}"
```
Task implementers are thus able to decide what the answer choices should be for a document, and what prompt format to use.

The label index can also be sourced from a feature directly. For example in `superglue/boolq`, the label index is defined in the feature `label`. We can set `doc_to_target` as simply `label`. The options or verbalizers can be written in the form of a list `["no", "yes"]` that will correspond to the label index.

```yaml
doc_to_text: "{{passage}}\nQuestion: {{question}}?\nAnswer:"
doc_to_target: label
doc_to_choice: ["no", "yes"]
```

### Using Python Functions for Prompts

There may be cases where the prompt we want to implement is more easily expressed in Python than in Jinja 2. For this, we can use Python helper functions that are defined in the YAML config. It should be noted that the function script must be in the same directory as the YAML file.

A good example is WikiText, which requires a number of regex rules to clean the samples.
```
import re

def wikitext_detokenizer(doc):
    string = doc["page"]
    # contractions
    string = string.replace("s '", "s'")
    string = re.sub(r"/' [0-9]/", r"/'[0-9]/", string)
    ...
    string = string.replace(" 's", "'s")

    return string
```

We can load this function in `doc_to_target` by using the `!function` operator after `doc_to_target`, followed by `<file name>.<function name>`. In the file [wikitext.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/6ae376e3a43caa58b95bb8aa73054a94827bf560/lm_eval/tasks/wikitext/wikitext.yaml) we write:
```
doc_to_target: !function preprocess_wikitext.wikitext_detokenizer
```

### Importing a Prompt from Promptsource

[Promptsource](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource) is a great repository for crowdsourced prompts for many datasets. We can load these prompts easily by using the `use_prompt` argument and filling it with the format `"promptsource:<name of prompt template>"`. To use this, `doc_to_text` and `doc_to_target` should be left undefined. This will fetch the template of the dataset defined in the YAML file.

For example, for SuperGLUE BoolQ, if we want to use the prompt template `GPT-3 Style` we can add this to the YAML file:
```
use_prompt: "promptsource:GPT-3 Style"
```

If you would like to run evaluation on all prompt templates, you can simply write it this way:
```
use_prompt: "promptsource:*"
```

### Setting metrics

You're almost done! Now we need to choose how to score our task.
- *If this is a multiple choice task:* do you just want to check your model's accuracy in choosing the correct answer choice?
- *If this is a generation task:* do you just want to check how often your model outputs *exactly the ground-truth output string provided*?


If the answer to the above is no: you'll need to record what scoring metrics to use! Metrics can be listed in the following format:

```yaml
metric_list:
  - metric: <name of the metric here>
    aggregation: <name of the aggregation fn here>
    higher_is_better: <true or false>
  - metric: !function script.function
    aggregation: ...
    higher_is_better: ...
```
`aggregation` and `higher_is_better` can optionally be left out to default to the manually-set defaults if using a natively supported metric; otherwise they must be defined explicitly (for example, when using a custom metric implemented as a function).

For a full list of natively supported metrics and aggregation functions see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md). All metrics supported in [HuggingFace Evaluate](https://github.com/huggingface/evaluate/tree/main/metrics) can also be used, and will be loaded if a given metric name is not one natively supported in `lm-eval` or `hf_evaluate` is set to `true`.
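For instance, a generation task scored with exact-match accuracy might use something along these lines (shown purely for illustration; pick the metrics that actually suit your task):

```yaml
metric_list:
  - metric: exact_match
    aggregation: mean
    higher_is_better: true
```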
257 |
+
### Optional, More Advanced Setup
|
258 |
+
|
259 |
+
Some tasks may require more advanced processing logic than is described in this guide.
|
260 |
+
|
261 |
+
As a heuristic check:
|
262 |
+
* Does your task require generating multiple free-form outputs per input document?
|
263 |
+
* Does your task require complex, multi-step post-processing of generated model outputs?
|
264 |
+
* Does your task require subsetting documents on the fly based on their content?
|
265 |
+
* Do you expect to compute metrics after applying multiple such processing steps on your model outputs?
|
266 |
+
* Does your task rely on metrics that need a custom implementation?
|
267 |
+
|
268 |
+
For more detail on the task system and advanced features, see [`docs/task_guide.md`](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/task_guide.md) . If none of the above sound like they apply to your task, it's time to continue onto checking your task performance!
|
269 |
+
|
270 |
+
### Task name + groups (registering a task)
|
271 |
+
|
272 |
+
To test a task conveniently, it helps to *register* the task--that is, to give it a name and make the `lm-eval` library aware it exists!
|
273 |
+
|
274 |
+
If you're writing your YAML file inside the `lm_eval/tasks` folder, you just need to give your task a name! You can do this inside your YAML file:
|
275 |
+
|
276 |
+
```yaml
|
277 |
+
task: <name of the task>
|
278 |
+
```
|
279 |
+
Including a task name is mandatory.
|
280 |
+
|
281 |
+
It is often also convenient to label your task with several `groups`, or tags, though this field is optional:
|
282 |
+
|
283 |
+
```yaml
|
284 |
+
group:
|
285 |
+
- group1
|
286 |
+
- group2
|
287 |
+
```
|
288 |
+
This will add your task to the `group1` and `group2` groups, enabling people to know how to categorize your task, and if desired run all tasks in one of these groups at once, your task along with them.
|
289 |
+
|
290 |
+
|
291 |
+
If your task is not in the `lm_eval/tasks` folder, you'll need to tell the Eval Harness where to look for YAML files.
|
292 |
+
|
293 |
+
You can do this via the `--include_path` argument in `__main__.py`. This argument is used to initialize the `TaskManager` object, which you can also use in your custom scripts.
|
294 |
+
|
295 |
+
```python
|
296 |
+
task_manager = TaskManager(args.verbosity, include_path=args.include_path)
|
297 |
+
```
|
298 |
+
|
299 |
+
Passing `--tasks /path/to/yaml/file` is also accepted.
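If you are driving evaluations from your own Python script rather than the CLI, the same `TaskManager` can be handed to the evaluator. The sketch below follows the programmatic-usage pattern from the harness's interface docs; the model name, directory path, and keyword arguments are placeholders, and exact signatures should be checked against your installed version.

```python
# Sketch: evaluating a task defined in an external YAML directory.
import lm_eval
from lm_eval.tasks import TaskManager

# Point the task registry at your own folder of YAML task configs.
task_manager = TaskManager(include_path="/path/to/my/yaml/dir")

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["<name of the task>"],  # the `task:` name from your YAML
    num_fewshot=0,
    task_manager=task_manager,
)
print(results["results"])
```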
|
300 |
+
|
301 |
+
|
302 |
+
### Advanced Group Configs
|
303 |
+
|
304 |
+
You can build a more complete group config while also tailoring parameters for individual tasks.
|
305 |
+
|
306 |
+
For example, let's build a config for evaluating MMLU and a few natural language inference tasks. For MMLU, we can write the name of the benchmark as a subtask listed under `task`, and configure parameters such as `num_fewshot` for it. If the task being configured is a group such as `mmlu` or `super_glue`, the parameters will be applied to all of its subtasks.
|
307 |
+
|
308 |
+
```yaml
|
309 |
+
group: nli_and_mmlu
|
310 |
+
task:
|
311 |
+
- group: nli_tasks
|
312 |
+
task:
|
313 |
+
- cb
|
314 |
+
- anli_r1
|
315 |
+
- rte
|
316 |
+
- task: mmlu
|
317 |
+
num_fewshot: 2
|
318 |
+
```
|
319 |
+
Note that a group config can itself be inserted as a task: to make a group of natural language inference tasks here, we write it just as we would a standalone group config, but place it inside the task list of the main group being built.
|
320 |
+
|
321 |
+
### Duplicate Tasks in Group Configs
|
322 |
+
|
323 |
+
There may be cases where you want to evaluate how a model performs across variations of a prompt. You can list an existing task (in the example below, `anli_r1`) multiple times, each with a different `doc_to_text` implementation, and use `task_alias` to distinguish the variations. LM-Eval will recognize that these are variations of the same task and keep them separate.
|
324 |
+
```yaml
|
325 |
+
group: flan_held_in
|
326 |
+
group_alias: Flan (Held-In)
|
327 |
+
task:
|
328 |
+
# ANLI R1
|
329 |
+
- group: anli_r1_flan
|
330 |
+
group_alias: ANLI R1
|
331 |
+
task:
|
332 |
+
- task: anli_r1
|
333 |
+
task_alias: prompt-0
|
334 |
+
include: _held_in_template_yaml
|
335 |
+
doc_to_text: "{{premise}}\n\nChoose your answer ..."
|
336 |
+
...
|
337 |
+
- task: anli_r1
|
338 |
+
task_alias: prompt-1
|
339 |
+
include: _held_in_template_yaml
|
340 |
+
doc_to_text: "{{premise}}\n\nBased on ..."
|
341 |
+
...
|
342 |
+
```
|
343 |
+
|
344 |
+
### Configuring python classes
|
345 |
+
|
346 |
+
There are occasions when a YAML-based task cannot accommodate how a task needs to be handled. LM-Eval still supports manually implemented task classes, as was done prior to `0.4.x`. To register such a task, simply make a YAML with the name of the task under `task` and the class object under `class`, using the `!function` prefix.
|
347 |
+
|
348 |
+
```yaml
|
349 |
+
task: squadv2
|
350 |
+
class: !function task.SQuAD2
|
351 |
+
```
|
352 |
+
|
353 |
+
This also applies to building group configurations with subtasks that are python classes.
|
354 |
+
|
355 |
+
```yaml
|
356 |
+
group: scrolls
|
357 |
+
task:
|
358 |
+
- task: scrolls_qasper
|
359 |
+
class: !function task.Qasper
|
360 |
+
- task: scrolls_quality
|
361 |
+
class: !function task.QuALITY
|
362 |
+
- task: scrolls_narrativeqa
|
363 |
+
class: !function task.NarrativeQA
|
364 |
+
...
|
365 |
+
```
|
366 |
+
|
367 |
+
## Beautifying Table Display
|
368 |
+
|
369 |
+
To avoid conflict, each task needs to be registered with a unique name. Because of this, slight variations of a task still count as distinct tasks and must be named uniquely. This is typically done by adding a prefix or suffix that refers to the variation, as in MMLU, where the Flan-templated variants are distinguished from the default by the `mmlu_flan_*` prefix. Printing the full task names can easily clutter the results table at the end of an evaluation, especially when you have a long list of tasks or are using a benchmark that comprises many tasks. To make the table more legible, you can use `task_alias` and `group_alias` to provide alternative task and group names to be printed.
|
370 |
+
|
371 |
+
For example, in `mmlu_abstract_algebra.yaml` we set `group_alias` to `stem` and `task_alias` to `abstract_algebra`.
|
372 |
+
|
373 |
+
```yaml
|
374 |
+
"dataset_name": "abstract_algebra"
|
375 |
+
"description": "The following are multiple choice questions (with answers) about abstract\
|
376 |
+
\ algebra.\n\n"
|
377 |
+
"group": "mmlu_stem"
|
378 |
+
"group_alias": "stem"
|
379 |
+
"include": "_default_template_yaml"
|
380 |
+
"task": "mmlu_abstract_algebra"
|
381 |
+
"task_alias": "abstract_algebra"
|
382 |
+
```
|
383 |
+
Note: Even though `group` can be a list, for now, `group_alias` can only be a single string.
|
384 |
+
|
385 |
+
## Checking validity
|
386 |
+
|
387 |
+
After registering your task, you can now check that your data downloads correctly and verify that the few-shot samples look as intended. Run the following command with your desired args:
|
388 |
+
|
389 |
+
```bash
|
390 |
+
python -m scripts.write_out \
|
391 |
+
--output_base_path <path> \
|
392 |
+
--tasks <your-task-name> \
|
393 |
+
--sets <train | val | test> \
|
394 |
+
--num_fewshot K \
|
395 |
+
--num_examples N \
|
396 |
+
```
|
397 |
+
|
398 |
+
Open the file specified by `--output_base_path <path>` and ensure it passes
|
399 |
+
a simple eye test.
|
400 |
+
|
401 |
+
## Versioning
|
402 |
+
|
403 |
+
One key feature in LM Evaluation Harness is the ability to version tasks--that is, mark them with a specific version number that can be bumped whenever a breaking change is made.
|
404 |
+
|
405 |
+
This version info can be provided by adding the following to your new task config file:
|
406 |
+
|
407 |
+
```yaml
|
408 |
+
metadata:
|
409 |
+
version: 0
|
410 |
+
```
|
411 |
+
|
412 |
+
Now, whenever a change needs to be made to your task in the future, please increase the version number by 1 so that users can distinguish between task versions.
|
413 |
+
|
414 |
+
If you are incrementing a task's version, please also consider adding a changelog to the task's README.md noting the date, PR number, what version you have updated to, and a one-liner describing the change.
|
415 |
+
|
416 |
+
For example:
|
417 |
+
|
418 |
+
* \[Dec 25, 2023\] (PR #999) Version 0.0 -> 1.0: Fixed a bug with answer extraction that led to underestimated performance.
|
419 |
+
|
420 |
+
## Checking performance + equivalence
|
421 |
+
|
422 |
+
It's now time to check models' performance on your task! In the evaluation harness, we intend to support a wide range of evaluation tasks and setups, but prioritize the inclusion of already-proven benchmarks following the precise evaluation setups in the literature where possible.
|
423 |
+
|
424 |
+
To help with this, we provide a checklist that should be completed when contributing a new task, to enable accurate book-keeping and to ensure that tasks added to the library are well-tested and, where applicable, precedented.
|
425 |
+
|
426 |
+
### Task Validity Checklist
|
427 |
+
|
428 |
+
The checklist is the following:
|
429 |
+
|
430 |
+
For adding novel benchmarks/datasets to the library:
|
431 |
+
* [ ] Is the task an existing benchmark in the literature?
|
432 |
+
* [ ] Have you referenced the original paper that introduced the task?
|
433 |
+
* [ ] If yes, does the original paper provide a reference implementation? If so, have you checked against the reference implementation and documented how to run such a test?
|
434 |
+
|
435 |
+
|
436 |
+
If other tasks on this dataset are already supported:
|
437 |
+
* [ ] Is the "Main" variant of this task clearly denoted?
|
438 |
+
* [ ] Have you provided a short sentence in a README on what each new variant adds / evaluates?
|
439 |
+
* [ ] Have you noted which, if any, published evaluation setups are matched by this variant?
|
440 |
+
|
441 |
+
It is recommended to include a filled-out copy of this checklist in the README.md for the subfolder you are creating, if you have created a new subfolder in `lm_eval/tasks`.
|
442 |
+
|
443 |
+
## Submitting your task
|
444 |
+
|
445 |
+
You're all set! Now push your work and make a pull request to the `main` branch! Thanks for the contribution :). If there are any questions, please leave a message in the `#lm-thunderdome` channel on the EAI discord!
|
lm-evaluation-harness/docs/task_guide.md
ADDED
@@ -0,0 +1,384 @@
1 |
+
# Task Configuration
|
2 |
+
|
3 |
+
The `lm-evaluation-harness` is meant to be an extensible and flexible framework within which many different evaluation tasks can be defined. All tasks in the new version of the harness are built around a YAML configuration file format.
|
4 |
+
|
5 |
+
These YAML configuration files, along with the current codebase commit hash, are intended to be shareable such that providing the YAML config enables another researcher to precisely replicate the evaluation setup used by another, in the case that the prompt or setup differs from standard `lm-eval` task implementations.
|
6 |
+
|
7 |
+
While adding a standard evaluation task on a new dataset can occasionally be as simple as swapping out a Hugging Face dataset path in an existing file, more specialized evaluation setups also exist. Here we'll provide a crash course on the more advanced logic that can be implemented in YAML form.
|
8 |
+
|
9 |
+
If your intended task relies on features beyond what are described in this guide, we'd love to hear about it! Feel free to open an issue describing the scenario on Github, create a PR to the project with a proposed implementation, or ask in the `#lm-thunderdome` channel on the EleutherAI discord.
|
10 |
+
|
11 |
+
## Configurations
|
12 |
+
|
13 |
+
Tasks are configured via the `TaskConfig` object. Below, we describe all fields usable within the object, and their role in defining a task.
|
14 |
+
|
15 |
+
### Parameters
|
16 |
+
|
17 |
+
Task naming + registration:
|
18 |
+
- **task** (`str`, defaults to None) — name of the task.
|
19 |
+
- **group** (`str`, *optional*) — name of the task group(s) a task belongs to. Enables one to run all tasks with a specified tag or group name at once.
|
20 |
+
|
21 |
+
Dataset configuration options:
|
22 |
+
- **dataset_path** (`str`) — The name of the dataset as listed by HF in the datasets Hub.
|
23 |
+
- **dataset_name** (`str`, *optional*, defaults to None) — The name of what HF calls a “data instance” or sub-task of the benchmark. If your task does not contain any data instances, just leave this to default to None. (If you're familiar with the HF `datasets.load_dataset` function, these are just the first 2 arguments to it.)
|
24 |
+
- **dataset_kwargs** (`dict`, *optional*) — Auxiliary arguments that `datasets.load_dataset` accepts. This can be used to specify arguments such as `data_files` or `data_dir` if you want to use local datafiles such as json or csv.
|
25 |
+
- **training_split** (`str`, *optional*) — Split in the dataset to use as the training split.
|
26 |
+
- **validation_split** (`str`, *optional*) — Split in the dataset to use as the validation split.
|
27 |
+
- **test_split** (`str`, *optional*) — Split in the dataset to use as the test split.
|
28 |
+
- **fewshot_split** (`str`, *optional*) — Split in the dataset to draw few-shot exemplars from. Must not be None if `num_fewshot > 0`.
|
29 |
+
- **process_docs** (`Callable`, *optional*) — Optionally define a function to apply to each HF dataset split, to preprocess all documents before they are fed into prompt template rendering or other evaluation steps. Can be used to rename dataset columns, or to process documents into a format closer to that expected by a prompt template.
|
30 |
+
|
31 |
+
Prompting / in-context formatting options:
|
32 |
+
- **use_prompt** (`str`, *optional*) — Name of the prompt in promptsource to use. If defined, it will overwrite `doc_to_text`, `doc_to_target`, and `doc_to_choice`.
|
33 |
+
- **description** (`str`, *optional*) — An optional Jinja2 template or string which will be prepended to the few-shot examples passed into the model, often describing the task or providing instructions to a model, such as `"The following are questions (with answers) about {{subject}}.\n\n"`. No delimiters or spacing are inserted between the description and the first few-shot example.
|
34 |
+
- **doc_to_text** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate input for the model
|
35 |
+
- **doc_to_target** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into the appropriate target output for the model. For multiple choice tasks, this should return the index of the correct answer within the list of choices produced by `doc_to_choice`.
|
36 |
+
- **doc_to_choice** (`Union[Callable, str]`, *optional*) — Jinja2 template, string, or function to process a sample into a list of possible string choices for `multiple_choice` tasks. Left undefined for `generate_until` tasks.
|
37 |
+
- **fewshot_delimiter** (`str`, *optional*, defaults to "\n\n") — String to insert between few-shot examples.
|
38 |
+
- **target_delimiter** (`str`, *optional*, defaults to `" "`) — String to insert between input and target output for the datapoint being tested.
|
39 |
+
|
40 |
+
Runtime configuration options:
|
41 |
+
- **num_fewshot** (`int`, *optional*, defaults to 0) — Number of few-shot examples before the input.
|
42 |
+
- **batch_size** (`int`, *optional*, defaults to 1) — Batch size.
|
43 |
+
|
44 |
+
Scoring details:
|
45 |
+
- **metric_list** (`list`, *optional*, defaults to None) — A list of metrics to use for evaluation. See docs for expected format.
|
46 |
+
- **output_type** (`str`, *optional*, defaults to "generate_until") — Selects the type of model output for the given task. Options are `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
|
47 |
+
- **generation_kwargs** (`dict`, *optional*) — Auxiliary arguments for the `generate` function from HF transformers library. Advanced keyword arguments may not be supported for non-HF LM classes.
|
48 |
+
- **repeats** (`int`, *optional*, defaults to 1) — Number of repeated runs through the model for each sample. Can be used for cases such as self-consistency.
|
49 |
+
- **filter_list** (`Union[str, list]`, *optional*) — List of filters to postprocess model outputs. See below for further detail on the filter API.
|
50 |
+
- **should_decontaminate** (`bool`, *optional*, defaults to False) — Whether to decontaminate or not.
|
51 |
+
- **doc_to_decontamination_query** (`str`, *optional*) — Query for decontamination if `should_decontaminate` is True. If `should_decontaminate` is True but `doc_to_decontamination_query` is `None`, `doc_to_decontamination_query` will follow `doc_to_text`.
|
52 |
+
|
53 |
+
Other:
|
54 |
+
- **metadata** (`dict`, *optional*) — An optional field where arbitrary metadata can be passed. Most tasks should include a `version` key in this field that is used to denote the version of the yaml config. Other special metadata keys are: `num_fewshot`, to override the printed `n-shot` table column for a task.
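To see how these fields fit together, here is a minimal sketch that assembles a config as a plain Python dictionary mirroring the YAML keys above and writes it out with PyYAML. The dataset path, splits, prompt strings, and task name are invented for illustration only.

```python
# Sketch: building a minimal task config programmatically.
# All dataset/prompt specifics below are illustrative placeholders.
import yaml

config = {
    "task": "my_new_mcq_task",
    "dataset_path": "some_org/some_dataset",  # hypothetical HF dataset id
    "output_type": "multiple_choice",
    "training_split": "train",
    "validation_split": "validation",
    "doc_to_text": "{{question}}\nAnswer:",
    "doc_to_choice": "{{choices}}",
    "doc_to_target": "{{answer}}",
    "metric_list": [{"metric": "acc"}, {"metric": "acc_norm"}],
    "metadata": {"version": 0},
}

with open("my_new_mcq_task.yaml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```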
|
55 |
+
|
56 |
+
## Filters
|
57 |
+
|
58 |
+
What are filters, and what is their place in the pipeline?
|
59 |
+
|
60 |
+
A key component of the `lm-evaluation-harness` library is the `Filter` object. In a typical evaluation run of the harness, we take the formatted inputs and run them through our LM, with the appropriate output type (greedy or free-form generation, or loglikelihood-based comparative scoring).
|
61 |
+
|
62 |
+
After getting scores or output text from our LM on each `Instance` or document in the dataset, we then need to feed these responses into a metric or scoring function to return scores to a user.
|
63 |
+
|
64 |
+
However, certain tasks may require more complex behavior than directly turning over model outputs to a metric function. For example, we may want to post-process our output text by truncating it or extracting a model's answer, we may want to ensemble over multiple "takes" on the same document, et cetera.
|
65 |
+
|
66 |
+
**Detailed Aside**:
|
67 |
+
We do such post-processing by operating on *responses*, which are stored after running an LM on an `Instance` from the task in `Instance.resps`.
|
68 |
+
|
69 |
+
`resps` is a `List[str]` for each instance, and we pass a `List[List[<expected return type from model>]]` to our filters that is a list of `[instance.resps for instance in instances]`.
|
70 |
+
|
71 |
+
Our filters, after completing a pipeline, must return a `List[<expected return type from model>]` which we then unpack and store each element of in `Instance.filtered_resps` for the corresponding instance. Thus, we take as input a list of returns from our model for each doc, and must return a return from our model *without it being wrapped in a list* for each doc.
|
72 |
+
|
73 |
+
**End Aside**
|
74 |
+
|
75 |
+
|
76 |
+
A full list of supported filter operations can be found in `lm_eval/filters/__init__.py`. Contributions of new filter types are welcome!
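To make the shapes described in the aside concrete, here is a minimal sketch of that contract written as plain functions rather than as subclasses of the actual `Filter` base class (whose interface lives in `lm_eval/api/filter.py` and should be checked before contributing a real filter):

```python
# Sketch of the documented filter contract, not the real Filter API.
# Input: one list of model responses per document (List[List[str]]).
# Intermediate steps keep the nesting; a terminal "take_first"-style step
# unwraps each per-document list down to a single response.
from typing import List


def lowercase_step(resps: List[List[str]]) -> List[List[str]]:
    """Intermediate step: transform every response, keep the nesting."""
    return [[r.lower() for r in doc_resps] for doc_resps in resps]


def take_first_step(resps: List[List[str]]) -> List[str]:
    """Terminal step: keep only the first response for each document."""
    return [doc_resps[0] for doc_resps in resps]


# Two documents, two sampled responses each.
resps = [["The answer is 42", "The answer is 41"], ["Yes", "No"]]
print(take_first_step(lowercase_step(resps)))  # ['the answer is 42', 'yes']
```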
|
77 |
+
|
78 |
+
### Multiple Filter Pipelines
|
79 |
+
|
80 |
+
Tasks need not be limited to a single filter pipeline. We enable users to run multiple, distinct, filter pipelines on *the same model outputs* generated in one run on a task.
|
81 |
+
|
82 |
+
As a case study, let's look at an implementation of solving the Gsm8k math word problem benchmark in `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`. Here, we are emulating the setup used by [Self-Consistency Improves Chain of Thought Prompting](https://arxiv.org/abs/2203.11171), in which evaluation is performed by generating N chain-of-thought outputs from a model via temperature-based sampling, then selecting the answers output by the model at the end of the chains of thought, then majority voting across all those numeric answers.
|
83 |
+
|
84 |
+
Within our YAML file:
|
85 |
+
|
86 |
+
```yaml
|
87 |
+
...
|
88 |
+
repeats: 64
|
89 |
+
filter_list:
|
90 |
+
- name: "score-first"
|
91 |
+
filter:
|
92 |
+
- function: "regex"
|
93 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
94 |
+
- function: "take_first"
|
95 |
+
- name: "maj@64"
|
96 |
+
filter:
|
97 |
+
- function: "regex"
|
98 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
99 |
+
- function: "majority_vote"
|
100 |
+
- function: "take_first"
|
101 |
+
- name: "maj@8"
|
102 |
+
filter:
|
103 |
+
- function: "take_first_k"
|
104 |
+
k: 8
|
105 |
+
- function: "regex"
|
106 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
107 |
+
- function: "majority_vote"
|
108 |
+
- function: "take_first"
|
109 |
+
```
|
110 |
+
|
111 |
+
We are able to provide multiple different filter pipelines, each with its own name and list of filters to apply in sequence.
|
112 |
+
|
113 |
+
Our first filter pipeline implements
|
114 |
+
- applying a regex to the model generations (extracting the number within the phrase "The answer is (number)")
|
115 |
+
- selecting only the first out of the 64 model answers
|
116 |
+
|
117 |
+
Then scoring this single answer.
|
118 |
+
|
119 |
+
```yaml
|
120 |
+
- name: "score-first"
|
121 |
+
filter:
|
122 |
+
- function: "regex"
|
123 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
124 |
+
- function: "take_first"
|
125 |
+
```
|
126 |
+
|
127 |
+
Our second filter pipeline, "maj@64", does majority voting across all 64 answers via:
|
128 |
+
- applying the same regex to all responses, to get the numerical answer from the model for each of the 64 responses per problem
|
129 |
+
- applying majority voting to all responses, which then returns a length-1 `[<majority answer>]` list for each
|
130 |
+
- taking the first element of this length-1 list, to then score the sole response `<majority answer>` for each document.
|
131 |
+
|
132 |
+
```yaml
|
133 |
+
- name: "maj@64"
|
134 |
+
filter:
|
135 |
+
- function: "regex"
|
136 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
137 |
+
- function: "majority_vote"
|
138 |
+
- function: "take_first"
|
139 |
+
```
|
140 |
+
|
141 |
+
Our final filter pipeline, "maj@8", does majority voting across the first 8 of the model's responses per document via:
|
142 |
+
- subsetting the len-64 list of responses `[answer1, answer2, ..., answer64]` to `[answer1, answer2, ..., answer8]` for each document
|
143 |
+
- performing the same sequence of filters on these new sets of 8 responses, for each document.
|
144 |
+
```yaml
|
145 |
+
- name: "maj@8"
|
146 |
+
filter:
|
147 |
+
- function: "take_first_k"
|
148 |
+
k: 8
|
149 |
+
- function: "regex"
|
150 |
+
regex_pattern: "The answer is (\\-?[0-9\\.\\,]*[0-9]+)"
|
151 |
+
- function: "majority_vote"
|
152 |
+
- function: "take_first"
|
153 |
+
```
|
154 |
+
|
155 |
+
Thus, given the 64 responses from our LM on each document, we can report metrics on these responses in these 3 different ways, as defined by our filter pipelines.
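For intuition on what the `regex` + `majority_vote` + `take_first` chain computes for a single document, here is a small self-contained sketch; the sampled generations are invented for illustration:

```python
# Sketch: what a maj@N pipeline computes for one document's sampled outputs.
import re
from collections import Counter

generations = [
    "... so the total is 12. The answer is 12",
    "... therefore The answer is 12",
    "... The answer is 13",
]

pattern = re.compile(r"The answer is (\-?[0-9\.\,]*[0-9]+)")

# "regex": extract the numeric answer from each sampled chain of thought.
answers = [m.group(1) for g in generations if (m := pattern.search(g))]

# "majority_vote": keep the most common extracted answer.
majority = Counter(answers).most_common(1)[0][0]

# "take_first": a single answer per document is what gets scored.
print(majority)  # "12"
```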
|
156 |
+
|
157 |
+
|
158 |
+
## Embedded Python Code
|
159 |
+
|
160 |
+
You can use Python functions for certain arguments by using the `!function` operator after the argument name, followed by `<filename>.<pythonfunctionname>`. This feature can be used for the following arguments (see the sketch after the list for an example):
|
161 |
+
1. `doc_to_text`
|
162 |
+
2. `doc_to_target`
|
163 |
+
3. `doc_to_choice`
|
164 |
+
4. `aggregation` for a `metric` in `metric_list`
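Below is a minimal sketch of what such a helper module might contain, hypothetically named `utils.py` and sitting next to the task YAML; the `question`, `choices`, and `label` keys are assumptions about the dataset's columns:

```python
# utils.py -- hypothetical helpers referenced from a task YAML, e.g.:
#   doc_to_text: !function utils.doc_to_text
#   doc_to_target: !function utils.doc_to_target
# The "question" / "choices" / "label" keys are illustrative dataset columns.

def doc_to_text(doc: dict) -> str:
    """Render one dataset row into the prompt shown to the model."""
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(doc["choices"]))
    return f"Question: {doc['question']}\n{options}\nAnswer:"


def doc_to_target(doc: dict) -> int:
    """Return the index of the correct choice for multiple_choice tasks."""
    return int(doc["label"])
```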
|
165 |
+
|
166 |
+
## (No Longer Recommended) Direct `Task` Subclassing
|
167 |
+
|
168 |
+
The prior implementation method of new tasks was to subclass `Task`. While we intend to migrate all tasks to the new YAML implementation option going forward, it remains possible to subclass the Task class and implement custom logic. For more information, see `docs/task_guide.md` in v0.3.0 of the `lm-evaluation-harness`.
|
169 |
+
|
170 |
+
|
171 |
+
## Including a Base YAML
|
172 |
+
|
173 |
+
You can base a YAML on another YAML file as a template. This can be handy when you just need to change the prompt for `doc_to_text` but keep the rest the same, or change `filters` to compare which works better. Simply use `include` in the YAML file and write the name of the template you want to base it on. This assumes the base template is in the same directory; otherwise, you will need to provide the full path.
|
174 |
+
```yaml
|
175 |
+
include: <YAML filename or with full path>
|
176 |
+
...
|
177 |
+
```
|
178 |
+
You can find an example of how to use this feature at [gsm8k-cot-self-consistency.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/3c07cc04a92fc467d7c9a94894aeddd58c93a5da/lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml), which is based on [gsm8k-cot.yaml](https://github.com/EleutherAI/lm-evaluation-harness/blob/3c07cc04a92fc467d7c9a94894aeddd58c93a5da/lm_eval/tasks/gsm8k/gsm8k-cot.yaml).
|
179 |
+
|
180 |
+
|
181 |
+
## Passing Arguments to Metrics
|
182 |
+
|
183 |
+
Metrics can be defined in the `metric_list` argument when building the YAML config. Multiple metrics can be listed along with any auxiliary arguments. For example, when setting the [`exact_match` metric](https://github.com/huggingface/evaluate/tree/main/metrics/exact_match), auxiliary arguments such as `ignore_case`, `ignore_punctuation`, and `regexes_to_ignore` can be listed as well; they will be passed to the metric function as `kwargs`. Some metrics have predefined values for `aggregation` and `higher_is_better`, so listing only the metric name can be sufficient.
|
184 |
+
|
185 |
+
```yaml
|
186 |
+
metric_list:
|
187 |
+
- metric: acc
|
188 |
+
- metric: exact_match
|
189 |
+
aggregation: mean
|
190 |
+
higher_is_better: true
|
191 |
+
ignore_case: true
|
192 |
+
ignore_punctuation: false
|
193 |
+
regexes_to_ignore:
|
194 |
+
- ","
|
195 |
+
- "\\$"
|
196 |
+
```
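Because these kwargs are forwarded to the underlying metric, you can sanity-check their effect directly against the HF `evaluate` implementation of `exact_match`; this is just the upstream metric, not harness-specific code:

```python
# Checking how exact_match kwargs behave, using the HF `evaluate` package.
import evaluate

exact_match = evaluate.load("exact_match")

result = exact_match.compute(
    predictions=["$1,000", "Paris."],
    references=["1000", "paris"],
    ignore_case=True,
    ignore_punctuation=False,
    regexes_to_ignore=[",", "\\$"],
)
# The "$" and "," are stripped by regexes_to_ignore, so the first pair matches;
# the trailing "." still breaks the second pair -> {'exact_match': 0.5}
print(result)
```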
|
197 |
+
|
198 |
+
### Natively Supported Metrics
|
199 |
+
|
200 |
+
Here we list all metrics currently supported natively in `lm-eval`:
|
201 |
+
|
202 |
+
Metrics:
|
203 |
+
* `acc` (accuracy)
|
204 |
+
* `acc_norm` (length-normalized accuracy)
|
205 |
+
* `acc_mutual_info` (baseline loglikelihood - normalized accuracy)
|
206 |
+
* `perplexity`
|
207 |
+
* `word_perplexity` (perplexity per word)
|
208 |
+
* `byte_perplexity` (perplexity per byte)
|
209 |
+
* `bits_per_byte`
|
210 |
+
* `matthews_corrcoef` (Matthews correlation coefficient)
|
211 |
+
* `f1` (F1 score)
|
212 |
+
* `bleu`
|
213 |
+
* `chrf`
|
214 |
+
* `ter`
|
215 |
+
|
216 |
+
Aggregation functions:
|
217 |
+
* `mean`
|
218 |
+
* `median`
|
219 |
+
* `perplexity`
|
220 |
+
* `weighted_perplexity`
|
221 |
+
* `bits_per_byte`
|
222 |
+
|
223 |
+
### Adding a Multiple Choice Metric
|
224 |
+
|
225 |
+
Adding a multiple choice metric has a few steps. To get it working you need to:
|
226 |
+
|
227 |
+
1. register a metric function
|
228 |
+
2. register an aggregation function
|
229 |
+
3. update the `Task` definition to make sure the correct arguments are passed
|
230 |
+
|
231 |
+
The default metric and aggregation functions are in `lm_eval/api/metrics.py`, and you can add a function there if it's for general use. The metrics are towards the bottom of the file and look like this:
|
232 |
+
|
233 |
+
|
234 |
+
@register_metric(
|
235 |
+
metric="mcc",
|
236 |
+
higher_is_better=True,
|
237 |
+
output_type="multiple_choice",
|
238 |
+
aggregation="matthews_corrcoef",
|
239 |
+
)
|
240 |
+
def mcc_fn(items): # This is a passthrough function
|
241 |
+
return items
|
242 |
+
|
243 |
+
Note that many of these are passthrough functions, and for multiple choice (at least) this function is never actually called.
|
244 |
+
|
245 |
+
Aggregation functions are defined towards the top of the file, here's an example:
|
246 |
+
|
247 |
+
@register_aggregation("matthews_corrcoef")
|
248 |
+
def matthews_corrcoef(items):
|
249 |
+
unzipped_list = list(zip(*items))
|
250 |
+
golds = unzipped_list[0]
|
251 |
+
preds = unzipped_list[1]
|
252 |
+
return sklearn.metrics.matthews_corrcoef(golds, preds)
|
253 |
+
|
254 |
+
This function returns a single numeric value. The input is defined in `Task.process_results` in `lm_eval/api/task.py`. There's a section that looks like this:
|
255 |
+
|
256 |
+
|
257 |
+
result_dict = {
|
258 |
+
**({"acc": acc} if "acc" in use_metric else {}),
|
259 |
+
**({"f1": (gold, pred)} if "f1" in use_metric else {}),
|
260 |
+
**({"mcc": (gold, pred)} if "mcc" in use_metric else {}),
|
261 |
+
**({"acc_norm": acc_norm} if "acc_norm" in use_metric else {}),
|
262 |
+
**({"exact_match": exact_match} if "exact_match" in use_metric else {}),
|
263 |
+
}
|
264 |
+
|
265 |
+
The value here determines the input to the aggregation function, while the key matches the metric name. These metrics all have simple needs, requiring just the accuracy or the gold and predicted values, but immediately below this section there are examples of metrics with more complicated needs that you can use as a reference.
|
266 |
+
|
267 |
+
## Good Reference Tasks
|
268 |
+
|
269 |
+
Contributing a new task can be daunting! Luckily, much of the work has often been done for you in a different, similarly evaluated task. Good examples of task implementations to study include:
|
270 |
+
|
271 |
+
Multiple choice tasks:
|
272 |
+
- SciQ (`lm_eval/tasks/sciq/sciq.yaml`)
|
273 |
+
|
274 |
+
Corpus perplexity evaluations:
|
275 |
+
- Wikitext (`lm_eval/tasks/wikitext/wikitext.yaml`)
|
276 |
+
|
277 |
+
Generative tasks:
|
278 |
+
- GSM8k (`lm_eval/tasks/gsm8k/gsm8k.yaml`)
|
279 |
+
|
280 |
+
Tasks using complex filtering:
|
281 |
+
- GSM8k with CoT (+ with Self-Consistency): (`lm_eval/tasks/gsm8k/gsm8k-cot.yaml` ; `lm_eval/tasks/gsm8k/gsm8k-cot-self-consistency.yaml`)
|
282 |
+
|
283 |
+
|
284 |
+
## Benchmarks
|
285 |
+
|
286 |
+
When evaluating a language model, it is not unusual to test across a number of tasks that may not be related to one another in order to assess a variety of capabilities. To this end, it can be cumbersome to have to list the full set of tasks or add a new group name to the YAML of each individual task.
|
287 |
+
|
288 |
+
To solve this, we can create a benchmark YAML config: a config that contains the names of the tasks that should be included in a particular benchmark. The config consists of two main keys: `group`, which denotes the name of the benchmark, and `task`, which lists the tasks. The tasks listed under `task` must be registered task names. A good example is the list of tasks used to evaluate the Pythia suite.
|
289 |
+
|
290 |
+
```yaml
|
291 |
+
group: pythia
|
292 |
+
task:
|
293 |
+
- lambada_openai
|
294 |
+
- wikitext
|
295 |
+
- piqa
|
296 |
+
- sciq
|
297 |
+
- wsc
|
298 |
+
- winogrande
|
299 |
+
- arc
|
300 |
+
- logiqa
|
301 |
+
- blimp
|
302 |
+
- hendrycksTest*
|
303 |
+
```
|
304 |
+
|
305 |
+
It is also possible to list an existing task in your benchmark configuration with some adjustments. For example, a few tasks from MMLU are included in `multimedqa`; there, the `task_alias` and `group_alias` (see [here](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/new_task_guide.md#beautifying-table-display) for more details) are modified to suit the benchmark.
|
306 |
+
|
307 |
+
```yaml
|
308 |
+
group: multimedqa
|
309 |
+
task:
|
310 |
+
- pubmedqa
|
311 |
+
- medmcqa
|
312 |
+
- medqa_4options
|
313 |
+
- task: mmlu_anatomy
|
314 |
+
task_alias: "anatomy (mmlu)"
|
315 |
+
group_alias: null
|
316 |
+
- task: mmlu_clinical_knowledge
|
317 |
+
task_alias: "clinical_knowledge (mmlu)"
|
318 |
+
group_alias: null
|
319 |
+
...
|
320 |
+
```
|
321 |
+
|
322 |
+
Alternatively, a benchmark can customize each of its tasks inline; these entries are defined the same way a standalone YAML task would be.
|
323 |
+
|
324 |
+
```yaml
|
325 |
+
group: t0_eval
|
326 |
+
task:
|
327 |
+
# Coreference Resolution
|
328 |
+
- dataset_path: super_glue
|
329 |
+
dataset_name: wsc.fixed
|
330 |
+
use_prompt: promptsource:*
|
331 |
+
training_split: train
|
332 |
+
validation_split: validation
|
333 |
+
metric_list:
|
334 |
+
- metric: exact_match
|
335 |
+
aggregation: mean
|
336 |
+
higher_is_better: true
|
337 |
+
ignore_case: true
|
338 |
+
ignore_punctuation: true
|
339 |
+
# Coreference Resolution
|
340 |
+
- dataset_path: winogrande
|
341 |
+
dataset_name: winogrande_xl
|
342 |
+
use_prompt: promptsource:*
|
343 |
+
training_split: train
|
344 |
+
validation_split: validation
|
345 |
+
metric_list:
|
346 |
+
- metric: exact_match
|
347 |
+
aggregation: mean
|
348 |
+
higher_is_better: true
|
349 |
+
ignore_case: true
|
350 |
+
ignore_punctuation: true
|
351 |
+
...
|
352 |
+
```
|
353 |
+
|
354 |
+
If the benchmark uses the same dataset with different configurations, use `task` to differentiate between them. For example, T0-Eval evaluates on 3 versions of ANLI, but the Hugging Face dataset collects them in a single dataset.
|
355 |
+
|
356 |
+
```YAML
|
357 |
+
group: t0_eval
|
358 |
+
task:
|
359 |
+
...
|
360 |
+
- task: anli_r1
|
361 |
+
dataset_path: anli
|
362 |
+
use_prompt: promptsource:*
|
363 |
+
training_split: train_r1
|
364 |
+
validation_split: dev_r1
|
365 |
+
metric_list:
|
366 |
+
- metric: exact_match
|
367 |
+
aggregation: mean
|
368 |
+
higher_is_better: true
|
369 |
+
ignore_case: true
|
370 |
+
ignore_punctuation: true
|
371 |
+
- task: anli_r2
|
372 |
+
dataset_path: anli
|
373 |
+
use_prompt: promptsource:*
|
374 |
+
training_split: train_r2
|
375 |
+
validation_split: dev_r2
|
376 |
+
metric_list:
|
377 |
+
- metric: exact_match
|
378 |
+
aggregation: mean
|
379 |
+
higher_is_better: true
|
380 |
+
ignore_case: true
|
381 |
+
ignore_punctuation: true
|
382 |
+
```
|
383 |
+
|
384 |
+
Calling the benchmark is done the same way we would call any task with `--tasks`. Benchmarks can be added in `lm_eval/tasks/benchmarks/`.
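A registered benchmark group can also be run programmatically, the same way an individual task is; the sketch below reuses the `simple_evaluate` pattern from the interface docs and the `pythia` group defined above, with the model arguments as placeholders:

```python
# Sketch: running a benchmark group and inspecting per-task results.
import lm_eval

out = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["pythia"],  # a group name is accepted just like a task name
)

# Per-task metric dicts are keyed by task name; depending on the harness
# version, aggregate rows for the group may also be reported.
for task_name, metrics in out["results"].items():
    print(task_name, metrics)
```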
|
lm-evaluation-harness/examples/lm-eval-overview.ipynb
ADDED
@@ -0,0 +1,1231 @@
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"metadata": {
|
6 |
+
"id": "Qw83KAePAhaS"
|
7 |
+
},
|
8 |
+
"source": [
|
9 |
+
"# Releasing LM-Evaluation-Harness v0.4.0"
|
10 |
+
]
|
11 |
+
},
|
12 |
+
{
|
13 |
+
"cell_type": "markdown",
|
14 |
+
"metadata": {
|
15 |
+
"id": "Z7k2vq1iAdqr"
|
16 |
+
},
|
17 |
+
"source": [
|
18 |
+
"With the vast amount of work done in the field today, it helps to have a tool that people can use easily to share their results and use to check others to ensure reported numbers are valid. The LM Evaluation Harness is one such tool the community has used extensively. We want to continue to support the community and with that in mind, we’re excited to announce a major update on the LM Evaluation Harness to further our goal for open and accessible AI research."
|
19 |
+
]
|
20 |
+
},
|
21 |
+
{
|
22 |
+
"cell_type": "markdown",
|
23 |
+
"metadata": {
|
24 |
+
"id": "0gDoM0AJAvEc"
|
25 |
+
},
|
26 |
+
"source": [
|
27 |
+
"Our refactor stems from our desires to make the following believed best practices easier to carry out. \n",
|
28 |
+
"\n",
|
29 |
+
"1. Never copy results from other papers\n",
|
30 |
+
"2. Always share your exact prompts\n",
|
31 |
+
"3. Always provide model outputs\n",
|
32 |
+
"4. Qualitatively review a small batch of outputs before running evaluation jobs at scale\n",
|
33 |
+
"\n",
|
34 |
+
"We also wanted to make the library a better experience to use and to contribute or design evaluations within. New features in the new release that serve this purpose include:\n",
|
35 |
+
"\n",
|
36 |
+
"1. Faster Evaluation Runtimes (accelerated data-parallel inference with HF Transformers + Accelerate, and commonly used or faster inference libraries such as vLLM and Llama-CPP)\n",
|
37 |
+
"2. Easier addition and sharing of new tasks (YAML-based task config formats, allowing single-file sharing of custom tasks)\n",
|
38 |
+
"3. More configurability, for more advanced workflows and easier operation with modifying prompts\n",
|
39 |
+
"4. Better logging of data at runtime and post-hoc"
|
40 |
+
]
|
41 |
+
},
|
42 |
+
{
|
43 |
+
"cell_type": "markdown",
|
44 |
+
"metadata": {
|
45 |
+
"id": "nnwsOpjda_YW"
|
46 |
+
},
|
47 |
+
"source": [
|
48 |
+
"In this notebook we will be going through a short tutorial on how things work."
|
49 |
+
]
|
50 |
+
},
|
51 |
+
{
|
52 |
+
"cell_type": "markdown",
|
53 |
+
"metadata": {
|
54 |
+
"id": "zAov81vTbL2K"
|
55 |
+
},
|
56 |
+
"source": [
|
57 |
+
"## Install LM-Eval"
|
58 |
+
]
|
59 |
+
},
|
60 |
+
{
|
61 |
+
"cell_type": "code",
|
62 |
+
"execution_count": 1,
|
63 |
+
"metadata": {
|
64 |
+
"colab": {
|
65 |
+
"base_uri": "https://localhost:8080/"
|
66 |
+
},
|
67 |
+
"id": "8hiosGzq_qZg",
|
68 |
+
"outputId": "6ab73e5e-1f54-417e-a388-07e0d870b132"
|
69 |
+
},
|
70 |
+
"outputs": [
|
71 |
+
{
|
72 |
+
"name": "stdout",
|
73 |
+
"output_type": "stream",
|
74 |
+
"text": [
|
75 |
+
"Collecting git+https://github.com/EleutherAI/lm-evaluation-harness.git@big-refactor\n",
|
76 |
+
" Cloning https://github.com/EleutherAI/lm-evaluation-harness.git (to revision big-refactor) to /tmp/pip-req-build-tnssql5s\n",
|
77 |
+
" Running command git clone --filter=blob:none --quiet https://github.com/EleutherAI/lm-evaluation-harness.git /tmp/pip-req-build-tnssql5s\n",
|
78 |
+
" Running command git checkout -b big-refactor --track origin/big-refactor\n",
|
79 |
+
" Switched to a new branch 'big-refactor'\n",
|
80 |
+
" Branch 'big-refactor' set up to track remote branch 'big-refactor' from 'origin'.\n",
|
81 |
+
" Resolved https://github.com/EleutherAI/lm-evaluation-harness.git to commit 42f486ee49b65926a444cb0620870a39a5b4b0a8\n",
|
82 |
+
" Installing build dependencies ... \u001b[?25l\u001b[?25hdone\n",
|
83 |
+
" Getting requirements to build wheel ... \u001b[?25l\u001b[?25hdone\n",
|
84 |
+
" Preparing metadata (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
|
85 |
+
"Collecting accelerate>=0.21.0 (from lm-eval==1.0.0)\n",
|
86 |
+
" Downloading accelerate-0.24.1-py3-none-any.whl (261 kB)\n",
|
87 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m261.4/261.4 kB\u001b[0m \u001b[31m4.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
88 |
+
"\u001b[?25hCollecting evaluate (from lm-eval==1.0.0)\n",
|
89 |
+
" Downloading evaluate-0.4.1-py3-none-any.whl (84 kB)\n",
|
90 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m84.1/84.1 kB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
91 |
+
"\u001b[?25hCollecting datasets>=2.0.0 (from lm-eval==1.0.0)\n",
|
92 |
+
" Downloading datasets-2.15.0-py3-none-any.whl (521 kB)\n",
|
93 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m521.2/521.2 kB\u001b[0m \u001b[31m9.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
94 |
+
"\u001b[?25hCollecting jsonlines (from lm-eval==1.0.0)\n",
|
95 |
+
" Downloading jsonlines-4.0.0-py3-none-any.whl (8.7 kB)\n",
|
96 |
+
"Requirement already satisfied: numexpr in /usr/local/lib/python3.10/dist-packages (from lm-eval==1.0.0) (2.8.7)\n",
|
97 |
+
"Collecting peft>=0.2.0 (from lm-eval==1.0.0)\n",
|
98 |
+
" Downloading peft-0.6.2-py3-none-any.whl (174 kB)\n",
|
99 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m174.7/174.7 kB\u001b[0m \u001b[31m7.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
100 |
+
"\u001b[?25hCollecting pybind11>=2.6.2 (from lm-eval==1.0.0)\n",
|
101 |
+
" Downloading pybind11-2.11.1-py3-none-any.whl (227 kB)\n",
|
102 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m227.7/227.7 kB\u001b[0m \u001b[31m12.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
103 |
+
"\u001b[?25hCollecting pytablewriter (from lm-eval==1.0.0)\n",
|
104 |
+
" Downloading pytablewriter-1.2.0-py3-none-any.whl (111 kB)\n",
|
105 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m111.1/111.1 kB\u001b[0m \u001b[31m8.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
106 |
+
"\u001b[?25hCollecting rouge-score>=0.0.4 (from lm-eval==1.0.0)\n",
|
107 |
+
" Downloading rouge_score-0.1.2.tar.gz (17 kB)\n",
|
108 |
+
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
109 |
+
"Collecting sacrebleu>=1.5.0 (from lm-eval==1.0.0)\n",
|
110 |
+
" Downloading sacrebleu-2.3.2-py3-none-any.whl (119 kB)\n",
|
111 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m119.7/119.7 kB\u001b[0m \u001b[31m8.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
112 |
+
"\u001b[?25hRequirement already satisfied: scikit-learn>=0.24.1 in /usr/local/lib/python3.10/dist-packages (from lm-eval==1.0.0) (1.2.2)\n",
|
113 |
+
"Collecting sqlitedict (from lm-eval==1.0.0)\n",
|
114 |
+
" Downloading sqlitedict-2.1.0.tar.gz (21 kB)\n",
|
115 |
+
" Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
116 |
+
"Requirement already satisfied: torch>=1.8 in /usr/local/lib/python3.10/dist-packages (from lm-eval==1.0.0) (2.1.0+cu118)\n",
|
117 |
+
"Collecting tqdm-multiprocess (from lm-eval==1.0.0)\n",
|
118 |
+
" Downloading tqdm_multiprocess-0.0.11-py3-none-any.whl (9.8 kB)\n",
|
119 |
+
"Requirement already satisfied: transformers>=4.1 in /usr/local/lib/python3.10/dist-packages (from lm-eval==1.0.0) (4.35.2)\n",
|
120 |
+
"Collecting zstandard (from lm-eval==1.0.0)\n",
|
121 |
+
" Downloading zstandard-0.22.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (5.4 MB)\n",
|
122 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.4/5.4 MB\u001b[0m \u001b[31m29.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
123 |
+
"\u001b[?25hRequirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.21.0->lm-eval==1.0.0) (1.23.5)\n",
|
124 |
+
"Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.21.0->lm-eval==1.0.0) (23.2)\n",
|
125 |
+
"Requirement already satisfied: psutil in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.21.0->lm-eval==1.0.0) (5.9.5)\n",
|
126 |
+
"Requirement already satisfied: pyyaml in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.21.0->lm-eval==1.0.0) (6.0.1)\n",
|
127 |
+
"Requirement already satisfied: huggingface-hub in /usr/local/lib/python3.10/dist-packages (from accelerate>=0.21.0->lm-eval==1.0.0) (0.19.4)\n",
|
128 |
+
"Requirement already satisfied: pyarrow>=8.0.0 in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (9.0.0)\n",
|
129 |
+
"Collecting pyarrow-hotfix (from datasets>=2.0.0->lm-eval==1.0.0)\n",
|
130 |
+
" Downloading pyarrow_hotfix-0.6-py3-none-any.whl (7.9 kB)\n",
|
131 |
+
"Collecting dill<0.3.8,>=0.3.0 (from datasets>=2.0.0->lm-eval==1.0.0)\n",
|
132 |
+
" Downloading dill-0.3.7-py3-none-any.whl (115 kB)\n",
|
133 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m115.3/115.3 kB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
134 |
+
"\u001b[?25hRequirement already satisfied: pandas in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (1.5.3)\n",
|
135 |
+
"Requirement already satisfied: requests>=2.19.0 in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (2.31.0)\n",
|
136 |
+
"Requirement already satisfied: tqdm>=4.62.1 in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (4.66.1)\n",
|
137 |
+
"Requirement already satisfied: xxhash in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (3.4.1)\n",
|
138 |
+
"Collecting multiprocess (from datasets>=2.0.0->lm-eval==1.0.0)\n",
|
139 |
+
" Downloading multiprocess-0.70.15-py310-none-any.whl (134 kB)\n",
|
140 |
+
"\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m134.8/134.8 kB\u001b[0m \u001b[31m19.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
|
141 |
+
"\u001b[?25hRequirement already satisfied: fsspec[http]<=2023.10.0,>=2023.1.0 in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (2023.6.0)\n",
|
142 |
+
"Requirement already satisfied: aiohttp in /usr/local/lib/python3.10/dist-packages (from datasets>=2.0.0->lm-eval==1.0.0) (3.8.6)\n",
|
143 |
+
"Collecting responses<0.19 (from evaluate->lm-eval==1.0.0)\n",
|
144 |
+
" Downloading responses-0.18.0-py3-none-any.whl (38 kB)\n",
|
145 |
+
"Requirement already satisfied: safetensors in /usr/local/lib/python3.10/dist-packages (from peft>=0.2.0->lm-eval==1.0.0) (0.4.0)\n",
|
146 |
+
"Requirement already satisfied: absl-py in /usr/local/lib/python3.10/dist-packages (from rouge-score>=0.0.4->lm-eval==1.0.0) (1.4.0)\n",
|
147 |
+
"Requirement already satisfied: nltk in /usr/local/lib/python3.10/dist-packages (from rouge-score>=0.0.4->lm-eval==1.0.0) (3.8.1)\n",
|
148 |
+
"Requirement already satisfied: six>=1.14.0 in /usr/local/lib/python3.10/dist-packages (from rouge-score>=0.0.4->lm-eval==1.0.0) (1.16.0)\n",
|
149 |
+
"Collecting portalocker (from sacrebleu>=1.5.0->lm-eval==1.0.0)\n",
|
150 |
+
" Downloading portalocker-2.8.2-py3-none-any.whl (17 kB)\n",
|
151 |
+
"Requirement already satisfied: regex in /usr/local/lib/python3.10/dist-packages (from sacrebleu>=1.5.0->lm-eval==1.0.0) (2023.6.3)\n",
|
152 |
+
"Requirement already satisfied: tabulate>=0.8.9 in /usr/local/lib/python3.10/dist-packages (from sacrebleu>=1.5.0->lm-eval==1.0.0) (0.9.0)\n",
|
153 |
+
"Collecting colorama (from sacrebleu>=1.5.0->lm-eval==1.0.0)\n",
|
154 |
+
" Downloading colorama-0.4.6-py2.py3-none-any.whl (25 kB)\n",
|
155 |
+
"Requirement already satisfied: lxml in /usr/local/lib/python3.10/dist-packages (from sacrebleu>=1.5.0->lm-eval==1.0.0) (4.9.3)\n",
|
156 |
+
"Requirement already satisfied: scipy>=1.3.2 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.24.1->lm-eval==1.0.0) (1.11.3)\n",
|
157 |
+
"Requirement already satisfied: joblib>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.24.1->lm-eval==1.0.0) (1.3.2)\n",
|
158 |
+
"Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn>=0.24.1->lm-eval==1.0.0) (3.2.0)\n",
|
159 |
+
"Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (3.13.1)\n",
|
160 |
+
"Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (4.5.0)\n",
|
161 |
+
"Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (1.12)\n",
|
162 |
+
"Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (3.2.1)\n",
|
163 |
+
"Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (3.1.2)\n",
|
164 |
+
"Requirement already satisfied: triton==2.1.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.8->lm-eval==1.0.0) (2.1.0)\n",
|
165 |
+
"Requirement already satisfied: tokenizers<0.19,>=0.14 in /usr/local/lib/python3.10/dist-packages (from transformers>=4.1->lm-eval==1.0.0) (0.15.0)\n",
|
166 |
+
"Requirement already satisfied: attrs>=19.2.0 in /usr/local/lib/python3.10/dist-packages (from jsonlines->lm-eval==1.0.0) (23.1.0)\n",
|
167 |
+
"Requirement already satisfied: setuptools>=38.3.0 in /usr/local/lib/python3.10/dist-packages (from pytablewriter->lm-eval==1.0.0) (67.7.2)\n",
|
168 |
+
"Collecting DataProperty<2,>=1.0.1 (from pytablewriter->lm-eval==1.0.0)\n",
|
169 |
+
" Downloading DataProperty-1.0.1-py3-none-any.whl (27 kB)\n",
|
170 |
+
"Collecting mbstrdecoder<2,>=1.0.0 (from pytablewriter->lm-eval==1.0.0)\n",
|
171 |
+
" Downloading mbstrdecoder-1.1.3-py3-none-any.whl (7.8 kB)\n",
|
172 |
+
"Collecting pathvalidate<4,>=2.3.0 (from pytablewriter->lm-eval==1.0.0)\n",
|
173 |
+
" Downloading pathvalidate-3.2.0-py3-none-any.whl (23 kB)\n",
|
174 |
+
"Collecting tabledata<2,>=1.3.1 (from pytablewriter->lm-eval==1.0.0)\n",
|
175 |
+
" Downloading tabledata-1.3.3-py3-none-any.whl (11 kB)\n",
|
176 |
+
"Collecting tcolorpy<1,>=0.0.5 (from pytablewriter->lm-eval==1.0.0)\n",
|
177 |
+
" Downloading tcolorpy-0.1.4-py3-none-any.whl (7.9 kB)\n",
|
178 |
+
"Collecting typepy[datetime]<2,>=1.3.2 (from pytablewriter->lm-eval==1.0.0)\n",
|
179 |
+
" Downloading typepy-1.3.2-py3-none-any.whl (31 kB)\n",
|
180 |
+
"Requirement already satisfied: charset-normalizer<4.0,>=2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (3.3.2)\n",
|
181 |
+
"Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (6.0.4)\n",
|
182 |
+
"Requirement already satisfied: async-timeout<5.0,>=4.0.0a3 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (4.0.3)\n",
|
183 |
+
"Requirement already satisfied: yarl<2.0,>=1.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (1.9.2)\n",
|
184 |
+
"Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (1.4.0)\n",
|
185 |
+
"Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp->datasets>=2.0.0->lm-eval==1.0.0) (1.3.1)\n",
|
186 |
+
"Requirement already satisfied: chardet<6,>=3.0.4 in /usr/local/lib/python3.10/dist-packages (from mbstrdecoder<2,>=1.0.0->pytablewriter->lm-eval==1.0.0) (5.2.0)\n",
|
187 |
+
"Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets>=2.0.0->lm-eval==1.0.0) (3.4)\n",
|
188 |
+
"Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets>=2.0.0->lm-eval==1.0.0) (2.0.7)\n",
|
189 |
+
"Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests>=2.19.0->datasets>=2.0.0->lm-eval==1.0.0) (2023.7.22)\n",
|
190 |
+
"Requirement already satisfied: python-dateutil<3.0.0,>=2.8.0 in /usr/local/lib/python3.10/dist-packages (from typepy[datetime]<2,>=1.3.2->pytablewriter->lm-eval==1.0.0) (2.8.2)\n",
|
191 |
+
"Requirement already satisfied: pytz>=2018.9 in /usr/local/lib/python3.10/dist-packages (from typepy[datetime]<2,>=1.3.2->pytablewriter->lm-eval==1.0.0) (2023.3.post1)\n",
|
192 |
+
"Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.8->lm-eval==1.0.0) (2.1.3)\n",
|
193 |
+
"Requirement already satisfied: click in /usr/local/lib/python3.10/dist-packages (from nltk->rouge-score>=0.0.4->lm-eval==1.0.0) (8.1.7)\n",
|
194 |
+
"Requirement already satisfied: mpmath>=0.19 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.8->lm-eval==1.0.0) (1.3.0)\n",
|
195 |
+
"Building wheels for collected packages: lm-eval, rouge-score, sqlitedict\n",
|
196 |
+
" Building wheel for lm-eval (pyproject.toml) ... \u001b[?25l\u001b[?25hdone\n",
|
197 |
+
" Created wheel for lm-eval: filename=lm_eval-1.0.0-py3-none-any.whl size=994254 sha256=88356155b19f2891981ecef948326ad6ce8ca40a6009378410ec20d0e225995a\n",
|
198 |
+
" Stored in directory: /tmp/pip-ephem-wheel-cache-9v6ye7h3/wheels/17/01/26/599c0779e9858a70a73fa8a306699b5b9a868f820c225457b0\n",
|
199 |
+
" Building wheel for rouge-score (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
200 |
+
" Created wheel for rouge-score: filename=rouge_score-0.1.2-py3-none-any.whl size=24933 sha256=6bb0d44e4881972c43ce194e7cb65233d309758cb15f0dec54590d3d2efcfc36\n",
|
201 |
+
" Stored in directory: /root/.cache/pip/wheels/5f/dd/89/461065a73be61a532ff8599a28e9beef17985c9e9c31e541b4\n",
|
202 |
+
" Building wheel for sqlitedict (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
|
203 |
+
" Created wheel for sqlitedict: filename=sqlitedict-2.1.0-py3-none-any.whl size=16863 sha256=5747f7dd73ddf3d8fbcebf51b5e4f718fabe1e94bccdf16d2f22a2e65ee7fdf4\n",
|
204 |
+
" Stored in directory: /root/.cache/pip/wheels/79/d6/e7/304e0e6cb2221022c26d8161f7c23cd4f259a9e41e8bbcfabd\n",
|
205 |
+
"Successfully built lm-eval rouge-score sqlitedict\n",
|
206 |
+
"Installing collected packages: sqlitedict, zstandard, tcolorpy, pybind11, pyarrow-hotfix, portalocker, pathvalidate, mbstrdecoder, jsonlines, dill, colorama, typepy, tqdm-multiprocess, sacrebleu, rouge-score, responses, multiprocess, accelerate, datasets, DataProperty, tabledata, peft, evaluate, pytablewriter, lm-eval\n",
|
207 |
+
"Successfully installed DataProperty-1.0.1 accelerate-0.24.1 colorama-0.4.6 datasets-2.15.0 dill-0.3.7 evaluate-0.4.1 jsonlines-4.0.0 lm-eval-1.0.0 mbstrdecoder-1.1.3 multiprocess-0.70.15 pathvalidate-3.2.0 peft-0.6.2 portalocker-2.8.2 pyarrow-hotfix-0.6 pybind11-2.11.1 pytablewriter-1.2.0 responses-0.18.0 rouge-score-0.1.2 sacrebleu-2.3.2 sqlitedict-2.1.0 tabledata-1.3.3 tcolorpy-0.1.4 tqdm-multiprocess-0.0.11 typepy-1.3.2 zstandard-0.22.0\n"
|
208 |
+
]
|
209 |
+
}
|
210 |
+
],
|
211 |
+
"source": [
|
212 |
+
"# Install LM-Eval\n",
|
213 |
+
"!pip install git+https://github.com/EleutherAI/lm-evaluation-harness.git@big-refactor"
|
214 |
+
]
|
215 |
+
},
|
216 |
+
{
|
217 |
+
"cell_type": "code",
|
218 |
+
"execution_count": 2,
|
219 |
+
"metadata": {
|
220 |
+
"colab": {
|
221 |
+
"base_uri": "https://localhost:8080/",
|
222 |
+
"height": 0,
|
223 |
+
"referenced_widgets": [
|
224 |
+
"a1d3a8aa016544a78e8821c8f6199e06",
|
225 |
+
"f61ed33fad754146bdd2ac9db1ba1c48",
|
226 |
+
"bfa0af6aeff344c6845e1080a878e92e",
|
227 |
+
"fd1ad9e0367d4004aae853b91c3a7617",
|
228 |
+
"6b2d90209ec14230b3d58a74ac9b83bf",
|
229 |
+
"a73f357065d34d7baf0453ae4a8d75e2",
|
230 |
+
"46f521b73fd943c081c648fd873ebc0a",
|
231 |
+
"7c5689bc13684db8a22681f41863dddd",
|
232 |
+
"48763b6233374554ae76035c0483066f",
|
233 |
+
"4986a21eb560448fa79f4b25cde48951",
|
234 |
+
"aed3acd2f2d74003b44079c333a0698e"
|
235 |
+
]
|
236 |
+
},
|
237 |
+
"id": "uyO5MaKkZyah",
|
238 |
+
"outputId": "d46e8096-5086-4e49-967e-ea33d4a2a335"
|
239 |
+
},
|
240 |
+
"outputs": [
|
241 |
+
{
|
242 |
+
"data": {
|
243 |
+
"application/vnd.jupyter.widget-view+json": {
|
244 |
+
"model_id": "a1d3a8aa016544a78e8821c8f6199e06",
|
245 |
+
"version_major": 2,
|
246 |
+
"version_minor": 0
|
247 |
+
},
|
248 |
+
"text/plain": [
|
249 |
+
"Downloading builder script: 0%| | 0.00/5.67k [00:00<?, ?B/s]"
|
250 |
+
]
|
251 |
+
},
|
252 |
+
"metadata": {},
|
253 |
+
"output_type": "display_data"
|
254 |
+
}
|
255 |
+
],
|
256 |
+
"source": [
|
257 |
+
"from lm_eval import api"
|
258 |
+
]
|
259 |
+
},
|
260 |
+
{
|
261 |
+
"cell_type": "markdown",
|
262 |
+
"metadata": {
|
263 |
+
"id": "8rfUeX6n_wkK"
|
264 |
+
},
|
265 |
+
"source": [
|
266 |
+
"## Create new evaluation tasks with config-based tasks\n",
|
267 |
+
"\n",
|
268 |
+
    "Even within the same task, many works have reported numbers based on different evaluation choices. Some report on the test set, others on the validation set, or even on a subset of the training set; others use specialized prompts and verbalizers. We introduce YAML configs to let users easily create such variations. By configuring evaluations through YAML, the refactored LM-Eval exposes the methods of the `Task` object as attributes that can be set in the config file. Users can define the tasks they want by specifying the name of the HF dataset (local datasets are also possible), the dataset splits to use, and much more. Key prompting configurations such as `doc_to_text`, previously implemented as a method of the same name, are now configurable with Jinja2 templates that transform an HF dataset row into the text string passed to the model.\n",
|
269 |
+
"\n"
|
270 |
+
]
|
271 |
+
},
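As a quick illustration (not part of the original notebook; the toy passage and question values below are invented), here is roughly what a Jinja2 `doc_to_text` template, like the BoolQ one used later, evaluates to for a single dataset row:

```python
# Minimal sketch: rendering a doc_to_text-style Jinja2 template for one row.
# The row values are invented for illustration; the field names match BoolQ.
from jinja2 import Template

template = Template("{{passage}}\nQuestion: {{question}}?\nAnswer:")
row = {
    "passage": "Water boils at 100 degrees Celsius at sea level.",
    "question": "does water boil at 100 degrees celsius",
}
print(template.render(**row))
# Water boils at 100 degrees Celsius at sea level.
# Question: does water boil at 100 degrees celsius?
# Answer:
```

The harness performs this rendering internally; the sketch only shows what such a template produces.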
|
272 |
+
{
|
273 |
+
"cell_type": "markdown",
|
274 |
+
"metadata": {
|
275 |
+
"id": "HYFUhhfOSJKe"
|
276 |
+
},
|
277 |
+
"source": [
|
278 |
+
    "A core feature of LM-Eval is configuring tasks with YAML configs. With a config, you can fill in preset fields to easily set up a task.\n",
|
279 |
+
"\n",
|
280 |
+
"Here, we write a demo YAML config for a multiple-choice evaluation of BoolQ:"
|
281 |
+
]
|
282 |
+
},
|
283 |
+
{
|
284 |
+
"cell_type": "code",
|
285 |
+
"execution_count": 3,
|
286 |
+
"metadata": {
|
287 |
+
"id": "bg3dGROW-V39"
|
288 |
+
},
|
289 |
+
"outputs": [],
|
290 |
+
"source": [
|
291 |
+
"YAML_boolq_string = '''\n",
|
292 |
+
"task: demo_boolq\n",
|
293 |
+
"dataset_path: super_glue\n",
|
294 |
+
"dataset_name: boolq\n",
|
295 |
+
"output_type: multiple_choice\n",
|
296 |
+
"training_split: train\n",
|
297 |
+
"validation_split: validation\n",
|
298 |
+
"doc_to_text: \"{{passage}}\\nQuestion: {{question}}?\\nAnswer:\"\n",
|
299 |
+
"doc_to_target: label\n",
|
300 |
+
"doc_to_choice: [\"no\", \"yes\"]\n",
|
301 |
+
"should_decontaminate: true\n",
|
302 |
+
"doc_to_decontamination_query: passage\n",
|
303 |
+
"metric_list:\n",
|
304 |
+
" - metric: acc\n",
|
305 |
+
"'''\n",
|
306 |
+
"with open('boolq.yaml', 'w') as f:\n",
|
307 |
+
" f.write(YAML_boolq_string)"
|
308 |
+
]
|
309 |
+
},
|
310 |
+
{
|
311 |
+
"cell_type": "markdown",
|
312 |
+
"metadata": {},
|
313 |
+
"source": [
|
314 |
+
"And we can now run evaluation on this task, by pointing to the config file we've just created:"
|
315 |
+
]
|
316 |
+
},
|
317 |
+
{
|
318 |
+
"cell_type": "code",
|
319 |
+
"execution_count": 4,
|
320 |
+
"metadata": {
|
321 |
+
"id": "LOUHK7PtQfq4"
|
322 |
+
},
|
323 |
+
"outputs": [
|
324 |
+
{
|
325 |
+
"name": "stdout",
|
326 |
+
"output_type": "stream",
|
327 |
+
"text": [
|
328 |
+
"2023-11-29:11:54:55,156 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
|
329 |
+
"2023-11-29 11:54:55.942051: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
330 |
+
"2023-11-29 11:54:55.942108: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
331 |
+
"2023-11-29 11:54:55.942142: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
332 |
+
"2023-11-29 11:54:57.066802: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
|
333 |
+
"2023-11-29:11:55:00,954 INFO [__main__.py:132] Verbosity set to INFO\n",
|
334 |
+
"2023-11-29:11:55:11,038 WARNING [__main__.py:138] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
|
335 |
+
"2023-11-29:11:55:11,038 INFO [__main__.py:143] Including path: ./\n",
|
336 |
+
"2023-11-29:11:55:11,046 INFO [__main__.py:205] Selected Tasks: ['demo_boolq']\n",
|
337 |
+
"2023-11-29:11:55:11,047 WARNING [evaluator.py:93] generation_kwargs specified through cli, these settings will be used over set parameters in yaml tasks.\n",
|
338 |
+
"2023-11-29:11:55:11,110 INFO [huggingface.py:120] Using device 'cuda'\n",
|
339 |
+
"config.json: 100% 571/571 [00:00<00:00, 2.87MB/s]\n",
|
340 |
+
"model.safetensors: 100% 5.68G/5.68G [00:32<00:00, 173MB/s]\n",
|
341 |
+
"tokenizer_config.json: 100% 396/396 [00:00<00:00, 2.06MB/s]\n",
|
342 |
+
"tokenizer.json: 100% 2.11M/2.11M [00:00<00:00, 11.6MB/s]\n",
|
343 |
+
"special_tokens_map.json: 100% 99.0/99.0 [00:00<00:00, 555kB/s]\n",
|
344 |
+
"2023-11-29:11:56:18,658 WARNING [task.py:614] [Task: demo_boolq] metric acc is defined, but aggregation is not. using default aggregation=mean\n",
|
345 |
+
"2023-11-29:11:56:18,658 WARNING [task.py:626] [Task: demo_boolq] metric acc is defined, but higher_is_better is not. using default higher_is_better=True\n",
|
346 |
+
"Downloading builder script: 100% 30.7k/30.7k [00:00<00:00, 59.0MB/s]\n",
|
347 |
+
"Downloading metadata: 100% 38.7k/38.7k [00:00<00:00, 651kB/s]\n",
|
348 |
+
"Downloading readme: 100% 14.8k/14.8k [00:00<00:00, 37.3MB/s]\n",
|
349 |
+
"Downloading data: 100% 4.12M/4.12M [00:00<00:00, 55.1MB/s]\n",
|
350 |
+
"Generating train split: 100% 9427/9427 [00:00<00:00, 15630.89 examples/s]\n",
|
351 |
+
"Generating validation split: 100% 3270/3270 [00:00<00:00, 20002.56 examples/s]\n",
|
352 |
+
"Generating test split: 100% 3245/3245 [00:00<00:00, 20866.19 examples/s]\n",
|
353 |
+
"2023-11-29:11:56:22,315 INFO [task.py:355] Building contexts for task on rank 0...\n",
|
354 |
+
"2023-11-29:11:56:22,322 INFO [evaluator.py:319] Running loglikelihood requests\n",
|
355 |
+
"100% 20/20 [00:04<00:00, 4.37it/s]\n",
|
356 |
+
"fatal: not a git repository (or any of the parent directories): .git\n",
|
357 |
+
"hf (pretrained=EleutherAI/pythia-2.8b), gen_kwargs: (), limit: 10.0, num_fewshot: None, batch_size: 1\n",
|
358 |
+
"| Tasks |Version|Filter|n-shot|Metric|Value| |Stderr|\n",
|
359 |
+
"|----------|-------|------|-----:|------|----:|---|-----:|\n",
|
360 |
+
"|demo_boolq|Yaml |none | 0|acc | 1|± | 0|\n",
|
361 |
+
"\n"
|
362 |
+
]
|
363 |
+
}
|
364 |
+
],
|
365 |
+
"source": [
|
366 |
+
"!lm_eval \\\n",
|
367 |
+
" --model hf \\\n",
|
368 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
369 |
+
" --include_path ./ \\\n",
|
370 |
+
" --tasks demo_boolq \\\n",
|
371 |
+
" --limit 10\n"
|
372 |
+
]
|
373 |
+
},
|
374 |
+
{
|
375 |
+
"cell_type": "markdown",
|
376 |
+
"metadata": {
|
377 |
+
"id": "LOUHK7PtQfq4"
|
378 |
+
},
|
379 |
+
"source": [
|
380 |
+
    "Often, tasks are part of a larger group used to measure different capabilities. The field moves quickly, and new dimensions of evaluation can emerge that mix and match new and older tasks alike. In LM-Eval, we can also group tasks and then pass the group name to evaluate the whole set of tasks at once. In this instance, let's evaluate the group `yes_or_no_tasks`, which comprises the tasks `demo_boolq` and `demo_cola`: multiple-choice tasks whose options are `yes` and `no`, as the name suggests.\n",
|
381 |
+
"\n",
|
382 |
+
    "<!-- Making new groups is easier than ever, allowing users to work bottom-up by making individual tasks and linking them to a group, or top-down by making a new group that lists existing tasks.\n",
|
383 |
+
"\n",
|
384 |
+
    "We also show the aggregate across all samples in addition to the aggregation across subtasks. This may come in handy when certain groups should be aggregated as if they were a single task. -->\n",
|
385 |
+
"\n",
|
386 |
+
"\n"
|
387 |
+
]
|
388 |
+
},
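Note that only configs carrying a `group:` key are counted under that group; the `demo_cola` config below sets `group: yes_or_no_tasks`. If you also wanted `demo_boolq` reported under the same group, one option (a hypothetical sketch, not executed in this notebook) is to append the same top-level key to the `boolq.yaml` written earlier:

```python
# Hypothetical sketch: also tag demo_boolq with the yes_or_no_tasks group by
# appending a top-level `group:` key to the boolq.yaml created above.
with open("boolq.yaml", "a") as f:
    f.write("group: yes_or_no_tasks\n")
```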
|
389 |
+
{
|
390 |
+
"cell_type": "code",
|
391 |
+
"execution_count": 5,
|
392 |
+
"metadata": {
|
393 |
+
"id": "fthNg3ywO-kA"
|
394 |
+
},
|
395 |
+
"outputs": [],
|
396 |
+
"source": [
|
397 |
+
"YAML_cola_string = '''\n",
|
398 |
+
"group: yes_or_no_tasks\n",
|
399 |
+
"task: demo_cola\n",
|
400 |
+
"dataset_path: glue\n",
|
401 |
+
"dataset_name: cola\n",
|
402 |
+
"output_type: multiple_choice\n",
|
403 |
+
"training_split: train\n",
|
404 |
+
"validation_split: validation\n",
|
405 |
+
"doc_to_text: \"{{sentence}}\\nQuestion: Does this sentence make sense?\\nAnswer:\"\n",
|
406 |
+
"doc_to_target: label\n",
|
407 |
+
"doc_to_choice: [\"no\", \"yes\"]\n",
|
408 |
+
"should_decontaminate: true\n",
|
409 |
+
"doc_to_decontamination_query: sentence\n",
|
410 |
+
"metric_list:\n",
|
411 |
+
" - metric: acc\n",
|
412 |
+
"'''\n",
|
413 |
+
"with open('cola.yaml', 'w') as f:\n",
|
414 |
+
" f.write(YAML_cola_string)"
|
415 |
+
]
|
416 |
+
},
|
417 |
+
{
|
418 |
+
"cell_type": "code",
|
419 |
+
"execution_count": 6,
|
420 |
+
"metadata": {
|
421 |
+
"id": "XceRKCuuDtbn"
|
422 |
+
},
|
423 |
+
"outputs": [
|
424 |
+
{
|
425 |
+
"name": "stdout",
|
426 |
+
"output_type": "stream",
|
427 |
+
"text": [
|
428 |
+
"2023-11-29:11:56:33,016 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
|
429 |
+
"2023-11-29 11:56:33.852995: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
430 |
+
"2023-11-29 11:56:33.853050: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
431 |
+
"2023-11-29 11:56:33.853087: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
432 |
+
"2023-11-29 11:56:35.129047: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
|
433 |
+
"2023-11-29:11:56:38,546 INFO [__main__.py:132] Verbosity set to INFO\n",
|
434 |
+
"2023-11-29:11:56:47,509 WARNING [__main__.py:138] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
|
435 |
+
"2023-11-29:11:56:47,509 INFO [__main__.py:143] Including path: ./\n",
|
436 |
+
"2023-11-29:11:56:47,517 INFO [__main__.py:205] Selected Tasks: ['yes_or_no_tasks']\n",
|
437 |
+
"2023-11-29:11:56:47,520 WARNING [evaluator.py:93] generation_kwargs specified through cli, these settings will be used over set parameters in yaml tasks.\n",
|
438 |
+
"2023-11-29:11:56:47,550 INFO [huggingface.py:120] Using device 'cuda'\n",
|
439 |
+
"2023-11-29:11:57:08,743 WARNING [task.py:614] [Task: demo_cola] metric acc is defined, but aggregation is not. using default aggregation=mean\n",
|
440 |
+
"2023-11-29:11:57:08,743 WARNING [task.py:626] [Task: demo_cola] metric acc is defined, but higher_is_better is not. using default higher_is_better=True\n",
|
441 |
+
"Downloading builder script: 100% 28.8k/28.8k [00:00<00:00, 52.7MB/s]\n",
|
442 |
+
"Downloading metadata: 100% 28.7k/28.7k [00:00<00:00, 51.9MB/s]\n",
|
443 |
+
"Downloading readme: 100% 27.9k/27.9k [00:00<00:00, 48.0MB/s]\n",
|
444 |
+
"Downloading data: 100% 377k/377k [00:00<00:00, 12.0MB/s]\n",
|
445 |
+
"Generating train split: 100% 8551/8551 [00:00<00:00, 19744.58 examples/s]\n",
|
446 |
+
"Generating validation split: 100% 1043/1043 [00:00<00:00, 27057.01 examples/s]\n",
|
447 |
+
"Generating test split: 100% 1063/1063 [00:00<00:00, 22705.17 examples/s]\n",
|
448 |
+
"2023-11-29:11:57:11,698 INFO [task.py:355] Building contexts for task on rank 0...\n",
|
449 |
+
"2023-11-29:11:57:11,704 INFO [evaluator.py:319] Running loglikelihood requests\n",
|
450 |
+
"100% 20/20 [00:03<00:00, 5.15it/s]\n",
|
451 |
+
"fatal: not a git repository (or any of the parent directories): .git\n",
|
452 |
+
"hf (pretrained=EleutherAI/pythia-2.8b), gen_kwargs: (), limit: 10.0, num_fewshot: None, batch_size: 1\n",
|
453 |
+
"| Tasks |Version|Filter|n-shot|Metric|Value| |Stderr|\n",
|
454 |
+
"|---------------|-------|------|-----:|------|----:|---|-----:|\n",
|
455 |
+
"|yes_or_no_tasks|N/A |none | 0|acc | 0.7|± |0.1528|\n",
|
456 |
+
"| - demo_cola |Yaml |none | 0|acc | 0.7|± |0.1528|\n",
|
457 |
+
"\n",
|
458 |
+
"| Groups |Version|Filter|n-shot|Metric|Value| |Stderr|\n",
|
459 |
+
"|---------------|-------|------|-----:|------|----:|---|-----:|\n",
|
460 |
+
"|yes_or_no_tasks|N/A |none | 0|acc | 0.7|± |0.1528|\n",
|
461 |
+
"\n"
|
462 |
+
]
|
463 |
+
}
|
464 |
+
],
|
465 |
+
"source": [
|
466 |
+
"# !accelerate launch --no_python\n",
|
467 |
+
"!lm_eval \\\n",
|
468 |
+
" --model hf \\\n",
|
469 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
470 |
+
" --include_path ./ \\\n",
|
471 |
+
" --tasks yes_or_no_tasks \\\n",
|
472 |
+
" --limit 10 \\\n",
|
473 |
+
" --output output/yes_or_no_tasks/ \\\n",
|
474 |
+
" --log_samples\n"
|
475 |
+
]
|
476 |
+
},
|
477 |
+
{
|
478 |
+
"cell_type": "markdown",
|
479 |
+
"metadata": {
|
480 |
+
"id": "XceRKCuuDtbn"
|
481 |
+
},
|
482 |
+
"source": [
|
483 |
+
"## Edit Prompt Templates Quickly\n",
|
484 |
+
"\n",
|
485 |
+
    "The following is a YAML config made to evaluate the `high_school_geography` subtask of MMLU. It uses the standard prompt format, where we take the option letter with the highest likelihood as the model's prediction."
|
486 |
+
]
|
487 |
+
},
|
488 |
+
{
|
489 |
+
"cell_type": "code",
|
490 |
+
"execution_count": 7,
|
491 |
+
"metadata": {
|
492 |
+
"id": "GTFvdt9kSlBG"
|
493 |
+
},
|
494 |
+
"outputs": [],
|
495 |
+
"source": [
|
496 |
+
"YAML_mmlu_geo_string = '''\n",
|
497 |
+
"group: mmlu\n",
|
498 |
+
"task: demo_mmlu_high_school_geography\n",
|
499 |
+
"dataset_path: cais/mmlu\n",
|
500 |
+
"dataset_name: high_school_geography\n",
|
501 |
+
"description: \"The following are multiple choice questions (with answers) about high school geography.\\n\\n\"\n",
|
502 |
+
"test_split: test\n",
|
503 |
+
"fewshot_split: dev\n",
|
504 |
+
"fewshot_config:\n",
|
505 |
+
" sampler: first_n\n",
|
506 |
+
"output_type: multiple_choice\n",
|
507 |
+
"doc_to_text: \"{{question.strip()}}\\nA. {{choices[0]}}\\nB. {{choices[1]}}\\nC. {{choices[2]}}\\nD. {{choices[3]}}\\nAnswer:\"\n",
|
508 |
+
"doc_to_choice: [\"A\", \"B\", \"C\", \"D\"]\n",
|
509 |
+
"doc_to_target: answer\n",
|
510 |
+
"metric_list:\n",
|
511 |
+
" - metric: acc\n",
|
512 |
+
" aggregation: mean\n",
|
513 |
+
" higher_is_better: true\n",
|
514 |
+
" - metric: acc_norm\n",
|
515 |
+
" aggregation: mean\n",
|
516 |
+
" higher_is_better: true\n",
|
517 |
+
"'''\n",
|
518 |
+
"with open('mmlu_high_school_geography.yaml', 'w') as f:\n",
|
519 |
+
" f.write(YAML_mmlu_geo_string)\n"
|
520 |
+
]
|
521 |
+
},
|
522 |
+
{
|
523 |
+
"cell_type": "code",
|
524 |
+
"execution_count": 8,
|
525 |
+
"metadata": {
|
526 |
+
"id": "jyKOfCsKb-xy"
|
527 |
+
},
|
528 |
+
"outputs": [
|
529 |
+
{
|
530 |
+
"name": "stdout",
|
531 |
+
"output_type": "stream",
|
532 |
+
"text": [
|
533 |
+
"2023-11-29:11:57:23,598 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
|
534 |
+
"2023-11-29 11:57:24.719750: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
535 |
+
"2023-11-29 11:57:24.719806: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
536 |
+
"2023-11-29 11:57:24.719847: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
537 |
+
"2023-11-29 11:57:26.656125: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
|
538 |
+
"2023-11-29:11:57:31,563 INFO [__main__.py:132] Verbosity set to INFO\n",
|
539 |
+
"2023-11-29:11:57:40,541 WARNING [__main__.py:138] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
|
540 |
+
"2023-11-29:11:57:40,541 INFO [__main__.py:143] Including path: ./\n",
|
541 |
+
"2023-11-29:11:57:40,558 INFO [__main__.py:205] Selected Tasks: ['demo_mmlu_high_school_geography']\n",
|
542 |
+
"2023-11-29:11:57:40,559 WARNING [evaluator.py:93] generation_kwargs specified through cli, these settings will be used over set parameters in yaml tasks.\n",
|
543 |
+
"2023-11-29:11:57:40,589 INFO [huggingface.py:120] Using device 'cuda'\n",
|
544 |
+
"Downloading builder script: 100% 5.84k/5.84k [00:00<00:00, 17.7MB/s]\n",
|
545 |
+
"Downloading metadata: 100% 106k/106k [00:00<00:00, 892kB/s] \n",
|
546 |
+
"Downloading readme: 100% 39.7k/39.7k [00:00<00:00, 631kB/s]\n",
|
547 |
+
"Downloading data: 100% 166M/166M [00:01<00:00, 89.0MB/s]\n",
|
548 |
+
"Generating auxiliary_train split: 100% 99842/99842 [00:07<00:00, 12536.83 examples/s]\n",
|
549 |
+
"Generating test split: 100% 198/198 [00:00<00:00, 1439.20 examples/s]\n",
|
550 |
+
"Generating validation split: 100% 22/22 [00:00<00:00, 4181.76 examples/s]\n",
|
551 |
+
"Generating dev split: 100% 5/5 [00:00<00:00, 36.25 examples/s]\n",
|
552 |
+
"2023-11-29:11:58:09,798 INFO [task.py:355] Building contexts for task on rank 0...\n",
|
553 |
+
"2023-11-29:11:58:09,822 INFO [evaluator.py:319] Running loglikelihood requests\n",
|
554 |
+
"100% 40/40 [00:05<00:00, 7.86it/s]\n",
|
555 |
+
"fatal: not a git repository (or any of the parent directories): .git\n",
|
556 |
+
"hf (pretrained=EleutherAI/pythia-2.8b), gen_kwargs: (), limit: 10.0, num_fewshot: None, batch_size: 1\n",
|
557 |
+
"| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|\n",
|
558 |
+
"|-------------------------------|-------|------|-----:|--------|----:|---|-----:|\n",
|
559 |
+
"|demo_mmlu_high_school_geography|Yaml |none | 0|acc | 0.3|± |0.1528|\n",
|
560 |
+
"| | |none | 0|acc_norm| 0.3|± |0.1528|\n",
|
561 |
+
"\n"
|
562 |
+
]
|
563 |
+
}
|
564 |
+
],
|
565 |
+
"source": [
|
566 |
+
"# !accelerate launch --no_python\n",
|
567 |
+
"!lm_eval \\\n",
|
568 |
+
" --model hf \\\n",
|
569 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
570 |
+
" --include_path ./ \\\n",
|
571 |
+
" --tasks demo_mmlu_high_school_geography \\\n",
|
572 |
+
" --limit 10 \\\n",
|
573 |
+
" --output output/mmlu_high_school_geography/ \\\n",
|
574 |
+
" --log_samples"
|
575 |
+
]
|
576 |
+
},
|
577 |
+
{
|
578 |
+
"cell_type": "markdown",
|
579 |
+
"metadata": {
|
580 |
+
"id": "jyKOfCsKb-xy"
|
581 |
+
},
|
582 |
+
"source": [
|
583 |
+
    "We could also evaluate this task in a different way. For example, instead of scoring the loglikelihood of the answer letters, we can evaluate the choices themselves as continuations. This is done by simply changing `doc_to_choice` from a list of letters to the corresponding `choices` field from the HF dataset. We write `\"{{choices}}\"` so that the field is interpreted as a Jinja2 template that pulls the list from the HF dataset directly.\n",
|
584 |
+
"\n",
|
585 |
+
    "Another convenient feature: since we're only modifying `doc_to_choice` and the rest of the config is the same as the task above, we can use that configuration as a template by adding `include: mmlu_high_school_geography.yaml` to load the config from that file. We'll need to give this task a unique name so that it doesn't collide with the existing YAML config we're including; in this case we'll simply name it `demo_mmlu_high_school_geography_continuation`. `doc_to_text` is repeated here just for the sake of clarity."
|
586 |
+
]
|
587 |
+
},
|
588 |
+
{
|
589 |
+
"cell_type": "code",
|
590 |
+
"execution_count": 9,
|
591 |
+
"metadata": {
|
592 |
+
"id": "lqElwU54TaK-"
|
593 |
+
},
|
594 |
+
"outputs": [],
|
595 |
+
"source": [
|
596 |
+
"YAML_mmlu_geo_string = '''\n",
|
597 |
+
"include: mmlu_high_school_geography.yaml\n",
|
598 |
+
"task: demo_mmlu_high_school_geography_continuation\n",
|
599 |
+
"doc_to_text: \"{{question.strip()}}\\nA. {{choices[0]}}\\nB. {{choices[1]}}\\nC. {{choices[2]}}\\nD. {{choices[3]}}\\nAnswer:\"\n",
|
600 |
+
"doc_to_choice: \"{{choices}}\"\n",
|
601 |
+
"'''\n",
|
602 |
+
"with open('mmlu_high_school_geography_continuation.yaml', 'w') as f:\n",
|
603 |
+
" f.write(YAML_mmlu_geo_string)\n"
|
604 |
+
]
|
605 |
+
},
|
606 |
+
{
|
607 |
+
"cell_type": "code",
|
608 |
+
"execution_count": 10,
|
609 |
+
"metadata": {
|
610 |
+
"id": "-_CVnDirdy7j"
|
611 |
+
},
|
612 |
+
"outputs": [
|
613 |
+
{
|
614 |
+
"name": "stdout",
|
615 |
+
"output_type": "stream",
|
616 |
+
"text": [
|
617 |
+
"2023-11-29:11:58:21,284 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
|
618 |
+
"2023-11-29 11:58:22.850159: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
619 |
+
"2023-11-29 11:58:22.850219: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
620 |
+
"2023-11-29 11:58:22.850254: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
621 |
+
"2023-11-29 11:58:24.948103: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
|
622 |
+
"2023-11-29:11:58:28,460 INFO [__main__.py:132] Verbosity set to INFO\n",
|
623 |
+
"2023-11-29:11:58:37,935 WARNING [__main__.py:138] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
|
624 |
+
"2023-11-29:11:58:37,935 INFO [__main__.py:143] Including path: ./\n",
|
625 |
+
"2023-11-29:11:58:37,969 INFO [__main__.py:205] Selected Tasks: ['demo_mmlu_high_school_geography_continuation']\n",
|
626 |
+
"2023-11-29:11:58:37,972 WARNING [evaluator.py:93] generation_kwargs specified through cli, these settings will be used over set parameters in yaml tasks.\n",
|
627 |
+
"2023-11-29:11:58:38,008 INFO [huggingface.py:120] Using device 'cuda'\n",
|
628 |
+
"2023-11-29:11:58:59,758 INFO [task.py:355] Building contexts for task on rank 0...\n",
|
629 |
+
"2023-11-29:11:58:59,777 INFO [evaluator.py:319] Running loglikelihood requests\n",
|
630 |
+
"100% 40/40 [00:02<00:00, 16.23it/s]\n",
|
631 |
+
"fatal: not a git repository (or any of the parent directories): .git\n",
|
632 |
+
"hf (pretrained=EleutherAI/pythia-2.8b), gen_kwargs: (), limit: 10.0, num_fewshot: None, batch_size: 1\n",
|
633 |
+
"| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|\n",
|
634 |
+
"|--------------------------------------------|-------|------|-----:|--------|----:|---|-----:|\n",
|
635 |
+
"|demo_mmlu_high_school_geography_continuation|Yaml |none | 0|acc | 0.1|± |0.1000|\n",
|
636 |
+
"| | |none | 0|acc_norm| 0.2|± |0.1333|\n",
|
637 |
+
"\n"
|
638 |
+
]
|
639 |
+
}
|
640 |
+
],
|
641 |
+
"source": [
|
642 |
+
"# !accelerate launch --no_python\n",
|
643 |
+
"!lm_eval \\\n",
|
644 |
+
" --model hf \\\n",
|
645 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
646 |
+
" --include_path ./ \\\n",
|
647 |
+
" --tasks demo_mmlu_high_school_geography_continuation \\\n",
|
648 |
+
" --limit 10 \\\n",
|
649 |
+
" --output output/mmlu_high_school_geography_continuation/ \\\n",
|
650 |
+
" --log_samples\n"
|
651 |
+
]
|
652 |
+
},
|
653 |
+
{
|
654 |
+
"cell_type": "markdown",
|
655 |
+
"metadata": {
|
656 |
+
"id": "-_CVnDirdy7j"
|
657 |
+
},
|
658 |
+
"source": [
|
659 |
+
"If we take a look at the samples, we can see that it is in fact evaluating the continuation based on the choices rather than the letters."
|
660 |
+
]
|
661 |
+
},
|
662 |
+
{
|
663 |
+
"cell_type": "code",
|
664 |
+
"execution_count": 11,
|
665 |
+
"metadata": {
|
666 |
+
"id": "duBDqC6PAdjL"
|
667 |
+
},
|
668 |
+
"outputs": [
|
669 |
+
{
|
670 |
+
"data": {
|
671 |
+
"application/javascript": "\n ((filepath) => {{\n if (!google.colab.kernel.accessAllowed) {{\n return;\n }}\n google.colab.files.view(filepath);\n }})(\"/content/output/mmlu_high_school_geography_continuation/pretrained__EleutherAI__pythia-2.8b_demo_mmlu_high_school_geography_continuation.jsonl\")",
|
672 |
+
"text/plain": [
|
673 |
+
"<IPython.core.display.Javascript object>"
|
674 |
+
]
|
675 |
+
},
|
676 |
+
"metadata": {},
|
677 |
+
"output_type": "display_data"
|
678 |
+
}
|
679 |
+
],
|
680 |
+
"source": [
|
681 |
+
"from google.colab import files\n",
|
682 |
+
"files.view(\"output/mmlu_high_school_geography_continuation/pretrained__EleutherAI__pythia-2.8b_demo_mmlu_high_school_geography_continuation.jsonl\")\n"
|
683 |
+
]
|
684 |
+
},
|
685 |
+
{
|
686 |
+
"cell_type": "markdown",
|
687 |
+
"metadata": {
|
688 |
+
"id": "6p0-KPwAgK5j"
|
689 |
+
},
|
690 |
+
"source": [
|
691 |
+
"## Closer Look at YAML Fields\n",
|
692 |
+
"\n",
|
693 |
+
"To prepare a task we can simply fill in a YAML config with the relevant information.\n",
|
694 |
+
"\n",
|
695 |
+
"`output_type`\n",
|
696 |
+
    "The currently supported output types are the following:\n",
|
697 |
+
"1. `loglikelihood`: Evaluates the loglikelihood of a continuation, conditioned on some input string.\n",
|
698 |
+
    "2. `loglikelihood_rolling`: Evaluates the loglikelihood of producing a string, conditioned on the empty string (used for perplexity evaluations).\n",
|
699 |
+
    "3. `multiple_choice`: Evaluates the loglikelihood of each of a set of given choices and takes the highest-scoring one as the model's prediction.\n",
|
700 |
+
    "4. `greedy_until`: The model generates text greedily (this can be configured to use beam search and other generation-related parameters).\n",
|
701 |
+
"\n",
|
702 |
+
    "The core prompt configuration revolves around three fields:\n",
|
703 |
+
"1. `doc_to_text`: Denotes the prompt template that will be used as input to the model.\n",
|
704 |
+
    "2. `doc_to_choice`: The available choices that will be used as continuations for the model. This is used when the `output_type` is `multiple_choice`, and can otherwise be left as `None`.\n",
|
705 |
+
    "3. `doc_to_target`: When `output_type` is `multiple_choice`, this can be an index that corresponds to the correct answer, or the answer string itself (which must then be one of the options in `doc_to_choice`). For other tasks, this is expected to be a string. You can fill this field with a feature name from the HF dataset so long as the resulting value follows the conditions described.\n",
|
706 |
+
"\n",
|
707 |
+
"These three fields can be expressed as strings, column names from the source dataset, or as Jinja2 templates that can use fields from the source dataset as variables.\n"
|
708 |
+
]
|
709 |
+
},
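As a small recap (purely illustrative; the harness reads these fields from the YAML config, not from a Python dict), the three prompt fields of the `demo_boolq` task above use each of the accepted forms:

```python
# Recap of the demo_boolq prompt fields, one example of each accepted form.
prompt_fields = {
    "doc_to_text": "{{passage}}\nQuestion: {{question}}?\nAnswer:",  # Jinja2 template
    "doc_to_choice": ["no", "yes"],                                  # literal list of choices
    "doc_to_target": "label",                                        # column name in the HF dataset
}
```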
|
710 |
+
{
|
711 |
+
"cell_type": "markdown",
|
712 |
+
"metadata": {
|
713 |
+
"id": "6p0-KPwAgK5j"
|
714 |
+
},
|
715 |
+
"source": [
|
716 |
+
"## What if Jinja is not Sufficient?\n",
|
717 |
+
"\n",
|
718 |
+
    "There can be times when the Jinja2 templating language is not enough to build the prompt we have in mind. There are a few ways to work around this limitation:\n",
|
719 |
+
"\n",
|
720 |
+
    "1. Use the `!function` operator for the prompt-related fields to pass a Python function that takes a dataset row as input and returns the corresponding prompt component.\n",
|
721 |
+
"2. Perform a transformation on the dataset beforehand."
|
722 |
+
]
|
723 |
+
},
|
724 |
+
{
|
725 |
+
"cell_type": "markdown",
|
726 |
+
"metadata": {},
|
727 |
+
"source": [
|
728 |
+
"Below, we show an example of using `!function` to create `doc_to_text` from a python function:"
|
729 |
+
]
|
730 |
+
},
|
731 |
+
{
|
732 |
+
"cell_type": "code",
|
733 |
+
"execution_count": 12,
|
734 |
+
"metadata": {
|
735 |
+
"colab": {
|
736 |
+
"base_uri": "https://localhost:8080/"
|
737 |
+
},
|
738 |
+
"id": "DYZ5c0JhR1lJ",
|
739 |
+
"outputId": "ca945235-fb9e-4f17-8bfa-78e7d6ec1490"
|
740 |
+
},
|
741 |
+
"outputs": [
|
742 |
+
{
|
743 |
+
"name": "stdout",
|
744 |
+
"output_type": "stream",
|
745 |
+
"text": [
|
746 |
+
"2023-11-29:11:59:08,312 INFO [utils.py:160] NumExpr defaulting to 2 threads.\n",
|
747 |
+
"2023-11-29 11:59:09.348327: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
|
748 |
+
"2023-11-29 11:59:09.348387: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
|
749 |
+
"2023-11-29 11:59:09.348421: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
|
750 |
+
"2023-11-29 11:59:10.573752: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n",
|
751 |
+
"2023-11-29:11:59:14,044 INFO [__main__.py:132] Verbosity set to INFO\n",
|
752 |
+
"2023-11-29:11:59:23,654 WARNING [__main__.py:138] --limit SHOULD ONLY BE USED FOR TESTING.REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT.\n",
|
753 |
+
"2023-11-29:11:59:23,654 INFO [__main__.py:143] Including path: ./\n",
|
754 |
+
"2023-11-29:11:59:23,678 INFO [__main__.py:205] Selected Tasks: ['demo_mmlu_high_school_geography_function_prompt']\n",
|
755 |
+
"2023-11-29:11:59:23,679 WARNING [evaluator.py:93] generation_kwargs specified through cli, these settings will be used over set parameters in yaml tasks.\n",
|
756 |
+
"2023-11-29:11:59:23,708 INFO [huggingface.py:120] Using device 'cuda'\n",
|
757 |
+
"2023-11-29:11:59:44,516 INFO [task.py:355] Building contexts for task on rank 0...\n",
|
758 |
+
"2023-11-29:11:59:44,524 INFO [evaluator.py:319] Running loglikelihood requests\n",
|
759 |
+
"100% 40/40 [00:02<00:00, 15.41it/s]\n",
|
760 |
+
"fatal: not a git repository (or any of the parent directories): .git\n",
|
761 |
+
"hf (pretrained=EleutherAI/pythia-2.8b), gen_kwargs: (), limit: 10.0, num_fewshot: None, batch_size: 1\n",
|
762 |
+
"| Tasks |Version|Filter|n-shot| Metric |Value| |Stderr|\n",
|
763 |
+
"|-----------------------------------------------|-------|------|-----:|--------|----:|---|-----:|\n",
|
764 |
+
"|demo_mmlu_high_school_geography_function_prompt|Yaml |none | 0|acc | 0.1|± |0.1000|\n",
|
765 |
+
"| | |none | 0|acc_norm| 0.2|± |0.1333|\n",
|
766 |
+
"\n"
|
767 |
+
]
|
768 |
+
}
|
769 |
+
],
|
770 |
+
"source": [
|
771 |
+
"YAML_mmlu_geo_string = '''\n",
|
772 |
+
"include: mmlu_high_school_geography.yaml\n",
|
773 |
+
"task: demo_mmlu_high_school_geography_function_prompt\n",
|
774 |
+
"doc_to_text: !function utils.doc_to_text\n",
|
775 |
+
"doc_to_choice: \"{{choices}}\"\n",
|
776 |
+
"'''\n",
|
777 |
+
"with open('demo_mmlu_high_school_geography_function_prompt.yaml', 'w') as f:\n",
|
778 |
+
" f.write(YAML_mmlu_geo_string)\n",
|
779 |
+
"\n",
|
780 |
+
"DOC_TO_TEXT = '''\n",
|
781 |
+
"def doc_to_text(x):\n",
|
782 |
+
" question = x[\"question\"].strip()\n",
|
783 |
+
" choices = x[\"choices\"]\n",
|
784 |
+
" option_a = choices[0]\n",
|
785 |
+
" option_b = choices[1]\n",
|
786 |
+
" option_c = choices[2]\n",
|
787 |
+
" option_d = choices[3]\n",
|
788 |
+
" return f\"{question}\\\\nA. {option_a}\\\\nB. {option_b}\\\\nC. {option_c}\\\\nD. {option_d}\\\\nAnswer:\"\n",
|
789 |
+
"'''\n",
|
790 |
+
"with open('utils.py', 'w') as f:\n",
|
791 |
+
" f.write(DOC_TO_TEXT)\n",
|
792 |
+
"\n",
|
793 |
+
"!lm_eval \\\n",
|
794 |
+
" --model hf \\\n",
|
795 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
796 |
+
" --include_path ./ \\\n",
|
797 |
+
" --tasks demo_mmlu_high_school_geography_function_prompt \\\n",
|
798 |
+
" --limit 10 \\\n",
|
799 |
+
" --output output/demo_mmlu_high_school_geography_function_prompt/ \\\n",
|
800 |
+
" --log_samples\n"
|
801 |
+
]
|
802 |
+
},
|
803 |
+
{
|
804 |
+
"cell_type": "markdown",
|
805 |
+
"metadata": {},
|
806 |
+
"source": [
|
807 |
+
    "Next, we'll show how to do the same thing by preprocessing the dataset with the `process_docs` config field:\n",
|
808 |
+
"\n",
|
809 |
+
    "We will write a function that modifies each document in our evaluation split, adding a field that we can then reference in `doc_to_text`."
|
810 |
+
]
|
811 |
+
},
|
812 |
+
{
|
813 |
+
"cell_type": "code",
|
814 |
+
"execution_count": null,
|
815 |
+
"metadata": {},
|
816 |
+
"outputs": [],
|
817 |
+
"source": [
|
818 |
+
"YAML_mmlu_geo_string = '''\n",
|
819 |
+
"include: mmlu_high_school_geography.yaml\n",
|
820 |
+
"task: demo_mmlu_high_school_geography_function_prompt_2\n",
|
821 |
+
"process_docs: !function utils_process_docs.process_docs\n",
|
822 |
+
"doc_to_text: \"{{input}}\"\n",
|
823 |
+
"doc_to_choice: \"{{choices}}\"\n",
|
824 |
+
"'''\n",
|
825 |
+
"with open('demo_mmlu_high_school_geography_process_docs.yaml', 'w') as f:\n",
|
826 |
+
" f.write(YAML_mmlu_geo_string)\n",
|
827 |
+
"\n",
|
828 |
+
"DOC_TO_TEXT = '''\n",
|
829 |
+
"def process_docs(dataset):\n",
|
830 |
+
" def _process_doc(x):\n",
|
831 |
+
" question = x[\"question\"].strip()\n",
|
832 |
+
" choices = x[\"choices\"]\n",
|
833 |
+
" option_a = choices[0]\n",
|
834 |
+
" option_b = choices[1]\n",
|
835 |
+
" option_c = choices[2]\n",
|
836 |
+
" option_d = choices[3]\n",
|
837 |
+
    "        x[\"input\"] = f\"{question}\\\\nA. {option_a}\\\\nB. {option_b}\\\\nC. {option_c}\\\\nD. {option_d}\\\\nAnswer:\"\n",
|
838 |
+
    "        return x\n",
|
839 |
+
"\n",
|
840 |
+
" return dataset.map(_process_doc)\n",
|
841 |
+
"'''\n",
|
842 |
+
"\n",
|
843 |
+
"with open('utils_process_docs.py', 'w') as f:\n",
|
844 |
+
" f.write(DOC_TO_TEXT)\n",
|
845 |
+
"\n",
|
846 |
+
"!lm_eval \\\n",
|
847 |
+
" --model hf \\\n",
|
848 |
+
" --model_args pretrained=EleutherAI/pythia-2.8b \\\n",
|
849 |
+
" --include_path ./ \\\n",
|
850 |
+
" --tasks demo_mmlu_high_school_geography_function_prompt_2 \\\n",
|
851 |
+
" --limit 10 \\\n",
|
852 |
+
" --output output/demo_mmlu_high_school_geography_function_prompt_2/ \\\n",
|
853 |
+
" --log_samples\n"
|
854 |
+
]
|
855 |
+
},
|
856 |
+
{
|
857 |
+
"cell_type": "markdown",
|
858 |
+
"metadata": {},
|
859 |
+
"source": [
|
860 |
+
    "We hope this explainer gives you a sense of what can be done with LM-Evaluation-Harness v0.4.0 and how to work with it!\n",
|
861 |
+
"\n",
|
862 |
+
    "For more information, check out our documentation pages in the `docs/` folder, and if you have questions, please raise them in GitHub issues, or in #lm-thunderdome or #release-discussion on the EleutherAI Discord server."
|
863 |
+
]
|
864 |
+
}
|
865 |
+
],
|
866 |
+
"metadata": {
|
867 |
+
"accelerator": "GPU",
|
868 |
+
"colab": {
|
869 |
+
"collapsed_sections": [
|
870 |
+
"zAov81vTbL2K"
|
871 |
+
],
|
872 |
+
"gpuType": "T4",
|
873 |
+
"provenance": []
|
874 |
+
},
|
875 |
+
"kernelspec": {
|
876 |
+
"display_name": "Python 3",
|
877 |
+
"name": "python3"
|
878 |
+
},
|
879 |
+
"language_info": {
|
880 |
+
"name": "python"
|
881 |
+
},
|
882 |
+
"widgets": {
|
883 |
+
"application/vnd.jupyter.widget-state+json": {
|
884 |
+
"46f521b73fd943c081c648fd873ebc0a": {
|
885 |
+
"model_module": "@jupyter-widgets/controls",
|
886 |
+
"model_module_version": "1.5.0",
|
887 |
+
"model_name": "DescriptionStyleModel",
|
888 |
+
"state": {
|
889 |
+
"_model_module": "@jupyter-widgets/controls",
|
890 |
+
"_model_module_version": "1.5.0",
|
891 |
+
"_model_name": "DescriptionStyleModel",
|
892 |
+
"_view_count": null,
|
893 |
+
"_view_module": "@jupyter-widgets/base",
|
894 |
+
"_view_module_version": "1.2.0",
|
895 |
+
"_view_name": "StyleView",
|
896 |
+
"description_width": ""
|
897 |
+
}
|
898 |
+
},
|
899 |
+
"48763b6233374554ae76035c0483066f": {
|
900 |
+
"model_module": "@jupyter-widgets/controls",
|
901 |
+
"model_module_version": "1.5.0",
|
902 |
+
"model_name": "ProgressStyleModel",
|
903 |
+
"state": {
|
904 |
+
"_model_module": "@jupyter-widgets/controls",
|
905 |
+
"_model_module_version": "1.5.0",
|
906 |
+
"_model_name": "ProgressStyleModel",
|
907 |
+
"_view_count": null,
|
908 |
+
"_view_module": "@jupyter-widgets/base",
|
909 |
+
"_view_module_version": "1.2.0",
|
910 |
+
"_view_name": "StyleView",
|
911 |
+
"bar_color": null,
|
912 |
+
"description_width": ""
|
913 |
+
}
|
914 |
+
},
|
915 |
+
"4986a21eb560448fa79f4b25cde48951": {
|
916 |
+
"model_module": "@jupyter-widgets/base",
|
917 |
+
"model_module_version": "1.2.0",
|
918 |
+
"model_name": "LayoutModel",
|
919 |
+
"state": {
|
920 |
+
"_model_module": "@jupyter-widgets/base",
|
921 |
+
"_model_module_version": "1.2.0",
|
922 |
+
"_model_name": "LayoutModel",
|
923 |
+
"_view_count": null,
|
924 |
+
"_view_module": "@jupyter-widgets/base",
|
925 |
+
"_view_module_version": "1.2.0",
|
926 |
+
"_view_name": "LayoutView",
|
927 |
+
"align_content": null,
|
928 |
+
"align_items": null,
|
929 |
+
"align_self": null,
|
930 |
+
"border": null,
|
931 |
+
"bottom": null,
|
932 |
+
"display": null,
|
933 |
+
"flex": null,
|
934 |
+
"flex_flow": null,
|
935 |
+
"grid_area": null,
|
936 |
+
"grid_auto_columns": null,
|
937 |
+
"grid_auto_flow": null,
|
938 |
+
"grid_auto_rows": null,
|
939 |
+
"grid_column": null,
|
940 |
+
"grid_gap": null,
|
941 |
+
"grid_row": null,
|
942 |
+
"grid_template_areas": null,
|
943 |
+
"grid_template_columns": null,
|
944 |
+
"grid_template_rows": null,
|
945 |
+
"height": null,
|
946 |
+
"justify_content": null,
|
947 |
+
"justify_items": null,
|
948 |
+
"left": null,
|
949 |
+
"margin": null,
|
950 |
+
"max_height": null,
|
951 |
+
"max_width": null,
|
952 |
+
"min_height": null,
|
953 |
+
"min_width": null,
|
954 |
+
"object_fit": null,
|
955 |
+
"object_position": null,
|
956 |
+
"order": null,
|
957 |
+
"overflow": null,
|
958 |
+
"overflow_x": null,
|
959 |
+
"overflow_y": null,
|
960 |
+
"padding": null,
|
961 |
+
"right": null,
|
962 |
+
"top": null,
|
963 |
+
"visibility": null,
|
964 |
+
"width": null
|
965 |
+
}
|
966 |
+
},
|
967 |
+
"6b2d90209ec14230b3d58a74ac9b83bf": {
|
968 |
+
"model_module": "@jupyter-widgets/base",
|
969 |
+
"model_module_version": "1.2.0",
|
970 |
+
"model_name": "LayoutModel",
|
971 |
+
"state": {
|
972 |
+
"_model_module": "@jupyter-widgets/base",
|
973 |
+
"_model_module_version": "1.2.0",
|
974 |
+
"_model_name": "LayoutModel",
|
975 |
+
"_view_count": null,
|
976 |
+
"_view_module": "@jupyter-widgets/base",
|
977 |
+
"_view_module_version": "1.2.0",
|
978 |
+
"_view_name": "LayoutView",
|
979 |
+
"align_content": null,
|
980 |
+
"align_items": null,
|
981 |
+
"align_self": null,
|
982 |
+
"border": null,
|
983 |
+
"bottom": null,
|
984 |
+
"display": null,
|
985 |
+
"flex": null,
|
986 |
+
"flex_flow": null,
|
987 |
+
"grid_area": null,
|
988 |
+
"grid_auto_columns": null,
|
989 |
+
"grid_auto_flow": null,
|
990 |
+
"grid_auto_rows": null,
|
991 |
+
"grid_column": null,
|
992 |
+
"grid_gap": null,
|
993 |
+
"grid_row": null,
|
994 |
+
"grid_template_areas": null,
|
995 |
+
"grid_template_columns": null,
|
996 |
+
"grid_template_rows": null,
|
997 |
+
"height": null,
|
998 |
+
"justify_content": null,
|
999 |
+
"justify_items": null,
|
1000 |
+
"left": null,
|
1001 |
+
"margin": null,
|
1002 |
+
"max_height": null,
|
1003 |
+
"max_width": null,
|
1004 |
+
"min_height": null,
|
1005 |
+
"min_width": null,
|
1006 |
+
"object_fit": null,
|
1007 |
+
"object_position": null,
|
1008 |
+
"order": null,
|
1009 |
+
"overflow": null,
|
1010 |
+
"overflow_x": null,
|
1011 |
+
"overflow_y": null,
|
1012 |
+
"padding": null,
|
1013 |
+
"right": null,
|
1014 |
+
"top": null,
|
1015 |
+
"visibility": null,
|
1016 |
+
"width": null
|
1017 |
+
}
|
1018 |
+
},
|
1019 |
+
"7c5689bc13684db8a22681f41863dddd": {
|
1020 |
+
"model_module": "@jupyter-widgets/base",
|
1021 |
+
"model_module_version": "1.2.0",
|
1022 |
+
"model_name": "LayoutModel",
|
1023 |
+
"state": {
|
1024 |
+
"_model_module": "@jupyter-widgets/base",
|
1025 |
+
"_model_module_version": "1.2.0",
|
1026 |
+
"_model_name": "LayoutModel",
|
1027 |
+
"_view_count": null,
|
1028 |
+
"_view_module": "@jupyter-widgets/base",
|
1029 |
+
"_view_module_version": "1.2.0",
|
1030 |
+
"_view_name": "LayoutView",
|
1031 |
+
"align_content": null,
|
1032 |
+
"align_items": null,
|
1033 |
+
"align_self": null,
|
1034 |
+
"border": null,
|
1035 |
+
"bottom": null,
|
1036 |
+
"display": null,
|
1037 |
+
"flex": null,
|
1038 |
+
"flex_flow": null,
|
1039 |
+
"grid_area": null,
|
1040 |
+
"grid_auto_columns": null,
|
1041 |
+
"grid_auto_flow": null,
|
1042 |
+
"grid_auto_rows": null,
|
1043 |
+
"grid_column": null,
|
1044 |
+
"grid_gap": null,
|
1045 |
+
"grid_row": null,
|
1046 |
+
"grid_template_areas": null,
|
1047 |
+
"grid_template_columns": null,
|
1048 |
+
"grid_template_rows": null,
|
1049 |
+
"height": null,
|
1050 |
+
"justify_content": null,
|
1051 |
+
"justify_items": null,
|
1052 |
+
"left": null,
|
1053 |
+
"margin": null,
|
1054 |
+
"max_height": null,
|
1055 |
+
"max_width": null,
|
1056 |
+
"min_height": null,
|
1057 |
+
"min_width": null,
|
1058 |
+
"object_fit": null,
|
1059 |
+
"object_position": null,
|
1060 |
+
"order": null,
|
1061 |
+
"overflow": null,
|
1062 |
+
"overflow_x": null,
|
1063 |
+
"overflow_y": null,
|
1064 |
+
"padding": null,
|
1065 |
+
"right": null,
|
1066 |
+
"top": null,
|
1067 |
+
"visibility": null,
|
1068 |
+
"width": null
|
1069 |
+
}
|
1070 |
+
},
|
1071 |
+
"a1d3a8aa016544a78e8821c8f6199e06": {
|
1072 |
+
"model_module": "@jupyter-widgets/controls",
|
1073 |
+
"model_module_version": "1.5.0",
|
1074 |
+
"model_name": "HBoxModel",
|
1075 |
+
"state": {
|
1076 |
+
"_dom_classes": [],
|
1077 |
+
"_model_module": "@jupyter-widgets/controls",
|
1078 |
+
"_model_module_version": "1.5.0",
|
1079 |
+
"_model_name": "HBoxModel",
|
1080 |
+
"_view_count": null,
|
1081 |
+
"_view_module": "@jupyter-widgets/controls",
|
1082 |
+
"_view_module_version": "1.5.0",
|
1083 |
+
"_view_name": "HBoxView",
|
1084 |
+
"box_style": "",
|
1085 |
+
"children": [
|
1086 |
+
"IPY_MODEL_f61ed33fad754146bdd2ac9db1ba1c48",
|
1087 |
+
"IPY_MODEL_bfa0af6aeff344c6845e1080a878e92e",
|
1088 |
+
"IPY_MODEL_fd1ad9e0367d4004aae853b91c3a7617"
|
1089 |
+
],
|
1090 |
+
"layout": "IPY_MODEL_6b2d90209ec14230b3d58a74ac9b83bf"
|
1091 |
+
}
|
1092 |
+
},
|
1093 |
+
"a73f357065d34d7baf0453ae4a8d75e2": {
|
1094 |
+
"model_module": "@jupyter-widgets/base",
|
1095 |
+
"model_module_version": "1.2.0",
|
1096 |
+
"model_name": "LayoutModel",
|
1097 |
+
"state": {
|
1098 |
+
"_model_module": "@jupyter-widgets/base",
|
1099 |
+
"_model_module_version": "1.2.0",
|
1100 |
+
"_model_name": "LayoutModel",
|
1101 |
+
"_view_count": null,
|
1102 |
+
"_view_module": "@jupyter-widgets/base",
|
1103 |
+
"_view_module_version": "1.2.0",
|
1104 |
+
"_view_name": "LayoutView",
|
1105 |
+
"align_content": null,
|
1106 |
+
"align_items": null,
|
1107 |
+
"align_self": null,
|
1108 |
+
"border": null,
|
1109 |
+
"bottom": null,
|
1110 |
+
"display": null,
|
1111 |
+
"flex": null,
|
1112 |
+
"flex_flow": null,
|
1113 |
+
"grid_area": null,
|
1114 |
+
"grid_auto_columns": null,
|
1115 |
+
"grid_auto_flow": null,
|
1116 |
+
"grid_auto_rows": null,
|
1117 |
+
"grid_column": null,
|
1118 |
+
"grid_gap": null,
|
1119 |
+
"grid_row": null,
|
1120 |
+
"grid_template_areas": null,
|
1121 |
+
"grid_template_columns": null,
|
1122 |
+
"grid_template_rows": null,
|
1123 |
+
"height": null,
|
1124 |
+
"justify_content": null,
|
1125 |
+
"justify_items": null,
|
1126 |
+
"left": null,
|
1127 |
+
"margin": null,
|
1128 |
+
"max_height": null,
|
1129 |
+
"max_width": null,
|
1130 |
+
"min_height": null,
|
1131 |
+
"min_width": null,
|
1132 |
+
"object_fit": null,
|
1133 |
+
"object_position": null,
|
1134 |
+
"order": null,
|
1135 |
+
"overflow": null,
|
1136 |
+
"overflow_x": null,
|
1137 |
+
"overflow_y": null,
|
1138 |
+
"padding": null,
|
1139 |
+
"right": null,
|
1140 |
+
"top": null,
|
1141 |
+
"visibility": null,
|
1142 |
+
"width": null
|
1143 |
+
}
|
1144 |
+
},
|
1145 |
+
"aed3acd2f2d74003b44079c333a0698e": {
|
1146 |
+
"model_module": "@jupyter-widgets/controls",
|
1147 |
+
"model_module_version": "1.5.0",
|
1148 |
+
"model_name": "DescriptionStyleModel",
|
1149 |
+
"state": {
|
1150 |
+
"_model_module": "@jupyter-widgets/controls",
|
1151 |
+
"_model_module_version": "1.5.0",
|
1152 |
+
"_model_name": "DescriptionStyleModel",
|
1153 |
+
"_view_count": null,
|
1154 |
+
"_view_module": "@jupyter-widgets/base",
|
1155 |
+
"_view_module_version": "1.2.0",
|
1156 |
+
"_view_name": "StyleView",
|
1157 |
+
"description_width": ""
|
1158 |
+
}
|
1159 |
+
},
|
1160 |
+
"bfa0af6aeff344c6845e1080a878e92e": {
|
1161 |
+
"model_module": "@jupyter-widgets/controls",
|
1162 |
+
"model_module_version": "1.5.0",
|
1163 |
+
"model_name": "FloatProgressModel",
|
1164 |
+
"state": {
|
1165 |
+
"_dom_classes": [],
|
1166 |
+
"_model_module": "@jupyter-widgets/controls",
|
1167 |
+
"_model_module_version": "1.5.0",
|
1168 |
+
"_model_name": "FloatProgressModel",
|
1169 |
+
"_view_count": null,
|
1170 |
+
"_view_module": "@jupyter-widgets/controls",
|
1171 |
+
"_view_module_version": "1.5.0",
|
1172 |
+
"_view_name": "ProgressView",
|
1173 |
+
"bar_style": "success",
|
1174 |
+
"description": "",
|
1175 |
+
"description_tooltip": null,
|
1176 |
+
"layout": "IPY_MODEL_7c5689bc13684db8a22681f41863dddd",
|
1177 |
+
"max": 5669,
|
1178 |
+
"min": 0,
|
1179 |
+
"orientation": "horizontal",
|
1180 |
+
"style": "IPY_MODEL_48763b6233374554ae76035c0483066f",
|
1181 |
+
"value": 5669
|
1182 |
+
}
|
1183 |
+
},
|
1184 |
+
"f61ed33fad754146bdd2ac9db1ba1c48": {
|
1185 |
+
"model_module": "@jupyter-widgets/controls",
|
1186 |
+
"model_module_version": "1.5.0",
|
1187 |
+
"model_name": "HTMLModel",
|
1188 |
+
"state": {
|
1189 |
+
"_dom_classes": [],
|
1190 |
+
"_model_module": "@jupyter-widgets/controls",
|
1191 |
+
"_model_module_version": "1.5.0",
|
1192 |
+
"_model_name": "HTMLModel",
|
1193 |
+
"_view_count": null,
|
1194 |
+
"_view_module": "@jupyter-widgets/controls",
|
1195 |
+
"_view_module_version": "1.5.0",
|
1196 |
+
"_view_name": "HTMLView",
|
1197 |
+
"description": "",
|
1198 |
+
"description_tooltip": null,
|
1199 |
+
"layout": "IPY_MODEL_a73f357065d34d7baf0453ae4a8d75e2",
|
1200 |
+
"placeholder": "",
|
1201 |
+
"style": "IPY_MODEL_46f521b73fd943c081c648fd873ebc0a",
|
1202 |
+
"value": "Downloading builder script: 100%"
|
1203 |
+
}
|
1204 |
+
},
|
1205 |
+
"fd1ad9e0367d4004aae853b91c3a7617": {
|
1206 |
+
"model_module": "@jupyter-widgets/controls",
|
1207 |
+
"model_module_version": "1.5.0",
|
1208 |
+
"model_name": "HTMLModel",
|
1209 |
+
"state": {
|
1210 |
+
"_dom_classes": [],
|
1211 |
+
"_model_module": "@jupyter-widgets/controls",
|
1212 |
+
"_model_module_version": "1.5.0",
|
1213 |
+
"_model_name": "HTMLModel",
|
1214 |
+
"_view_count": null,
|
1215 |
+
"_view_module": "@jupyter-widgets/controls",
|
1216 |
+
"_view_module_version": "1.5.0",
|
1217 |
+
"_view_name": "HTMLView",
|
1218 |
+
"description": "",
|
1219 |
+
"description_tooltip": null,
|
1220 |
+
"layout": "IPY_MODEL_4986a21eb560448fa79f4b25cde48951",
|
1221 |
+
"placeholder": "",
|
1222 |
+
"style": "IPY_MODEL_aed3acd2f2d74003b44079c333a0698e",
|
1223 |
+
"value": " 5.67k/5.67k [00:00<00:00, 205kB/s]"
|
1224 |
+
}
|
1225 |
+
}
|
1226 |
+
}
|
1227 |
+
}
|
1228 |
+
},
|
1229 |
+
"nbformat": 4,
|
1230 |
+
"nbformat_minor": 0
|
1231 |
+
}
|
lm-evaluation-harness/examples/visualize-wandb.ipynb
ADDED
@@ -0,0 +1,168 @@
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"id": "fc477b96-adee-4829-a9d7-a5eb990df358",
|
6 |
+
"metadata": {},
|
7 |
+
"source": [
|
8 |
+
"# Visualizing Results in Weights and Biases\n",
|
9 |
+
"\n",
|
10 |
+
"With the Weights and Biases integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.\n",
|
11 |
+
"\n",
|
12 |
+
    "The integration provides the following functionality:\n",
|
13 |
+
"\n",
|
14 |
+
"- to automatically log the evaluation results,\n",
|
15 |
+
"- log the samples as W&B Tables for easy visualization,\n",
|
16 |
+
"- log the `results.json` file as an artifact for version control,\n",
|
17 |
+
"- log the `<task_name>_eval_samples.json` file if the samples are logged,\n",
|
18 |
+
    "- generate a comprehensive report for analysis and visualization with all the important metrics,\n",
|
19 |
+
"- log task and cli configs,\n",
|
20 |
+
"- and more out of the box like the command used to run the evaluation, GPU/CPU counts, timestamp, etc.\n",
|
21 |
+
"\n",
|
22 |
+
"The integration is super easy to use with the eval harness. Let's see how!"
|
23 |
+
]
|
24 |
+
},
|
25 |
+
{
|
26 |
+
"cell_type": "code",
|
27 |
+
"execution_count": null,
|
28 |
+
"id": "3851439a-bff4-41f2-bf21-1b3d8704913b",
|
29 |
+
"metadata": {
|
30 |
+
"scrolled": true
|
31 |
+
},
|
32 |
+
"outputs": [],
|
33 |
+
"source": [
|
34 |
+
    "# Install this project if you have not already done so.\n",
|
35 |
+
"# This is all that is needed to be installed to start using Weights and Biases\n",
|
36 |
+
"\n",
|
37 |
+
"!pip -qq install -e ..[wandb]"
|
38 |
+
]
|
39 |
+
},
|
40 |
+
{
|
41 |
+
"cell_type": "markdown",
|
42 |
+
"id": "8507fd7e-3b99-4a92-89fa-9eaada74ba91",
|
43 |
+
"metadata": {},
|
44 |
+
"source": [
|
45 |
+
"# Run the Eval Harness\n",
|
46 |
+
"\n",
|
47 |
+
    "Run the eval harness as usual with the `wandb_args` flag. This flag is used to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as a comma-separated string of arguments.\n",
|
48 |
+
"\n",
|
49 |
+
    "If the `wandb_args` flag is used, the metrics and all the other goodness will be automatically logged to Weights and Biases. In the stdout, you will find a link to the W&B run page as well as a link to the generated report."
|
50 |
+
]
|
51 |
+
},
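For example (a hypothetical invocation; the project and run names are placeholders), several `wandb.init` arguments can be packed into a single comma-separated string:

```
# Hypothetical invocation: --wandb_args forwards comma-separated key=value
# pairs to wandb.init (here: project and run name).
!lm_eval \
    --model hf \
    --model_args pretrained=microsoft/phi-2,trust_remote_code=True \
    --tasks hellaswag \
    --limit 10 \
    --log_samples \
    --output_path output/phi-2 \
    --wandb_args project=lm-eval-harness-integration,name=phi-2-hellaswag
```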
|
52 |
+
{
|
53 |
+
"cell_type": "markdown",
|
54 |
+
"id": "eec5866e-f01e-42f8-8803-9d77472ef991",
|
55 |
+
"metadata": {},
|
56 |
+
"source": [
|
57 |
+
"## Set your API Key\n",
|
58 |
+
"\n",
|
59 |
+
"Before you can use W&B, you need to authenticate your machine with an authentication key. Visit https://wandb.ai/authorize to get one."
|
60 |
+
]
|
61 |
+
},
|
62 |
+
{
|
63 |
+
"cell_type": "code",
|
64 |
+
"execution_count": null,
|
65 |
+
"id": "d824d163-71a9-4313-935d-f1d56397841c",
|
66 |
+
"metadata": {},
|
67 |
+
"outputs": [],
|
68 |
+
"source": [
|
69 |
+
"import wandb\n",
|
70 |
+
"\n",
|
71 |
+
"wandb.login()"
|
72 |
+
]
|
73 |
+
},
|
74 |
+
{
|
75 |
+
"cell_type": "markdown",
|
76 |
+
"id": "124e4a34-1547-4bed-bc09-db012bacbda6",
|
77 |
+
"metadata": {},
|
78 |
+
"source": [
|
79 |
+
"> Note that if you are using command line you can simply authenticate your machine by doing `wandb login` in your terminal. For more info check out the [documentation](https://docs.wandb.ai/quickstart#2-log-in-to-wb)."
|
80 |
+
]
|
81 |
+
},
|
82 |
+
{
|
83 |
+
"cell_type": "markdown",
|
84 |
+
"id": "abc6f6b6-179a-4aff-ada9-f380fb74df6e",
|
85 |
+
"metadata": {},
|
86 |
+
"source": [
|
87 |
+
"## Run and log to W&B"
|
88 |
+
]
|
89 |
+
},
|
90 |
+
{
|
91 |
+
"cell_type": "code",
|
92 |
+
"execution_count": null,
|
93 |
+
"id": "bd0a8130-a97b-451a-acd2-3f9885b88643",
|
94 |
+
"metadata": {},
|
95 |
+
"outputs": [],
|
96 |
+
"source": [
|
97 |
+
"!lm_eval \\\n",
|
98 |
+
" --model hf \\\n",
|
99 |
+
" --model_args pretrained=microsoft/phi-2,trust_remote_code=True \\\n",
|
100 |
+
" --tasks hellaswag,mmlu_abstract_algebra \\\n",
|
101 |
+
" --device cuda:0 \\\n",
|
102 |
+
" --batch_size 8 \\\n",
|
103 |
+
" --output_path output/phi-2 \\\n",
|
104 |
+
" --limit 10 \\\n",
|
105 |
+
" --wandb_args project=lm-eval-harness-integration \\\n",
|
106 |
+
" --log_samples"
|
107 |
+
]
|
108 |
+
},
|
109 |
+
{
|
110 |
+
"cell_type": "markdown",
|
111 |
+
"id": "e974cabdbe70b667",
|
112 |
+
"metadata": {},
|
113 |
+
"source": ""
|
114 |
+
},
|
115 |
+
{
|
116 |
+
"cell_type": "markdown",
|
117 |
+
"id": "5178ca9445b844e4",
|
118 |
+
"metadata": {},
|
119 |
+
"source": "W&B can also be initialized programmatically for use outside the CLI to parse and log the results."
|
120 |
+
},
|
121 |
+
{
|
122 |
+
"cell_type": "code",
|
123 |
+
"execution_count": null,
|
124 |
+
"id": "c6a421b2cf3ddac5",
|
125 |
+
"metadata": {},
|
126 |
+
"outputs": [],
|
127 |
+
"source": [
|
128 |
+
"import lm_eval\n",
|
129 |
+
"from lm_eval.logging_utils import WandbLogger\n",
|
130 |
+
"\n",
|
131 |
+
"results = lm_eval.simple_evaluate(\n",
|
132 |
+
" model=\"hf\",\n",
|
133 |
+
" model_args=\"pretrained=microsoft/phi-2,trust_remote_code=True\",\n",
|
134 |
+
" tasks=\"hellaswag,mmlu_abstract_algebra\",\n",
|
135 |
+
" log_samples=True,\n",
|
136 |
+
")\n",
|
137 |
+
"\n",
|
138 |
+
"wandb_logger = WandbLogger(\n",
|
139 |
+
" project=\"lm-eval-harness-integration\", job_type=\"eval\"\n",
|
140 |
+
") # or empty if wandb.init(...) already called before\n",
|
141 |
+
"wandb_logger.post_init(results)\n",
|
142 |
+
"wandb_logger.log_eval_result()\n",
|
143 |
+
"wandb_logger.log_eval_samples(results[\"samples\"]) # if log_samples"
|
144 |
+
]
|
145 |
+
}
|
146 |
+
],
|
147 |
+
"metadata": {
|
148 |
+
"kernelspec": {
|
149 |
+
"display_name": "Python 3 (ipykernel)",
|
150 |
+
"language": "python",
|
151 |
+
"name": "python3"
|
152 |
+
},
|
153 |
+
"language_info": {
|
154 |
+
"codemirror_mode": {
|
155 |
+
"name": "ipython",
|
156 |
+
"version": 3
|
157 |
+
},
|
158 |
+
"file_extension": ".py",
|
159 |
+
"mimetype": "text/x-python",
|
160 |
+
"name": "python",
|
161 |
+
"nbconvert_exporter": "python",
|
162 |
+
"pygments_lexer": "ipython3",
|
163 |
+
"version": "3.10.12"
|
164 |
+
}
|
165 |
+
},
|
166 |
+
"nbformat": 4,
|
167 |
+
"nbformat_minor": 5
|
168 |
+
}
|
lm-evaluation-harness/examples/visualize-zeno.ipynb
ADDED
@@ -0,0 +1,115 @@
1 |
+
{
|
2 |
+
"cells": [
|
3 |
+
{
|
4 |
+
"cell_type": "markdown",
|
5 |
+
"metadata": {},
|
6 |
+
"source": [
|
7 |
+
"# Visualizing Results in Zeno\n",
|
8 |
+
"\n",
|
9 |
+
"Benchmarking your models is the first step towards making sure your model performs well.\n",
|
10 |
+
"However, looking at the data behind the benchmark, slicing the data into subsets, and comparing models on individual instances can help you even more in evaluating and quantifying the behavior of your AI system.\n",
|
11 |
+
"\n",
|
12 |
+
"All of this can be done in [Zeno](https://zenoml.com)!\n",
|
13 |
+
"Zeno is super easy to use with the eval harness, let's explore how you can easily upload and visualize your eval results.\n"
|
14 |
+
]
|
15 |
+
},
|
16 |
+
{
|
17 |
+
"cell_type": "code",
|
18 |
+
"execution_count": null,
|
19 |
+
"metadata": {},
|
20 |
+
"outputs": [],
|
21 |
+
"source": [
|
22 |
+
"# Install this project if you did not already do that. This is all that needs to be installed for you to be able to visualize your data in Zeno!\n",
|
23 |
+
"!pip install -e ..\n",
|
24 |
+
"!pip install -e ..[zeno]"
|
25 |
+
]
|
26 |
+
},
|
27 |
+
{
|
28 |
+
"cell_type": "markdown",
|
29 |
+
"metadata": {},
|
30 |
+
"source": [
|
31 |
+
"# Run the Eval Harness\n",
|
32 |
+
"\n",
|
33 |
+
"To visualize the results, run the eval harness with the `log_samples` and `output_path` flags. We expect `output_path` to contain multiple folders that represent individual model names. You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.\n"
|
34 |
+
]
|
35 |
+
},
|
36 |
+
{
|
37 |
+
"cell_type": "code",
|
38 |
+
"execution_count": null,
|
39 |
+
"metadata": {},
|
40 |
+
"outputs": [],
|
41 |
+
"source": [
|
42 |
+
"!lm_eval \\\n",
|
43 |
+
" --model hf \\\n",
|
44 |
+
" --model_args pretrained=EleutherAI/gpt-neo-2.7B \\\n",
|
45 |
+
" --tasks hellaswag,wikitext \\\n",
|
46 |
+
" --batch_size 8 \\\n",
|
47 |
+
" --device mps \\\n",
|
48 |
+
" --log_samples \\\n",
|
49 |
+
" --output_path output/gpt-neo-2.7B \\\n",
|
50 |
+
" --limit 10"
|
51 |
+
]
|
52 |
+
},
|
53 |
+
{
|
54 |
+
"cell_type": "markdown",
|
55 |
+
"metadata": {},
|
56 |
+
"source": [
|
57 |
+
"# Set your API Key\n",
|
58 |
+
"\n",
|
59 |
+
"This is so you can be authenticated with Zeno.\n",
|
60 |
+
"If you don't already have a Zeno account, first create an account on [Zeno Hub](https://hub.zenoml.com).\n",
|
61 |
+
"After logging in to Zeno Hub, generate your API key by clicking on your profile at the bottom left to navigate to your account page.\n"
|
62 |
+
]
|
63 |
+
},
|
64 |
+
{
|
65 |
+
"cell_type": "code",
|
66 |
+
"execution_count": null,
|
67 |
+
"metadata": {},
|
68 |
+
"outputs": [],
|
69 |
+
"source": [
|
70 |
+
"%env ZENO_API_KEY=YOUR_API_KEY"
|
71 |
+
]
|
72 |
+
},
|
73 |
+
{
|
74 |
+
"cell_type": "markdown",
|
75 |
+
"metadata": {},
|
76 |
+
"source": [
|
77 |
+
"# Visualize Eval Results\n",
|
78 |
+
"\n",
|
79 |
+
"You can now use the `zeno_visualize` script to upload the results to Zeno.\n",
|
80 |
+
"\n",
|
81 |
+
"This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno. If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.\n"
|
82 |
+
]
|
83 |
+
},
|
84 |
+
{
|
85 |
+
"cell_type": "code",
|
86 |
+
"execution_count": null,
|
87 |
+
"metadata": {},
|
88 |
+
"outputs": [],
|
89 |
+
"source": [
|
90 |
+
"!python ../scripts/zeno_visualize.py --data_path output --project_name \"Zeno Upload Test\""
|
91 |
+
]
|
92 |
+
}
|
93 |
+
],
|
94 |
+
"metadata": {
|
95 |
+
"kernelspec": {
|
96 |
+
"display_name": "zeno_projects",
|
97 |
+
"language": "python",
|
98 |
+
"name": "python3"
|
99 |
+
},
|
100 |
+
"language_info": {
|
101 |
+
"codemirror_mode": {
|
102 |
+
"name": "ipython",
|
103 |
+
"version": 3
|
104 |
+
},
|
105 |
+
"file_extension": ".py",
|
106 |
+
"mimetype": "text/x-python",
|
107 |
+
"name": "python",
|
108 |
+
"nbconvert_exporter": "python",
|
109 |
+
"pygments_lexer": "ipython3",
|
110 |
+
"version": "3.10.11"
|
111 |
+
}
|
112 |
+
},
|
113 |
+
"nbformat": 4,
|
114 |
+
"nbformat_minor": 2
|
115 |
+
}
|
lm-evaluation-harness/lm_eval.egg-info/PKG-INFO
ADDED
@@ -0,0 +1,558 @@
1 |
+
Metadata-Version: 2.1
|
2 |
+
Name: lm_eval
|
3 |
+
Version: 0.4.2
|
4 |
+
Summary: A framework for evaluating language models
|
5 |
+
Author-email: EleutherAI <[email protected]>
|
6 |
+
License: MIT
|
7 |
+
Project-URL: Homepage, https://github.com/EleutherAI/lm-evaluation-harness
|
8 |
+
Project-URL: Repository, https://github.com/EleutherAI/lm-evaluation-harness
|
9 |
+
Classifier: Development Status :: 3 - Alpha
|
10 |
+
Classifier: Programming Language :: Python :: 3
|
11 |
+
Classifier: License :: OSI Approved :: MIT License
|
12 |
+
Classifier: Operating System :: OS Independent
|
13 |
+
Requires-Python: >=3.8
|
14 |
+
Description-Content-Type: text/markdown
|
15 |
+
License-File: LICENSE.md
|
16 |
+
Requires-Dist: accelerate>=0.21.0
|
17 |
+
Requires-Dist: evaluate
|
18 |
+
Requires-Dist: datasets>=2.16.0
|
19 |
+
Requires-Dist: evaluate>=0.4.0
|
20 |
+
Requires-Dist: jsonlines
|
21 |
+
Requires-Dist: numexpr
|
22 |
+
Requires-Dist: peft>=0.2.0
|
23 |
+
Requires-Dist: pybind11>=2.6.2
|
24 |
+
Requires-Dist: pytablewriter
|
25 |
+
Requires-Dist: rouge-score>=0.0.4
|
26 |
+
Requires-Dist: sacrebleu>=1.5.0
|
27 |
+
Requires-Dist: scikit-learn>=0.24.1
|
28 |
+
Requires-Dist: sqlitedict
|
29 |
+
Requires-Dist: torch>=1.8
|
30 |
+
Requires-Dist: tqdm-multiprocess
|
31 |
+
Requires-Dist: transformers>=4.1
|
32 |
+
Requires-Dist: zstandard
|
33 |
+
Requires-Dist: dill
|
34 |
+
Requires-Dist: word2number
|
35 |
+
Requires-Dist: more_itertools
|
36 |
+
Provides-Extra: anthropic
|
37 |
+
Requires-Dist: anthropic; extra == "anthropic"
|
38 |
+
Provides-Extra: dev
|
39 |
+
Requires-Dist: pytest; extra == "dev"
|
40 |
+
Requires-Dist: pytest-cov; extra == "dev"
|
41 |
+
Requires-Dist: pytest-xdist; extra == "dev"
|
42 |
+
Requires-Dist: pre-commit; extra == "dev"
|
43 |
+
Requires-Dist: mypy; extra == "dev"
|
44 |
+
Provides-Extra: deepsparse
|
45 |
+
Requires-Dist: deepsparse-nightly[llm]>=1.8.0.20240404; extra == "deepsparse"
|
46 |
+
Provides-Extra: gptq
|
47 |
+
Requires-Dist: auto-gptq[triton]>=0.6.0; extra == "gptq"
|
48 |
+
Provides-Extra: hf-transfer
|
49 |
+
Requires-Dist: hf_transfer; extra == "hf-transfer"
|
50 |
+
Provides-Extra: ifeval
|
51 |
+
Requires-Dist: langdetect; extra == "ifeval"
|
52 |
+
Requires-Dist: immutabledict; extra == "ifeval"
|
53 |
+
Provides-Extra: neuronx
|
54 |
+
Requires-Dist: optimum[neuronx]; extra == "neuronx"
|
55 |
+
Provides-Extra: mamba
|
56 |
+
Requires-Dist: mamba_ssm; extra == "mamba"
|
57 |
+
Requires-Dist: causal-conv1d==1.0.2; extra == "mamba"
|
58 |
+
Provides-Extra: math
|
59 |
+
Requires-Dist: sympy>=1.12; extra == "math"
|
60 |
+
Requires-Dist: antlr4-python3-runtime==4.11; extra == "math"
|
61 |
+
Provides-Extra: multilingual
|
62 |
+
Requires-Dist: nagisa>=0.2.7; extra == "multilingual"
|
63 |
+
Requires-Dist: jieba>=0.42.1; extra == "multilingual"
|
64 |
+
Requires-Dist: pycountry; extra == "multilingual"
|
65 |
+
Provides-Extra: openai
|
66 |
+
Requires-Dist: openai==1.3.9; extra == "openai"
|
67 |
+
Requires-Dist: tiktoken; extra == "openai"
|
68 |
+
Provides-Extra: optimum
|
69 |
+
Requires-Dist: optimum[openvino]; extra == "optimum"
|
70 |
+
Provides-Extra: promptsource
|
71 |
+
Requires-Dist: promptsource>=0.2.3; extra == "promptsource"
|
72 |
+
Provides-Extra: sentencepiece
|
73 |
+
Requires-Dist: sentencepiece>=0.1.98; extra == "sentencepiece"
|
74 |
+
Provides-Extra: sparseml
|
75 |
+
Requires-Dist: sparseml-nightly[llm]>=1.8.0.20240404; extra == "sparseml"
|
76 |
+
Provides-Extra: testing
|
77 |
+
Requires-Dist: pytest; extra == "testing"
|
78 |
+
Requires-Dist: pytest-cov; extra == "testing"
|
79 |
+
Requires-Dist: pytest-xdist; extra == "testing"
|
80 |
+
Provides-Extra: vllm
|
81 |
+
Requires-Dist: vllm==0.3.2; extra == "vllm"
|
82 |
+
Provides-Extra: zeno
|
83 |
+
Requires-Dist: pandas; extra == "zeno"
|
84 |
+
Requires-Dist: zeno-client; extra == "zeno"
|
85 |
+
Provides-Extra: wandb
|
86 |
+
Requires-Dist: wandb>=0.16.3; extra == "wandb"
|
87 |
+
Requires-Dist: pandas; extra == "wandb"
|
88 |
+
Requires-Dist: numpy; extra == "wandb"
|
89 |
+
Provides-Extra: all
|
90 |
+
Requires-Dist: lm_eval[anthropic]; extra == "all"
|
91 |
+
Requires-Dist: lm_eval[dev]; extra == "all"
|
92 |
+
Requires-Dist: lm_eval[deepsparse]; extra == "all"
|
93 |
+
Requires-Dist: lm_eval[gptq]; extra == "all"
|
94 |
+
Requires-Dist: lm_eval[hf_transfer]; extra == "all"
|
95 |
+
Requires-Dist: lm_eval[ifeval]; extra == "all"
|
96 |
+
Requires-Dist: lm_eval[mamba]; extra == "all"
|
97 |
+
Requires-Dist: lm_eval[math]; extra == "all"
|
98 |
+
Requires-Dist: lm_eval[multilingual]; extra == "all"
|
99 |
+
Requires-Dist: lm_eval[openai]; extra == "all"
|
100 |
+
Requires-Dist: lm_eval[promptsource]; extra == "all"
|
101 |
+
Requires-Dist: lm_eval[sentencepiece]; extra == "all"
|
102 |
+
Requires-Dist: lm_eval[sparseml]; extra == "all"
|
103 |
+
Requires-Dist: lm_eval[testing]; extra == "all"
|
104 |
+
Requires-Dist: lm_eval[vllm]; extra == "all"
|
105 |
+
Requires-Dist: lm_eval[zeno]; extra == "all"
|
106 |
+
Requires-Dist: lm_eval[wandb]; extra == "all"
|
107 |
+
|
108 |
+
# Language Model Evaluation Harness
|
109 |
+
|
110 |
+
[](https://doi.org/10.5281/zenodo.10256836)
|
111 |
+
|
112 |
+
## Announcement
|
113 |
+
**A new v0.4.0 release of lm-evaluation-harness is available!**
|
114 |
+
|
115 |
+
New updates and features include:
|
116 |
+
|
117 |
+
- Internal refactoring
|
118 |
+
- Config-based task creation and configuration
|
119 |
+
- Easier import and sharing of externally-defined task config YAMLs
|
120 |
+
- Support for Jinja2 prompt design, easy modification of prompts + prompt imports from Promptsource
|
121 |
+
- More advanced configuration options, including output post-processing, answer extraction, and multiple LM generations per document, configurable fewshot settings, and more
|
122 |
+
- Speedups and new modeling libraries supported, including: faster data-parallel HF model usage, vLLM support, MPS support with HuggingFace, and more
|
123 |
+
- Logging and usability changes
|
124 |
+
- New tasks including CoT BIG-Bench-Hard, Belebele, user-defined task groupings, and more
|
125 |
+
|
126 |
+
Please see our updated documentation pages in `docs/` for more details.
|
127 |
+
|
128 |
+
Development will be continuing on the `main` branch, and we encourage you to give us feedback on what features are desired and how to improve the library further, or ask questions, either in issues or PRs on GitHub, or in the [EleutherAI discord](https://discord.gg/eleutherai)!
|
129 |
+
|
130 |
+
## Overview
|
131 |
+
|
132 |
+
This project provides a unified framework to test generative language models on a large number of different evaluation tasks.
|
133 |
+
|
134 |
+
**Features:**
|
135 |
+
- Over 60 standard academic benchmarks for LLMs, with hundreds of subtasks and variants implemented.
|
136 |
+
- Support for models loaded via [transformers](https://github.com/huggingface/transformers/) (including quantization via [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)), [GPT-NeoX](https://github.com/EleutherAI/gpt-neox), and [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/), with a flexible tokenization-agnostic interface.
|
137 |
+
- Support for fast and memory-efficient inference with [vLLM](https://github.com/vllm-project/vllm).
|
138 |
+
- Support for commercial APIs including [OpenAI](https://openai.com), and [TextSynth](https://textsynth.com/).
|
139 |
+
- Support for evaluation on adapters (e.g. LoRA) supported in [HuggingFace's PEFT library](https://github.com/huggingface/peft).
|
140 |
+
- Support for local models and benchmarks.
|
141 |
+
- Evaluation with publicly available prompts ensures reproducibility and comparability between papers.
|
142 |
+
- Easy support for custom prompts and evaluation metrics.
|
143 |
+
|
144 |
+
The Language Model Evaluation Harness is the backend for 🤗 Hugging Face's popular [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard), has been used in [hundreds of papers](https://scholar.google.com/scholar?oi=bibs&hl=en&authuser=2&cites=15052937328817631261,4097184744846514103,1520777361382155671,17476825572045927382,18443729326628441434,14801318227356878622,7890865700763267262,12854182577605049984,15641002901115500560,5104500764547628290), and is used internally by dozens of organizations including NVIDIA, Cohere, BigScience, BigCode, Nous Research, and Mosaic ML.
|
145 |
+
|
146 |
+
## Install
|
147 |
+
|
148 |
+
To install the `lm-eval` package from the github repository, run:
|
149 |
+
|
150 |
+
```bash
|
151 |
+
git clone https://github.com/EleutherAI/lm-evaluation-harness
|
152 |
+
cd lm-evaluation-harness
|
153 |
+
pip install -e .
|
154 |
+
```
|
155 |
+
|
156 |
+
We also provide a number of optional dependencies for extended functionality. A detailed table is available at the end of this document.
|
157 |
+
|
158 |
+
## Basic Usage
|
159 |
+
|
160 |
+
### Hugging Face `transformers`
|
161 |
+
|
162 |
+
To evaluate a model hosted on the [HuggingFace Hub](https://huggingface.co/models) (e.g. GPT-J-6B) on `hellaswag` you can use the following command (this assumes you are using a CUDA-compatible GPU):
|
163 |
+
|
164 |
+
```bash
|
165 |
+
lm_eval --model hf \
|
166 |
+
--model_args pretrained=EleutherAI/gpt-j-6B \
|
167 |
+
--tasks hellaswag \
|
168 |
+
--device cuda:0 \
|
169 |
+
--batch_size 8
|
170 |
+
```
|
171 |
+
|
172 |
+
Additional arguments can be provided to the model constructor using the `--model_args` flag. Most notably, this supports the common practice of using the `revisions` feature on the Hub to store partially trained checkpoints, or to specify the datatype for running a model:
|
173 |
+
|
174 |
+
```bash
|
175 |
+
lm_eval --model hf \
|
176 |
+
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
|
177 |
+
--tasks lambada_openai,hellaswag \
|
178 |
+
--device cuda:0 \
|
179 |
+
--batch_size 8
|
180 |
+
```
|
181 |
+
|
182 |
+
Models that are loaded via both `transformers.AutoModelForCausalLM` (autoregressive, decoder-only GPT style models) and `transformers.AutoModelForSeq2SeqLM` (such as encoder-decoder models like T5) in Huggingface are supported.
|
183 |
+
|
184 |
+
Batch size selection can be automated by setting the ```--batch_size``` flag to ```auto```. This will perform automatic detection of the largest batch size that will fit on your device. On tasks where there is a large difference between the longest and shortest example, it can be helpful to periodically recompute the largest batch size, to gain a further speedup. To do this, append ```:N``` to the above flag to automatically recompute the largest batch size ```N``` times. For example, to recompute the batch size 4 times, the command would be:
|
185 |
+
|
186 |
+
```bash
|
187 |
+
lm_eval --model hf \
|
188 |
+
--model_args pretrained=EleutherAI/pythia-160m,revision=step100000,dtype="float" \
|
189 |
+
--tasks lambada_openai,hellaswag \
|
190 |
+
--device cuda:0 \
|
191 |
+
--batch_size auto:4
|
192 |
+
```
|
193 |
+
|
194 |
+
The full list of supported arguments is provided [here](./docs/interface.md), and on the terminal by calling `lm_eval -h`. Alternatively, you can use `lm-eval` instead of `lm_eval`. A list of supported tasks can be viewed with `lm-eval --tasks list`.
|
195 |
+
|
196 |
+
> [!Note]
|
197 |
+
> Just like you can provide a local path to `transformers.AutoModel`, you can also provide a local path to `lm_eval` via `--model_args pretrained=/path/to/model`
|
198 |
+
|
199 |
+
#### Multi-GPU Evaluation with Hugging Face `accelerate`
|
200 |
+
|
201 |
+
We support two main ways of using Hugging Face's [accelerate 🚀](https://github.com/huggingface/accelerate) library for multi-GPU evaluation.
|
202 |
+
|
203 |
+
To perform *data-parallel evaluation* (where each GPU loads a **separate full copy** of the model), we leverage the `accelerate` launcher as follows:
|
204 |
+
|
205 |
+
```
|
206 |
+
accelerate launch -m lm_eval --model hf \
|
207 |
+
--tasks lambada_openai,arc_easy \
|
208 |
+
--batch_size 16
|
209 |
+
```
|
210 |
+
(or via `accelerate launch --no-python lm_eval`).
|
211 |
+
|
212 |
+
For cases where your model can fit on a single GPU, this allows you to evaluate on K GPUs K times faster than on one.
|
213 |
+
|
214 |
+
**WARNING**: This setup does not work with FSDP model sharding, so in `accelerate config` FSDP must be disabled, or the NO_SHARD FSDP option must be used.
|
215 |
+
|
216 |
+
The second way of using `accelerate` for multi-GPU evaluation is when your model is *too large to fit on a single GPU.*
|
217 |
+
|
218 |
+
In this setting, run the library *outside of the `accelerate` launcher*, but passing `parallelize=True` to `--model_args` as follows:
|
219 |
+
|
220 |
+
```
|
221 |
+
lm_eval --model hf \
|
222 |
+
--tasks lambada_openai,arc_easy \
|
223 |
+
--model_args parallelize=True \
|
224 |
+
--batch_size 16
|
225 |
+
```
|
226 |
+
|
227 |
+
This means that your model's weights will be split across all available GPUs.
|
228 |
+
|
229 |
+
For more advanced users or even larger models, we allow for the following arguments when `parallelize=True` as well (a minimal sketch combining them follows this list):
|
230 |
+
- `device_map_option`: how to split model weights across available GPUs. Defaults to `"auto"`.
|
231 |
+
- `max_memory_per_gpu`: the max GPU memory to use per GPU in loading the model.
|
232 |
+
- `max_cpu_memory`: the max amount of CPU memory to use when offloading the model weights to RAM.
|
233 |
+
- `offload_folder`: a folder where model weights will be offloaded to disk if needed.
|
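For illustration, here is a minimal sketch of combining these options by invoking the CLI from Python; the model name, memory limit, and offload directory are placeholder assumptions, not recommendations.

```python
import subprocess

# Minimal sketch: invoke the CLI with parallelize=True plus the optional
# sharding arguments described above. The model, the "40GiB" memory limit,
# and the offload directory are placeholders.
subprocess.run(
    [
        "lm_eval",
        "--model", "hf",
        "--model_args",
        "pretrained=EleutherAI/gpt-j-6b,parallelize=True,"
        "device_map_option=auto,max_memory_per_gpu=40GiB,offload_folder=./offload",
        "--tasks", "lambada_openai",
        "--batch_size", "16",
    ],
    check=True,
)
```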
234 |
+
|
235 |
+
These two options (`accelerate launch` and `parallelize=True`) are mutually exclusive.
|
236 |
+
|
237 |
+
**Note: we do not currently support multi-node evaluations natively, and advise using either an externally hosted server to run inference requests against, or creating a custom integration with your distributed framework [as is done for the GPT-NeoX library](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py).**
|
238 |
+
|
239 |
+
### NVIDIA `nemo` models
|
240 |
+
|
241 |
+
[NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo) is a generative AI framework built for researchers and pytorch developers working on language models.
|
242 |
+
|
243 |
+
To evaluate a `nemo` model, start by installing NeMo following [the documentation](https://github.com/NVIDIA/NeMo?tab=readme-ov-file#installation). We highly recommend using the NVIDIA PyTorch or NeMo container, especially if you have issues installing Apex or any other dependencies (see [latest released containers](https://github.com/NVIDIA/NeMo/releases)). Please also install the lm evaluation harness library following the instructions in [the Install section](https://github.com/EleutherAI/lm-evaluation-harness/tree/main?tab=readme-ov-file#install).
|
244 |
+
|
245 |
+
NeMo models can be obtained through the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/models) or from [NVIDIA's Hugging Face page](https://huggingface.co/nvidia). The [NVIDIA NeMo Framework](https://github.com/NVIDIA/NeMo/tree/main/scripts/nlp_language_modeling) provides conversion scripts to convert `hf` checkpoints of popular models like Llama, Falcon, Mixtral, or MPT to `nemo`.
|
246 |
+
|
247 |
+
Run a `nemo` model on one GPU:
|
248 |
+
```bash
|
249 |
+
lm_eval --model nemo_lm \
|
250 |
+
--model_args path=<path_to_nemo_model> \
|
251 |
+
--tasks hellaswag \
|
252 |
+
--batch_size 32
|
253 |
+
```
|
254 |
+
|
255 |
+
It is recommended to unpack the `nemo` model beforehand to avoid unpacking it inside the Docker container, which may overflow disk space. To do so, you can run:
|
256 |
+
|
257 |
+
```
|
258 |
+
mkdir MY_MODEL
|
259 |
+
tar -xvf MY_MODEL.nemo -C MY_MODEL
|
260 |
+
```
|
261 |
+
|
262 |
+
#### Multi-GPU evaluation with NVIDIA `nemo` models
|
263 |
+
|
264 |
+
By default, only one GPU is used. But we do support either data replication or tensor/pipeline parallelism during evaluation, on one node.
|
265 |
+
|
266 |
+
1) To enable data replication, set the `model_args` of `devices` to the number of data replicas to run. For example, the command to run 8 data replicas over 8 GPUs is:
|
267 |
+
```bash
|
268 |
+
torchrun --nproc-per-node=8 --no-python lm_eval \
|
269 |
+
--model nemo_lm \
|
270 |
+
--model_args path=<path_to_nemo_model>,devices=8 \
|
271 |
+
--tasks hellaswag \
|
272 |
+
--batch_size 32
|
273 |
+
```
|
274 |
+
|
275 |
+
2) To enable tensor and/or pipeline parallelism, set the `model_args` of `tensor_model_parallel_size` and/or `pipeline_model_parallel_size`. In addition, you have to set `devices` equal to the product of `tensor_model_parallel_size` and `pipeline_model_parallel_size`. For example, the command to use one node of 4 GPUs with tensor parallelism of 2 and pipeline parallelism of 2 is:
|
276 |
+
```bash
|
277 |
+
torchrun --nproc-per-node=4 --no-python lm_eval \
|
278 |
+
--model nemo_lm \
|
279 |
+
--model_args path=<path_to_nemo_model>,devices=4,tensor_model_parallel_size=2,pipeline_model_parallel_size=2 \
|
280 |
+
--tasks hellaswag \
|
281 |
+
--batch_size 32
|
282 |
+
```
|
283 |
+
Note that it is recommended to substitute the `python` command with `torchrun --nproc-per-node=<number of devices> --no-python` to facilitate loading the model onto the GPUs. This is especially important for large checkpoints loaded into multiple GPUs.
|
284 |
+
|
285 |
+
Not supported yet: multi-node evaluation and combinations of data replication with tensor or pipeline parallelism.
|
286 |
+
|
287 |
+
### Tensor + Data Parallel and Optimized Inference with `vLLM`
|
288 |
+
|
289 |
+
We also support vLLM for faster inference on [supported model types](https://docs.vllm.ai/en/latest/models/supported_models.html), which is especially useful when splitting a model across multiple GPUs. For single-GPU or multi-GPU inference (tensor parallel, data parallel, or a combination of both), for example:
|
290 |
+
|
291 |
+
```bash
|
292 |
+
lm_eval --model vllm \
|
293 |
+
--model_args pretrained={model_name},tensor_parallel_size={GPUs_per_model},dtype=auto,gpu_memory_utilization=0.8,data_parallel_size={model_replicas} \
|
294 |
+
--tasks lambada_openai \
|
295 |
+
--batch_size auto
|
296 |
+
```
|
297 |
+
To use vllm, do `pip install lm_eval[vllm]`. For a full list of supported vLLM configurations, please reference our [vLLM integration](https://github.com/EleutherAI/lm-evaluation-harness/blob/e74ec966556253fbe3d8ecba9de675c77c075bce/lm_eval/models/vllm_causallms.py) and the vLLM documentation.
|
298 |
+
|
299 |
+
vLLM occasionally differs in output from Huggingface. We treat Huggingface as the reference implementation, and provide a [script](./scripts/model_comparator.py) for checking the validity of vllm results against HF.
|
300 |
+
|
301 |
+
> [!Tip]
|
302 |
+
> For fastest performance, we recommend using `--batch_size auto` for vLLM whenever possible, to leverage its continuous batching functionality!
|
303 |
+
|
304 |
+
> [!Tip]
|
305 |
+
> Passing `max_model_len=4096` or some other reasonable default to vLLM through model args may cause speedups or prevent out-of-memory errors when trying to use auto batch size, such as for Mistral-7B-v0.1 which defaults to a maximum length of 32k.
|
306 |
+
|
307 |
+
### Model APIs and Inference Servers
|
308 |
+
|
309 |
+
Our library also supports the evaluation of models served via several commercial APIs, and we hope to implement support for the most commonly used performant local/self-hosted inference servers.
|
310 |
+
|
311 |
+
To call a hosted model, use:
|
312 |
+
|
313 |
+
```bash
|
314 |
+
export OPENAI_API_KEY=YOUR_KEY_HERE
|
315 |
+
lm_eval --model openai-completions \
|
316 |
+
--model_args model=davinci \
|
317 |
+
--tasks lambada_openai,hellaswag
|
318 |
+
```
|
319 |
+
|
320 |
+
We also support using your own local inference server with servers that mirror the OpenAI Completions and ChatCompletions APIs.
|
321 |
+
|
322 |
+
```bash
|
323 |
+
lm_eval --model local-chat-completions --tasks gsm8k --model_args model=facebook/opt-125m,base_url=http://{yourip}:8000/v1
|
324 |
+
```
|
325 |
+
Note that for externally hosted models, configs such as `--device` and `--batch_size` should not be used and do not function. Just like you can use `--model_args` to pass arbitrary arguments to the model constructor for local models, you can use it to pass arbitrary arguments to the model API for hosted models. See the documentation of the hosting service for information on what arguments they support.
|
326 |
+
|
327 |
+
| API or Inference Server | Implemented? | `--model <xxx>` name | Models supported: | Request Types: |
|
328 |
+
|---------------------------------------------------------------------------------------------------------------------------|---------------------------------|---------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|------------------------------------------------------------|
|
329 |
+
| OpenAI Completions | :heavy_check_mark: | `openai-completions`, `local-completions` | All OpenAI Completions API models | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
|
330 |
+
| OpenAI ChatCompletions | :heavy_check_mark: | `openai-chat-completions`, `local-chat-completions` | [All ChatCompletions API models](https://platform.openai.com/docs/guides/gpt) | `generate_until` (no logprobs) |
|
331 |
+
| Anthropic | :heavy_check_mark: | `anthropic` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/reference/selecting-a-model) | `generate_until` (no logprobs) |
|
332 |
+
| Anthropic Chat | :heavy_check_mark: | `anthropic-chat`, `anthropic-chat-completions` | [Supported Anthropic Engines](https://docs.anthropic.com/claude/docs/models-overview) | `generate_until` (no logprobs) |
|
333 |
+
| Textsynth | :heavy_check_mark: | `textsynth` | [All supported engines](https://textsynth.com/documentation.html#engines) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
|
334 |
+
| Cohere | [:hourglass: - blocked on Cohere API bug](https://github.com/EleutherAI/lm-evaluation-harness/pull/395) | N/A | [All `cohere.generate()` engines](https://docs.cohere.com/docs/models) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
|
335 |
+
| [Llama.cpp](https://github.com/ggerganov/llama.cpp) (via [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)) | :heavy_check_mark: | `gguf`, `ggml` | [All models supported by llama.cpp](https://github.com/ggerganov/llama.cpp) | `generate_until`, `loglikelihood`, (perplexity evaluation not yet implemented) |
|
336 |
+
| vLLM | :heavy_check_mark: | `vllm` | [Most HF Causal Language Models](https://docs.vllm.ai/en/latest/models/supported_models.html) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
|
337 |
+
| Mamba | :heavy_check_mark: | `mamba_ssm` | [Mamba architecture Language Models via the `mamba_ssm` package](https://huggingface.co/state-spaces) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` |
|
338 |
+
| Huggingface Optimum (Causal LMs) | ✔️ | `openvino` | Any decoder-only AutoModelForCausalLM converted with Huggingface Optimum into OpenVINO™ Intermediate Representation (IR) format | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... |
|
339 |
+
| Neuron via AWS Inf2 (Causal LMs) | ✔️ | `neuronx` | Any decoder-only AutoModelForCausalLM supported to run on [huggingface-ami image for inferentia2](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... |
|
340 |
+
| [Neural Magic DeepSparse](https://github.com/neuralmagic/deepsparse) | ✔️ | `deepsparse` | Any LM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub with the "deepsparse" tag](https://huggingface.co/models?other=deepsparse) | `generate_until`, `loglikelihood` | ... |
|
341 |
+
| [Neural Magic SparseML](https://github.com/neuralmagic/sparseml) | ✔️ | `sparseml` | Any decoder-only AutoModelForCausalLM from [SparseZoo](https://sparsezoo.neuralmagic.com/) or on [HF Hub](https://huggingface.co/neuralmagic). Especially useful for models with quantization like [`zoo:llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized`](https://sparsezoo.neuralmagic.com/models/llama2-7b-gsm8k_llama2_pretrain-pruned60_quantized) | `generate_until`, `loglikelihood`, `loglikelihood_rolling` | ... |
|
342 |
+
| Your local inference server! | :heavy_check_mark: | `local-completions` or `local-chat-completions` (using `openai-chat-completions` model type) | Any server address that accepts GET requests using HF models and mirrors OpenAI's Completions or ChatCompletions interface | `generate_until` | ... |
|
343 |
+
|
344 |
+
Models which do not supply logits or logprobs can be used with tasks of type `generate_until` only, while local models, or APIs that supply logprobs/logits of their prompts, can be run on all task types: `generate_until`, `loglikelihood`, `loglikelihood_rolling`, and `multiple_choice`.
|
345 |
+
|
346 |
+
For more information on the different task `output_types` and model request types, see [our documentation](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/model_guide.md#interface).
|
347 |
+
|
348 |
+
> [!Note]
|
349 |
+
> For best performance with closed chat model APIs such as Anthropic Claude 3 and GPT-4, we recommend carefully looking at a few sample outputs using `--limit 10` first to confirm answer extraction and scoring on generative tasks is performing as expected. Providing `system="<some system prompt here>"` within `--model_args` for anthropic-chat-completions, to instruct the model what format to respond in, may be useful.
|
350 |
+
|
351 |
+
|
352 |
+
### Other Frameworks
|
353 |
+
|
354 |
+
A number of other libraries contain scripts for calling the eval harness through their library. These include [GPT-NeoX](https://github.com/EleutherAI/gpt-neox/blob/main/eval_tasks/eval_adapter.py), [Megatron-DeepSpeed](https://github.com/microsoft/Megatron-DeepSpeed/blob/main/examples/MoE/readme_evalharness.md), and [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax/blob/master/eval_harness.py).
|
355 |
+
|
356 |
+
To create your own custom integration you can follow instructions from [this tutorial](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md#external-library-usage).
|
357 |
+
|
358 |
+
### Additional Features
|
359 |
+
> [!Note]
|
360 |
+
> For tasks unsuitable for direct evaluation, either due to risks associated with executing untrusted code or complexities in the evaluation process, the `--predict_only` flag is available to obtain decoded generations for post-hoc evaluation.
|
361 |
+
|
362 |
+
If you have a Metal compatible Mac, you can run the eval harness using the MPS back-end by replacing `--device cuda:0` with `--device mps` (requires PyTorch version 2.1 or higher). **Note that the PyTorch MPS backend is still in early stages of development, so correctness issues or unsupported operations may exist. If you observe oddities in model performance on the MPS back-end, we recommend first checking that a forward pass of your model on `--device cpu` and `--device mps` match.**
|
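As a hedged illustration of that sanity check (the model name and tolerance below are placeholder assumptions), one can compare logits from a single forward pass on both devices:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Sketch of the suggested sanity check: run one forward pass on CPU and on MPS
# and compare the logits. Requires a Metal-capable Mac and PyTorch >= 2.1.
name = "EleutherAI/pythia-160m"  # placeholder model
tok = AutoTokenizer.from_pretrained(name)
inputs = tok("The quick brown fox jumps over the lazy dog", return_tensors="pt")

model = AutoModelForCausalLM.from_pretrained(name)
with torch.no_grad():
    cpu_logits = model(**inputs).logits

model = model.to("mps")
with torch.no_grad():
    mps_logits = model(**{k: v.to("mps") for k, v in inputs.items()}).logits

# The tolerance here is an assumption; tighten or loosen it as appropriate.
print(torch.allclose(cpu_logits, mps_logits.cpu(), atol=1e-4))
```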
363 |
+
|
364 |
+
> [!Note]
|
365 |
+
> You can inspect what the LM inputs look like by running the following command:
|
366 |
+
> ```bash
|
367 |
+
> python write_out.py \
|
368 |
+
> --tasks <task1,task2,...> \
|
369 |
+
> --num_fewshot 5 \
|
370 |
+
> --num_examples 10 \
|
371 |
+
> --output_base_path /path/to/output/folder
|
372 |
+
> ```
|
373 |
+
> This will write out one text file for each task.
|
374 |
+
|
375 |
+
To verify the data integrity of the tasks you're performing in addition to running the tasks themselves, you can use the `--check_integrity` flag:
|
376 |
+
|
377 |
+
```bash
|
378 |
+
lm_eval --model openai \
|
379 |
+
--model_args engine=davinci \
|
380 |
+
--tasks lambada_openai,hellaswag \
|
381 |
+
--check_integrity
|
382 |
+
```
|
383 |
+
|
384 |
+
## Advanced Usage Tips
|
385 |
+
|
386 |
+
For models loaded with the HuggingFace `transformers` library, any arguments provided via `--model_args` get passed to the relevant constructor directly. This means that anything you can do with `AutoModel` can be done with our library. For example, you can pass a local path via `pretrained=` or use models finetuned with [PEFT](https://github.com/huggingface/peft) by taking the call you would run to evaluate the base model and adding `,peft=PATH` to the `model_args` argument:
|
387 |
+
```bash
|
388 |
+
lm_eval --model hf \
|
389 |
+
--model_args pretrained=EleutherAI/gpt-j-6b,parallelize=True,load_in_4bit=True,peft=nomic-ai/gpt4all-j-lora \
|
390 |
+
--tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq \
|
391 |
+
--device cuda:0
|
392 |
+
```
|
393 |
+
|
394 |
+
Models provided as delta weights can be easily loaded using the Hugging Face transformers library. Within `--model_args`, set the `delta` argument to specify the delta weights, and use the `pretrained` argument to designate the relative base model to which they will be applied:
|
395 |
+
```bash
|
396 |
+
lm_eval --model hf \
|
397 |
+
--model_args pretrained=Ejafa/llama_7B,delta=lmsys/vicuna-7b-delta-v1.1 \
|
398 |
+
--tasks hellaswag
|
399 |
+
```
|
400 |
+
|
401 |
+
[GPTQ](https://github.com/PanQiWei/AutoGPTQ) quantized models can be loaded by specifying their file names in `,autogptq=NAME` (or `,autogptq=True` for default names) in the `model_args` argument:
|
402 |
+
|
403 |
+
```bash
|
404 |
+
lm_eval --model hf \
|
405 |
+
--model_args pretrained=model-name-or-path,autogptq=model.safetensors,gptq_use_triton=True \
|
406 |
+
--tasks hellaswag
|
407 |
+
```
|
408 |
+
|
409 |
+
We support wildcards in task names, for example you can run all of the machine-translated lambada tasks via `--task lambada_openai_mt_*`.
|
410 |
+
|
411 |
+
To save evaluation results provide an `--output_path`. We also support logging model responses with the `--log_samples` flag for post-hoc analysis.
|
412 |
+
|
413 |
+
Additionally, one can provide a directory with `--use_cache` to cache the results of prior runs. This allows you to avoid repeated execution of the same (model, task) pairs for re-scoring.
|
414 |
+
|
415 |
+
For a full list of supported arguments, check out the [interface](https://github.com/EleutherAI/lm-evaluation-harness/blob/main/docs/interface.md) guide in our documentation!
|
416 |
+
|
417 |
+
## Visualizing Results
|
418 |
+
|
419 |
+
You can seamlessly visualize and analyze the results of your evaluation harness runs using both Weights & Biases (W&B) and Zeno.
|
420 |
+
|
421 |
+
### Zeno
|
422 |
+
|
423 |
+
You can use [Zeno](https://zenoml.com) to visualize the results of your eval harness runs.
|
424 |
+
|
425 |
+
First, head to [hub.zenoml.com](https://hub.zenoml.com) to create an account and get an API key [on your account page](https://hub.zenoml.com/account).
|
426 |
+
Add this key as an environment variable:
|
427 |
+
|
428 |
+
```bash
|
429 |
+
export ZENO_API_KEY=[your api key]
|
430 |
+
```
|
431 |
+
|
432 |
+
You'll also need to install the `lm_eval[zeno]` package extra.
|
433 |
+
|
434 |
+
To visualize the results, run the eval harness with the `log_samples` and `output_path` flags.
|
435 |
+
We expect `output_path` to contain multiple folders that represent individual model names.
|
436 |
+
You can thus run your evaluation on any number of tasks and models and upload all of the results as projects on Zeno.
|
437 |
+
|
438 |
+
```bash
|
439 |
+
lm_eval \
|
440 |
+
--model hf \
|
441 |
+
--model_args pretrained=EleutherAI/gpt-j-6B \
|
442 |
+
--tasks hellaswag \
|
443 |
+
--device cuda:0 \
|
444 |
+
--batch_size 8 \
|
445 |
+
--log_samples \
|
446 |
+
--output_path output/gpt-j-6B
|
447 |
+
```
|
448 |
+
|
449 |
+
Then, you can upload the resulting data using the `zeno_visualize` script:
|
450 |
+
|
451 |
+
```bash
|
452 |
+
python scripts/zeno_visualize.py \
|
453 |
+
--data_path output \
|
454 |
+
--project_name "Eleuther Project"
|
455 |
+
```
|
456 |
+
|
457 |
+
This will use all subfolders in `data_path` as different models and upload all tasks within these model folders to Zeno.
|
458 |
+
If you run the eval harness on multiple tasks, the `project_name` will be used as a prefix and one project will be created per task.
|
459 |
+
|
460 |
+
You can find an example of this workflow in [examples/visualize-zeno.ipynb](examples/visualize-zeno.ipynb).
|
461 |
+
|
462 |
+
### Weights and Biases
|
463 |
+
|
464 |
+
With the [Weights and Biases](https://wandb.ai/site) integration, you can now spend more time extracting deeper insights into your evaluation results. The integration is designed to streamline the process of logging and visualizing experiment results using the Weights & Biases (W&B) platform.
|
465 |
+
|
466 |
+
The integration provides functionality
|
467 |
+
|
468 |
+
- to automatically log the evaluation results,
|
469 |
+
- log the samples as W&B Tables for easy visualization,
|
470 |
+
- log the `results.json` file as an artifact for version control,
|
471 |
+
- log the `<task_name>_eval_samples.json` file if the samples are logged,
|
472 |
+
- generate a comprehensive report for analysis and visualization with all the important metrics,
|
473 |
+
- log task- and CLI-specific configs,
|
474 |
+
- and more out of the box like the command used to run the evaluation, GPU/CPU counts, timestamp, etc.
|
475 |
+
|
476 |
+
First, you'll need to install the `lm_eval[wandb]` package extra: `pip install lm_eval[wandb]`.
|
477 |
+
|
478 |
+
Authenticate your machine with your unique W&B token. Visit https://wandb.ai/authorize to get one, then run `wandb login` in your terminal.
|
479 |
+
|
480 |
+
Run the eval harness as usual with the `--wandb_args` flag. Use this flag to provide arguments for initializing a wandb run ([wandb.init](https://docs.wandb.ai/ref/python/init)) as comma-separated string arguments.
|
481 |
+
|
482 |
+
```bash
|
483 |
+
lm_eval \
|
484 |
+
--model hf \
|
485 |
+
--model_args pretrained=microsoft/phi-2,trust_remote_code=True \
|
486 |
+
--tasks hellaswag,mmlu_abstract_algebra \
|
487 |
+
--device cuda:0 \
|
488 |
+
--batch_size 8 \
|
489 |
+
--output_path output/phi-2 \
|
490 |
+
--limit 10 \
|
491 |
+
--wandb_args project=lm-eval-harness-integration \
|
492 |
+
--log_samples
|
493 |
+
```
|
494 |
+
|
495 |
+
In the stdout, you will find a link to the W&B run page as well as a link to the generated report. You can find an example of this workflow in [examples/visualize-wandb.ipynb](examples/visualize-wandb.ipynb), along with an example of how to integrate it beyond the CLI (sketched below).
|
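For reference, a minimal programmatic sketch mirroring the notebook example looks like this (the model and tasks are the same placeholders used in the command above):

```python
import lm_eval
from lm_eval.logging_utils import WandbLogger

# Sketch mirroring examples/visualize-wandb.ipynb: run an evaluation
# programmatically, then push results (and samples) to W&B.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=microsoft/phi-2,trust_remote_code=True",
    tasks="hellaswag,mmlu_abstract_algebra",
    log_samples=True,
)

wandb_logger = WandbLogger(project="lm-eval-harness-integration", job_type="eval")
wandb_logger.post_init(results)
wandb_logger.log_eval_result()
wandb_logger.log_eval_samples(results["samples"])  # only if log_samples=True
```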
496 |
+
|
497 |
+
## How to Contribute or Learn More?
|
498 |
+
|
499 |
+
For more information on the library and how everything fits together, check out all of our [documentation pages](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/docs)! We plan to post a larger roadmap of desired + planned library improvements soon, with more information on how contributors can help.
|
500 |
+
|
501 |
+
### Implementing new tasks
|
502 |
+
|
503 |
+
To implement a new task in the eval harness, see [this guide](./docs/new_task_guide.md).
|
504 |
+
|
505 |
+
In general, we follow this priority list for addressing concerns about prompting and other eval details:
|
506 |
+
1. If there is widespread agreement among people who train LLMs, use the agreed upon procedure.
|
507 |
+
2. If there is a clear and unambiguous official implementation, use that procedure.
|
508 |
+
3. If there is widespread agreement among people who evaluate LLMs, use the agreed upon procedure.
|
509 |
+
4. If there are multiple common implementations but not universal or widespread agreement, use our preferred option among the common implementations. As before, prioritize choosing from among the implementations found in LLM training papers.
|
510 |
+
|
511 |
+
These are guidelines and not rules, and can be overruled in special circumstances.
|
512 |
+
|
513 |
+
We try to prioritize agreement with the procedures used by other groups to decrease the harm when people inevitably compare runs across different papers despite our discouragement of the practice. Historically, we also prioritized the implementation from [Language Models are Few Shot Learners](https://arxiv.org/abs/2005.14165) as our original goal was specifically to compare results with that paper.
|
514 |
+
|
515 |
+
### Support
|
516 |
+
|
517 |
+
The best way to get support is to open an issue on this repo or join the [EleutherAI Discord server](https://discord.gg/eleutherai). The `#lm-thunderdome` channel is dedicated to developing this project and the `#release-discussion` channel is for receiving support for our releases. If you've used the library and have had a positive (or negative) experience, we'd love to hear from you!
|
518 |
+
|
519 |
+
## Optional Extras
|
520 |
+
Extras can be installed via `pip install -e ".[NAME]"`
|
521 |
+
|
522 |
+
| Name | Use |
|
523 |
+
|---------------|---------------------------------------|
|
524 |
+
| anthropic | For using Anthropic's models |
|
525 |
+
| deepsparse | For running NM's DeepSparse models |
|
526 |
+
| dev | For linting PRs and contributions |
|
527 |
+
| gptq | For loading models with GPTQ |
|
528 |
+
| hf_transfer | For speeding up HF Hub file downloads |
|
529 |
+
| ifeval | For running the IFEval task |
|
530 |
+
| neuronx | For running on AWS inf2 instances |
|
531 |
+
| mamba | For loading Mamba SSM models |
|
532 |
+
| math | For running math task answer checking |
|
533 |
+
| multilingual | For multilingual tokenizers |
|
534 |
+
| openai | For using OpenAI's models |
|
535 |
+
| optimum | For running Intel OpenVINO models |
|
536 |
+
| promptsource | For using PromptSource prompts |
|
537 |
+
| sentencepiece | For using the sentencepiece tokenizer |
|
538 |
+
| sparseml | For using NM's SparseML models |
|
539 |
+
| testing | For running library test suite |
|
540 |
+
| vllm | For loading models with vLLM |
|
541 |
+
| zeno | For visualizing results with Zeno |
|
542 |
+
|---------------|---------------------------------------|
|
543 |
+
| all | Loads all extras (not recommended) |
|
544 |
+
|
545 |
+
## Cite as
|
546 |
+
|
547 |
+
```
|
548 |
+
@misc{eval-harness,
|
549 |
+
author = {Gao, Leo and Tow, Jonathan and Abbasi, Baber and Biderman, Stella and Black, Sid and DiPofi, Anthony and Foster, Charles and Golding, Laurence and Hsu, Jeffrey and Le Noac'h, Alain and Li, Haonan and McDonell, Kyle and Muennighoff, Niklas and Ociepa, Chris and Phang, Jason and Reynolds, Laria and Schoelkopf, Hailey and Skowron, Aviya and Sutawika, Lintang and Tang, Eric and Thite, Anish and Wang, Ben and Wang, Kevin and Zou, Andy},
|
550 |
+
title = {A framework for few-shot language model evaluation},
|
551 |
+
month = 12,
|
552 |
+
year = 2023,
|
553 |
+
publisher = {Zenodo},
|
554 |
+
version = {v0.4.0},
|
555 |
+
doi = {10.5281/zenodo.10256836},
|
556 |
+
url = {https://zenodo.org/records/10256836}
|
557 |
+
}
|
558 |
+
```
|
lm-evaluation-harness/lm_eval.egg-info/SOURCES.txt
ADDED
The diff for this file is too large to render.
See raw diff
lm-evaluation-harness/lm_eval.egg-info/dependency_links.txt
ADDED
@@ -0,0 +1 @@
1 |
+
|
lm-evaluation-harness/lm_eval.egg-info/entry_points.txt
ADDED
@@ -0,0 +1,3 @@
1 |
+
[console_scripts]
|
2 |
+
lm-eval = lm_eval.__main__:cli_evaluate
|
3 |
+
lm_eval = lm_eval.__main__:cli_evaluate
|
lm-evaluation-harness/lm_eval.egg-info/requires.txt
ADDED
@@ -0,0 +1,111 @@
1 |
+
accelerate>=0.21.0
|
2 |
+
evaluate
|
3 |
+
datasets>=2.16.0
|
4 |
+
evaluate>=0.4.0
|
5 |
+
jsonlines
|
6 |
+
numexpr
|
7 |
+
peft>=0.2.0
|
8 |
+
pybind11>=2.6.2
|
9 |
+
pytablewriter
|
10 |
+
rouge-score>=0.0.4
|
11 |
+
sacrebleu>=1.5.0
|
12 |
+
scikit-learn>=0.24.1
|
13 |
+
sqlitedict
|
14 |
+
torch>=1.8
|
15 |
+
tqdm-multiprocess
|
16 |
+
transformers>=4.1
|
17 |
+
zstandard
|
18 |
+
dill
|
19 |
+
word2number
|
20 |
+
more_itertools
|
21 |
+
|
22 |
+
[all]
|
23 |
+
lm_eval[anthropic]
|
24 |
+
lm_eval[dev]
|
25 |
+
lm_eval[deepsparse]
|
26 |
+
lm_eval[gptq]
|
27 |
+
lm_eval[hf_transfer]
|
28 |
+
lm_eval[ifeval]
|
29 |
+
lm_eval[mamba]
|
30 |
+
lm_eval[math]
|
31 |
+
lm_eval[multilingual]
|
32 |
+
lm_eval[openai]
|
33 |
+
lm_eval[promptsource]
|
34 |
+
lm_eval[sentencepiece]
|
35 |
+
lm_eval[sparseml]
|
36 |
+
lm_eval[testing]
|
37 |
+
lm_eval[vllm]
|
38 |
+
lm_eval[zeno]
|
39 |
+
lm_eval[wandb]
|
40 |
+
|
41 |
+
[anthropic]
|
42 |
+
anthropic
|
43 |
+
|
44 |
+
[deepsparse]
|
45 |
+
deepsparse-nightly[llm]>=1.8.0.20240404
|
46 |
+
|
47 |
+
[dev]
|
48 |
+
pytest
|
49 |
+
pytest-cov
|
50 |
+
pytest-xdist
|
51 |
+
pre-commit
|
52 |
+
mypy
|
53 |
+
|
54 |
+
[gptq]
|
55 |
+
auto-gptq[triton]>=0.6.0
|
56 |
+
|
57 |
+
[hf_transfer]
|
58 |
+
hf_transfer
|
59 |
+
|
60 |
+
[ifeval]
|
61 |
+
langdetect
|
62 |
+
immutabledict
|
63 |
+
|
64 |
+
[mamba]
|
65 |
+
mamba_ssm
|
66 |
+
causal-conv1d==1.0.2
|
67 |
+
|
68 |
+
[math]
|
69 |
+
sympy>=1.12
|
70 |
+
antlr4-python3-runtime==4.11
|
71 |
+
|
72 |
+
[multilingual]
|
73 |
+
nagisa>=0.2.7
|
74 |
+
jieba>=0.42.1
|
75 |
+
pycountry
|
76 |
+
|
77 |
+
[neuronx]
|
78 |
+
optimum[neuronx]
|
79 |
+
|
80 |
+
[openai]
|
81 |
+
openai==1.3.9
|
82 |
+
tiktoken
|
83 |
+
|
84 |
+
[optimum]
|
85 |
+
optimum[openvino]
|
86 |
+
|
87 |
+
[promptsource]
|
88 |
+
promptsource>=0.2.3
|
89 |
+
|
90 |
+
[sentencepiece]
|
91 |
+
sentencepiece>=0.1.98
|
92 |
+
|
93 |
+
[sparseml]
|
94 |
+
sparseml-nightly[llm]>=1.8.0.20240404
|
95 |
+
|
96 |
+
[testing]
|
97 |
+
pytest
|
98 |
+
pytest-cov
|
99 |
+
pytest-xdist
|
100 |
+
|
101 |
+
[vllm]
|
102 |
+
vllm==0.3.2
|
103 |
+
|
104 |
+
[wandb]
|
105 |
+
wandb>=0.16.3
|
106 |
+
pandas
|
107 |
+
numpy
|
108 |
+
|
109 |
+
[zeno]
|
110 |
+
pandas
|
111 |
+
zeno-client
|
lm-evaluation-harness/lm_eval.egg-info/top_level.txt
ADDED
@@ -0,0 +1 @@
1 |
+
lm_eval
|
lm-evaluation-harness/lm_eval/__init__.py
ADDED
@@ -0,0 +1,3 @@
1 |
+
from .evaluator import evaluate, simple_evaluate
|
2 |
+
import habana_frameworks.torch.gpu_migration
|
3 |
+
import habana_frameworks.torch.core as htcore
|
lm-evaluation-harness/lm_eval/__main__.py
ADDED
@@ -0,0 +1,417 @@
1 |
+
import argparse
|
2 |
+
import json
|
3 |
+
import logging
|
4 |
+
import os
|
5 |
+
import re
|
6 |
+
import sys
|
7 |
+
from functools import partial
|
8 |
+
from pathlib import Path
|
9 |
+
from typing import Union
|
10 |
+
|
11 |
+
import numpy as np
|
12 |
+
|
13 |
+
from lm_eval import evaluator, utils
|
14 |
+
from lm_eval.evaluator import request_caching_arg_to_dict
|
15 |
+
from lm_eval.logging_utils import WandbLogger
|
16 |
+
from lm_eval.tasks import TaskManager
|
17 |
+
from lm_eval.utils import make_table, simple_parse_args_string
|
18 |
+
|
19 |
+
|
20 |
+
DEFAULT_RESULTS_FILE = "results.json"
|
21 |
+
|
22 |
+
|
23 |
+
def _handle_non_serializable(o):
|
24 |
+
if isinstance(o, np.int64) or isinstance(o, np.int32):
|
25 |
+
return int(o)
|
26 |
+
elif isinstance(o, set):
|
27 |
+
return list(o)
|
28 |
+
else:
|
29 |
+
return str(o)
|
30 |
+
|
31 |
+
|
32 |
+
def _int_or_none_list_arg_type(max_len: int, value: str, split_char: str = ","):
|
33 |
+
def parse_value(item):
|
34 |
+
item = item.strip().lower()
|
35 |
+
if item == "none":
|
36 |
+
return None
|
37 |
+
try:
|
38 |
+
return int(item)
|
39 |
+
except ValueError:
|
40 |
+
raise argparse.ArgumentTypeError(f"{item} is not an integer or None")
|
41 |
+
|
42 |
+
items = [parse_value(v) for v in value.split(split_char)]
|
43 |
+
num_items = len(items)
|
44 |
+
|
45 |
+
if num_items == 1:
|
46 |
+
# Makes downstream handling the same for single and multiple values
|
47 |
+
items = items * max_len
|
48 |
+
elif num_items != max_len:
|
49 |
+
raise argparse.ArgumentTypeError(
|
50 |
+
f"Argument requires {max_len} integers or None, separated by '{split_char}'"
|
51 |
+
)

    return items


def check_argument_types(parser: argparse.ArgumentParser):
    """
    Check to make sure all CLI args are typed, raises error if not
    """
    for action in parser._actions:
        if action.dest != "help" and not action.const:
            if action.type is None:
                raise ValueError(
                    f"Argument '{action.dest}' doesn't have a type specified."
                )
            else:
                continue


def setup_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(formatter_class=argparse.RawTextHelpFormatter)
    parser.add_argument(
        "--model", "-m", type=str, default="hf", help="Name of model e.g. `hf`"
    )
    parser.add_argument(
        "--tasks",
        "-t",
        default=None,
        type=str,
        metavar="task1,task2",
        help="To get full list of tasks, use the command lm-eval --tasks list",
    )
    parser.add_argument(
        "--model_args",
        "-a",
        default="",
        type=str,
        help="Comma separated string arguments for model, e.g. `pretrained=EleutherAI/pythia-160m,dtype=float32`",
    )
    parser.add_argument(
        "--num_fewshot",
        "-f",
        type=int,
        default=None,
        metavar="N",
        help="Number of examples in few-shot context",
    )
    parser.add_argument(
        "--batch_size",
        "-b",
        type=str,
        default=1,
        metavar="auto|auto:N|N",
        help="Acceptable values are 'auto', 'auto:N' or N, where N is an integer. Default 1.",
    )
    parser.add_argument(
        "--max_batch_size",
        type=int,
        default=None,
        metavar="N",
        help="Maximal batch size to try with --batch_size auto.",
    )
    parser.add_argument(
        "--device",
        type=str,
        default=None,
        help="Device to use (e.g. cuda, cuda:0, cpu).",
    )
    parser.add_argument(
        "--output_path",
        "-o",
        default=None,
        type=str,
        metavar="DIR|DIR/file.json",
        help="The path to the output file where the result metrics will be saved. If the path is a directory and log_samples is true, the results will be saved in the directory. Else the parent directory will be used.",
    )
    parser.add_argument(
        "--limit",
        "-L",
        type=float,
        default=None,
        metavar="N|0<N<1",
        help="Limit the number of examples per task. "
        "If <1, limit is a percentage of the total number of examples.",
    )
    parser.add_argument(
        "--use_cache",
        "-c",
        type=str,
        default=None,
        metavar="DIR",
        help="A path to a sqlite db file for caching model responses. `None` if not caching.",
    )
    parser.add_argument(
        "--cache_requests",
        type=str,
        default=None,
        choices=["true", "refresh", "delete"],
        help="Speed up evaluation by caching the building of dataset requests. `None` if not caching.",
    )
    parser.add_argument(
        "--check_integrity",
        action="store_true",
        help="Whether to run the relevant part of the test suite for the tasks.",
    )
    parser.add_argument(
        "--write_out",
        "-w",
        action="store_true",
        default=False,
        help="Prints the prompt for the first few documents.",
    )
    parser.add_argument(
        "--log_samples",
        "-s",
        action="store_true",
        default=False,
        help="If True, write out all model outputs and documents for per-sample measurement and post-hoc analysis. Use with --output_path.",
    )
    parser.add_argument(
        "--show_config",
        action="store_true",
        default=False,
        help="If True, shows the the full config of all tasks at the end of the evaluation.",
    )
    parser.add_argument(
        "--include_path",
        type=str,
        default=None,
        metavar="DIR",
        help="Additional path to include if there are external tasks to include.",
    )
    parser.add_argument(
        "--gen_kwargs",
        type=str,
        default=None,
        help=(
            "String arguments for model generation on greedy_until tasks,"
            " e.g. `temperature=0,top_k=0,top_p=0`."
        ),
    )
    parser.add_argument(
        "--verbosity",
        "-v",
        type=str.upper,
        default="INFO",
        metavar="CRITICAL|ERROR|WARNING|INFO|DEBUG",
        help="Controls the reported logging error level. Set to DEBUG when testing + adding new task configurations for comprehensive log output.",
    )
    parser.add_argument(
        "--wandb_args",
        type=str,
        default="",
        help="Comma separated string arguments passed to wandb.init, e.g. `project=lm-eval,job_type=eval",
    )
    parser.add_argument(
        "--predict_only",
        "-x",
        action="store_true",
        default=False,
        help="Use with --log_samples. Only model outputs will be saved and metrics will not be evaluated.",
    )
    parser.add_argument(
        "--seed",
        type=partial(_int_or_none_list_arg_type, 3),
        default="0,1234,1234", # for backward compatibility
        help=(
            "Set seed for python's random, numpy and torch.\n"
            "Accepts a comma-separated list of 3 values for python's random, numpy, and torch seeds, respectively, "
            "or a single integer to set the same seed for all three.\n"
            "The values are either an integer or 'None' to not set the seed. Default is `0,1234,1234` (for backward compatibility).\n"
            "E.g. `--seed 0,None,8` sets `random.seed(0)` and `torch.manual_seed(8)`. Here numpy's seed is not set since the second value is `None`.\n"
            "E.g, `--seed 42` sets all three seeds to 42."
        ),
    )
    parser.add_argument(
        "--trust_remote_code",
        action="store_true",
        help="Sets trust_remote_code to True to execute code to create HF Datasets from the Hub",
    )

    return parser


def parse_eval_args(parser: argparse.ArgumentParser) -> argparse.Namespace:
    check_argument_types(parser)
    return parser.parse_args()


def cli_evaluate(args: Union[argparse.Namespace, None] = None) -> None:
    if not args:
        # we allow for args to be passed externally, else we parse them ourselves
        parser = setup_parser()
        args = parse_eval_args(parser)

    if args.wandb_args:
        wandb_logger = WandbLogger(**simple_parse_args_string(args.wandb_args))
        #run = wandb.init(project='eval',group='exp1')
    eval_logger = utils.eval_logger
    eval_logger.setLevel(getattr(logging, f"{args.verbosity}"))
    eval_logger.info(f"Verbosity set to {args.verbosity}")
    os.environ["TOKENIZERS_PARALLELISM"] = "false"

    if args.predict_only:
        args.log_samples = True
    if (args.log_samples or args.predict_only) and not args.output_path:
        raise ValueError(
            "Specify --output_path if providing --log_samples or --predict_only"
        )

    if args.include_path is not None:
        eval_logger.info(f"Including path: {args.include_path}")
    task_manager = TaskManager(args.verbosity, include_path=args.include_path)

    if args.limit:
        eval_logger.warning(
            " --limit SHOULD ONLY BE USED FOR TESTING."
            "REAL METRICS SHOULD NOT BE COMPUTED USING LIMIT."
        )

    if args.tasks is None:
        eval_logger.error("Need to specify task to evaluate.")
        sys.exit()
    elif args.tasks == "list":
        eval_logger.info(
            "Available Tasks:\n - {}".format("\n - ".join(task_manager.all_tasks))
        )
        sys.exit()
    else:
        if os.path.isdir(args.tasks):
            import glob

            task_names = []
            yaml_path = os.path.join(args.tasks, "*.yaml")
            for yaml_file in glob.glob(yaml_path):
                config = utils.load_yaml_config(yaml_file)
                task_names.append(config)
        else:
            task_list = args.tasks.split(",")
            task_names = task_manager.match_tasks(task_list)
            for task in [task for task in task_list if task not in task_names]:
                if os.path.isfile(task):
                    config = utils.load_yaml_config(task)
                    task_names.append(config)
            task_missing = [
                task for task in task_list if task not in task_names and "*" not in task
            ] # we don't want errors if a wildcard ("*") task name was used

            if task_missing:
                missing = ", ".join(task_missing)
                eval_logger.error(
                    f"Tasks were not found: {missing}\n"
                    f"{utils.SPACING}Try `lm-eval --tasks list` for list of available tasks",
                )
                raise ValueError(
                    f"Tasks not found: {missing}. Try `lm-eval --tasks list` for list of available tasks, or '--verbosity DEBUG' to troubleshoot task registration issues."
                )

    if args.output_path:
        path = Path(args.output_path)
        # check if file or 'dir/results.json' exists
        if path.is_file():
            raise FileExistsError(f"File already exists at {path}")
        output_path_file = path.joinpath(DEFAULT_RESULTS_FILE)
        if output_path_file.is_file():
            eval_logger.warning(
                f"File {output_path_file} already exists. Results will be overwritten."
            )
        # if path json then get parent dir
        elif path.suffix in (".json", ".jsonl"):
            output_path_file = path
            path.parent.mkdir(parents=True, exist_ok=True)
            path = path.parent
        else:
            path.mkdir(parents=True, exist_ok=True)

    # Respect user's value passed in via CLI, otherwise default to True and add to comma-separated model args
    if args.trust_remote_code:
        os.environ["HF_DATASETS_TRUST_REMOTE_CODE"] = str(args.trust_remote_code)
        args.model_args = (
            args.model_args
            + f",trust_remote_code={os.environ['HF_DATASETS_TRUST_REMOTE_CODE']}"
        )

    eval_logger.info(f"Selected Tasks: {task_names}")

    request_caching_args = request_caching_arg_to_dict(
        cache_requests=args.cache_requests
    )

    results = evaluator.simple_evaluate(
        model=args.model,
        model_args=args.model_args,
        tasks=task_names,
        num_fewshot=args.num_fewshot,
        batch_size=args.batch_size,
        max_batch_size=args.max_batch_size,
        device=args.device,
        use_cache=args.use_cache,
        limit=args.limit,
        check_integrity=args.check_integrity,
        write_out=args.write_out,
        log_samples=args.log_samples,
        gen_kwargs=args.gen_kwargs,
        task_manager=task_manager,
        verbosity=args.verbosity,
        predict_only=args.predict_only,
        random_seed=args.seed[0],
        numpy_random_seed=args.seed[1],
        torch_random_seed=args.seed[2],
        **request_caching_args,
    )

    if results is not None:
        if args.log_samples:
            samples = results.pop("samples")
        dumped = json.dumps(
            results, indent=2, default=_handle_non_serializable, ensure_ascii=False
        )
        if args.show_config:
            print(dumped)

        batch_sizes = ",".join(map(str, results["config"]["batch_sizes"]))

        # Add W&B logging
        if args.wandb_args:
            try:
                wandb_logger.post_init(results)
                wandb_logger.log_eval_result()
                if args.log_samples:
                    wandb_logger.log_eval_samples(samples)
            except Exception as e:
                eval_logger.info(f"Logging to Weights and Biases failed due to {e}")

        if args.output_path:
            output_path_file.open("w", encoding="utf-8").write(dumped)

            if args.log_samples:
                for task_name, config in results["configs"].items():
                    output_name = "{}_{}".format(
                        re.sub(r"[\"<>:/\|\\?\*\[\]]+", "__", args.model_args),
                        task_name,
                    )
                    filename = path.joinpath(f"{output_name}.jsonl")
                    samples_dumped = json.dumps(
                        samples[task_name],
                        indent=2,
                        default=_handle_non_serializable,
                        ensure_ascii=False,
                    )
                    filename.write_text(samples_dumped, encoding="utf-8")

        print(
            f"{args.model} ({args.model_args}), gen_kwargs: ({args.gen_kwargs}), limit: {args.limit}, num_fewshot: {args.num_fewshot}, "
            f"batch_size: {args.batch_size}{f' ({batch_sizes})' if batch_sizes else ''}"
        )
        print(make_table(results))
        if "groups" in results:
            print(make_table(results, "groups"))

    if args.wandb_args:
        # Tear down wandb run once all the logging is done.
        wandb_logger.run.finish()


if __name__ == "__main__":
    cli_evaluate()
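As a quick illustration of how the parser and `cli_evaluate` above fit together, here is a minimal sketch of driving the CLI programmatically (not part of this diff; the module path `lm_eval.__main__`, model, and task names are placeholder assumptions, and any registered task would work the same way):

# Hypothetical usage sketch; values below are illustrative only.
from lm_eval.__main__ import cli_evaluate, setup_parser

parser = setup_parser()
args = parser.parse_args(
    [
        "--model", "hf",
        "--model_args", "pretrained=EleutherAI/pythia-160m",  # placeholder model
        "--tasks", "hellaswag",                                # placeholder task
        "--num_fewshot", "0",
        "--batch_size", "1",
        "--output_path", "results/",   # required because --log_samples is passed
        "--log_samples",
    ]
)
cli_evaluate(args)  # cli_evaluate accepts an externally built Namespace

From the shell, the same configuration would be passed to the `lm-eval` entry point with identical flags.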
lm-evaluation-harness/lm_eval/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (363 Bytes).

lm-evaluation-harness/lm_eval/__pycache__/__main__.cpython-310.pyc
ADDED
Binary file (10.6 kB).

lm-evaluation-harness/lm_eval/__pycache__/evaluator.cpython-310.pyc
ADDED
Binary file (14 kB).

lm-evaluation-harness/lm_eval/__pycache__/evaluator_utils.cpython-310.pyc
ADDED
Binary file (9.77 kB).

lm-evaluation-harness/lm_eval/__pycache__/logging_utils.cpython-310.pyc
ADDED
Binary file (14.7 kB).

lm-evaluation-harness/lm_eval/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (11.2 kB).

lm-evaluation-harness/lm_eval/decontamination/__init__.py
ADDED
File without changes
lm-evaluation-harness/lm_eval/decontamination/archiver.py
ADDED
@@ -0,0 +1,171 @@
import datetime
import io
import json
import mmap
import os
from pathlib import Path
from typing import Any

import jsonlines
import tqdm
import zstandard


def json_serial(obj: Any) -> str:
    """JSON serializer for objects not serializable by default json code"""

    if isinstance(obj, (datetime.datetime,)):
        return obj.isoformat()
    raise TypeError("Type %s not serializable" % type(obj))


# Modified version of lm_dataformat Archive for single file.
class Archive:
    def __init__(self, file_path: str, compression_level: int = 3) -> None:
        self.file_path = file_path
        dir_name = os.path.dirname(file_path)
        if dir_name:
            os.makedirs(dir_name, exist_ok=True)
        self.fh = open(self.file_path, "wb")
        self.cctx = zstandard.ZstdCompressor(level=compression_level)
        self.compressor = self.cctx.stream_writer(self.fh)

    def add_data(self, data, meta=None) -> None:
        if meta is None:
            meta = {}
        self.compressor.write(
            json.dumps({"text": data, "meta": meta}, default=json_serial).encode(
                "UTF-8"
            )
            + b"\n"
        )

    def commit(self) -> None:
        self.compressor.flush(zstandard.FLUSH_FRAME)
        self.fh.flush()
        self.fh.close()


# Modified version of lm_dataformat Reader with self.fh set, allowing peeking for tqdm.
class Reader:
    def __init__(self) -> None:
        pass

    def read(
        self,
        file,
        get_meta: bool = False,
        autojoin_paragraphs: bool = True,
        para_joiner: str = "\n\n",
    ):
        with open(file, "rb") as fh:
            self.fh = fh
            cctx = zstandard.ZstdDecompressor()
            reader = io.BufferedReader(cctx.stream_reader(fh))
            rdr = jsonlines.Reader(reader)
            for ob in rdr:
                # naive jsonl where each object is just the string itself, with no meta. For legacy compatibility.
                if isinstance(ob, str):
                    assert not get_meta
                    yield ob
                    continue

                text = ob["text"]

                if autojoin_paragraphs and isinstance(text, list):
                    text = para_joiner.join(text)

                if get_meta:
                    yield text, (ob["meta"] if "meta" in ob else {})
                else:
                    yield text


class TextArchive:
    def __init__(self, file_path, mode: str = "rb+") -> None:
        self.file_path = file_path
        dir_name = os.path.dirname(file_path)
        if dir_name:
            os.makedirs(dir_name, exist_ok=True)

        if not os.path.exists(file_path):
            Path(file_path).touch()

        self.fh = open(self.file_path, mode)

    def add_data(self, data) -> None:
        self.fh.write(data.encode("UTF-8") + b"\n")

    def commit(self) -> None:
        self.fh.flush()
        self.fh.close()


class TextReader:
    def __init__(self, file_path) -> None:
        self.file_path = file_path

    # Optimized mmap read with infrequent tqdm updates to maintain speed
    # Tested up to 250MB/s.
    def read_tqdm(self, update_frequency: int = 10000):
        current_file_position = 0
        line_counter = 0
        with open(self.file_path, "r", encoding="utf-8") as fh, tqdm.tqdm(
            total=os.path.getsize(self.file_path),
            dynamic_ncols=True,
            unit="byte",
            unit_scale=1,
        ) as progress:
            with mmap.mmap(fh.fileno(), length=0, access=mmap.ACCESS_READ) as mmap_obj:
                for line in iter(mmap_obj.readline, b""):
                    line = line.decode("utf-8")
                    line_counter += 1
                    if line_counter == update_frequency:
                        new_file_pos = mmap_obj.tell()
                        bytes_read = new_file_pos - current_file_position
                        current_file_position = new_file_pos
                        progress.update(bytes_read)
                        line_counter = 0
                    yield line[:-1]

    def read_and_tell(self):
        current_file_position = 0
        with open(self.file_path, "r", encoding="utf8") as fh:
            with mmap.mmap(fh.fileno(), length=0, access=mmap.ACCESS_READ) as mmap_obj:
                for line in iter(mmap_obj.readline, b""):
                    line = line.decode("utf-8")
                    new_file_pos = mmap_obj.tell()
                    raw_bytes_read = new_file_pos - current_file_position
                    current_file_position = new_file_pos
                    yield line[:-1], raw_bytes_read

    def read(self):
        with open(self.file_path, "r", encoding="utf8") as fh:
            with mmap.mmap(fh.fileno(), length=0, access=mmap.ACCESS_READ) as mmap_obj:
                for line in iter(mmap_obj.readline, b""):
                    line = line.decode("utf-8")
                    yield line[:-1]

    def read_slow(self):
        with open(self.file_path, "r", encoding="utf8") as fh:
            while True:
                line = fh.readline()
                if line == -1 or line == "":
                    break
                else:
                    yield line[:-1]


# Optimized for speed. Decompresses the archive in shell before
# using the mmap'd TextReader.
class ZStdTextReader:
    def __init__(self, file) -> None:
        self.file = file

    def read_tqdm(self):
        decompressed_file = self.file[:-4]
        print("Decompressing file, please wait...")
        os.system(f"zstd -d {self.file}")  # linux decompress is faster
        reader = TextReader(decompressed_file)
        yield from reader.read_tqdm()
        os.remove(decompressed_file)
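To make the round-trip behaviour of `Archive` and `Reader` concrete, here is a small hedged sketch (not part of the diff; the file path is a placeholder) that writes a zstd-compressed JSONL archive and streams it back:

# Hypothetical usage sketch for the archiver helpers above.
from lm_eval.decontamination.archiver import Archive, Reader

archive = Archive("data/example.jsonl.zst")           # placeholder path
archive.add_data("first document", meta={"source": "demo"})
archive.add_data("second document")
archive.commit()                                      # flush the zstd frame and close the file

reader = Reader()
for text, meta in reader.read("data/example.jsonl.zst", get_meta=True):
    print(text, meta)                                 # ("first document", {"source": "demo"}), ...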
lm-evaluation-harness/lm_eval/decontamination/decontaminate.py
ADDED
@@ -0,0 +1,166 @@
import collections
import glob
import json
import os
import pickle
import random
import time

from .archiver import ZStdTextReader
from .janitor import Janitor, word_ngrams


# Was used for testing the evaluator decoupled from the full logic below
def get_train_overlap_stub(docs: dict, ngrams_path: str, ngrams_n_size: str):
    simulated_overlap = 0.1
    contaminated = int(len(docs) * simulated_overlap)
    return random.sample(range(len(docs)), contaminated)


# Returns a dictionary containing all overlapping documents in each
# task. In the standard use case, an overlap occurs when any of the 13-grams
# found in the task document exist in the training set documents.
#
# To generate 13-grams for the pile see scripts/clean_training_data. The final output of these
# scripts are an info.json file containing the n_gram_size (13) and a bunch of "ngrams_{x}.bkt.txt.sorted.zst"
# files. These should exist in the "ngrams_path" provided to this function.


# Algorithm:
# 1. Build lookups for each dataset {ngram: list(document_ids)}
# 2. Merge into an overall lookup {ngram: [(task_name, task_set, doc_ids),]}
# 3. Full scan the 13-grams from the training set against the merged lookup,
#    saving matches in the "duplicates" dictionary {(task_name, task_set): set(doc_ids)}
# 4. Strip the task_set from the dictionary keys and return
#
# We cache the task+set lookups as well as the overlaps.
def get_train_overlap(docs_by_task_set: dict, ngrams_path: str, limit: int) -> dict:
    # return get_train_overlap_stub(docs, ngrams_path, ngrams_n_size)

    info_dict_path = os.path.join(ngrams_path, "info.json")
    info_dict = json.load(open(info_dict_path, "r", encoding="utf-8"))
    ngrams_n_size = info_dict["ngram_size"]

    janitor = Janitor()

    # Build lookup for each dataset first in case we use different task combinations later
    print("Building Lookups...")
    start = time.perf_counter()

    def get_overlaps_dump_path(task_name, task_set, ngrams_n_size, limit) -> str:
        return f"data/{task_name}/{task_set}_{ngrams_n_size}grams_limit{limit}.overlaps"

    lookups = {}
    duplicates = {} # (task_name, task_set): set(doc_ids)}
    sets_to_decontaminate = len(docs_by_task_set.keys())

    for (task_name, task_set), docs in docs_by_task_set.items():
        if not os.path.exists(f"data/{task_name}"):
            os.mkdir(f"data/{task_name}")

        # Check if we've decontaminated this combination before
        overlaps_dump_path = get_overlaps_dump_path(
            task_name, task_set, ngrams_n_size, limit
        )
        if os.path.exists(overlaps_dump_path):
            duplicates[(task_name, task_set)] = pickle.load(
                open(overlaps_dump_path, "rb")
            )
            sets_to_decontaminate -= 1
            continue
        else:
            duplicates[(task_name, task_set)] = set()

        # Build/load the task lookup {ngram: set(documents)}.
        task_set_lookup_path = (
            f"data/{task_name}/{task_set}_{ngrams_n_size}grams_limit{limit}.lookup"
        )
        if os.path.exists(task_set_lookup_path):
            print(f"{task_set_lookup_path} available, loading...")
            lookups[(task_name, task_set)] = pickle.load(
                open(task_set_lookup_path, "rb")
            )
        else:
            print(f"{task_set_lookup_path} not available, building...")
            lookup = collections.defaultdict(set)

            for doc_id, document in enumerate(docs):
                ngrams = word_ngrams(janitor.normalize_string(document), ngrams_n_size)
                for ngram in ngrams:
                    lookup[ngram].add(doc_id)

            pickle.dump(lookup, open(task_set_lookup_path, "wb"))
            lookups[(task_name, task_set)] = lookup

    elapsed = time.perf_counter() - start
    print(f"Building lookups took {elapsed:0.5f} seconds.")

    matched_ngrams = []

    if sets_to_decontaminate > 0:
        print("Merging lookups...")
        start = time.perf_counter()
        merged_lookup = collections.defaultdict(list)
        for (task_name, task_set), lookup in lookups.items():
            for ngram, doc_ids in lookup.items():
                merged_lookup[ngram].append((task_name, task_set, doc_ids))

        elapsed = time.perf_counter() - start
        print(f"Merging lookups took {elapsed:0.5f} seconds.")

        print(f"{ngrams_n_size} grams files found in {ngrams_path}:")
        files = glob.glob(os.path.join(ngrams_path, "*.sorted.zst"))
        print(files)

        for file in files:
            start = time.perf_counter()
            print(f"Scanning {file}")
            reader = ZStdTextReader(file)
            total_ngrams = 0
            unique_ngrams = 0
            matching_unique = 0
            non_matching_unique = 0

            current_ngram = ""
            for line in reader.read_tqdm():  # Scan training set ngrams file
                total_ngrams += 1
                [ngram, document_id] = line.rsplit(" ", 1)
                if (
                    ngram != current_ngram
                ):  # Only need to match the ngram once in training set
                    unique_ngrams += 1
                    current_ngram = ngram
                    if ngram in merged_lookup:
                        matched_ngrams.append(ngram)  # For logging
                        matching_unique += 1
                        for task_name, task_set, doc_ids in merged_lookup[ngram]:
                            task_doc_set = duplicates[(task_name, task_set)]
                            for doc_id in doc_ids:  # Record contamination across all relevant task/set combos
                                task_doc_set.add(doc_id)
                        del merged_lookup[ngram]  # No point matching again
                    else:
                        non_matching_unique += 1

            print(f"Total Ngrams: {total_ngrams}")
            print(f"Unique Ngrams: {unique_ngrams}")
            print(f"Unique Matching: {matching_unique}")
            print(f"Unique Non Matching: {non_matching_unique}")
            print("Matched ngrams:")
            for ngram in matched_ngrams:
                print(ngram)

            elapsed = time.perf_counter() - start
            print(f"Read took {elapsed:0.5f} seconds.")
            print(f"Speed: {(os.path.getsize(file)/1000000.0)/elapsed}MB/second")

        print(duplicates)

        # Dump overlaps separately
        for (task_name, task_set), doc_ids in duplicates.items():
            overlaps_dump_path = get_overlaps_dump_path(
                task_name, task_set, ngrams_n_size, limit
            )
            pickle.dump(doc_ids, open(overlaps_dump_path, "wb"))

    # Strip task set and return
    return {task_name: doc_ids for (task_name, task_set), doc_ids in duplicates.items()}
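The shapes that `get_train_overlap` expects and returns can be summarised with a small hedged sketch (task names and ids below are invented placeholders, not results from any run):

# docs_by_task_set maps (task_name, task_set) pairs to lists of document strings.
docs_by_task_set = {
    ("hellaswag", "test"): ["first eval document ...", "second eval document ..."],
    ("hellaswag", "val"): ["another eval document ..."],
}
# overlaps = get_train_overlap(docs_by_task_set, "path/to/ngrams", limit=1000)
# The return value drops the task_set from the key, e.g. {"hellaswag": {0, 7, 42}},
# where the set holds the doc ids whose 13-grams were seen in the training-set ngram files.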
lm-evaluation-harness/lm_eval/decontamination/janitor.py
ADDED
@@ -0,0 +1,328 @@
import pickle
import re
import string
import traceback
from typing import Iterator, List, Sequence, Tuple, TypeVar


# This is a cpp module. Compile janitor_util.cpp with:
# c++ -O3 -Wall -shared -std=c++11 -fPIC $(python3 -m pybind11 --includes) janitor_util.cpp -o janitor_util$(python3-config --extension-suffix) -undefined dynamic_lookup
try:
    import janitor_util

    JANITOR_CPP = True
except Exception:
    print("WARNING: C++ module could not be loaded. Janitor running in python mode")
    traceback.print_exc()
    JANITOR_CPP = False

T = TypeVar("T")


# Implementation from nltk source
# https://www.nltk.org/_modules/nltk/util.html
def form_ngrams(sequence: Iterator[T], n: int) -> Iterator[Tuple[T, ...]]:
    history = []
    while n > 1:
        # PEP 479, prevent RuntimeError from being raised when StopIteration bubbles out of generator
        try:
            next_item = next(sequence)
        except StopIteration:
            # no more data, terminate the generator
            return
        history.append(next_item)
        n -= 1
    for item in sequence:
        history.append(item)
        yield tuple(history)
        del history[0]


def word_ngrams(s: str, n: int) -> Iterator[str]:
    """Splits a string into ngram words"""
    tokens = s.split()  # not a generator :(
    ngram_seqs = form_ngrams(iter(tokens), n)
    return (" ".join(ngram) for ngram in ngram_seqs)


# Does character sequences only - combined faster function to play around with later
# def word_ngrams_indices_combined(sequence, n):
#     current_word = ""
#     history = []
#     gap = False;
#     start = 0
#     end = 0
#     for character in sequence:
#         if character == " ":
#             if not gap:
#                 gap = True
#                 history.append(current_word)
#                 end += len(current_word) - 1
#                 current_word = ""
#                 if len(history) == n:
#                     yield (tuple(history), start, end)
#                     del history[0]
#                     start = end + 1
#                     end = start
#         else:
#             gap = False
#             current_word += character


# https://stackoverflow.com/questions/13734451/string-split-with-indices-in-python
def split_indices(s: str) -> Iterator[Tuple[str, Tuple[int, int]]]:
    """Splits a string on whitespaces and records the indices of each in the original string.
    @:return generator((word, (start_idx, end_idx)), ...)
    """
    return ((m.group(0), (m.start(), m.end() - 1)) for m in re.finditer(r"\S+", s))


def word_ngrams_indices(s: str, n: int) -> Iterator[Tuple[str, Tuple[int, int]]]:
    """Splits a string into pairs of (ngram words, their start/end indices)"""
    tokens_with_indices = split_indices(s)

    # Generator of ngrams of (word, idx_pairs)
    # (
    #   [(word, (start,end)), (word, (start, end))...],
    #   [(word, (start, end)), ...],
    #   ...
    # )
    ngram_seqs_with_indices = form_ngrams(tokens_with_indices, n)

    # Generator of pairs of word and index ngrams
    # (
    #   ([word, word, ...], [(start,end), (start,end), ...]),
    #   ...
    # )
    ngram_indices_pairs = (
        zip(*ngram_with_indices) for ngram_with_indices in ngram_seqs_with_indices
    )

    # Generator of ( (word_ngram, (start, end)), (word_ngram, start, end)), ...)
    return (
        (" ".join(ngram_seq), (indices[0][0], indices[-1][1]))
        for ngram_seq, indices in ngram_indices_pairs
    )


class Janitor:
    # FIXME delete_chars: Should anything else go here? Special chars?
    def __init__(
        self,
        ngram_n: int = 13,
        window_to_remove: int = 200,
        too_dirty_cutoff: int = 10,
        minimum_slice_length: int = 200,
        delete_chars: str = string.punctuation,
    ) -> None:
        self.ngram_n = ngram_n
        self.window_to_remove = window_to_remove
        self.too_dirty_cutoff = too_dirty_cutoff
        self.minimum_slice_length = minimum_slice_length
        self.delete_chars = delete_chars

        self.dirt_ngrams = set()

        # If in python, we'll translate uppercase to lowercase and delete naughty characters.
        # This is fast by python standards
        # https://stackoverflow.com/questions/638893/what-is-the-most-efficient-way-in-python-to-convert-a-string-to-all-lowercase-st
        self.translation_table = str.maketrans(
            string.ascii_lowercase + string.ascii_uppercase,  # These characters
            string.ascii_lowercase * 2,  # Become these characters
            self.delete_chars,  # These are deleted
        )

    ##############
    # I/O for saving contamination ngrams
    ##############

    def save_contamination_ngrams(self, filename: str) -> None:
        with open(filename, "wb") as fp:
            pickle.dump(filename, fp)

    def load_contamination_ngrams(self, filename: str) -> None:
        with open(filename, "rb") as fp:
            self.dirt_ngrams = pickle.load(fp)

    ##############
    # Call these :)
    ##############

    def register_contaminant(self, dirt_string: str) -> None:
        """Register a string as contamination to be removed, e.g. a test set
        This breaks the dirt_string into ngrams to store for future cleaning"""
        if JANITOR_CPP:
            return self.register_contaminant_cpp(dirt_string)
        else:
            print("WARNING: Janitor running in python mode")
            return self.register_contaminant_python(dirt_string)

    def clean(self, dirty_string: str) -> List[str]:
        """Clean a string (e.g. a training set) by removing all ngrams previously
        registered as contaminants. Returns a list of clean chunks, or empty if
        the string was too dirty"""
        if JANITOR_CPP:
            return self.clean_cpp(dirty_string)
        else:
            print("WARNING: Janitor running in python mode")
            return self.clean_python(dirty_string)

    def _split_chunks(
        self, dirty_string: str, dirty_parts: Sequence[Tuple]
    ) -> List[str]:
        clean_chunks = []
        splice_idx = 0
        end = -1
        for i, (ngram, start, end) in enumerate(dirty_parts):
            if i >= self.too_dirty_cutoff:
                return []
            start = max(0, start - self.window_to_remove)
            end = min(len(dirty_string), end + self.window_to_remove)

            if start - splice_idx > self.minimum_slice_length:
                clean_chunks.append(dirty_string[splice_idx:start])
            splice_idx = end

        if end < len(dirty_string) - self.minimum_slice_length:
            clean_chunks.append(dirty_string[end + 1 :])

        return clean_chunks

    ##############
    # Fast C++
    ##############

    def register_contaminant_cpp(self, dirt_string) -> None:
        self.dirt_ngrams.update(
            janitor_util.clean_ngram(dirt_string, self.delete_chars, self.ngram_n)
        )

    def clean_cpp(self, dirty_string: str) -> List[str]:
        contamination_indices = janitor_util.clean_ngram_with_indices(
            dirty_string, self.delete_chars, self.ngram_n
        )
        return self._split_chunks(dirty_string, contamination_indices)

    ##############
    # Slow python
    ##############

    def normalize_string(self, s: str) -> str:
        return s.translate(self.translation_table)

    def register_contaminant_python(self, dirt_string: str) -> None:
        self.dirt_ngrams.update(
            word_ngrams(self.normalize_string(dirt_string), self.ngram_n)
        )

    def clean_python(self, dirty_string: str) -> List[str]:
        contamination_indices = (
            (None, *idx_pair)
            for dirty_ngram, idx_pair in word_ngrams_indices(dirty_string, self.ngram_n)
            if self.normalize_string(dirty_ngram) in self.dirt_ngrams
        )
        return self._split_chunks(dirty_string, contamination_indices)


##################################################################
# Tests
#################################################################

# def print_cpp():
#     source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2

#     for i in range(1, 10, 2):
#         pprint(janitor_util.clean_ngram(source, string.punctuation, i))
#         for ngram, start, end in \
#             janitor_util.clean_ngram_with_indices(source, string.punctuation, i):
#             print(ngram, "\t", start, end, source[start:end].replace("\n", "\\n"))


# def test_cpp():
#     source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2
#     contaminant = "dirty boy. Clean he he"

#     jan_python = Janitor()
#     jan_cpp = Janitor()

#     jan_python.register_contaminant_python(contaminant)
#     jan_cpp.register_contaminant(contaminant)

#     assert jan_python.dirt_ngrams == jan_cpp.dirt_ngrams, (jan_python.dirt_ngrams, jan_cpp.dirt_ngrams)

#     assert jan_python.clean_python(source) == jan_cpp.clean(source), \
#         (jan_python.clean_python(source), jan_cpp.clean(source))

#     print("Passed test, python==cpp")


# def benchmark():
#     # Download and put in data folder: enwik8 (100 MB) from https://cs.fit.edu/~mmahoney/compression/textdata.html
#     setup = \
#     """
#     with open("data/enwik8", "r") as f:
#         data = f.read()
#     jan = Janitor(too_dirty_cutoff=1000)
#     jan.register_contaminant('''
#     theories is that there is a connection between "geekdom" and autism.
#     This is hinted, for instance, by a ''Wired Magazine'' article in 2001 entitled "
#     The [[Geek]] Syndrome", which is a point argued by many in the autism rights
#     movement{{ref|Wired}}. This article, many professionals assert, is just one example of
#     the media's application of mental disease labels to what is actually variant normal behavior
#     &mdash;they argue that shyness, lack of athletic ability or social skills, and intellectual
#     interests, even when they seem unusual to others, are not in themselves signs of autism or
#     Asperger's syndrome. Others assert that it is actually the medical profession which is applying
#     mental disease labels to children who in the past would have simply been accepted as a little
#     different or even labeled 'gifted'. See [[clinomorphism]] for further discussion of this issue.
#     Due to the recent publicity surrounding autism and autis
#     ultan Al Nahyan]] granted [[Petroleum]] concessions, and oil was first found in 1958. At first,
#     oil money had a marginal impact. A few lowrise concete buildings were erected, and the first
#     paved road was completed in 1961, but Sheikh Shakbut, uncertain whether the new oil royalties
#     would last, took a cautious approach, preferring to save the revenue rather than investing it in
#     development. His brother, [[Zayed bin Sultan Al Nahayan]], saw that oil wealth had the potential
#     to transform Abu Dhabi. The ruling Al Nahayan family decided that Sheikh Zayed should replace his
#     brother as Ruler and carry out his vision of developing the country. On [[August 6]], [[1966]],
#     with the assistance of the British, Sheikh Zayed became the new ruler. See generally, Al-Fahim, M,
#     ''From Rags to Riches: A Story of Abu Dhabi'', Chapter Six (London Centre of Arab Studies, 1995),
#     ISBN 1 900404 00 1. With the announcement by Britain in 1968 that it would withdraw from the
#     Gulf area by 1971, Sheikh Zayed became the main driving force behind the formation of the
#     [[United Arab Emirates]]. After the Emirates gained independence in 1971,
#     ''')
#     """

#     n = 1
#     print(f"Timing {n} run on 100 MB")
#     print("Register contaminant")
#     # print("\tPython", timeit.timeit("jan.register_contaminant_python(data)", setup=setup, globals=globals(), number=n))
#     print("\tCpp", timeit.timeit("jan.register_contaminant(data)", setup=setup, globals=globals(), number=n))

#     print("Clean")
#     # print("\tPython", timeit.timeit("jan.clean_python(data)", setup=setup, globals=globals(), number=n))
#     print("\tCpp", timeit.timeit("jan.clean(data)", setup=setup, globals=globals(), number=n))


# def test_janitor_general():
#     source = """ ,, I'm a very !dirty,, ,, dirty boy. Clean me daddy. \n\nhe he he hehe heh. lastword """ * 2
#     contaminant = "dirty boy. Clean he he"

#     jan = Janitor(ngram_n=3)
#     jan.register_contaminant(contaminant)
#     cleaned = " ".join(jan.clean(source))
#     for contam in jan.dirt_ngrams:
#         assert contam not in cleaned, contam

#     filename = "data/saved_contam"
#     jan.save_contamination_ngrams(filename)

#     jan = Janitor(ngram_n=3)
#     jan.load_contamination_ngrams(filename)
#     cleaned = " ".join(jan.clean(source))
#     for contam in jan.dirt_ngrams:
#         assert contam not in cleaned, contam


# if __name__ == "__main__":
#     test()
#     # print_cpp()
#     # test_cpp()
#     # benchmark()
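As a brief usage sketch for the Janitor class above (not part of the diff; it assumes the pure-Python fallback when the janitor_util extension is not compiled, and the strings are placeholders):

# Hypothetical usage sketch; falls back to the slow Python path without janitor_util.
from lm_eval.decontamination.janitor import Janitor

jan = Janitor(ngram_n=3)  # 13 is the default used for real decontamination runs
jan.register_contaminant("the quick brown fox jumps over the lazy dog")  # e.g. a test-set string
chunks = jan.clean("some long training text ... the quick brown fox jumps over the lazy dog ... more text")
# `chunks` holds the clean slices left after removing a window around each matched ngram,
# or an empty list if the text was too dirty / too short to yield a slice of minimum length.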
lm-evaluation-harness/lm_eval/evaluator.py
ADDED
@@ -0,0 +1,584 @@
import itertools
import logging
import random
import time
from collections import defaultdict
from typing import TYPE_CHECKING, List, Optional, Union

import numpy as np
import torch

import lm_eval.api.metrics
import lm_eval.api.registry
import lm_eval.models
from lm_eval.caching.cache import delete_cache
from lm_eval.evaluator_utils import (
    consolidate_results,
    get_sample_size,
    get_task_list,
    prepare_print_tasks,
    print_writeout,
    run_task_tests,
)
from lm_eval.logging_utils import add_env_info, get_git_commit_hash
from lm_eval.tasks import TaskManager, get_task_dict
from lm_eval.utils import eval_logger, positional_deprecated, simple_parse_args_string


if TYPE_CHECKING:
    from lm_eval.api.model import LM
    from lm_eval.tasks import Task


@positional_deprecated
def simple_evaluate(
    model,
    model_args: Optional[Union[str, dict]] = None,
    tasks: Optional[List[Union[str, dict, object]]] = None,
    num_fewshot: Optional[int] = None,
    batch_size: Optional[int] = None,
    max_batch_size: Optional[int] = None,
    device: Optional[str] = None,
    use_cache: Optional[str] = None,
    cache_requests: bool = False,
    rewrite_requests_cache: bool = False,
    delete_requests_cache: bool = False,
    limit: Optional[Union[int, float]] = None,
    bootstrap_iters: int = 100000,
    check_integrity: bool = False,
    write_out: bool = False,
    log_samples: bool = True,
    gen_kwargs: Optional[str] = None,
    task_manager: Optional[TaskManager] = None,
    verbosity: str = "INFO",
    predict_only: bool = False,
    random_seed: int = 0,
    numpy_random_seed: int = 1234,
    torch_random_seed: int = 1234,
):
    """Instantiate and evaluate a model on a list of tasks.

    :param model: Union[str, LM]
        Name of model or LM object, see lm_eval.models.get_model
    :param model_args: Optional[str, dict]
        String or dict arguments for each model class, see LM.create_from_arg_string and LM.create_from_arg_object.
        Ignored if `model` argument is a LM object.
    :param tasks: list[Union[str, dict, Task]]
        List of task names or Task objects. Task objects will be taken to have name task.EVAL_HARNESS_NAME if defined and type(task).__name__ otherwise.
    :param num_fewshot: int
        Number of examples in few-shot context
    :param batch_size: int or str, optional
        Batch size for model
    :param max_batch_size: int, optional
        Maximal batch size to try with automatic batch size detection
    :param device: str, optional
        PyTorch device (e.g. "cpu" or "cuda:0") for running models
    :param use_cache: str, optional
        A path to a sqlite db file for caching model responses. `None` if not caching.
    :param cache_requests: bool, optional
        Speed up evaluation by caching the building of dataset requests. `None` if not caching.
    :param rewrite_requests_cache: bool, optional
        Rewrites all of the request cache if set to `True`. `None` if not desired.
    :param delete_requests_cache: bool, optional
        Deletes all of the request cache if set to `True`. `None` if not desired.
    :param limit: int or float, optional
        Limit the number of examples per task (only use this for testing), If <1, limit is a percentage of the total number of examples.
    :param bootstrap_iters:
        Number of iterations for bootstrap statistics
    :param check_integrity: bool
        Whether to run the relevant part of the test suite for the tasks
    :param write_out: bool
        If True, write out an example document and model input for checking task integrity
    :param log_samples: bool
        If True, write out all model outputs and documents for per-sample measurement and post-hoc analysis
    :param gen_kwargs: str
        String arguments for model generation
        Ignored for all tasks with loglikelihood output_type
    :param predict_only: bool
        If true only model outputs will be generated and returned. Metrics will not be evaluated
    :param random_seed: int
        Random seed for python's random module. If set to None, the seed will not be set.
    :param numpy_random_seed: int
        Random seed for numpy. If set to None, the seed will not be set.
    :param torch_random_seed: int
        Random seed for torch. If set to None, the seed will not be set.

    :return
        Dictionary of results
    """
    eval_logger.setLevel(getattr(logging, f"{verbosity}"))
    start_date = time.time()

    if delete_requests_cache:
        eval_logger.info("Deleting requests cache...")
        delete_cache()

    seed_message = []
    if random_seed is not None:
        # See https://github.com/EleutherAI/lm-evaluation-harness/pull/1412
        seed_message.append(f"Setting random seed to {random_seed}")
        random.seed(random_seed)

    if numpy_random_seed is not None:
        seed_message.append(f"Setting numpy seed to {numpy_random_seed}")
        np.random.seed(numpy_random_seed)

    if torch_random_seed is not None:
        seed_message.append(f"Setting torch manual seed to {torch_random_seed}")
        torch.manual_seed(torch_random_seed)

    if seed_message:
        eval_logger.info(" | ".join(seed_message))

    if tasks is None:
        tasks = []
    if len(tasks) == 0:
        raise ValueError(
            "No tasks specified, or no tasks found. Please verify the task names."
        )

    if gen_kwargs is not None:
        gen_kwargs = simple_parse_args_string(gen_kwargs)
        eval_logger.warning(
            "generation_kwargs specified through cli, these settings will update set parameters in yaml tasks. "
            "Ensure 'do_sample=True' for non-greedy decoding!"
        )
        if gen_kwargs == "":
            gen_kwargs = None

    if isinstance(model, str):
        if model_args is None:
            eval_logger.warning("model_args not specified. Using defaults.")
            model_args = ""
        if "pretrained" not in model_args and model in [
            "hf-auto",
            "hf",
            "huggingface",
            "vllm",
        ]:
            eval_logger.warning(
                "pretrained not specified. Using default pretrained=gpt2."
            )

        if isinstance(model_args, dict):
            eval_logger.info(
                f"Initializing {model} model, with arguments: {model_args}"
            )
            lm = lm_eval.api.registry.get_model(model).create_from_arg_obj(
                model_args,
                {
                    "batch_size": batch_size,
                    "max_batch_size": max_batch_size,
                    "device": device,
                },
            )

        else:
            eval_logger.info(
                f"Initializing {model} model, with arguments: {simple_parse_args_string(model_args)}"
            )
            lm = lm_eval.api.registry.get_model(model).create_from_arg_string(
                model_args,
                {
                    "batch_size": batch_size,
                    "max_batch_size": max_batch_size,
                    "device": device,
                },
            )
    else:
        if not isinstance(model, lm_eval.api.model.LM):
            raise TypeError
        eval_logger.info("Using pre-initialized model")
        lm = model

    if use_cache is not None:
        eval_logger.info(f"Using cache at {use_cache + '_rank' + str(lm.rank) + '.db'}")
        lm = lm_eval.api.model.CachingLM(
            lm,
            use_cache
            # each rank receives a different cache db.
            # necessary to avoid multiple writes to cache at once
            + "_rank"
            + str(lm.rank)
            + ".db",
        )

    if task_manager is None:
        task_manager = TaskManager(verbosity)

    task_dict = get_task_dict(tasks, task_manager)
    for task_name in task_dict.keys():
        task_obj = task_dict[task_name]
        if isinstance(task_obj, tuple):
            _, task_obj = task_obj
            if task_obj is None:
                continue

        if task_obj.get_config("output_type") == "generate_until":
            if gen_kwargs is not None:
                task_obj.set_config(
                    key="generation_kwargs", value=gen_kwargs, update=True
                )

        if predict_only:
            log_samples = True
            eval_logger.info(
                f"Processing {task_name} in output-only mode. Metrics will not be calculated!"
            )
            # we have to change the class properties post-hoc. This is pretty hacky.
            task_obj.override_metric(metric_name="bypass")

        # override tasks' fewshot values to the provided num_fewshot arg value
        # except if tasks have it set to 0 manually in their configs--then we should never overwrite that
        if num_fewshot is not None:
            if (default_num_fewshot := task_obj.get_config("num_fewshot")) == 0:
                eval_logger.info(
                    f"num_fewshot has been set to 0 for {task_name} in its config. Manual configuration will be ignored."
                )
            else:
                eval_logger.warning(
                    f"Overwriting default num_fewshot of {task_name} from {default_num_fewshot} to {num_fewshot}"
                )
                task_obj.set_config(key="num_fewshot", value=num_fewshot)
        else:
            # if num_fewshot not provided, and the task does not define a default one, default to 0
            if (default_num_fewshot := task_obj.get_config("num_fewshot")) is None:
                task_obj.set_config(key="num_fewshot", value=0)

    if check_integrity:
        run_task_tests(task_list=tasks)

    results = evaluate(
        lm=lm,
        task_dict=task_dict,
        limit=limit,
        cache_requests=cache_requests,
        rewrite_requests_cache=rewrite_requests_cache,
        bootstrap_iters=bootstrap_iters,
        write_out=write_out,
        log_samples=log_samples,
        verbosity=verbosity,
    )

    if lm.rank == 0:
        if isinstance(model, str):
            model_name = model
        elif hasattr(model, "config") and hasattr(model.config, "_name_or_path"):
            model_name = model.config._name_or_path
        else:
            model_name = type(model).__name__

        # add info about the model and few shot config
        results["config"] = {
            "model": model_name,
            "model_args": model_args,
            "batch_size": batch_size,
            "batch_sizes": (
                list(lm.batch_sizes.values()) if hasattr(lm, "batch_sizes") else []
            ),
            "device": device,
            "use_cache": use_cache,
            "limit": limit,
            "bootstrap_iters": bootstrap_iters,
            "gen_kwargs": gen_kwargs,
        }
        results["git_hash"] = get_git_commit_hash()
        results["date"] = start_date
        add_env_info(results)  # additional environment info to results
        return results
    else:
        return None


@positional_deprecated
def evaluate(
    lm: "LM",
    task_dict,
    limit: Optional[int] = None,
    cache_requests: bool = False,
    rewrite_requests_cache: bool = False,
    bootstrap_iters: Optional[int] = 100000,
    write_out: bool = False,
    log_samples: bool = True,
    verbosity: str = "INFO",
):
    """Instantiate and evaluate a model on a list of tasks.

    :param lm: obj
        Language Model
    :param task_dict: dict[str, Task]
        Dictionary of tasks. Tasks will be taken to have name type(task).config.task .
    :param limit: int, optional
        Limit the number of examples per task (only use this for testing)
    :param bootstrap_iters:
        Number of iterations for bootstrap statistics
    :param write_out: bool
        If True, write out an example document and model input for checking task integrity
    :param log_samples: bool
        If True, write out all model outputs and documents for per-sample measurement and post-hoc analysis
    :return
        Dictionary of results
    """

    eval_logger.setLevel(getattr(logging, f"{verbosity}"))

    # tracks all Instances/requests a model must generate output on.
    requests = defaultdict(list)
    # stores the amount to pad out reqs per req. type so that
    # number of fwd passes per distributed rank is equal
    padding_requests = defaultdict(int)

    # get lists of group hierarchy and each type of request
    task_hierarchy, eval_tasks = get_task_list(task_dict)
    if not log_samples:
        if not all(
            "bypass" not in getattr(task_output.task, "_metric_fn_list", {}).keys()
            for task_output in eval_tasks
        ):
            raise ValueError("log_samples must be True for 'bypass' metric-only tasks")
    for task_output in eval_tasks:
        task: Task = task_output.task
        limit = get_sample_size(task, limit)
        task.build_all_requests(
            limit=limit,
            rank=lm.rank,
            world_size=lm.world_size,
            cache_requests=cache_requests,
            rewrite_requests_cache=rewrite_requests_cache,
        )
        eval_logger.debug(
            f"Task: {task_output.task_name}; number of requests on this rank: {len(task.instances)}"
        )

        if write_out:
            print_writeout(task)
        # aggregate Instances by LM method requested to get output.
        for instance in task.instances:
            reqtype = instance.request_type
            requests[reqtype].append(instance)

        if lm.world_size > 1:
            instances_rnk = torch.tensor(len(task._instances), device=lm.device)
            gathered_item = (
                lm.accelerator.gather(instances_rnk).cpu().detach().numpy().tolist()
            )
            # "multiple_choice" task types dispatch (several) "loglikelihood" request types
            reqtype = (
                "loglikelihood"
                if task.OUTPUT_TYPE == "multiple_choice"
                else task.OUTPUT_TYPE
            )
            # compute number of pseudo-batches to pad with (FSDP/DDP require even batches among ranks)
            numpad = max(gathered_item) - gathered_item[lm.rank]
            # todo: may not account for padding in cases like SquadV2 which has multiple req types
            padding_requests[reqtype] += numpad

    ### Run LM on inputs, get all outputs ###
    # execute each type of request
    for reqtype, reqs in requests.items():
        eval_logger.info(f"Running {reqtype} requests")
        # create `K` copies of each request `req` based off `K = req.repeats`
        cloned_reqs = []
        for req in reqs:
            cloned_reqs.extend([req] * req.repeats)

        if (lm.world_size > 1) and (padding_requests[reqtype] > 0):
            for _ in range(padding_requests[reqtype]):
                cloned_reqs.extend([req] * req.repeats)

        # run requests through model
        resps = getattr(lm, reqtype)(cloned_reqs)

        # put responses from model into a list of length K for each request.
        for x, req in zip(resps, cloned_reqs):
            req.resps.append(x)

        if lm.world_size > 1:
            lm.accelerator.wait_for_everyone()

    RANK = lm.rank
    WORLD_SIZE = lm.world_size
    ### Postprocess outputs ###
    # TODO: del model here, maybe (idea: allow user to specify device of e.g. reward model separately)
    for task_output in eval_tasks:
        task = task_output.task
        task.apply_filters()

    ### Collect values of metrics on all datapoints ###
    # # unpack results and sort back in order and return control to Task
|
409 |
+
# TODO: make it possible to use a different metric per filter
|
410 |
+
# Pre-process task.instances to group by doc_id
|
411 |
+
instances_by_doc_id = defaultdict(list)
|
412 |
+
for instance in task.instances:
|
413 |
+
instances_by_doc_id[instance.doc_id].append(instance)
|
414 |
+
# Sort instances within each group
|
415 |
+
for instances in instances_by_doc_id.values():
|
416 |
+
instances.sort(key=lambda x: x.idx)
|
417 |
+
# iterate over different filters used
|
418 |
+
for filter_key in task.instances[0].filtered_resps.keys():
|
419 |
+
doc_iterator = task.doc_iterator(
|
420 |
+
rank=RANK, limit=limit, world_size=WORLD_SIZE
|
421 |
+
)
|
422 |
+
for doc_id, doc in doc_iterator:
|
423 |
+
requests = instances_by_doc_id[doc_id]
|
424 |
+
metrics = task.process_results(
|
425 |
+
doc, [req.filtered_resps[filter_key] for req in requests]
|
426 |
+
)
|
427 |
+
if log_samples:
|
428 |
+
target = task.doc_to_target(doc)
|
429 |
+
example = {
|
430 |
+
"doc_id": doc_id,
|
431 |
+
"doc": doc,
|
432 |
+
"target": target,
|
433 |
+
"arguments": [req.args for req in requests],
|
434 |
+
"resps": [req.resps for req in requests],
|
435 |
+
"filtered_resps": [
|
436 |
+
req.filtered_resps[filter_key] for req in requests
|
437 |
+
],
|
438 |
+
}
|
439 |
+
example.update(metrics)
|
440 |
+
task_output.logged_samples.append(example)
|
441 |
+
for metric, value in metrics.items():
|
442 |
+
task_output.sample_metrics[(metric, filter_key)].append(value)
|
443 |
+
|
444 |
+
if WORLD_SIZE > 1:
|
445 |
+
# if multigpu, then gather data across all ranks to rank 0
|
446 |
+
# first gather logged samples across all ranks
|
447 |
+
for task_output in eval_tasks:
|
448 |
+
if log_samples:
|
449 |
+
# for task_name, task_samples in list(samples.items()):
|
450 |
+
full_samples = [None] * WORLD_SIZE
|
451 |
+
torch.distributed.all_gather_object(
|
452 |
+
obj=task_output.logged_samples,
|
453 |
+
object_list=full_samples,
|
454 |
+
)
|
455 |
+
|
456 |
+
|
457 |
+
if RANK == 0:
|
458 |
+
task_output.logged_samples = list(
|
459 |
+
itertools.chain.from_iterable(full_samples)
|
460 |
+
)
|
461 |
+
|
462 |
+
# then collect metrics across all ranks
|
463 |
+
for metrics in task_output.sample_metrics:
|
464 |
+
metric_list = [None] * WORLD_SIZE
|
465 |
+
torch.distributed.all_gather_object(
|
466 |
+
obj=task_output.sample_metrics[metrics],
|
467 |
+
object_list=metric_list,
|
468 |
+
)
|
469 |
+
if RANK == 0:
|
470 |
+
task_output.sample_metrics[metrics] = list(
|
471 |
+
itertools.chain.from_iterable(metric_list)
|
472 |
+
)
|
473 |
+
|
474 |
+
if RANK == 0:
|
475 |
+
### Aggregate results over all datapoints ###
|
476 |
+
# aggregate results ; run bootstrap CIs
|
477 |
+
for task_output in eval_tasks:
|
478 |
+
task_output.calculate_aggregate_metric(bootstrap_iters=bootstrap_iters)
|
479 |
+
results, samples, configs, versions, num_fewshot = consolidate_results(
|
480 |
+
eval_tasks
|
481 |
+
)
|
482 |
+
|
483 |
+
### Calculate group metrics ###
|
484 |
+
if bool(results):
|
485 |
+
for group, task_list in reversed(task_hierarchy.items()):
|
486 |
+
if len(task_list) == 0:
|
487 |
+
# task_hierarchy entries are either
|
488 |
+
# `group_name: [subtask1, subtask2, ...]`
|
489 |
+
# or `task_name: []`.
|
490 |
+
# we only want to operate on groups here.
|
491 |
+
continue
|
492 |
+
metric_list = list(
|
493 |
+
{
|
494 |
+
key
|
495 |
+
for task in task_list
|
496 |
+
for key in results[task].keys()
|
497 |
+
if "_stderr" not in key and key not in ["alias", "samples"]
|
498 |
+
}
|
499 |
+
)
|
500 |
+
for metric in metric_list:
|
501 |
+
stderr = "_stderr,".join(metric.split(","))
|
502 |
+
|
503 |
+
# gather metrics, sizes, and stderrs from subtasks
|
504 |
+
metrics = [
|
505 |
+
results[task][metric]
|
506 |
+
for task in task_list
|
507 |
+
if metric in results[task]
|
508 |
+
] # TODO: copy?
|
509 |
+
stderrs = [
|
510 |
+
results[task][stderr]
|
511 |
+
for task in task_list
|
512 |
+
if stderr in results[task]
|
513 |
+
]
|
514 |
+
sizes = [
|
515 |
+
results[task]["samples"]
|
516 |
+
for task in task_list
|
517 |
+
if metric in results[task]
|
518 |
+
]
|
519 |
+
|
520 |
+
# compute group's pooled metric and stderr
|
521 |
+
results[group][
|
522 |
+
metric
|
523 |
+
] = lm_eval.api.metrics.aggregate_subtask_metrics(metrics, sizes)
|
524 |
+
# TODO: calculate grouped metric using aggregation fn
|
525 |
+
if "N/A" in stderrs:
|
526 |
+
results[group][stderr] = "N/A"
|
527 |
+
else:
|
528 |
+
results[group][
|
529 |
+
stderr
|
530 |
+
] = lm_eval.api.metrics.pooled_sample_stderr(stderrs, sizes)
|
531 |
+
# TODO: allow GroupConfigs to choose which variance formula is used, for back-compatibility
|
532 |
+
# To use the old (likely incorrect) variance formula, comment out the above and uncomment this line:
|
533 |
+
# results[group][stderr] = lm_eval.api.metrics.combined_sample_stderr(stderrs, sizes, metrics=metrics)
|
534 |
+
|
535 |
+
results[group]["samples"] = sum(sizes)
|
536 |
+
|
537 |
+
results_agg = defaultdict(dict)
|
538 |
+
groups_agg = defaultdict(dict)
|
539 |
+
all_tasks_list = list(task_hierarchy.keys())
|
540 |
+
while True:
|
541 |
+
add_tasks_list = list(k for k in results_agg.keys())
|
542 |
+
left_tasks_list = sorted(list(set(all_tasks_list) - set(add_tasks_list)))
|
543 |
+
if len(left_tasks_list) == 0:
|
544 |
+
break
|
545 |
+
|
546 |
+
_task_hierarchy = {
|
547 |
+
k: v for k, v in task_hierarchy.items() if k in left_tasks_list
|
548 |
+
}
|
549 |
+
_results_agg, _groups_agg = prepare_print_tasks(_task_hierarchy, results)
|
550 |
+
|
551 |
+
results_agg = {**results_agg, **_results_agg}
|
552 |
+
groups_agg = {**groups_agg, **_groups_agg}
|
553 |
+
|
554 |
+
for group_name, task_list in task_hierarchy.items():
|
555 |
+
if task_list:
|
556 |
+
num_fewshot[group_name] = num_fewshot[
|
557 |
+
task_list[0]
|
558 |
+
] # TODO: validate this
|
559 |
+
|
560 |
+
results_dict = {
|
561 |
+
"results": dict(results_agg.items()),
|
562 |
+
**({"groups": dict(groups_agg.items())} if bool(groups_agg) else {}),
|
563 |
+
"group_subtasks": dict(reversed(task_hierarchy.items())),
|
564 |
+
"configs": dict(sorted(configs.items())),
|
565 |
+
"versions": dict(sorted(versions.items())),
|
566 |
+
"n-shot": dict(sorted(num_fewshot.items())),
|
567 |
+
}
|
568 |
+
if log_samples:
|
569 |
+
results_dict["samples"] = dict(samples)
|
570 |
+
|
571 |
+
return results_dict
|
572 |
+
|
573 |
+
else:
|
574 |
+
return None
|
575 |
+
|
576 |
+
|
577 |
+
def request_caching_arg_to_dict(cache_requests: str) -> dict:
|
578 |
+
request_caching_args = {
|
579 |
+
"cache_requests": cache_requests in {"true", "refresh"},
|
580 |
+
"rewrite_requests_cache": cache_requests == "refresh",
|
581 |
+
"delete_requests_cache": cache_requests == "delete",
|
582 |
+
}
|
583 |
+
|
584 |
+
return request_caching_args
|
lm-evaluation-harness/lm_eval/evaluator_utils.py
ADDED
@@ -0,0 +1,312 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import collections
|
2 |
+
import math
|
3 |
+
import pathlib
|
4 |
+
import sys
|
5 |
+
from typing import Dict, List, Optional, Tuple, Union
|
6 |
+
|
7 |
+
from lm_eval.api import metrics
|
8 |
+
from lm_eval.utils import eval_logger, positional_deprecated
|
9 |
+
|
10 |
+
|
11 |
+
class TaskOutput:
|
12 |
+
"""
|
13 |
+
Wrapper class for Task outputs.It contains various attributes and methods to manage and calculate metrics for the task.
|
14 |
+
|
15 |
+
Attributes:
|
16 |
+
task (object): The task object.
|
17 |
+
task_name (str): The name of the task.
|
18 |
+
task_config (dict): The configuration of the task.
|
19 |
+
version (str): The version of the task.
|
20 |
+
group_name (str): The name of the task group.
|
21 |
+
n_shot (int): The number of shots for the task.
|
22 |
+
task_alias (str): The alias of the task.
|
23 |
+
group_alias (str): The alias of the task group.
|
24 |
+
is_group (bool): Indicates if the task is a group.
|
25 |
+
logged_samples (list): The list of logged samples.
|
26 |
+
sample_len (int): The length of the samples.
|
27 |
+
sample_metrics (defaultdict): The dictionary of samples' metrics.
|
28 |
+
agg_metrics (defaultdict): The dictionary of aggregate metrics.
|
29 |
+
|
30 |
+
Methods:
|
31 |
+
from_taskdict(cls, task_name: str, task):
|
32 |
+
Creates a TaskOutput instance from a task dictionary.
|
33 |
+
|
34 |
+
calculate_aggregate_metric(bootstrap_iters=100000) -> None:
|
35 |
+
Calculates the aggregate metrics for the task.
|
36 |
+
"""
|
37 |
+
|
38 |
+
def __init__(
|
39 |
+
self,
|
40 |
+
task=None,
|
41 |
+
task_name=None,
|
42 |
+
task_config=None,
|
43 |
+
version=None,
|
44 |
+
group_name=None,
|
45 |
+
n_shot=None,
|
46 |
+
task_alias=None,
|
47 |
+
group_alias=None,
|
48 |
+
is_group=None,
|
49 |
+
):
|
50 |
+
self.task = task
|
51 |
+
self.task_config = task_config
|
52 |
+
self.task_name = task_name
|
53 |
+
self.group_name = group_name
|
54 |
+
self.version = version
|
55 |
+
self.n_shot = n_shot
|
56 |
+
self.task_alias = task_alias
|
57 |
+
self.group_alias = group_alias
|
58 |
+
self.is_group = is_group
|
59 |
+
self.logged_samples = []
|
60 |
+
self.sample_len = None
|
61 |
+
self.sample_metrics = collections.defaultdict(list)
|
62 |
+
self.agg_metrics = collections.defaultdict(list)
|
63 |
+
|
64 |
+
@classmethod
|
65 |
+
def from_taskdict(cls, task_name: str, task):
|
66 |
+
if isinstance(task, tuple):
|
67 |
+
group_name, task = task
|
68 |
+
else:
|
69 |
+
group_name = None
|
70 |
+
if not task:
|
71 |
+
# these gets filtered out in get_task_list
|
72 |
+
# once they are added to group hierarchy
|
73 |
+
is_group = True
|
74 |
+
return cls(
|
75 |
+
task=task, task_name=task_name, is_group=is_group, group_name=group_name
|
76 |
+
)
|
77 |
+
version = task.VERSION
|
78 |
+
task_config = dict(task.dump_config())
|
79 |
+
if (n_shot := task_config.get("num_fewshot")) == 0:
|
80 |
+
n_shot = task_config.get("metadata", {}).get("num_fewshot", 0)
|
81 |
+
task_alias = task_config.get("alias")
|
82 |
+
group_alias = task_config.get("group_alias")
|
83 |
+
return cls(
|
84 |
+
task=task,
|
85 |
+
task_name=task_name,
|
86 |
+
task_config=task_config,
|
87 |
+
group_name=group_name,
|
88 |
+
version=version,
|
89 |
+
n_shot=n_shot,
|
90 |
+
task_alias=task_alias,
|
91 |
+
group_alias=group_alias,
|
92 |
+
)
|
93 |
+
|
94 |
+
def calculate_aggregate_metric(self, bootstrap_iters=100000) -> None:
|
95 |
+
for (metric, filter_key), items in self.sample_metrics.items():
|
96 |
+
agg_fn = self.task.aggregation()[metric]
|
97 |
+
metric_key = f"{metric},{filter_key}"
|
98 |
+
self.agg_metrics[metric_key] = agg_fn(items)
|
99 |
+
self.sample_len = len(items) # TODO: same sample size for each metric?
|
100 |
+
if bootstrap_iters:
|
101 |
+
stderr_fn = metrics.stderr_for_metric(
|
102 |
+
metric=agg_fn,
|
103 |
+
bootstrap_iters=min(bootstrap_iters, 100)
|
104 |
+
if metric in ["bleu", "chrf", "ter"]
|
105 |
+
else bootstrap_iters,
|
106 |
+
)
|
107 |
+
self.agg_metrics[f"{metric}_stderr,{filter_key}"] = (
|
108 |
+
stderr_fn(items) if (stderr_fn and len(items) > 1) else "N/A"
|
109 |
+
)
|
110 |
+
|
111 |
+
def __repr__(self):
|
112 |
+
return (
|
113 |
+
f"TaskOutput(task_name={self.task_name}, "
|
114 |
+
f"group_name={self.group_name}, "
|
115 |
+
f"version={self.version},"
|
116 |
+
f"n_shot={self.n_shot}"
|
117 |
+
f"task_alias={self.task_alias}, group_alias={self.group_alias})"
|
118 |
+
)
|
119 |
+
|
120 |
+
|
121 |
+
def get_task_list(task_dict: dict) -> Tuple[Dict[str, list], List[TaskOutput]]:
|
122 |
+
task_hierarchy = collections.defaultdict(list)
|
123 |
+
outputs = list(TaskOutput.from_taskdict(x, y) for x, y in task_dict.items())
|
124 |
+
for task_output in outputs:
|
125 |
+
if group_name := task_output.group_name:
|
126 |
+
task_hierarchy[group_name].append(task_output.task_name)
|
127 |
+
else:
|
128 |
+
task_hierarchy[task_output.task_name] = []
|
129 |
+
# returns task_hierarchy tracking which groups contain which subtasks,
|
130 |
+
# and a list of TaskOutput classes for each non-group subtask
|
131 |
+
return task_hierarchy, [x for x in outputs if x.task]
|
132 |
+
|
133 |
+
|
134 |
+
def print_writeout(task) -> None:
|
135 |
+
for inst in task.instances:
|
136 |
+
# print the prompt for the first few documents
|
137 |
+
if inst.doc_id < 1:
|
138 |
+
eval_logger.info(
|
139 |
+
f"Task: {task}; document {inst.doc_id}; context prompt (starting on next line):\
|
140 |
+
\n{inst.args[0]}\n(end of prompt on previous line)\ntarget string or answer choice index (starting on next line):\n{task.doc_to_target(inst.doc)}\n(end of target on previous line)"
|
141 |
+
)
|
142 |
+
eval_logger.info(f"Request: {str(inst)}")
|
143 |
+
|
144 |
+
|
145 |
+
def get_sample_size(task, limit: Optional[int]) -> Union[int, None]:
|
146 |
+
if limit is not None:
|
147 |
+
limit = (
|
148 |
+
int(math.ceil(len(task.eval_docs) * limit)) if limit < 1.0 else int(limit)
|
149 |
+
)
|
150 |
+
return limit
|
151 |
+
|
152 |
+
|
153 |
+
def prepare_print_tasks(
|
154 |
+
task_hierarchy: dict, results: dict, tab=0
|
155 |
+
) -> Tuple[dict, dict]:
|
156 |
+
"""
|
157 |
+
@param task_hierarchy: Dictionary representing the group hierarchy of tasks. Each key is a group name and its
|
158 |
+
value is a list of task names.
|
159 |
+
@param results: Dictionary containing the results of each task. Each key is a
|
160 |
+
group name and its value is a dictionary of task results.
|
161 |
+
@param tab: The indentation level for printing the task
|
162 |
+
hierarchy. Default is 0.
|
163 |
+
@return: A tuple of two dictionaries: results_agg and groups_agg. results_agg contains
|
164 |
+
aggregated results for each task, and groups_agg contains aggregated results for each group.
|
165 |
+
|
166 |
+
Prepares the task hierarchy and aggregates the results for each task and group recursively for printing.
|
167 |
+
"""
|
168 |
+
results_agg = collections.defaultdict(dict)
|
169 |
+
groups_agg = collections.defaultdict(dict)
|
170 |
+
|
171 |
+
(group_name, task_list), *_ = task_hierarchy.items()
|
172 |
+
task_list = sorted(task_list)
|
173 |
+
|
174 |
+
results_agg[group_name] = results[group_name].copy()
|
175 |
+
# results_agg[group_name]["tab"] = tab
|
176 |
+
if "samples" in results_agg[group_name]:
|
177 |
+
results_agg[group_name].pop("samples")
|
178 |
+
|
179 |
+
tab_string = " " * tab + "- " if tab > 0 else ""
|
180 |
+
|
181 |
+
if "alias" in results_agg[group_name]:
|
182 |
+
results_agg[group_name]["alias"] = tab_string + results_agg[group_name]["alias"]
|
183 |
+
else:
|
184 |
+
results_agg[group_name]["alias"] = tab_string + group_name
|
185 |
+
|
186 |
+
if len(task_list) > 0:
|
187 |
+
groups_agg[group_name] = results[group_name].copy()
|
188 |
+
# groups_agg[group_name]["tab"] = tab
|
189 |
+
if "samples" in groups_agg[group_name]:
|
190 |
+
groups_agg[group_name].pop("samples")
|
191 |
+
|
192 |
+
if "alias" in groups_agg[group_name]:
|
193 |
+
groups_agg[group_name]["alias"] = (
|
194 |
+
tab_string + groups_agg[group_name]["alias"]
|
195 |
+
)
|
196 |
+
else:
|
197 |
+
groups_agg[group_name]["alias"] = tab_string + group_name
|
198 |
+
|
199 |
+
for task_name in task_list:
|
200 |
+
if task_name in task_hierarchy:
|
201 |
+
_task_hierarchy = {
|
202 |
+
**{task_name: task_hierarchy[task_name]},
|
203 |
+
**task_hierarchy,
|
204 |
+
}
|
205 |
+
else:
|
206 |
+
_task_hierarchy = {
|
207 |
+
**{task_name: []},
|
208 |
+
**task_hierarchy,
|
209 |
+
}
|
210 |
+
|
211 |
+
_results_agg, _groups_agg = prepare_print_tasks(
|
212 |
+
_task_hierarchy, results, tab + 1
|
213 |
+
)
|
214 |
+
results_agg = {**results_agg, **_results_agg}
|
215 |
+
groups_agg = {**groups_agg, **_groups_agg}
|
216 |
+
|
217 |
+
return results_agg, groups_agg
|
218 |
+
|
219 |
+
|
220 |
+
def consolidate_results(
|
221 |
+
eval_tasks: List[TaskOutput],
|
222 |
+
) -> Tuple[dict, dict, dict, dict, dict]:
|
223 |
+
"""
|
224 |
+
@param eval_tasks: list(TaskOutput).
|
225 |
+
@return: A tuple containing the consolidated results, samples, configs, versions, and num_fewshot.
|
226 |
+
|
227 |
+
Consolidates the results of multiple evaluation tasks into a single structure.
|
228 |
+
|
229 |
+
The method iterates over each evaluation instance and extracts relevant information to create the consolidated
|
230 |
+
results structure. The consolidated results structure has the following properties:
|
231 |
+
|
232 |
+
- results: A defaultdict with task names as keys and dictionaries as values. Each dictionary contains
|
233 |
+
metric/filter pairs as keys and corresponding metric values as values. The "alias" key is used to store task
|
234 |
+
aliases specified in the task configuration.
|
235 |
+
- samples: A defaultdict with task names as keys and lists of log samples as values.
|
236 |
+
- configs: A defaultdict with task names as keys and task configurations as values.
|
237 |
+
- versions: A defaultdict with task names as keys and task versions as values.
|
238 |
+
- num_fewshot: A defaultdict with task names as keys and number of few-shot samples as values.
|
239 |
+
|
240 |
+
The method then returns the consolidated results, samples, configs, versions, and num_fewshot as a tuple.
|
241 |
+
"""
|
242 |
+
# stores the final result for each task, for each metric/filter pair.
|
243 |
+
results = collections.defaultdict(dict)
|
244 |
+
# logs info about each document evaluated.
|
245 |
+
samples = collections.defaultdict(list)
|
246 |
+
# store num-fewshot value per task
|
247 |
+
num_fewshot = collections.defaultdict(int)
|
248 |
+
# Tracks the YAML configs of all chosen task
|
249 |
+
configs = collections.defaultdict(dict)
|
250 |
+
# Tracks each task's version.
|
251 |
+
versions = collections.defaultdict(dict)
|
252 |
+
for task_output in eval_tasks:
|
253 |
+
if "task_alias" in (task_config := task_output.task_config):
|
254 |
+
results[task_output.task_name]["alias"] = task_config["task_alias"]
|
255 |
+
if group_alias := task_output.group_alias:
|
256 |
+
if group_alias not in results and (group_name := task_output.group_name):
|
257 |
+
results[group_name]["alias"] = group_alias
|
258 |
+
num_fewshot[task_output.task_name] = task_output.n_shot
|
259 |
+
configs[task_output.task_name] = task_output.task_config
|
260 |
+
versions[task_output.task_name] = task_output.version
|
261 |
+
samples[task_output.task_name] = task_output.logged_samples
|
262 |
+
for (metric, filter_key), items in task_output.sample_metrics.items():
|
263 |
+
metric_key = f"{metric},{filter_key}"
|
264 |
+
results[task_output.task_name][metric_key] = task_output.agg_metrics[
|
265 |
+
metric_key
|
266 |
+
]
|
267 |
+
results[task_output.task_name]["samples"] = task_output.sample_len
|
268 |
+
results[task_output.task_name][
|
269 |
+
f"{metric}_stderr,{filter_key}"
|
270 |
+
] = task_output.agg_metrics[f"{metric}_stderr,{filter_key}"]
|
271 |
+
return results, samples, configs, versions, num_fewshot
|
272 |
+
|
273 |
+
|
274 |
+
@positional_deprecated
|
275 |
+
def find_test_root(start_path: pathlib.Path) -> pathlib.Path:
|
276 |
+
"""
|
277 |
+
Search upward in the directory tree to a maximum of three layers
|
278 |
+
to find and return the package root (containing the 'tests' folder)
|
279 |
+
"""
|
280 |
+
cur_path = start_path.resolve()
|
281 |
+
max_layers = 3
|
282 |
+
for _ in range(max_layers):
|
283 |
+
if (cur_path / "tests" / "test_version_stable.py").exists():
|
284 |
+
return cur_path
|
285 |
+
else:
|
286 |
+
cur_path = cur_path.parent.resolve()
|
287 |
+
raise FileNotFoundError(
|
288 |
+
f"Unable to find package root within {max_layers} upwards" + f"of {start_path}"
|
289 |
+
)
|
290 |
+
|
291 |
+
|
292 |
+
@positional_deprecated
|
293 |
+
def run_task_tests(task_list: List[str]):
|
294 |
+
"""
|
295 |
+
Find the package root and run the tests for the given tasks
|
296 |
+
"""
|
297 |
+
import pytest
|
298 |
+
|
299 |
+
package_root = find_test_root(start_path=pathlib.Path(__file__))
|
300 |
+
task_string = " or ".join(task_list)
|
301 |
+
args = [
|
302 |
+
f"{package_root}/tests/test_version_stable.py",
|
303 |
+
f"--rootdir={package_root}",
|
304 |
+
"-k",
|
305 |
+
f"{task_string}",
|
306 |
+
]
|
307 |
+
sys.path.append(str(package_root))
|
308 |
+
pytest_return_val = pytest.main(args)
|
309 |
+
if pytest_return_val:
|
310 |
+
raise ValueError(
|
311 |
+
f"Not all tests for the specified tasks ({task_list}) ran successfully! Error code: {pytest_return_val}"
|
312 |
+
)
|
lm-evaluation-harness/lm_eval/logging_utils.py
ADDED
@@ -0,0 +1,455 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import copy
|
2 |
+
import json
|
3 |
+
import logging
|
4 |
+
import os
|
5 |
+
import re
|
6 |
+
import subprocess
|
7 |
+
from pathlib import Path
|
8 |
+
from typing import Any, Dict, List, Literal, Optional, Tuple, Union
|
9 |
+
|
10 |
+
import numpy as np
|
11 |
+
import pandas as pd
|
12 |
+
from packaging.version import Version
|
13 |
+
from torch.utils.collect_env import get_pretty_env_info
|
14 |
+
from transformers import __version__ as trans_version
|
15 |
+
|
16 |
+
|
17 |
+
logger = logging.getLogger(__name__)
|
18 |
+
|
19 |
+
|
20 |
+
def remove_none_pattern(input_string: str) -> Tuple[str, bool]:
|
21 |
+
"""Remove the ',none' substring from the input_string if it exists at the end.
|
22 |
+
|
23 |
+
Args:
|
24 |
+
input_string (str): The input string from which to remove the ',none' substring.
|
25 |
+
|
26 |
+
Returns:
|
27 |
+
Tuple[str, bool]: A tuple containing the modified input_string with the ',none' substring removed
|
28 |
+
and a boolean indicating whether the modification was made (True) or not (False).
|
29 |
+
"""
|
30 |
+
# Define the pattern to match ',none' at the end of the string
|
31 |
+
pattern = re.compile(r",none$")
|
32 |
+
|
33 |
+
# Use sub() to replace ',none' with an empty string
|
34 |
+
result = re.sub(pattern, "", input_string)
|
35 |
+
|
36 |
+
# check if the input_string changed
|
37 |
+
removed = result != input_string
|
38 |
+
|
39 |
+
return result, removed
|
40 |
+
|
41 |
+
|
42 |
+
def _handle_non_serializable(o: Any) -> Union[int, str, list]:
|
43 |
+
"""Handle non-serializable objects by converting them to serializable types.
|
44 |
+
|
45 |
+
Args:
|
46 |
+
o (Any): The object to be handled.
|
47 |
+
|
48 |
+
Returns:
|
49 |
+
Union[int, str, list]: The converted object. If the object is of type np.int64 or np.int32,
|
50 |
+
it will be converted to int. If the object is of type set, it will be converted
|
51 |
+
to a list. Otherwise, it will be converted to str.
|
52 |
+
"""
|
53 |
+
if isinstance(o, np.int64) or isinstance(o, np.int32):
|
54 |
+
return int(o)
|
55 |
+
elif isinstance(o, set):
|
56 |
+
return list(o)
|
57 |
+
else:
|
58 |
+
return str(o)
|
59 |
+
|
60 |
+
|
61 |
+
def get_wandb_printer() -> Literal["Printer"]:
|
62 |
+
"""Returns a wandb printer instance for pretty stdout."""
|
63 |
+
from wandb.sdk.lib.printer import get_printer
|
64 |
+
from wandb.sdk.wandb_settings import Settings
|
65 |
+
|
66 |
+
printer = get_printer(Settings()._jupyter)
|
67 |
+
return printer
|
68 |
+
|
69 |
+
|
70 |
+
class WandbLogger:
|
71 |
+
def __init__(self, **kwargs) -> None:
|
72 |
+
"""Attaches to wandb logger if already initialized. Otherwise, passes kwargs to wandb.init()
|
73 |
+
|
74 |
+
Args:
|
75 |
+
kwargs Optional[Any]: Arguments for configuration.
|
76 |
+
|
77 |
+
Parse and log the results returned from evaluator.simple_evaluate() with:
|
78 |
+
wandb_logger.post_init(results)
|
79 |
+
wandb_logger.log_eval_result()
|
80 |
+
wandb_logger.log_eval_samples(results["samples"])
|
81 |
+
"""
|
82 |
+
try:
|
83 |
+
import wandb
|
84 |
+
|
85 |
+
assert Version(wandb.__version__) >= Version("0.13.6")
|
86 |
+
if Version(wandb.__version__) < Version("0.13.6"):
|
87 |
+
wandb.require("report-editing:v0")
|
88 |
+
except Exception as e:
|
89 |
+
logger.warning(
|
90 |
+
"To use the wandb reporting functionality please install wandb>=0.13.6.\n"
|
91 |
+
"To install the latest version of wandb run `pip install wandb --upgrade`\n"
|
92 |
+
f"{e}"
|
93 |
+
)
|
94 |
+
|
95 |
+
self.wandb_args: Dict[str, Any] = kwargs
|
96 |
+
|
97 |
+
# initialize a W&B run
|
98 |
+
if wandb.run is None:
|
99 |
+
self.run = wandb.init(**self.wandb_args)
|
100 |
+
else:
|
101 |
+
self.run = wandb.run
|
102 |
+
|
103 |
+
self.printer = get_wandb_printer()
|
104 |
+
|
105 |
+
def post_init(self, results: Dict[str, Any]) -> None:
|
106 |
+
self.results: Dict[str, Any] = copy.deepcopy(results)
|
107 |
+
self.task_names: List[str] = list(results.get("results", {}).keys())
|
108 |
+
self.group_names: List[str] = list(results.get("groups", {}).keys())
|
109 |
+
|
110 |
+
def _get_config(self) -> Dict[str, Any]:
|
111 |
+
"""Get configuration parameters."""
|
112 |
+
self.task_configs = self.results.get("configs", {})
|
113 |
+
cli_configs = self.results.get("config", {})
|
114 |
+
configs = {
|
115 |
+
"task_configs": self.task_configs,
|
116 |
+
"cli_configs": cli_configs,
|
117 |
+
}
|
118 |
+
|
119 |
+
return configs
|
120 |
+
|
121 |
+
def _sanitize_results_dict(self) -> Tuple[Dict[str, str], Dict[str, Any]]:
|
122 |
+
"""Sanitize the results dictionary."""
|
123 |
+
_results = copy.deepcopy(self.results.get("results", dict()))
|
124 |
+
|
125 |
+
# Remove None from the metric string name
|
126 |
+
tmp_results = copy.deepcopy(_results)
|
127 |
+
for task_name in self.task_names:
|
128 |
+
task_result = tmp_results.get(task_name, dict())
|
129 |
+
for metric_name, metric_value in task_result.items():
|
130 |
+
_metric_name, removed = remove_none_pattern(metric_name)
|
131 |
+
if removed:
|
132 |
+
_results[task_name][_metric_name] = metric_value
|
133 |
+
_results[task_name].pop(metric_name)
|
134 |
+
|
135 |
+
# remove string valued keys from the results dict
|
136 |
+
wandb_summary = {}
|
137 |
+
for task in self.task_names:
|
138 |
+
task_result = _results.get(task, dict())
|
139 |
+
for metric_name, metric_value in task_result.items():
|
140 |
+
if isinstance(metric_value, str):
|
141 |
+
wandb_summary[f"{task}/{metric_name}"] = metric_value
|
142 |
+
|
143 |
+
for summary_metric, summary_value in wandb_summary.items():
|
144 |
+
_task, _summary_metric = summary_metric.split("/")
|
145 |
+
_results[_task].pop(_summary_metric)
|
146 |
+
|
147 |
+
tmp_results = copy.deepcopy(_results)
|
148 |
+
for task_name, task_results in tmp_results.items():
|
149 |
+
for metric_name, metric_value in task_results.items():
|
150 |
+
_results[f"{task_name}/{metric_name}"] = metric_value
|
151 |
+
_results[task_name].pop(metric_name)
|
152 |
+
for task in self.task_names:
|
153 |
+
_results.pop(task)
|
154 |
+
|
155 |
+
return wandb_summary, _results
|
156 |
+
|
157 |
+
def _log_results_as_table(self) -> None:
|
158 |
+
"""Generate and log evaluation results as a table to W&B."""
|
159 |
+
columns = [
|
160 |
+
"Version",
|
161 |
+
"Filter",
|
162 |
+
"num_fewshot",
|
163 |
+
"Metric",
|
164 |
+
"Value",
|
165 |
+
"Stderr",
|
166 |
+
]
|
167 |
+
|
168 |
+
def make_table(columns: List[str], key: str = "results"):
|
169 |
+
import wandb
|
170 |
+
|
171 |
+
table = wandb.Table(columns=columns)
|
172 |
+
results = copy.deepcopy(self.results)
|
173 |
+
|
174 |
+
for k, dic in results.get(key).items():
|
175 |
+
if k in self.group_names and not key == "groups":
|
176 |
+
continue
|
177 |
+
version = results.get("versions").get(k)
|
178 |
+
if version == "N/A":
|
179 |
+
version = None
|
180 |
+
n = results.get("n-shot").get(k)
|
181 |
+
|
182 |
+
for (mf), v in dic.items():
|
183 |
+
m, _, f = mf.partition(",")
|
184 |
+
if m.endswith("_stderr"):
|
185 |
+
continue
|
186 |
+
if m == "alias":
|
187 |
+
continue
|
188 |
+
|
189 |
+
if m + "_stderr" + "," + f in dic:
|
190 |
+
se = dic[m + "_stderr" + "," + f]
|
191 |
+
if se != "N/A":
|
192 |
+
se = "%.4f" % se
|
193 |
+
table.add_data(*[k, version, f, n, m, str(v), str(se)])
|
194 |
+
else:
|
195 |
+
table.add_data(*[k, version, f, n, m, str(v), ""])
|
196 |
+
|
197 |
+
return table
|
198 |
+
|
199 |
+
# log the complete eval result to W&B Table
|
200 |
+
table = make_table(["Tasks"] + columns, "results")
|
201 |
+
self.run.log({"evaluation/eval_results": table})
|
202 |
+
|
203 |
+
if "groups" in self.results.keys():
|
204 |
+
table = make_table(["Groups"] + columns, "groups")
|
205 |
+
self.run.log({"evaluation/group_eval_results": table})
|
206 |
+
|
207 |
+
def _log_results_as_artifact(self) -> None:
|
208 |
+
"""Log results as JSON artifact to W&B."""
|
209 |
+
import wandb
|
210 |
+
|
211 |
+
dumped = json.dumps(
|
212 |
+
self.results, indent=2, default=_handle_non_serializable, ensure_ascii=False
|
213 |
+
)
|
214 |
+
artifact = wandb.Artifact("results", type="eval_results")
|
215 |
+
with artifact.new_file("results.json", mode="w", encoding="utf-8") as f:
|
216 |
+
f.write(dumped)
|
217 |
+
self.run.log_artifact(artifact)
|
218 |
+
|
219 |
+
def log_eval_result(self) -> None:
|
220 |
+
"""Log evaluation results to W&B."""
|
221 |
+
# Log configs to wandb
|
222 |
+
configs = self._get_config()
|
223 |
+
self.run.config.update(configs)
|
224 |
+
|
225 |
+
wandb_summary, self.wandb_results = self._sanitize_results_dict()
|
226 |
+
# update wandb.run.summary with items that were removed
|
227 |
+
self.run.summary.update(wandb_summary)
|
228 |
+
# Log the evaluation metrics to wandb
|
229 |
+
self.run.log(self.wandb_results)
|
230 |
+
# Log the evaluation metrics as W&B Table
|
231 |
+
self._log_results_as_table()
|
232 |
+
# Log the results dict as json to W&B Artifacts
|
233 |
+
self._log_results_as_artifact()
|
234 |
+
|
235 |
+
def _generate_dataset(
|
236 |
+
self, data: List[Dict[str, Any]], config: Dict[str, Any]
|
237 |
+
) -> pd.DataFrame:
|
238 |
+
"""Generate a dataset from evaluation data.
|
239 |
+
|
240 |
+
Args:
|
241 |
+
data (List[Dict[str, Any]]): The data to generate a dataset for.
|
242 |
+
config (Dict[str, Any]): The configuration of the task.
|
243 |
+
|
244 |
+
Returns:
|
245 |
+
pd.DataFrame: A dataframe that is ready to be uploaded to W&B.
|
246 |
+
"""
|
247 |
+
ids = [x["doc_id"] for x in data]
|
248 |
+
labels = [x["target"] for x in data]
|
249 |
+
instance = [""] * len(ids)
|
250 |
+
resps = [""] * len(ids)
|
251 |
+
filtered_resps = [""] * len(ids)
|
252 |
+
model_outputs = {}
|
253 |
+
|
254 |
+
metrics_list = config["metric_list"]
|
255 |
+
metrics = {}
|
256 |
+
for metric in metrics_list:
|
257 |
+
metric = metric.get("metric")
|
258 |
+
if metric in ["word_perplexity", "byte_perplexity", "bits_per_byte"]:
|
259 |
+
metrics[f"{metric}_loglikelihood"] = [x[metric][0] for x in data]
|
260 |
+
if metric in ["byte_perplexity", "bits_per_byte"]:
|
261 |
+
metrics[f"{metric}_bytes"] = [x[metric][1] for x in data]
|
262 |
+
else:
|
263 |
+
metrics[f"{metric}_words"] = [x[metric][1] for x in data]
|
264 |
+
else:
|
265 |
+
metrics[metric] = [x[metric] for x in data]
|
266 |
+
|
267 |
+
if config["output_type"] == "loglikelihood":
|
268 |
+
instance = [x["arguments"][0][0] for x in data]
|
269 |
+
labels = [x["arguments"][0][1] for x in data]
|
270 |
+
resps = [
|
271 |
+
f'log probability of continuation is {x["resps"][0][0][0]} '
|
272 |
+
+ "\n\n"
|
273 |
+
+ "continuation will {} generated with greedy sampling".format(
|
274 |
+
"not be" if not x["resps"][0][0][1] else "be"
|
275 |
+
)
|
276 |
+
for x in data
|
277 |
+
]
|
278 |
+
filtered_resps = [
|
279 |
+
f'log probability of continuation is {x["filtered_resps"][0][0]} '
|
280 |
+
+ "\n\n"
|
281 |
+
+ "continuation will {} generated with greedy sampling".format(
|
282 |
+
"not be" if not x["filtered_resps"][0][1] else "be"
|
283 |
+
)
|
284 |
+
for x in data
|
285 |
+
]
|
286 |
+
elif config["output_type"] == "multiple_choice":
|
287 |
+
instance = [x["arguments"][0][0] for x in data]
|
288 |
+
choices = [
|
289 |
+
"\n".join([f"{idx}. {y[1]}" for idx, y in enumerate(x["arguments"])])
|
290 |
+
for x in data
|
291 |
+
]
|
292 |
+
resps = [np.argmax([n[0][0] for n in x["resps"]]) for x in data]
|
293 |
+
filtered_resps = [
|
294 |
+
np.argmax([n[0] for n in x["filtered_resps"]]) for x in data
|
295 |
+
]
|
296 |
+
elif config["output_type"] == "loglikelihood_rolling":
|
297 |
+
instance = [x["arguments"][0][0] for x in data]
|
298 |
+
resps = [x["resps"][0][0] for x in data]
|
299 |
+
filtered_resps = [x["filtered_resps"][0] for x in data]
|
300 |
+
elif config["output_type"] == "generate_until":
|
301 |
+
instance = [x["arguments"][0][0] for x in data]
|
302 |
+
resps = [x["resps"][0][0] for x in data]
|
303 |
+
filtered_resps = [x["filtered_resps"][0] for x in data]
|
304 |
+
|
305 |
+
model_outputs["raw_predictions"] = resps
|
306 |
+
model_outputs["filtered_predictions"] = filtered_resps
|
307 |
+
|
308 |
+
df_data = {
|
309 |
+
"id": ids,
|
310 |
+
"data": instance,
|
311 |
+
}
|
312 |
+
if config["output_type"] == "multiple_choice":
|
313 |
+
df_data["choices"] = choices
|
314 |
+
|
315 |
+
tmp_data = {
|
316 |
+
"input_len": [len(x) for x in instance],
|
317 |
+
"labels": labels,
|
318 |
+
"output_type": config["output_type"],
|
319 |
+
}
|
320 |
+
df_data.update(tmp_data)
|
321 |
+
df_data.update(model_outputs)
|
322 |
+
df_data.update(metrics)
|
323 |
+
|
324 |
+
return pd.DataFrame(df_data)
|
325 |
+
|
326 |
+
def _log_samples_as_artifact(
|
327 |
+
self, data: List[Dict[str, Any]], task_name: str
|
328 |
+
) -> None:
|
329 |
+
import wandb
|
330 |
+
|
331 |
+
# log the samples as an artifact
|
332 |
+
dumped = json.dumps(
|
333 |
+
data,
|
334 |
+
indent=2,
|
335 |
+
default=_handle_non_serializable,
|
336 |
+
ensure_ascii=False,
|
337 |
+
)
|
338 |
+
artifact = wandb.Artifact(f"{task_name}", type="samples_by_task")
|
339 |
+
with artifact.new_file(
|
340 |
+
f"{task_name}_eval_samples.json", mode="w", encoding="utf-8"
|
341 |
+
) as f:
|
342 |
+
f.write(dumped)
|
343 |
+
self.run.log_artifact(artifact)
|
344 |
+
# artifact.wait()
|
345 |
+
|
346 |
+
def log_eval_samples(self, samples: Dict[str, List[Dict[str, Any]]]) -> None:
|
347 |
+
"""Log evaluation samples to W&B.
|
348 |
+
|
349 |
+
Args:
|
350 |
+
samples (Dict[str, List[Dict[str, Any]]]): Evaluation samples for each task.
|
351 |
+
"""
|
352 |
+
task_names: List[str] = [
|
353 |
+
x for x in self.task_names if x not in self.group_names
|
354 |
+
]
|
355 |
+
|
356 |
+
ungrouped_tasks = []
|
357 |
+
tasks_by_groups = {}
|
358 |
+
|
359 |
+
for task_name in task_names:
|
360 |
+
group_names = self.task_configs[task_name].get("group", None)
|
361 |
+
if group_names:
|
362 |
+
if isinstance(group_names, str):
|
363 |
+
group_names = [group_names]
|
364 |
+
|
365 |
+
for group_name in group_names:
|
366 |
+
if not tasks_by_groups.get(group_name):
|
367 |
+
tasks_by_groups[group_name] = [task_name]
|
368 |
+
else:
|
369 |
+
tasks_by_groups[group_name].append(task_name)
|
370 |
+
else:
|
371 |
+
ungrouped_tasks.append(task_name)
|
372 |
+
|
373 |
+
for task_name in ungrouped_tasks:
|
374 |
+
eval_preds = samples[task_name]
|
375 |
+
|
376 |
+
# log the samples as a W&B Table
|
377 |
+
df = self._generate_dataset(eval_preds, self.task_configs.get(task_name))
|
378 |
+
self.run.log({f"{task_name}_eval_results": df})
|
379 |
+
|
380 |
+
# log the samples as a json file as W&B Artifact
|
381 |
+
self._log_samples_as_artifact(eval_preds, task_name)
|
382 |
+
|
383 |
+
for group, grouped_tasks in tasks_by_groups.items():
|
384 |
+
grouped_df = pd.DataFrame()
|
385 |
+
for task_name in grouped_tasks:
|
386 |
+
eval_preds = samples[task_name]
|
387 |
+
df = self._generate_dataset(
|
388 |
+
eval_preds, self.task_configs.get(task_name)
|
389 |
+
)
|
390 |
+
df["group"] = group
|
391 |
+
df["task"] = task_name
|
392 |
+
grouped_df = pd.concat([grouped_df, df], ignore_index=True)
|
393 |
+
|
394 |
+
# log the samples as a json file as W&B Artifact
|
395 |
+
self._log_samples_as_artifact(eval_preds, task_name)
|
396 |
+
|
397 |
+
self.run.log({f"{group}_eval_results": grouped_df})
|
398 |
+
|
399 |
+
|
400 |
+
def get_commit_from_path(repo_path: Union[Path, str]) -> Optional[str]:
|
401 |
+
try:
|
402 |
+
git_folder = Path(repo_path, ".git")
|
403 |
+
if git_folder.is_file():
|
404 |
+
git_folder = Path(
|
405 |
+
git_folder.parent,
|
406 |
+
git_folder.read_text(encoding="utf-8").split("\n")[0].split(" ")[-1],
|
407 |
+
)
|
408 |
+
if Path(git_folder, "HEAD").exists():
|
409 |
+
head_name = (
|
410 |
+
Path(git_folder, "HEAD")
|
411 |
+
.read_text(encoding="utf-8")
|
412 |
+
.split("\n")[0]
|
413 |
+
.split(" ")[-1]
|
414 |
+
)
|
415 |
+
head_ref = Path(git_folder, head_name)
|
416 |
+
git_hash = head_ref.read_text(encoding="utf-8").replace("\n", "")
|
417 |
+
else:
|
418 |
+
git_hash = None
|
419 |
+
except Exception as err:
|
420 |
+
logger.debug(
|
421 |
+
f"Failed to retrieve a Git commit hash from path: {str(repo_path)}. Error: {err}"
|
422 |
+
)
|
423 |
+
return None
|
424 |
+
return git_hash
|
425 |
+
|
426 |
+
|
427 |
+
def get_git_commit_hash():
|
428 |
+
"""
|
429 |
+
Gets the git commit hash of your current repo (if it exists).
|
430 |
+
Source: https://github.com/EleutherAI/gpt-neox/blob/b608043be541602170bfcfb8ec9bf85e8a0799e0/megatron/neox_arguments/neox_args.py#L42
|
431 |
+
"""
|
432 |
+
try:
|
433 |
+
git_hash = subprocess.check_output(["git", "describe", "--always"]).strip()
|
434 |
+
git_hash = git_hash.decode()
|
435 |
+
except (subprocess.CalledProcessError, FileNotFoundError):
|
436 |
+
# FileNotFoundError occurs when git not installed on system
|
437 |
+
git_hash = get_commit_from_path(os.getcwd()) # git hash of repo if exists
|
438 |
+
return git_hash
|
439 |
+
|
440 |
+
|
441 |
+
def add_env_info(storage: Dict[str, Any]):
|
442 |
+
try:
|
443 |
+
pretty_env_info = get_pretty_env_info()
|
444 |
+
except Exception as err:
|
445 |
+
pretty_env_info = str(err)
|
446 |
+
transformers_version = trans_version
|
447 |
+
upper_dir_commit = get_commit_from_path(
|
448 |
+
Path(os.getcwd(), "..")
|
449 |
+
) # git hash of upper repo if exists
|
450 |
+
added_info = {
|
451 |
+
"pretty_env_info": pretty_env_info,
|
452 |
+
"transformers_version": transformers_version,
|
453 |
+
"upper_git_hash": upper_dir_commit, # in case this repo is submodule
|
454 |
+
}
|
455 |
+
storage.update(added_info)
|
lm-evaluation-harness/lm_eval/tasks/__init__.py
ADDED
@@ -0,0 +1,447 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
import collections
|
2 |
+
import logging
|
3 |
+
import os
|
4 |
+
from functools import partial
|
5 |
+
from typing import Dict, List, Mapping, Optional, Union
|
6 |
+
|
7 |
+
from lm_eval import utils
|
8 |
+
from lm_eval.api.task import ConfigurableTask, Task
|
9 |
+
|
10 |
+
|
11 |
+
class TaskManager:
|
12 |
+
"""TaskManager indexes all tasks from the default `lm_eval/tasks/`
|
13 |
+
and an optional directory if provided.
|
14 |
+
|
15 |
+
"""
|
16 |
+
|
17 |
+
def __init__(self, verbosity="INFO", include_path: Optional[str] = None) -> None:
|
18 |
+
self.verbosity = verbosity
|
19 |
+
self.include_path = include_path
|
20 |
+
self.logger = utils.eval_logger
|
21 |
+
self.logger.setLevel(getattr(logging, f"{verbosity}"))
|
22 |
+
|
23 |
+
self._task_index = self.initialize_tasks(include_path=include_path)
|
24 |
+
self._all_tasks = sorted(list(self._task_index.keys()))
|
25 |
+
|
26 |
+
self.task_group_map = collections.defaultdict(list)
|
27 |
+
|
28 |
+
def initialize_tasks(self, include_path: Optional[str] = None):
|
29 |
+
"""Creates a dictionary of tasks index.
|
30 |
+
|
31 |
+
:param include_path: str = None
|
32 |
+
An additional path to be searched for tasks
|
33 |
+
|
34 |
+
:return
|
35 |
+
Dictionary of task names as key and task metadata
|
36 |
+
"""
|
37 |
+
all_paths = [os.path.dirname(os.path.abspath(__file__)) + "/"]
|
38 |
+
if include_path is not None:
|
39 |
+
if isinstance(include_path, str):
|
40 |
+
include_path = [include_path]
|
41 |
+
all_paths.extend(include_path)
|
42 |
+
|
43 |
+
task_index = {}
|
44 |
+
for task_dir in all_paths:
|
45 |
+
tasks = self._get_task_and_group(task_dir)
|
46 |
+
task_index = {**tasks, **task_index}
|
47 |
+
|
48 |
+
return task_index
|
49 |
+
|
50 |
+
@property
|
51 |
+
def all_tasks(self):
|
52 |
+
return self._all_tasks
|
53 |
+
|
54 |
+
@property
|
55 |
+
def task_index(self):
|
56 |
+
return self._task_index
|
57 |
+
|
58 |
+
def match_tasks(self, task_list):
|
59 |
+
return utils.pattern_match(task_list, self.all_tasks)
|
60 |
+
|
61 |
+
def _name_is_registered(self, name) -> bool:
|
62 |
+
if name in self.all_tasks:
|
63 |
+
return True
|
64 |
+
return False
|
65 |
+
|
66 |
+
def _name_is_task(self, name) -> bool:
|
67 |
+
if self._name_is_registered(name) and ("task" in self.task_index[name]["type"]):
|
68 |
+
return True
|
69 |
+
return False
|
70 |
+
|
71 |
+
def _name_is_group(self, name) -> bool:
|
72 |
+
if self._name_is_registered(name) and (
|
73 |
+
self.task_index[name]["type"] == "group"
|
74 |
+
):
|
75 |
+
return True
|
76 |
+
return False
|
77 |
+
|
78 |
+
def _name_is_python_task(self, name):
|
79 |
+
if self._name_is_registered(name) and (
|
80 |
+
self.task_index[name]["type"] == "python_task"
|
81 |
+
):
|
82 |
+
return True
|
83 |
+
return False
|
84 |
+
|
85 |
+
def _config_is_task(self, config) -> bool:
|
86 |
+
if ("task" in config) and isinstance(config["task"], str):
|
87 |
+
return True
|
88 |
+
return False
|
89 |
+
|
90 |
+
def _config_is_group(self, config) -> bool:
|
91 |
+
if ("task" in config) and isinstance(config["task"], list):
|
92 |
+
return True
|
93 |
+
return False
|
94 |
+
|
95 |
+
def _config_is_python_task(self, config) -> bool:
|
96 |
+
if "class" in config:
|
97 |
+
return True
|
98 |
+
return False
|
99 |
+
|
100 |
+
def _get_yaml_path(self, name):
|
101 |
+
if name not in self.task_index:
|
102 |
+
raise ValueError
|
103 |
+
return self.task_index[name]["yaml_path"]
|
104 |
+
|
105 |
+
def _get_config(self, name):
|
106 |
+
if name not in self.task_index:
|
107 |
+
raise ValueError
|
108 |
+
yaml_path = self._get_yaml_path(name)
|
109 |
+
if yaml_path == -1:
|
110 |
+
return {}
|
111 |
+
else:
|
112 |
+
return utils.load_yaml_config(yaml_path, mode="full")
|
113 |
+
|
114 |
+
def _get_tasklist(self, name):
|
115 |
+
if self._name_is_task(name):
|
116 |
+
raise ValueError
|
117 |
+
return self.task_index[name]["task"]
|
118 |
+
|
119 |
+
def _process_alias(self, config, group=None):
|
120 |
+
# If the group is not the same as the original
|
121 |
+
# group which the group alias was intended for,
|
122 |
+
# Set the group_alias to None instead.
|
123 |
+
if ("group_alias" in config) and ("group" in config) and group is not None:
|
124 |
+
if config["group"] != group:
|
125 |
+
config["group_alias"] = None
|
126 |
+
return config
|
127 |
+
|
128 |
+
def _load_individual_task_or_group(
|
129 |
+
self,
|
130 |
+
name_or_config: Optional[Union[str, dict]] = None,
|
131 |
+
parent_name: Optional[str] = None,
|
132 |
+
update_config: Optional[dict] = None,
|
133 |
+
yaml_path: Optional[str] = None,
|
134 |
+
) -> Mapping:
|
135 |
+
def load_task(config, task, group=None, yaml_path=None):
|
136 |
+
if "include" in config:
|
137 |
+
if yaml_path is None:
|
138 |
+
raise ValueError
|
139 |
+
config = {
|
140 |
+
**utils.load_yaml_config(
|
141 |
+
yaml_path,
|
142 |
+
yaml_config={"include": config.pop("include")},
|
143 |
+
mode="full",
|
144 |
+
),
|
145 |
+
**config,
|
146 |
+
}
|
147 |
+
if self._config_is_python_task(config):
|
148 |
+
task_object = config["class"]()
|
149 |
+
else:
|
150 |
+
config = self._process_alias(config, group=group)
|
151 |
+
task_object = ConfigurableTask(config=config)
|
152 |
+
if group is not None:
|
153 |
+
task_object = (group, task_object)
|
154 |
+
return {task: task_object}
|
155 |
+
|
156 |
+
if isinstance(name_or_config, str):
|
157 |
+
if update_config is not None:
|
158 |
+
# Process name_or_config as a dict instead
|
159 |
+
name_or_config = {"task": name_or_config, **update_config}
|
160 |
+
elif self._name_is_task(name_or_config):
|
161 |
+
task_config = self._get_config(name_or_config)
|
162 |
+
return load_task(task_config, task=name_or_config, group=parent_name)
|
163 |
+
else:
|
164 |
+
group_name = name_or_config
|
165 |
+
subtask_list = self._get_tasklist(name_or_config)
|
166 |
+
if subtask_list == -1:
|
167 |
+
group_config = self._get_config(name_or_config)
|
168 |
+
subtask_list = group_config["task"]
|
169 |
+
|
170 |
+
# This checks if we're at the root.
|
171 |
+
if parent_name is None:
|
172 |
+
group_config = self._get_config(name_or_config)
|
173 |
+
if set(group_config.keys()) > {"task", "group"}:
|
174 |
+
update_config = {
|
175 |
+
k: v
|
176 |
+
for k, v in group_config.items()
|
177 |
+
if k not in ["task", "group"]
|
178 |
+
}
|
179 |
+
yaml_path = self._get_yaml_path(group_name)
|
180 |
+
|
181 |
+
if (update_config is not None) and ("group_alias" in update_config):
|
182 |
+
group_name = update_config["group_alias"]
|
183 |
+
update_config.pop("group_alias")
|
184 |
+
|
185 |
+
if isinstance(name_or_config, dict):
|
186 |
+
if update_config is not None:
|
187 |
+
name_or_config = {
|
188 |
+
**name_or_config,
|
189 |
+
**update_config,
|
190 |
+
}
|
191 |
+
|
192 |
+
if self._config_is_task(name_or_config):
|
193 |
+
name = name_or_config["task"]
|
194 |
+
+                # If the name is registered as a group
+                # if self._name_is_task(name) is False:
+                if self._name_is_group(name):
+                    group_name = name
+                    update_config = {
+                        k: v for k, v in name_or_config.items() if k != "task"
+                    }
+                    subtask_list = self._get_tasklist(name)
+                    if subtask_list == -1:
+                        subtask_list = self._get_config(name)["task"]
+                else:
+                    if self._name_is_registered(name):
+                        base_task_config = self._get_config(name)
+
+                        # Check if this is a duplicate.
+                        if parent_name is not None:
+                            name_or_config["group"] = parent_name
+                            num_duplicate = len(
+                                list(
+                                    filter(
+                                        lambda x: x.startswith(name),
+                                        self.task_group_map[parent_name],
+                                    )
+                                )
+                            )
+                            if num_duplicate > 0:
+                                name = f"{name}-{num_duplicate}"
+                            self.task_group_map[parent_name].append(name)
+
+                        task_config = {
+                            **base_task_config,
+                            **name_or_config,
+                        }
+                    else:
+                        task_config = name_or_config
+                    return load_task(
+                        task_config, task=name, group=parent_name, yaml_path=yaml_path
+                    )
+            else:
+                group_name = name_or_config["group"]
+                subtask_list = name_or_config["task"]
+                if set(name_or_config.keys()) > {"task", "group"}:
+                    update_config = {
+                        k: v
+                        for k, v in name_or_config.items()
+                        if k not in ["task", "group"]
+                    }
+
+        all_subtasks = {}
+        if parent_name is not None:
+            all_subtasks = {group_name: (parent_name, None)}
+
+        fn = partial(
+            self._load_individual_task_or_group,
+            parent_name=group_name,
+            update_config=update_config,
+            yaml_path=yaml_path,
+        )
+        all_subtasks = {
+            **all_subtasks,
+            **dict(collections.ChainMap(*map(fn, subtask_list))),
+        }
+        return all_subtasks
+
+    def load_task_or_group(self, task_list: Optional[Union[str, list]] = None) -> dict:
+        """Loads a dictionary of task objects from a list.
+
+        :param task_list: Union[str, list] = None
+            Single string or list of strings of task names to be loaded
+
+        :return
+            Dictionary of task objects
+        """
+        if isinstance(task_list, str):
+            task_list = [task_list]
+
+        all_loaded_tasks = dict(
+            collections.ChainMap(*map(self._load_individual_task_or_group, task_list))
+        )
+        return all_loaded_tasks
+
+    def load_config(self, config: Dict):
+        return self._load_individual_task_or_group(config)
+
+    def _get_task_and_group(self, task_dir: str):
+        """Creates a dictionary index of tasks with the following metadata:
+        - `type`, which can be either `task`, `python_task`, or `group`.
+          `task` refers to regular task configs, `python_task` to special
+          yaml files that consist only of `task` and `class` parameters,
+          and `group` to group configs.
+        - `yaml_path`, the path to the yaml file. If the entry is a `group` that
+          was configured through a task config, the yaml_path will be -1
+          and all subtasks will be listed in `task` (see below).
+        - `task`, reserved for entries with `type` set to `group`. This lists
+          all subtasks. When a group config is created (as opposed to a task
+          config having the `group` parameter set), this will be set to -1 to
+          avoid recursive indexing. The whole list of subtasks will be loaded
+          at evaluation.
+
+        :param task_dir: str
+            A directory to check for tasks
+
+        :return
+            Dictionary with task names as keys and task metadata as values
+        """
+        tasks_and_groups = collections.defaultdict()
+        for root, _, file_list in os.walk(task_dir):
+            for f in file_list:
+                if f.endswith(".yaml"):
+                    yaml_path = os.path.join(root, f)
+                    config = utils.load_yaml_config(yaml_path, mode="simple")
+                    if self._config_is_python_task(config):
+                        # This is a python class config
+                        tasks_and_groups[config["task"]] = {
+                            "type": "python_task",
+                            "yaml_path": yaml_path,
+                        }
+                    elif self._config_is_group(config):
+                        # This is a group config
+                        tasks_and_groups[config["group"]] = {
+                            "type": "group",
+                            "task": -1,  # This signals that
+                            # we don't need to know
+                            # the task list for indexing
+                            # as it can be loaded
+                            # when called.
+                            "yaml_path": yaml_path,
+                        }
+
+                        # # Register the level 1 tasks from a group config
+                        # for config in config["task"]:
+                        #     if isinstance(config, dict) and self._config_is_task(config):
+                        #         task = config["task"]
+                        #         tasks_and_groups[task] = {
+                        #             "type": "task",
+                        #             "yaml_path": yaml_path,
+                        #         }
+
+                    elif self._config_is_task(config):
+                        # This is a task config
+                        task = config["task"]
+                        tasks_and_groups[task] = {
+                            "type": "task",
+                            "yaml_path": yaml_path,
+                        }
+
+                        if "group" in config:
+                            groups = config["group"]
+                            if isinstance(config["group"], str):
+                                groups = [groups]
+
+                            for group in groups:
+                                if group not in tasks_and_groups:
+                                    tasks_and_groups[group] = {
+                                        "type": "group",
+                                        "task": [task],
+                                        "yaml_path": -1,
+                                    }
+                                else:
+                                    tasks_and_groups[group]["task"].append(task)
+                    else:
+                        self.logger.debug(f"File {f} in {root} could not be loaded")
+
+        return tasks_and_groups
+
+
+def get_task_name_from_config(task_config: Dict[str, str]) -> str:
+    if "task" in task_config:
+        return task_config["task"]
+    if "dataset_name" in task_config:
+        return "{dataset_path}_{dataset_name}".format(**task_config)
+    else:
+        return "{dataset_path}".format(**task_config)
+
+
+def get_task_name_from_object(task_object):
+    if hasattr(task_object, "config"):
+        return task_object._config["task"]
+
+    # TODO: scrap this
+    # this gives a mechanism for non-registered tasks to have a custom name anyways when reporting
+    return (
+        task_object.EVAL_HARNESS_NAME
+        if hasattr(task_object, "EVAL_HARNESS_NAME")
+        else type(task_object).__name__
+    )
+
+
+def get_task_dict(
+    task_name_list: Union[str, List[Union[str, Dict, Task]]],
+    task_manager: Optional[TaskManager] = None,
+):
+    """Creates a dictionary of task objects from a task name, a task config, or a prepared Task object.
+
+    :param task_name_list: List[Union[str, Dict, Task]]
+        Task names, task config dicts, or prepared Task objects to load
+    :param task_manager: TaskManager = None
+        A TaskManager object that stores indexed tasks. If not set,
+        one will be created. This should be set by the user
+        if additional paths need to be included
+        via `include_path`
+
+    :return
+        Dictionary of task objects
+    """
+    task_name_from_string_dict = {}
+    task_name_from_config_dict = {}
+    task_name_from_object_dict = {}
+
+    if isinstance(task_name_list, str):
+        task_name_list = [task_name_list]
+    elif isinstance(task_name_list, list):
+        if not all([isinstance(task, (str, dict, Task)) for task in task_name_list]):
+            raise TypeError(
+                "Expected all list items to be of types 'str', 'dict', or 'Task', but at least one entry did not match."
+            )
+    else:
+        raise TypeError(
+            f"Expected a 'str' or 'list' but received {type(task_name_list)}."
+        )
+
+    string_task_name_list = [task for task in task_name_list if isinstance(task, str)]
+    others_task_name_list = [task for task in task_name_list if not isinstance(task, str)]
+    if len(string_task_name_list) > 0:
+        if task_manager is None:
+            task_manager = TaskManager()
+
+        task_name_from_string_dict = task_manager.load_task_or_group(
+            string_task_name_list
+        )
+
+    for task_element in others_task_name_list:
+        if isinstance(task_element, dict):
+            task_name_from_config_dict = {
+                **task_name_from_config_dict,
+                **task_manager.load_config(config=task_element),
+            }
+
+        elif isinstance(task_element, Task):
+            task_name_from_object_dict = {
+                **task_name_from_object_dict,
+                get_task_name_from_object(task_element): task_element,
+            }
+
+    if not set(task_name_from_string_dict.keys()).isdisjoint(
+        set(task_name_from_object_dict.keys())
+    ):
+        raise ValueError
+
+    return {
+        **task_name_from_string_dict,
+        **task_name_from_config_dict,
+        **task_name_from_object_dict,
+    }
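For context, `get_task_dict` above is the entry point the evaluator uses to turn task names, config dicts, and prepared `Task` objects into a single dictionary of task objects. A minimal usage sketch, not part of this diff: the task names are ones added in this commit, and the inline `num_fewshot` override is only an illustration.

# Sketch only: resolve the newly added Indic tasks via TaskManager / get_task_dict.
from lm_eval.tasks import TaskManager, get_task_dict

# Index the built-in task directory; pass include_path=... to index extra YAML dirs.
task_manager = TaskManager()

# get_task_dict accepts plain task names, inline config dicts, and Task objects.
task_dict = get_task_dict(
    ["indic_arc_easy_hi", {"task": "indic_hellaswag_gu", "num_fewshot": 0}],
    task_manager=task_manager,
)
print(sorted(task_dict.keys()))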
lm-evaluation-harness/lm_eval/tasks/__pycache__/__init__.cpython-310.pyc
ADDED
Binary file (11.6 kB).
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (1.88 kB).
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: [LANG]
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_[LANG]
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_common_yaml
ADDED
@@ -0,0 +1,20 @@
+# This file will be included in the generated language-specific task configs.
+# It doesn't have a yaml file extension as it is not meant to be imported directly
+# by the harness.
+group: Cognitive-Lab/Indic-ARC-Easy
+dataset_path: Cognitive-Lab/Indic-ARC-Easy
+
+output_type: multiple_choice
+#training_split: train
+#validation_split: validation
+test_split: test
+
+doc_to_target: label
+doc_to_choice: !function utils.doc_to_choice
+
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
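The `[LANG]` placeholders in `indic_arc_easy.yaml` above, together with the comment in the common file, indicate that the per-language configs that follow are stamped out from that template. A hypothetical generator, not part of this commit (the script itself and the language list are assumptions), could look like:

# Hypothetical helper, not included in this commit: generate the per-language
# YAMLs below by substituting [LANG] in the template config.
from pathlib import Path

TEMPLATE = Path("indic_arc_easy.yaml").read_text(encoding="utf-8")
LANGS = ["gu", "hi", "kn", "ml", "mr", "ta", "te"]  # languages added in this commit

for lang in LANGS:
    Path(f"indic_arc_easy_{lang}.yaml").write_text(
        TEMPLATE.replace("[LANG]", lang), encoding="utf-8"
    )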
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_gu.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: gu
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_gu
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_hi.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: hi
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_hi
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_kn.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: kn
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_kn
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_ml.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: ml
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_ml
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_mr.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: mr
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_mr
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_ta.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: ta
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_ta
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/indic_arc_easy_te.yaml
ADDED
@@ -0,0 +1,9 @@
+dataset_name: te
+include: indic_arc_easy_common_yaml
+doc_to_text: "Question: {{translated_question}}\nAnswer:"
+doc_to_target: "{{translated_choices.label.index(answerKey)}}"
+doc_to_choice: "{{translated_choices.text}}"
+should_decontaminate: true
+doc_to_decontamination_query: "Question: {{translated_question}}\nAnswer:"
+
+task: indic_arc_easy_te
lm-evaluation-harness/lm_eval/tasks/indic_arc_easy/utils.py
ADDED
@@ -0,0 +1,136 @@
+from functools import partial
+
+
+def convert_choice(choice):
+    return choice
+
+
+def doc_to_text(doc, connector):
+    # Drop the period
+    conn = connector[doc["question"]]
+    return doc["premise"].strip()[:-1] + f" {conn}"
+
+
+def doc_to_choice(doc):
+    return [convert_choice(doc["choice1"]), convert_choice(doc["choice2"])]
+
+
+doc_to_text_hi = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_mr = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_as = partial(
+    doc_to_text,
+    connector={
+        "cause": "কাৰণ",
+        "effect": "প্ৰভাৱ",
+    },
+)
+
+doc_to_text_bn = partial(
+    doc_to_text,
+    connector={
+        "cause": "কারণ",
+        "effect": "প্রভাব",
+    },
+)
+
+doc_to_text_gu = partial(
+    doc_to_text,
+    connector={
+        "cause": "કારણ",
+        "effect": "અસર",
+    },
+)
+
+doc_to_text_kn = partial(
+    doc_to_text,
+    connector={
+        "cause": "ಕಾರಣ",
+        "effect": "ಪರಿಣಾಮ",
+    },
+)
+
+doc_to_text_mai = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "प्रभाव",
+    },
+)
+
+doc_to_text_ml = partial(
+    doc_to_text,
+    connector={
+        "cause": "കാരണമാകുന്നു",
+        "effect": "ഫലം",
+    },
+)
+
+doc_to_text_ne = partial(
+    doc_to_text,
+    connector={
+        "cause": "कारण",
+        "effect": "असर",
+    },
+)
+
+doc_to_text_or = partial(
+    doc_to_text,
+    connector={
+        "cause": "କାରଣ",
+        "effect": "ପ୍ରଭାବ",
+    },
+)
+
+doc_to_text_sa = partial(
+    doc_to_text,
+    connector={
+        "cause": "निमित्तम्",
+        "effect": "परिणाम",
+    },
+)
+
+doc_to_text_sd = partial(
+    doc_to_text,
+    connector={
+        "cause": "سبب",
+        "effect": "اثر",
+    },
+)
+
+doc_to_text_ta = partial(
+    doc_to_text,
+    connector={
+        "cause": "காரணம்",
+        "effect": "விளைவு",
+    },
+)
+
+doc_to_text_te = partial(
+    doc_to_text,
+    connector={
+        "cause": "కారణం",
+        "effect": "ప్రభావం",
+    },
+)
+
+doc_to_text_ur = partial(
+    doc_to_text,
+    connector={
+        "cause": "وجہ",
+        "effect": "اثر",
+    },
+)
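The `doc_to_text_<lang>` partials above follow the COPA convention: `doc["question"]` is either `"cause"` or `"effect"`, the premise's trailing danda/period is dropped, and the language-specific connective is appended. A quick sanity check, illustrative only: the sample doc is fabricated, and the import assumes the task folder is importable as a module, whereas the harness itself resolves `utils.py` through the `!function` mechanism.

# Illustrative only: exercise the Hindi connector on a fabricated COPA-style doc.
from lm_eval.tasks.indic_arc_easy.utils import doc_to_choice, doc_to_text_hi

doc = {
    "premise": "उसने छाता खोला।",
    "question": "cause",
    "choice1": "बारिश होने लगी।",
    "choice2": "धूप निकल आई।",
}
print(doc_to_text_hi(doc))  # premise minus its final '।', plus " कारण"
print(doc_to_choice(doc))   # the two candidate continuations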
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/__pycache__/utils.cpython-310.pyc
ADDED
Binary file (1.12 kB).
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag.yaml
ADDED
@@ -0,0 +1,3 @@
+dataset_name: [LANG]
+include: indic_hellaswag_common_yaml
+task: indic_hellaswag_[LANG]
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_common_yaml
ADDED
@@ -0,0 +1,22 @@
+# This file will be included in the generated language-specific task configs.
+# It doesn't have a yaml file extension as it is not meant to be imported directly
+# by the harness.
+group: Cognitive-Lab/Indic-Hellaswag
+dataset_path: Cognitive-Lab/Indic-Hellaswag
+
+output_type: multiple_choice
+#training_split: train
+validation_split: validation
+test_split: null
+
+process_docs: !function utils.process_docs
+doc_to_text: "{{query}}"
+doc_to_target: "{{label}}"
+doc_to_choice: "{{choices}}"
+
+metric_list:
+  - metric: acc
+    aggregation: mean
+    higher_is_better: true
+metadata:
+  version: 1.0
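The config above relies on `process_docs: !function utils.process_docs` to map raw dataset rows into the `query` / `choices` / `label` fields referenced by its Jinja templates. The actual `utils.py` for indic_hellaswag is added elsewhere in this commit; as context, a HellaSwag-style `process_docs` in this harness typically looks like the sketch below, where the field names `activity_label`, `ctx_a`, `ctx_b`, and `endings` follow the original HellaSwag schema and are assumptions here, since the translated dataset may rename them.

# Sketch only, not the file added in this commit: a typical HellaSwag-style
# process_docs that builds the query/choices/label fields used by the config.
import re

import datasets


def preprocess(text: str) -> str:
    text = text.strip()
    text = text.replace(" [title]", ". ")
    text = re.sub(r"\[.*?\]", "", text)
    return text.replace("  ", " ")


def process_docs(dataset: datasets.Dataset) -> datasets.Dataset:
    def _process_doc(doc):
        ctx = doc["ctx_a"] + " " + doc["ctx_b"].capitalize()
        return {
            "query": preprocess(doc["activity_label"] + ": " + ctx),
            "choices": [preprocess(ending) for ending in doc["endings"]],
            "label": int(doc["label"]),
        }

    return dataset.map(_process_doc)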
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_gu.yaml
ADDED
@@ -0,0 +1,3 @@
+dataset_name: gu
+include: indic_hellaswag_common_yaml
+task: indic_hellaswag_gu
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_hi.yaml
ADDED
@@ -0,0 +1,3 @@
+dataset_name: hi
+include: indic_hellaswag_common_yaml
+task: indic_hellaswag_hi
lm-evaluation-harness/lm_eval/tasks/indic_hellaswag/indic_hellaswag_kn.yaml
ADDED
@@ -0,0 +1,3 @@
+dataset_name: kn
+include: indic_hellaswag_common_yaml
+task: indic_hellaswag_kn