Add files using upload-large-folder tool
This view is limited to 50 files because the commit contains too many changes.
- .editorconfig +21 -0
- CONTRIBUTING.md +111 -0
- README.md +93 -0
- bigscience/__init__.py +5 -0
- bigscience/bigscience.py +1 -0
- evaluation/README.md +7 -0
- evaluation/generation/generate.py +67 -0
- evaluation/results/tr1/Tr1-13B-harness-eval.json +165 -0
- evaluation/results/tr11/bloom1b3/bslmevalfiles/concat.py +103 -0
- evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-13-19-23-37.json +701 -0
- evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-10-03-25.json +2169 -0
- evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-12-00-55.json +1255 -0
- evaluation/results/tr11/bloom2b5/bslmeval.json +0 -0
- evaluation/results/tr11/bloom2b5/bslmevalfiles/concat.py +103 -0
- evaluation/results/tr11/bloom2b5/bslmevalfiles/tr11c-2b5-ml-bsevalharness-results_lm-eval_global_step337250_2022-07-12-23-12-44.json +0 -0
- evaluation/results/tr11/bloom2b5/bslmevalfiles/tr11c-2b5-ml-evalharness-results_lm-eval_global_step337250_2022-07-13-09-55-04.json +172 -0
- evaluation/results/tr11/bloom2b5/humaneval_temp02.json +1 -0
- evaluation/results/tr11/bloom2b5/humaneval_temp06.json +1 -0
- evaluation/results/tr11/bloom2b5/humaneval_temp08.json +1 -0
- evaluation/results/tr11/bloom2b5/mdmeta.txt +1540 -0
- evaluation/results/tr11/bloom2b5/mdtable.txt +143 -0
- evaluation/results/tr11/conversion/json_to_markdown.py +307 -0
- evaluation/results/tr11/opt/bslmeval.json +0 -0
- evaluation/results/tr11/opt/humaneval_temp06.json +1 -0
- evaluation/results/tr11/scripts/download_bsevalharness.py +21 -0
- evaluation/results/tr11/scripts/run_bsevalharness_generation_6b3.slurm +101 -0
- evaluation/results/tr11/scripts/run_bsevalharness_tr11-176b-ml.slurm +122 -0
- evaluation/results/tr11/scripts/run_bsevalharness_tr11b-1b3-ml.slurm +122 -0
- evaluation/results/tr11/scripts/run_bsevalharness_tr11d-750m-ml.slurm +120 -0
- evaluation/results/tr11/scripts/run_trevalharness_176b.slurm +60 -0
- evaluation/results/tr12/tr12a-1B3-oscar-en-filtered_agg.json +0 -0
- evaluation/results/tr12/tr12b-1B3-oscar-en-filtered-dedup_agg.json +0 -0
- evaluation/results/tr13/merge_all_json.py +97 -0
- evaluation/results/tr13/plot_results.py +230 -0
- evaluation/results/tr13/results_to_csv.py +72 -0
- evaluation/results/tr13/tzeroeval/evaluate_t0_v100.slurm +751 -0
- evaluation/results/tr3/README.md +1 -0
- evaluation/results/tr3/plot_task_solve_graph.py +133 -0
- evaluation/results/tr3/switch_tokenizer_to_t5_for_tr3e.sh +6 -0
- evaluation/results/tr3/tr3e-1B3-c4-checkpoints_agg.json +3084 -0
- evaluation/results/tr3/tr3m-1B3-pile-checkpoints_agg.json +0 -0
- evaluation/utilities/convert_results_to_json.py +111 -0
- evaluation/utilities/download_all_models.py +47 -0
- evaluation/utilities/download_all_models.slurm +26 -0
- evaluation/utilities/export_results_through_training_to_wandb.py +86 -0
- evaluation/utilities/find_checkpoints_at_token_intervals.py +27 -0
- evaluation/utilities/plot_all_eval.py +45 -0
- jz/.gitignore +133 -0
- jz/.gitmodules +3 -0
- jz/README.md +27 -0
.editorconfig
ADDED
@@ -0,0 +1,21 @@
# http://editorconfig.org

root = true

[*]
indent_style = space
indent_size = 4
trim_trailing_whitespace = true
insert_final_newline = true
charset = utf-8
end_of_line = lf

[*.bat]
indent_style = tab
end_of_line = crlf

[LICENSE]
insert_final_newline = false

[Makefile]
indent_style = tab
CONTRIBUTING.md
ADDED
@@ -0,0 +1,111 @@
# Contributing

This is a community project and contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.

If you are inspired to contribute please see the following entries:

Megatron-DeepSpeed:

- [Megatron-DeepSpeed Issues](https://github.com/bigscience-workshop/Megatron-DeepSpeed/issues)
- [Good First Issues](https://github.com/bigscience-workshop/Megatron-DeepSpeed/contribute)

General BigScience:

- [bigscience Issues](https://github.com/bigscience-workshop/bigscience/issues)
- [Good First Issues](https://github.com/bigscience-workshop/bigscience/contribute)


### Report Bugs

Report bugs at <https://github.com/bigscience-workshop/bigscience/issues>.

If you are reporting a bug, please include:

- Your operating system name and version.
- Any details about your local setup that might be helpful in troubleshooting.
- Detailed steps to reproduce the bug.

### Fix Bugs

Look through the GitHub issues for bugs. Anything tagged with "bug" and "help wanted" is open to whoever wants to implement it.

### Implement Features

Look through the GitHub issues for features. Anything tagged with "enhancement" and "help wanted" is open to whoever wants to implement it.

### Write Documentation

Big Science could always use more documentation, whether as part of the official Big Science docs, in docstrings, or even on the web in blog posts, articles, and such.

### Submit Feedback

The best way to send feedback is to file an issue at <https://github.com/bigscience-workshop/bigscience/issues>.

If you are proposing a feature:

- Explain in detail how it would work.
- Keep the scope as narrow as possible, to make it easier to implement.
- Remember that this is a volunteer-driven project, and that contributions are welcome :)

Get Started!
------------

Ready to contribute? Here's how to set up bigscience for local development.

1. Fork the bigscience repo on GitHub.
2. Clone your fork locally:

       $ git clone git@github.com:your_name_here/bigscience.git

3. Install your local copy into a virtualenv. Assuming you have virtualenvwrapper installed, this is how you set up your fork for local development:
```
$ mkvirtualenv bigscience
$ cd bigscience/
$ python setup.py develop
```
4. Create a branch for local development:
```
$ git checkout -b name-of-your-bugfix-or-feature
```
   Now you can make your changes locally.

5. When you're done making changes, check that your changes pass flake8 and the tests, including testing other Python versions with tox:
```
$ flake8 bigscience tests
$ python setup.py test or pytest
$ tox
```
   To get flake8 and tox, just pip install them into your virtualenv.

6. Commit your changes and push your branch to GitHub:
```
$ git add .
$ git commit -m "Your detailed description of your changes."
$ git push origin name-of-your-bugfix-or-feature
```
7. Submit a pull request through the GitHub website.

Pull Request Guidelines
-----------------------

Before you submit a pull request, check that it meets these guidelines:

1. The pull request should include tests.
2. If the pull request adds functionality, the docs should be updated. Put your new functionality into a function with a docstring, and add the feature to the list in README.rst.
README.md
ADDED
@@ -0,0 +1,93 @@
# bigscience

[Research workshop on large language models - The Summer of Language Models 21](https://bigscience.huggingface.co/)

At the moment we have 2 code repos:

1. https://github.com/bigscience-workshop/Megatron-DeepSpeed - this is our flagship code base
2. https://github.com/bigscience-workshop/bigscience - (this repo) for everything else - docs, experiments, etc.

Currently, the most active segments of this repo are:

- [JZ](./jz/) - Lots of information about our work environment which helps evaluate, plan and get things done
- [Experiments](./experiments) - many experiments are being done. Documentation, result tables, scripts and logs are all there
- [Datasets info](./data/)
- [Train](./train) - all the information about the current trainings (see below for the most important ones)

We have READMEs for specific aspects, such as:
- [hub integration](./tools/README.md)


## Trainings

While we keep detailed chronicles of experiments and findings for some of the main trainings, here is a doc that contains a summary of the most important findings: [Lessons learned](train/lessons-learned.md)


### Train 1 - 13B - unmodified Megatron gpt2 - baseline

* [the full spec and discussions](./train/tr1-13B-base)
* [the training script](./train/tr1-13B-base/tr1-13B-round1.slurm)
* checkpoints and logs:
  - [tensorboard](https://huggingface.co/bigscience/tr1-13B-tensorboard/tensorboard)
  - [logs](https://huggingface.co/bigscience/tr1-13B-logs/)
* [chronicles](./train/tr1-13B-base/chronicles.md)

You can watch the training logs live by running this `tail -f`-like script over the remote log file that gets synced to the hub once an hour:
```
perl -e '$u=shift; $b=0; while(1){($e)=qx[curl -sI $u]=~/content-length: (\d+)/; \
print qx[curl -sr $b-$e -L $u] if $e>$b; $b=$e; sleep 300}' \
https://huggingface.co/bigscience/tr1-13B-logs/resolve/main/main_log.txt
```

### Train 3

Architecture and scaling baseline runs: no fancy tricks, just GPT2. Here are links to the respective tensorboards:

| Size                | 1B3 | 760M | 350M | 125M |
|---------------------|-----|------|------|------|
| C4 + low warmup     | [a](https://huggingface.co/bigscience/tr3-1B3-modeling-baseline-tensorboard) | [b](https://huggingface.co/bigscience/tr3b-760M-modeling-baseline-tensorboard) | [c](https://huggingface.co/bigscience/tr3c-350M-modeling-baseline-tensorboard) | |
| OSCAR + low warmup  | [f](https://huggingface.co/bigscience/tr3f-1B3-diagnostic2-low-warmup-oscar-tensorboard) | | | |
| C4 + high warmup    | [e](https://huggingface.co/bigscience/tr3e-1B3-diagnostic1-warmup-c4-tensorboard) | | | |
| OSCAR + high warmup | **[d (current baseline)](https://huggingface.co/bigscience/tr3d-1B3-more-warmup-tensorboard)** | [g](https://huggingface.co/bigscience/tr3g-760M-v2-tensorboard) | [h](https://huggingface.co/bigscience/tr3h-350M-v2-tensorboard) | [i](https://huggingface.co/bigscience/tr3i-125M-v2-tensorboard) |
| Pile + high warmup  | [m](https://huggingface.co/bigscience/tr3m-1B3-pile-tensorboard) | [j](https://huggingface.co/bigscience/tr3j-760M-pile-tensorboard) | [k](https://huggingface.co/bigscience/tr3k-350M-pile-tensorboard) | [l](https://huggingface.co/bigscience/tr3l-125M-pile-tensorboard) |


### Train 8

104B - unmodified Megatron gpt2 - with extra-wide hidden size to learn how to deal with training instabilities

* [the full spec and discussions](./train/tr8-104B-wide)
* [the training script](./train/tr8-104B-wide/tr8-104B.slurm)
* checkpoints and logs:
  - [tensorboard](https://huggingface.co/bigscience/tr8-104B-logs/tensorboard)
  - [logs](https://huggingface.co/bigscience/tr8-104B-logs/tree/main/logs)
* [chronicles](./train/tr8-104B-wide/chronicles.md)

You can watch the training logs live by running this `tail -f`-like script over the remote log file that gets synced to the hub once an hour:
```
perl -e '$u=shift; $b=0; while(1){($e)=qx[curl -sI $u]=~/content-length: (\d+)/; \
print qx[curl -sr $b-$e -L $u] if $e>$b; $b=$e; sleep 300}' \
https://cdn-lfs.huggingface.co/bigscience/tr8-104B-logs/b2cc478d5ae7c9ec937ea2db1d2fe09de593fa2ec38c171d6cc5dca094cd79f9
```

### Train 11

**This is the current main training**

tr11-176B-ml

* [the full spec and discussions](./train/tr11-176B-ml/)
* [the training script](./train/tr11-176B-ml/tr11-176B-ml.slurm)
* checkpoints and logs:
  - [tensorboard](https://huggingface.co/bigscience/tr11-176B-ml-logs/tensorboard)
  - [logs](https://huggingface.co/bigscience/tr11-176B-ml-logs/tree/main/logs/main)
* [chronicles-prequel](./train/tr11-176B-ml/chronicles-prequel.md)
* [chronicles](./train/tr11-176B-ml/chronicles.md)

You can watch the training logs live by running this `tail -f`-like script over the remote log file that gets synced to the hub once an hour:
```
perl -e '$u=shift; $b=0; while(1){($e)=qx[curl -LsI $u]=~/2 200.*?content-length: (\d+)/s; \
print qx[curl -Lsr $b-$e $u] if $e>$b; $b=$e; sleep 300}' \
https://huggingface.co/bigscience/tr11-176B-ml-logs/resolve/main/logs/main/main_log.txt
```
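For readers who prefer Python over perl, here is a minimal sketch of the same technique the one-liners above use: poll the remote file's `content-length` with a HEAD request, then fetch only the newly appended byte range. It assumes the `requests` package is installed; the URL is the tr11 log file from above.
```
# Minimal Python sketch of the perl one-liner above: follow a remote log file
# by polling its size and downloading only the bytes appended since last poll.
import time
import requests  # assumed installed: pip install requests

URL = "https://huggingface.co/bigscience/tr11-176B-ml-logs/resolve/main/logs/main/main_log.txt"

seen = 0  # bytes already printed
while True:
    head = requests.head(URL, allow_redirects=True)
    size = int(head.headers.get("content-length", 0))
    if size > seen:
        # HTTP Range request: fetch only the new tail of the file
        new = requests.get(URL, headers={"Range": f"bytes={seen}-{size}"}, allow_redirects=True)
        print(new.text, end="", flush=True)
        seen = size
    time.sleep(300)  # the hub copy is synced roughly once an hour
```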
bigscience/__init__.py
ADDED
@@ -0,0 +1,5 @@
"""Top-level package for Big Science."""

__author__ = """Stas Bekman"""
__email__ = '[email protected]'
__version__ = '0.1.0'
bigscience/bigscience.py
ADDED
@@ -0,0 +1 @@
"""Main module."""
evaluation/README.md
ADDED
@@ -0,0 +1,7 @@
# Evaluation

This folder contains scripts and results for intermediate evaluation, mostly based on zero-shot prompting performance. Most are performed with EleutherAI's [LM eval harness](https://github.com/EleutherAI/lm-evaluation-harness).

Evaluated models:
- BLOOM (tr11 / the `bigscience/bloom` model in 176B / 6B3 / 2B5 / 1B3 / 750M / 350M variants)
- [13B](https://github.com/bigscience-workshop/bigscience/blob/master/evaluation/Tr1-13B-harness-eval.json)
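For orientation, here is a rough sketch of producing such numbers with the harness's Python API. The `simple_evaluate` entry point, the `hf-causal` model type, and the checkpoint name are assumptions tied to harness releases of that era; the exact names vary between versions.
```
# Rough sketch of a zero-shot run with EleutherAI's lm-evaluation-harness.
# Assumes a 2022-era release; `simple_evaluate`, the "hf-causal" model type
# and the checkpoint name may differ in other versions of the harness.
import json
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model="hf-causal",                             # HF causal-LM backend (assumed name)
    model_args="pretrained=bigscience/bloom-1b3",  # hypothetical checkpoint
    tasks=["lambada", "arc_easy", "piqa"],         # tasks as seen in the JSON files here
    num_fewshot=0,                                 # zero-shot, as described above
)
print(json.dumps(results["results"], indent=2))
```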
evaluation/generation/generate.py
ADDED
@@ -0,0 +1,67 @@
import argparse
import datetime

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--checkpoint", type=str, help="Checkpoint path", required=True)
    parser.add_argument("--max-memory-per-gpu", type=str, help="Defines maximum memory allocated to gpu", required=True)
    parser.add_argument("--global-step", type=str, default=None)
    parser.add_argument("--generate-max-length", type=int, default=50, help="max generation length")
    parser.add_argument("--greedy", action="store_true")
    parser.add_argument("--top-k", type=int, default=0)
    parser.add_argument("--top-p", type=float, default=0.)
    parser.add_argument("--offload_folder", type=str, help="offload folder for accelerate", default="./offload")

    return parser.parse_args()

def get_gpus_max_memory(max_memory):
    # cap every visible GPU at the same memory budget, e.g. {0: "50GB", 1: "50GB", ...}
    max_memory = {i: max_memory for i in range(torch.cuda.device_count())}
    return max_memory

def generate_from_text(model, text, tokenizer, max_length=200, greedy=False, top_k=0, top_p=0.):
    input_ids = tokenizer.encode(text, return_tensors='pt').to("cuda:0")
    max_length = input_ids.size(-1) + max_length

    greedy_output = model.generate(
        input_ids.to('cuda:0'),
        max_length=max_length,
        do_sample=not greedy,
        top_k=None if greedy else top_k,
        top_p=None if greedy else top_p
    )
    return tokenizer.decode(greedy_output[0], skip_special_tokens=True)

def main():
    args = get_args()
    print("Loading model")

    tokenizer = AutoTokenizer.from_pretrained(args.checkpoint, padding_side="left")

    print("Loaded tokenizer!")
    start = datetime.datetime.now()
    model = AutoModelForCausalLM.from_pretrained(
        args.checkpoint,
        device_map="auto",
        max_memory=get_gpus_max_memory(args.max_memory_per_gpu),
        torch_dtype=torch.bfloat16,
        revision="gs{}".format(args.global_step) if args.global_step else None,
        offload_folder=args.offload_folder,
    )
    print(f"Loaded model in {datetime.datetime.now() - start}")

    texts = []
    while True:
        try:
            dummy = input('''Enter the paragraph (press Enter to add a new input line, Ctrl-C to start generating from the prompt):''')
            texts.append(dummy)
        except KeyboardInterrupt:
            text = "\n".join(texts)
            output = generate_from_text(model, text, tokenizer, max_length=args.generate_max_length, greedy=args.greedy, top_k=args.top_k, top_p=args.top_p)
            print(output)
            texts = []

if __name__ == "__main__":
    main()
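In short, the script shards the checkpoint across all visible GPUs with `device_map="auto"`, capping each GPU at `--max-memory-per-gpu` (a string such as `"50GB"`), optionally loading an intermediate checkpoint via the `gs<step>` revision, and then reads prompt lines interactively until Ctrl-C triggers generation. A hypothetical invocation, with illustrative flag values only: `python generate.py --checkpoint bigscience/bloom --max-memory-per-gpu 50GB --greedy`.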
evaluation/results/tr1/Tr1-13B-harness-eval.json
ADDED
@@ -0,0 +1,165 @@
{
  "results": {
    "lambada": {
      "ppl": 5.020137688328123,
      "ppl_stderr": 0.11575351197990837,
      "acc": 0.634193673588201,
      "acc_stderr": 0.006710403442216892
    },
    "winogrande": {
      "acc": 0.6471981057616417,
      "acc_stderr": 0.013429728101788954
    },
    "hellaswag": {
      "acc": 0.5416251742680741,
      "acc_stderr": 0.004972460206842306,
      "acc_norm": 0.7162915753833897,
      "acc_norm_stderr": 0.004498757194493409
    },
    "piqa": {
      "acc": 0.7769314472252449,
      "acc_stderr": 0.009713057213018522,
      "acc_norm": 0.7878128400435256,
      "acc_norm_stderr": 0.009539299828174046
    },
    "cola": {
      "mcc": 0.05586916675965605,
      "mcc_stderr": 0.034250689348891604
    },
    "mnli": {
      "acc": 0.3959246051961284,
      "acc_stderr": 0.004936609703575665
    },
    "mnli_mismatched": {
      "acc": 0.3984947111472742,
      "acc_stderr": 0.004937784794740595
    },
    "mrpc": {
      "acc": 0.6764705882352942,
      "acc_stderr": 0.023189113109403536,
      "f1": 0.8058823529411765,
      "f1_stderr": 0.016598529068410604
    },
    "rte": {
      "acc": 0.5234657039711191,
      "acc_stderr": 0.03006330041190266
    },
    "qnli": {
      "acc": 0.5171151382024529,
      "acc_stderr": 0.006761445834294947
    },
    "qqp": {
      "acc": 0.36772198862231015,
      "acc_stderr": 0.0023981002797098354,
      "f1": 0.532523819102829,
      "f1_stderr": 0.0025759259415034795
    },
    "sst": {
      "acc": 0.5137614678899083,
      "acc_stderr": 0.01693543564494107
    },
    "wnli": {
      "acc": 0.18309859154929578,
      "acc_stderr": 0.046225147349214284
    },
    "boolq": {
      "acc": 0.5868501529051988,
      "acc_stderr": 0.008612117547803569
    },
    "copa": {
      "acc": 0.88,
      "acc_stderr": 0.03265986323710906
    },
    "multirc": {
      "acc": 0.017838405036726127,
      "acc_stderr": 0.00428993794671089
    },
    "record": {
      "f1": 0.885354285714286,
      "f1_stderr": 0.00314773987203575,
      "em": 0.8783,
      "em_stderr": 0.003269553486028481
    },
    "wic": {
      "acc": 0.49843260188087773,
      "acc_stderr": 0.019810623954060382
    },
    "wsc": {
      "acc": 0.5,
      "acc_stderr": 0.04926646390821466
    },
    "prost": {
      "acc": 0.28047608881298036,
      "acc_stderr": 0.003282038627279345,
      "acc_norm": 0.2830380017079419,
      "acc_norm_stderr": 0.003291119066155946
    },
    "mc_taco": {
      "em": 0.12612612612612611,
      "f1": 0.4965489467730623
    },
    "pubmedqa": {
      "acc": 0.615,
      "acc_stderr": 0.015395194445410805
    },
    "sciq": {
      "acc": 0.895,
      "acc_stderr": 0.009698921026024957,
      "acc_norm": 0.815,
      "acc_norm_stderr": 0.012285191326386693
    },
    "triviaqa": {
      "acc": 0.13294440024750287,
      "acc_stderr": 0.0031921904944669202
    },
    "arc_easy": {
      "acc": 0.6813973063973064,
      "acc_stderr": 0.009560775507673364,
      "acc_norm": 0.6001683501683501,
      "acc_norm_stderr": 0.010051788039412911
    },
    "arc_challenge": {
      "acc": 0.3216723549488055,
      "acc_stderr": 0.013650488084494164,
      "acc_norm": 0.34215017064846415,
      "acc_norm_stderr": 0.013864152159177275
    },
    "logiqa": {
      "acc": 0.23195084485407066,
      "acc_stderr": 0.0165552524979259,
      "acc_norm": 0.2749615975422427,
      "acc_norm_stderr": 0.01751297178222522
    },
    "openbookqa": {
      "acc": 0.294,
      "acc_stderr": 0.020395095484936603,
      "acc_norm": 0.412,
      "acc_norm_stderr": 0.022033677993740865
    },
    "race": {
      "acc": 0.3741626794258373,
      "acc_stderr": 0.014976513181619648
    },
    "headqa": {
      "acc": 0.22283005105762219,
      "acc_stderr": 0.007948594863726302,
      "acc_norm": 0.26258205689277897,
      "acc_norm_stderr": 0.00840494460823324
    },
    "mathqa": {
      "acc": 0.2375209380234506,
      "acc_stderr": 0.0077905030438074,
      "acc_norm": 0.23450586264656617,
      "acc_norm_stderr": 0.007756188894243557
    },
    "webqs": {
      "acc": 0.0265748031496063,
      "acc_stderr": 0.003568875174120304
    },
    "wikitext": {
      "word_perplexity": 12.921754196505068,
      "byte_perplexity": 1.6136995247803747,
      "bits_per_byte": 0.4785293844744369
    }
  }
}
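A quick way to skim a results file like the one above is to flatten it into a small table; here is a minimal standard-library sketch (the path matches this repo's layout):
```
# Print the headline metrics per task from a harness results file
# such as Tr1-13B-harness-eval.json (standard library only).
import json

with open("evaluation/results/tr1/Tr1-13B-harness-eval.json") as f:
    data = json.load(f)

for task, metrics in sorted(data["results"].items()):
    # skip the *_stderr entries and show only the point estimates
    shown = {k: v for k, v in metrics.items() if not k.endswith("_stderr")}
    print(f"{task:15s} " + "  ".join(f"{k}={v:.4f}" for k, v in shown.items()))
```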
evaluation/results/tr11/bloom1b3/bslmevalfiles/concat.py
ADDED
@@ -0,0 +1,103 @@
import argparse
import json
import re
from pathlib import Path
from re import Pattern
from typing import List, Dict


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--results-dir", required=True, type=Path, help="Path to the list of results")
    parser.add_argument("--concatenate-output-file", required=True, type=Path, help="Path to store the final output file")
    return parser.parse_args()

MODEL = "tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500"
# MODEL = "global_step95000"
# NOTE: regex quantifier braces must be doubled inside an f-string
RESULTS_REGEX = re.compile(rf"(eai|bs)_results_lm-eval_{MODEL}_(\d{{4}}-\d{{2}}-\d{{2}}-\d{{2}}-\d{{2}}-\d{{2}})_backup\.json")
RESULTS_REGEX = re.compile(rf"{MODEL}_*.json")
#tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-10-03-25.json
def get_all_files_that_match_results_in_folder(root_folder: Path) -> List[Path]:
    json_files = []
    for folder in root_folder.iterdir():
        if folder.is_dir():
            json_files += get_all_files_that_match_results_in_folder(folder)
        else:
            # it's actually a file
            file = folder

            #match = RESULTS_REGEX.match(file.name)

            if not str(file.name).endswith("json"):
                continue
            else:
                json_files.append(file)
    return json_files

def sort_dict(dictionary: Dict) -> Dict:
    results = {}

    for key, value in sorted(dictionary.items()):
        new_value = value

        if isinstance(value, dict):
            new_value = sort_dict(new_value)
        elif isinstance(value, list):
            new_value = sorted(value)

        results[key] = new_value

    return results

def main():
    args = get_args()

    # Get all json files
    json_files = get_all_files_that_match_results_in_folder(args.results_dir)
    print("GOT", json_files)
    # Merge all json files
    final_result = {
        "results": {},
        "versions": {}
    }
    for file in json_files:
        with open(file, "r") as fi:
            task_result = json.load(fi)

        #match = RESULTS_REGEX.match(file.name)
        #assert match is not None
        prefix = "bs" if "bs" in file.name else "eai"  #match.group(1)
        datetime_string = file.name[file.name.index("global_step340500_") + len("global_step340500_"):].replace(".json", "")  #match.group(2)

        if prefix == "eai":
            results_key = "results"
        elif prefix == "bs":
            results_key = "table_results"
        else:
            raise ValueError(f"Unsupported key: {prefix}")

        for key, value in task_result[results_key].items():
            if key not in final_result["results"]:
                final_result["results"][key] = {
                    datetime_string: value
                }
            #else:
            #    assert datetime_string not in final_result["results"][key]
            #    final_result["results"][key][datetime_string] = value

        for key, value in task_result["versions"].items():
            final_result["versions"][key] = value

    # We sort the dict, which is better for serialization
    print(final_result)
    final_result = sort_dict(final_result)

    # Save result
    with open(args.concatenate_output_file, "w") as fo:
        json.dump(final_result, fo, indent=2)

    pass

if __name__ == "__main__":
    main()
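Note that as written the merge keeps only the first value seen per task: a result is stored under its filename-derived timestamp only when the task key is new, and the commented-out `else` branch that would accumulate additional timestamps for the same task is disabled.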
evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-13-19-23-37.json
ADDED
@@ -0,0 +1,701 @@
{
  "results": [
    {"task_name": "qqp", "prompt_name": "answer", "acc": 0.40558990848379917,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "c0182cd1-c7ac-4abe-829f-4651536af951",
     "prompt_jinja": "Can an answer to \"{{question1}}\" also be used to answer \"{{question2}}\"? ||| {{ answer_choices[label] }}",
     "prompt_original_task": false, "comment": "", "acc_stderr": 0.002441969063495092},
    {"task_name": "qqp", "prompt_name": "answer", "acc_norm": 0.36816720257234725,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "c0182cd1-c7ac-4abe-829f-4651536af951",
     "prompt_jinja": "Can an answer to \"{{question1}}\" also be used to answer \"{{question2}}\"? ||| {{ answer_choices[label] }}",
     "prompt_original_task": false, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "qqp", "prompt_name": "duplicate", "acc": 0.3788523373732377,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "fd244bd3-ca3b-4e4f-9722-fd006c50e157",
     "prompt_jinja": "I received the questions \"{{question1}}\" and \"{{question2}}\". Are they duplicates? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.002412603277723025},
    {"task_name": "qqp", "prompt_name": "duplicate", "acc_norm": 0.36816720257234725,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "fd244bd3-ca3b-4e4f-9722-fd006c50e157",
     "prompt_jinja": "I received the questions \"{{question1}}\" and \"{{question2}}\". Are they duplicates? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "qqp", "prompt_name": "duplicate or not", "acc": 0.5761315854563444,
     "fixed_answer_choice_list": ["not duplicates", "duplicates"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "94972071-a726-42a3-a726-13f414b65e67",
     "prompt_jinja": "{{question1}}\n{{question2}}\nPick one: These questions are \"{{\"duplicates\"}}\" or \"{{\"not duplicates\"}}\".\n|||\n{{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.0024577056660753426},
    {"task_name": "qqp", "prompt_name": "duplicate or not", "acc_norm": 0.6318327974276527,
     "fixed_answer_choice_list": ["not duplicates", "duplicates"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "94972071-a726-42a3-a726-13f414b65e67",
     "prompt_jinja": "{{question1}}\n{{question2}}\nPick one: These questions are \"{{\"duplicates\"}}\" or \"{{\"not duplicates\"}}\".\n|||\n{{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "qqp", "prompt_name": "meaning", "acc": 0.3681424684640119,
     "fixed_answer_choice_list": ["No", "Yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "c0724198-97e7-44a1-89d8-c51e97ce0b04",
     "prompt_jinja": "Question 1: {{question1}}\nQuestion 2: {{question2}}\n\nDo these two questions convey the same meaning? Yes or no? ||| {{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.0023986729832071916},
    {"task_name": "qqp", "prompt_name": "meaning", "acc_norm": 0.36816720257234725,
     "fixed_answer_choice_list": ["No", "Yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "c0724198-97e7-44a1-89d8-c51e97ce0b04",
     "prompt_jinja": "Question 1: {{question1}}\nQuestion 2: {{question2}}\n\nDo these two questions convey the same meaning? Yes or no? ||| {{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "qqp", "prompt_name": "quora", "acc": 0.36821667078901804,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "8e711799-a57c-4941-833b-466bedfb80ad",
     "prompt_jinja": "I'm an administrator on the website Quora. There are two posts, one that asks \"{{question1}}\" and another that asks \"{{question2}}\". I can merge questions if they are asking the same thing. Can I merge these two questions? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.0023987738450886556},
    {"task_name": "qqp", "prompt_name": "quora", "acc_norm": 0.36816720257234725,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "8e711799-a57c-4941-833b-466bedfb80ad",
     "prompt_jinja": "I'm an administrator on the website Quora. There are two posts, one that asks \"{{question1}}\" and another that asks \"{{question2}}\". I can merge questions if they are asking the same thing. Can I merge these two questions? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "qqp", "prompt_name": "same thing", "acc": 0.5099431115508286,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "a45ad5cd-a3ba-4ab2-a728-a9ea0f27102b",
     "prompt_jinja": "Are the questions \"{{question1}}\" and \"{{question2}}\" asking the same thing? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.002486208885430481},
    {"task_name": "qqp", "prompt_name": "same thing", "acc_norm": 0.36816720257234725,
     "fixed_answer_choice_list": ["no", "yes"], "dataset_path": "glue", "dataset_name": "qqp", "subset": null,
     "prompt_id": "a45ad5cd-a3ba-4ab2-a728-a9ea0f27102b",
     "prompt_jinja": "Are the questions \"{{question1}}\" and \"{{question2}}\" asking the same thing? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.002398706610614492},
    {"task_name": "rte", "prompt_name": "does the claim\u2026 follow the fact\u2026", "acc": 0.4729241877256318,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "4ee6ff27-de63-4e7b-a9d4-82a17eba407a",
     "prompt_jinja": "Does the claim \"{{sentence2}}\" follow from the fact that \"{{sentence1}}\"? Please answer either {{\"yes\"}} or {{\"no\"}}.\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.030052303463143706},
    {"task_name": "rte", "prompt_name": "does the claim\u2026 follow the fact\u2026", "acc_norm": 0.5270758122743683,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "4ee6ff27-de63-4e7b-a9d4-82a17eba407a",
     "prompt_jinja": "Does the claim \"{{sentence2}}\" follow from the fact that \"{{sentence1}}\"? Please answer either {{\"yes\"}} or {{\"no\"}}.\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.0300523034631437},
    {"task_name": "rte", "prompt_name": "entailment explained", "acc": 0.49458483754512633,
     "fixed_answer_choice_list": ["entailment", "not entailment"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "9e2b4267-ec23-44c8-b82a-107e2c890fec",
     "prompt_jinja": "We say that one sentence \"{{\"entails\"}}\" another sentence when the first sentence implies the second sentence. Consider the following two sentences:\n{{sentence1}}\n{{sentence2}}\nIs the relationship from the first to the second sentence \"{{\"entailment\"}}\" or \"{{\"not entailment\"}}\"?\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.030094698123239966},
    {"task_name": "rte", "prompt_name": "entailment explained", "acc_norm": 0.4729241877256318,
     "fixed_answer_choice_list": ["entailment", "not entailment"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "9e2b4267-ec23-44c8-b82a-107e2c890fec",
     "prompt_jinja": "We say that one sentence \"{{\"entails\"}}\" another sentence when the first sentence implies the second sentence. Consider the following two sentences:\n{{sentence1}}\n{{sentence2}}\nIs the relationship from the first to the second sentence \"{{\"entailment\"}}\" or \"{{\"not entailment\"}}\"?\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.0300523034631437},
    {"task_name": "rte", "prompt_name": "imply", "acc": 0.48375451263537905,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "c8dfc879-40f2-412d-be1e-4cd70107f6e6",
     "prompt_jinja": "Does \"{{sentence1}}\" imply that \"{{sentence2}}\"? Please answer either {{\"yes\"}} or {{\"no\"}}.\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.030080573208738064},
    {"task_name": "rte", "prompt_name": "imply", "acc_norm": 0.5270758122743683,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "c8dfc879-40f2-412d-be1e-4cd70107f6e6",
     "prompt_jinja": "Does \"{{sentence1}}\" imply that \"{{sentence2}}\"? Please answer either {{\"yes\"}} or {{\"no\"}}.\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.0300523034631437},
    {"task_name": "rte", "prompt_name": "imply separated", "acc": 0.45126353790613716,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "f56ffced-9b16-431a-8a17-501e63cddf73",
     "prompt_jinja": "{{sentence1}}\nDoes this imply\n{{sentence2}}\nPlease answer {{\"A) yes or B) no.\"}}\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.029953149241808943},
    {"task_name": "rte", "prompt_name": "imply separated", "acc_norm": 0.5270758122743683,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "f56ffced-9b16-431a-8a17-501e63cddf73",
     "prompt_jinja": "{{sentence1}}\nDoes this imply\n{{sentence2}}\nPlease answer {{\"A) yes or B) no.\"}}\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.0300523034631437},
    {"task_name": "rte", "prompt_name": "mean", "acc": 0.48014440433212996,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "03a7ae07-5ddd-46c4-92f3-2152223d44ec",
     "prompt_jinja": "{{sentence1}}\nDoes this mean that \"{{sentence2}}\" is true? {{\"A) yes or B) no.\"}}\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.030072723167317194},
    {"task_name": "rte", "prompt_name": "mean", "acc_norm": 0.5270758122743683,
     "fixed_answer_choice_list": ["yes", "no"], "dataset_path": "glue", "dataset_name": "rte", "subset": null,
     "prompt_id": "03a7ae07-5ddd-46c4-92f3-2152223d44ec",
     "prompt_jinja": "{{sentence1}}\nDoes this mean that \"{{sentence2}}\" is true? {{\"A) yes or B) no.\"}}\n|||\n{{answer_choices[label]}}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.0300523034631437},
    {"task_name": "sst", "prompt_name": "following positive negative", "acc": 0.8061926605504587,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "63c6b2be-8ecd-42ad-88c7-0d1dc1a8323a",
     "prompt_jinja": "Does the following sentence have a {{\"positive\"}} or {{\"negative\"}} sentiment?\n{{sentence}}\n|||\n{{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.013393542261521812},
    {"task_name": "sst", "prompt_name": "following positive negative", "acc_norm": 0.8061926605504587,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "63c6b2be-8ecd-42ad-88c7-0d1dc1a8323a",
     "prompt_jinja": "Does the following sentence have a {{\"positive\"}} or {{\"negative\"}} sentiment?\n{{sentence}}\n|||\n{{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.013393542261521812},
    {"task_name": "sst", "prompt_name": "happy or mad", "acc": 0.5091743119266054,
     "fixed_answer_choice_list": ["bad", "good"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "6dd74cd5-e074-4612-9e96-c17ca88c3bc4",
     "prompt_jinja": "Someone sent me an email with the sentence \"{{sentence}}\". Do you think they are feeling {{\"good\"}} or {{\"bad\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.01693900152535154},
    {"task_name": "sst", "prompt_name": "happy or mad", "acc_norm": 0.5091743119266054,
     "fixed_answer_choice_list": ["bad", "good"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "6dd74cd5-e074-4612-9e96-c17ca88c3bc4",
     "prompt_jinja": "Someone sent me an email with the sentence \"{{sentence}}\". Do you think they are feeling {{\"good\"}} or {{\"bad\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.01693900152535154},
    {"task_name": "sst", "prompt_name": "positive negative after", "acc": 0.6204128440366973,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "11d1c505-9232-4c35-82a4-4c3642843e2e",
     "prompt_jinja": "{{sentence}}\nQuestion: Was that sentence {{\"positive\"}} or {{\"negative\"}}? Answer: ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.016443227556688766},
    {"task_name": "sst", "prompt_name": "positive negative after", "acc_norm": 0.6204128440366973,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "11d1c505-9232-4c35-82a4-4c3642843e2e",
     "prompt_jinja": "{{sentence}}\nQuestion: Was that sentence {{\"positive\"}} or {{\"negative\"}}? Answer: ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.016443227556688766},
    {"task_name": "sst", "prompt_name": "review", "acc": 0.5091743119266054,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "228fcae7-7f4c-4e3c-9ac4-e49b26bc103d",
     "prompt_jinja": "I'm reading a review that says \"{{sentence}}\".\n\nDo you think the review is {{\"positive\"}} or {{\"negative\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.01693900152535154},
    {"task_name": "sst", "prompt_name": "review", "acc_norm": 0.5091743119266054,
     "fixed_answer_choice_list": ["negative", "positive"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "228fcae7-7f4c-4e3c-9ac4-e49b26bc103d",
     "prompt_jinja": "I'm reading a review that says \"{{sentence}}\".\n\nDo you think the review is {{\"positive\"}} or {{\"negative\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.01693900152535154},
    {"task_name": "sst", "prompt_name": "said", "acc": 0.4908256880733945,
     "fixed_answer_choice_list": ["sad", "happy"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "5aa0cea9-0f8d-454d-b25b-b0d4cda273b8",
     "prompt_jinja": "Someone just said to me \"{{sentence}}\".\n\nDo you think they are {{\"sad\"}} or {{\"happy\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_stderr": 0.01693900152535154},
    {"task_name": "sst", "prompt_name": "said", "acc_norm": 0.5091743119266054,
     "fixed_answer_choice_list": ["sad", "happy"], "dataset_path": "glue", "dataset_name": "sst2", "subset": null,
     "prompt_id": "5aa0cea9-0f8d-454d-b25b-b0d4cda273b8",
     "prompt_jinja": "Someone just said to me \"{{sentence}}\".\n\nDo you think they are {{\"sad\"}} or {{\"happy\"}}? ||| {{ answer_choices[label] }}",
     "prompt_original_task": true, "comment": "", "acc_norm_stderr": 0.01693900152535154}
  ],
  "versions": {
    "qqp+answer": 0,
    "qqp+duplicate": 0,
    "qqp+duplicate or not": 0,
    "qqp+meaning": 0,
    "qqp+quora": 0,
    "qqp+same thing": 0,
    "rte+does the claim\u2026 follow the fact\u2026": 0,
    "rte+entailment explained": 0,
    "rte+imply": 0,
    "rte+imply separated": 0,
    "rte+mean": 0,
    "sst+following positive negative": 0,
    "sst+happy or mad": 0,
    "sst+positive negative after": 0,
    "sst+review": 0,
    "sst+said": 0
  },
  "table_results": {
    "qqp+answer": {
      "task_name": "qqp", "prompt_name": "answer",
      "acc": 0.40558990848379917, "acc_stderr": 0.002441969063495092,
      "acc_norm": 0.36816720257234725, "acc_norm_stderr": 0.002398706610614492
    },
    "qqp+duplicate": {
      "task_name": "qqp", "prompt_name": "duplicate",
      "acc": 0.3788523373732377, "acc_stderr": 0.002412603277723025,
      "acc_norm": 0.36816720257234725, "acc_norm_stderr": 0.002398706610614492
    },
    "qqp+duplicate or not": {
      "task_name": "qqp", "prompt_name": "duplicate or not",
      "acc": 0.5761315854563444, "acc_stderr": 0.0024577056660753426,
      "acc_norm": 0.6318327974276527, "acc_norm_stderr": 0.002398706610614492
    },
    "qqp+meaning": {
      "task_name": "qqp", "prompt_name": "meaning",
      "acc": 0.3681424684640119, "acc_stderr": 0.0023986729832071916,
      "acc_norm": 0.36816720257234725, "acc_norm_stderr": 0.002398706610614492
    },
    "qqp+quora": {
      "task_name": "qqp", "prompt_name": "quora",
      "acc": 0.36821667078901804, "acc_stderr": 0.0023987738450886556,
      "acc_norm": 0.36816720257234725, "acc_norm_stderr": 0.002398706610614492
    },
    "qqp+same thing": {
|
| 608 |
+
"task_name": "qqp",
|
| 609 |
+
"prompt_name": "same thing",
|
| 610 |
+
"acc": 0.5099431115508286,
|
| 611 |
+
"acc_stderr": 0.002486208885430481,
|
| 612 |
+
"acc_norm": 0.36816720257234725,
|
| 613 |
+
"acc_norm_stderr": 0.002398706610614492
|
| 614 |
+
},
|
| 615 |
+
"rte+does the claim\u2026 follow the fact\u2026": {
|
| 616 |
+
"task_name": "rte",
|
| 617 |
+
"prompt_name": "does the claim\u2026 follow the fact\u2026",
|
| 618 |
+
"acc": 0.4729241877256318,
|
| 619 |
+
"acc_stderr": 0.030052303463143706,
|
| 620 |
+
"acc_norm": 0.5270758122743683,
|
| 621 |
+
"acc_norm_stderr": 0.0300523034631437
|
| 622 |
+
},
|
| 623 |
+
"rte+entailment explained": {
|
| 624 |
+
"task_name": "rte",
|
| 625 |
+
"prompt_name": "entailment explained",
|
| 626 |
+
"acc": 0.49458483754512633,
|
| 627 |
+
"acc_stderr": 0.030094698123239966,
|
| 628 |
+
"acc_norm": 0.4729241877256318,
|
| 629 |
+
"acc_norm_stderr": 0.0300523034631437
|
| 630 |
+
},
|
| 631 |
+
"rte+imply": {
|
| 632 |
+
"task_name": "rte",
|
| 633 |
+
"prompt_name": "imply",
|
| 634 |
+
"acc": 0.48375451263537905,
|
| 635 |
+
"acc_stderr": 0.030080573208738064,
|
| 636 |
+
"acc_norm": 0.5270758122743683,
|
| 637 |
+
"acc_norm_stderr": 0.0300523034631437
|
| 638 |
+
},
|
| 639 |
+
"rte+imply separated": {
|
| 640 |
+
"task_name": "rte",
|
| 641 |
+
"prompt_name": "imply separated",
|
| 642 |
+
"acc": 0.45126353790613716,
|
| 643 |
+
"acc_stderr": 0.029953149241808943,
|
| 644 |
+
"acc_norm": 0.5270758122743683,
|
| 645 |
+
"acc_norm_stderr": 0.0300523034631437
|
| 646 |
+
},
|
| 647 |
+
"rte+mean": {
|
| 648 |
+
"task_name": "rte",
|
| 649 |
+
"prompt_name": "mean",
|
| 650 |
+
"acc": 0.48014440433212996,
|
| 651 |
+
"acc_stderr": 0.030072723167317194,
|
| 652 |
+
"acc_norm": 0.5270758122743683,
|
| 653 |
+
"acc_norm_stderr": 0.0300523034631437
|
| 654 |
+
},
|
| 655 |
+
"sst+following positive negative": {
|
| 656 |
+
"task_name": "sst",
|
| 657 |
+
"prompt_name": "following positive negative",
|
| 658 |
+
"acc": 0.8061926605504587,
|
| 659 |
+
"acc_stderr": 0.013393542261521812,
|
| 660 |
+
"acc_norm": 0.8061926605504587,
|
| 661 |
+
"acc_norm_stderr": 0.013393542261521812
|
| 662 |
+
},
|
| 663 |
+
"sst+happy or mad": {
|
| 664 |
+
"task_name": "sst",
|
| 665 |
+
"prompt_name": "happy or mad",
|
| 666 |
+
"acc": 0.5091743119266054,
|
| 667 |
+
"acc_stderr": 0.01693900152535154,
|
| 668 |
+
"acc_norm": 0.5091743119266054,
|
| 669 |
+
"acc_norm_stderr": 0.01693900152535154
|
| 670 |
+
},
|
| 671 |
+
"sst+positive negative after": {
|
| 672 |
+
"task_name": "sst",
|
| 673 |
+
"prompt_name": "positive negative after",
|
| 674 |
+
"acc": 0.6204128440366973,
|
| 675 |
+
"acc_stderr": 0.016443227556688766,
|
| 676 |
+
"acc_norm": 0.6204128440366973,
|
| 677 |
+
"acc_norm_stderr": 0.016443227556688766
|
| 678 |
+
},
|
| 679 |
+
"sst+review": {
|
| 680 |
+
"task_name": "sst",
|
| 681 |
+
"prompt_name": "review",
|
| 682 |
+
"acc": 0.5091743119266054,
|
| 683 |
+
"acc_stderr": 0.01693900152535154,
|
| 684 |
+
"acc_norm": 0.5091743119266054,
|
| 685 |
+
"acc_norm_stderr": 0.01693900152535154
|
| 686 |
+
},
|
| 687 |
+
"sst+said": {
|
| 688 |
+
"task_name": "sst",
|
| 689 |
+
"prompt_name": "said",
|
| 690 |
+
"acc": 0.4908256880733945,
|
| 691 |
+
"acc_stderr": 0.01693900152535154,
|
| 692 |
+
"acc_norm": 0.5091743119266054,
|
| 693 |
+
"acc_norm_stderr": 0.01693900152535154
|
| 694 |
+
}
|
| 695 |
+
},
|
| 696 |
+
"config": {
|
| 697 |
+
"adaptive_seq_len": true,
|
| 698 |
+
"num_fewshot": 0,
|
| 699 |
+
"bootstrap_iters": 100000
|
| 700 |
+
}
|
| 701 |
+
}
|
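Not part of the uploaded JSON, just orientation: each of these bsevalharness result files carries a flat `results` list with one record per (task, prompt, metric), a `table_results` map that regroups them per task+prompt, and the run `config`. A minimal sketch of reading the file added above and sanity-checking a reported `acc_stderr`; the n = 872 example count is the SST-2 validation split size, an assumption not stated in the file itself:

```python
import json
import math

# Result file added in this commit (path as in the diff above).
path = (
    "evaluation/results/tr11/bloom1b3/bslmevalfiles/"
    "tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-13-19-23-37.json"
)
with open(path) as f:
    data = json.load(f)

# "table_results" has one record per task+prompt with acc/acc_norm and stderrs.
for key, row in sorted(data["table_results"].items()):
    print(f"{key:45s} acc={row['acc']:.4f} +/- {row['acc_stderr']:.4f}")

# The reported stderr matches the sample standard error of a binomial
# proportion, sqrt(p * (1 - p) / (n - 1)). For "sst+review", p = 0.50917...
# with n = 872 (assumed SST-2 validation size) gives ~0.016939, i.e. the
# acc_stderr stored in the file.
p = data["table_results"]["sst+review"]["acc"]
print(math.sqrt(p * (1 - p) / (872 - 1)))
```

Read against that ~0.017 band, the sst prompts near 0.51 accuracy are within one standard error of the 0.5 chance level, while "positive negative after" (0.62) and "following positive negative" (0.81) clearly are not.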
evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-10-03-25.json
ADDED
@@ -0,0 +1,2169 @@
| 1 |
+
{
|
| 2 |
+
"results": [
|
| 3 |
+
{
|
| 4 |
+
"task_name": "wic",
|
| 5 |
+
"prompt_name": "GPT-3-prompt",
|
| 6 |
+
"acc": 0.5,
|
| 7 |
+
"fixed_answer_choice_list": [
|
| 8 |
+
"No",
|
| 9 |
+
"Yes"
|
| 10 |
+
],
|
| 11 |
+
"dataset_path": "super_glue",
|
| 12 |
+
"dataset_name": "wic",
|
| 13 |
+
"subset": null,
|
| 14 |
+
"prompt_id": "c3a0a5d8-cfe9-4a7f-8a3c-3c526e0ad0c6",
|
| 15 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nQuestion: Is the word '{{word}}' used in the same sense in the two sentences above?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 16 |
+
"prompt_original_task": true,
|
| 17 |
+
"comment": "",
|
| 18 |
+
"acc_stderr": 0.01981072129375818
|
| 19 |
+
},
|
| 20 |
+
{
|
| 21 |
+
"task_name": "wic",
|
| 22 |
+
"prompt_name": "GPT-3-prompt",
|
| 23 |
+
"acc_norm": 0.5,
|
| 24 |
+
"fixed_answer_choice_list": [
|
| 25 |
+
"No",
|
| 26 |
+
"Yes"
|
| 27 |
+
],
|
| 28 |
+
"dataset_path": "super_glue",
|
| 29 |
+
"dataset_name": "wic",
|
| 30 |
+
"subset": null,
|
| 31 |
+
"prompt_id": "c3a0a5d8-cfe9-4a7f-8a3c-3c526e0ad0c6",
|
| 32 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nQuestion: Is the word '{{word}}' used in the same sense in the two sentences above?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 33 |
+
"prompt_original_task": true,
|
| 34 |
+
"comment": "",
|
| 35 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 36 |
+
},
|
| 37 |
+
{
|
| 38 |
+
"task_name": "wic",
|
| 39 |
+
"prompt_name": "GPT-3-prompt-with-label",
|
| 40 |
+
"acc": 0.49216300940438873,
|
| 41 |
+
"fixed_answer_choice_list": [
|
| 42 |
+
"No",
|
| 43 |
+
"Yes"
|
| 44 |
+
],
|
| 45 |
+
"dataset_path": "super_glue",
|
| 46 |
+
"dataset_name": "wic",
|
| 47 |
+
"subset": null,
|
| 48 |
+
"prompt_id": "d9e1db2a-ab0b-4621-bb41-01d5788d3873",
|
| 49 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nQuestion: Is the word '{{word}}' used in the same sense in the two sentences above? Yes, No?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 50 |
+
"prompt_original_task": true,
|
| 51 |
+
"comment": "",
|
| 52 |
+
"acc_stderr": 0.019808287657813832
|
| 53 |
+
},
|
| 54 |
+
{
|
| 55 |
+
"task_name": "wic",
|
| 56 |
+
"prompt_name": "GPT-3-prompt-with-label",
|
| 57 |
+
"acc_norm": 0.5,
|
| 58 |
+
"fixed_answer_choice_list": [
|
| 59 |
+
"No",
|
| 60 |
+
"Yes"
|
| 61 |
+
],
|
| 62 |
+
"dataset_path": "super_glue",
|
| 63 |
+
"dataset_name": "wic",
|
| 64 |
+
"subset": null,
|
| 65 |
+
"prompt_id": "d9e1db2a-ab0b-4621-bb41-01d5788d3873",
|
| 66 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nQuestion: Is the word '{{word}}' used in the same sense in the two sentences above? Yes, No?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 67 |
+
"prompt_original_task": true,
|
| 68 |
+
"comment": "",
|
| 69 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 70 |
+
},
|
| 71 |
+
{
|
| 72 |
+
"task_name": "wic",
|
| 73 |
+
"prompt_name": "affirmation_true_or_false",
|
| 74 |
+
"acc": 0.5,
|
| 75 |
+
"fixed_answer_choice_list": [
|
| 76 |
+
"False",
|
| 77 |
+
"True"
|
| 78 |
+
],
|
| 79 |
+
"dataset_path": "super_glue",
|
| 80 |
+
"dataset_name": "wic",
|
| 81 |
+
"subset": null,
|
| 82 |
+
"prompt_id": "725b5ed0-7728-4890-95a4-a74cb7ae1bb4",
|
| 83 |
+
"prompt_jinja": "Sentence A: {{sentence1}}\nSentence B: {{sentence2}}\n\n\"{{word}}\" has a similar meaning in sentences A and B. True or False?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 84 |
+
"prompt_original_task": true,
|
| 85 |
+
"comment": "",
|
| 86 |
+
"acc_stderr": 0.01981072129375818
|
| 87 |
+
},
|
| 88 |
+
{
|
| 89 |
+
"task_name": "wic",
|
| 90 |
+
"prompt_name": "affirmation_true_or_false",
|
| 91 |
+
"acc_norm": 0.5078369905956113,
|
| 92 |
+
"fixed_answer_choice_list": [
|
| 93 |
+
"False",
|
| 94 |
+
"True"
|
| 95 |
+
],
|
| 96 |
+
"dataset_path": "super_glue",
|
| 97 |
+
"dataset_name": "wic",
|
| 98 |
+
"subset": null,
|
| 99 |
+
"prompt_id": "725b5ed0-7728-4890-95a4-a74cb7ae1bb4",
|
| 100 |
+
"prompt_jinja": "Sentence A: {{sentence1}}\nSentence B: {{sentence2}}\n\n\"{{word}}\" has a similar meaning in sentences A and B. True or False?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 101 |
+
"prompt_original_task": true,
|
| 102 |
+
"comment": "",
|
| 103 |
+
"acc_norm_stderr": 0.019808287657813832
|
| 104 |
+
},
|
| 105 |
+
{
|
| 106 |
+
"task_name": "wic",
|
| 107 |
+
"prompt_name": "grammar_homework",
|
| 108 |
+
"acc": 0.5094043887147336,
|
| 109 |
+
"fixed_answer_choice_list": [
|
| 110 |
+
"No",
|
| 111 |
+
"Yes"
|
| 112 |
+
],
|
| 113 |
+
"dataset_path": "super_glue",
|
| 114 |
+
"dataset_name": "wic",
|
| 115 |
+
"subset": null,
|
| 116 |
+
"prompt_id": "611d13dc-d414-4b9b-9204-e4f325e859e7",
|
| 117 |
+
"prompt_jinja": "Homework\n\nDecide whether the word \"{{word}}\" is used with the same meaning in the two following sentences. Answer by yes or no.\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 118 |
+
"prompt_original_task": true,
|
| 119 |
+
"comment": "",
|
| 120 |
+
"acc_stderr": 0.019807216763271497
|
| 121 |
+
},
|
| 122 |
+
{
|
| 123 |
+
"task_name": "wic",
|
| 124 |
+
"prompt_name": "grammar_homework",
|
| 125 |
+
"acc_norm": 0.49843260188087773,
|
| 126 |
+
"fixed_answer_choice_list": [
|
| 127 |
+
"No",
|
| 128 |
+
"Yes"
|
| 129 |
+
],
|
| 130 |
+
"dataset_path": "super_glue",
|
| 131 |
+
"dataset_name": "wic",
|
| 132 |
+
"subset": null,
|
| 133 |
+
"prompt_id": "611d13dc-d414-4b9b-9204-e4f325e859e7",
|
| 134 |
+
"prompt_jinja": "Homework\n\nDecide whether the word \"{{word}}\" is used with the same meaning in the two following sentences. Answer by yes or no.\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 135 |
+
"prompt_original_task": true,
|
| 136 |
+
"comment": "",
|
| 137 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 138 |
+
},
|
| 139 |
+
{
|
| 140 |
+
"task_name": "wic",
|
| 141 |
+
"prompt_name": "polysemous",
|
| 142 |
+
"acc": 0.512539184952978,
|
| 143 |
+
"fixed_answer_choice_list": [
|
| 144 |
+
"No",
|
| 145 |
+
"Yes"
|
| 146 |
+
],
|
| 147 |
+
"dataset_path": "super_glue",
|
| 148 |
+
"dataset_name": "wic",
|
| 149 |
+
"subset": null,
|
| 150 |
+
"prompt_id": "dd2080cf-3117-49ba-9aff-c988a21fdb69",
|
| 151 |
+
"prompt_jinja": "The word \"{{word}}\" has multiple meanings. Does it have the same meaning in sentences 1 and 2? Yes or no?\n\nSentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 152 |
+
"prompt_original_task": true,
|
| 153 |
+
"comment": "",
|
| 154 |
+
"acc_stderr": 0.019804490588592596
|
| 155 |
+
},
|
| 156 |
+
{
|
| 157 |
+
"task_name": "wic",
|
| 158 |
+
"prompt_name": "polysemous",
|
| 159 |
+
"acc_norm": 0.49843260188087773,
|
| 160 |
+
"fixed_answer_choice_list": [
|
| 161 |
+
"No",
|
| 162 |
+
"Yes"
|
| 163 |
+
],
|
| 164 |
+
"dataset_path": "super_glue",
|
| 165 |
+
"dataset_name": "wic",
|
| 166 |
+
"subset": null,
|
| 167 |
+
"prompt_id": "dd2080cf-3117-49ba-9aff-c988a21fdb69",
|
| 168 |
+
"prompt_jinja": "The word \"{{word}}\" has multiple meanings. Does it have the same meaning in sentences 1 and 2? Yes or no?\n\nSentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 169 |
+
"prompt_original_task": true,
|
| 170 |
+
"comment": "",
|
| 171 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 172 |
+
},
|
| 173 |
+
{
|
| 174 |
+
"task_name": "wic",
|
| 175 |
+
"prompt_name": "question-context",
|
| 176 |
+
"acc": 0.5266457680250783,
|
| 177 |
+
"fixed_answer_choice_list": [
|
| 178 |
+
"No",
|
| 179 |
+
"Yes"
|
| 180 |
+
],
|
| 181 |
+
"dataset_path": "super_glue",
|
| 182 |
+
"dataset_name": "wic",
|
| 183 |
+
"subset": null,
|
| 184 |
+
"prompt_id": "cfbc1637-10b8-4f20-a31c-55292f3cebd0",
|
| 185 |
+
"prompt_jinja": "Determine if the word '{{word}}' is used in the same way in the two sentences below. \n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 186 |
+
"prompt_original_task": true,
|
| 187 |
+
"comment": "",
|
| 188 |
+
"acc_stderr": 0.019782570188812167
|
| 189 |
+
},
|
| 190 |
+
{
|
| 191 |
+
"task_name": "wic",
|
| 192 |
+
"prompt_name": "question-context",
|
| 193 |
+
"acc_norm": 0.5031347962382445,
|
| 194 |
+
"fixed_answer_choice_list": [
|
| 195 |
+
"No",
|
| 196 |
+
"Yes"
|
| 197 |
+
],
|
| 198 |
+
"dataset_path": "super_glue",
|
| 199 |
+
"dataset_name": "wic",
|
| 200 |
+
"subset": null,
|
| 201 |
+
"prompt_id": "cfbc1637-10b8-4f20-a31c-55292f3cebd0",
|
| 202 |
+
"prompt_jinja": "Determine if the word '{{word}}' is used in the same way in the two sentences below. \n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 203 |
+
"prompt_original_task": true,
|
| 204 |
+
"comment": "",
|
| 205 |
+
"acc_norm_stderr": 0.019810331932097542
|
| 206 |
+
},
|
| 207 |
+
{
|
| 208 |
+
"task_name": "wic",
|
| 209 |
+
"prompt_name": "question-context-meaning",
|
| 210 |
+
"acc": 0.5438871473354232,
|
| 211 |
+
"fixed_answer_choice_list": [
|
| 212 |
+
"No",
|
| 213 |
+
"Yes"
|
| 214 |
+
],
|
| 215 |
+
"dataset_path": "super_glue",
|
| 216 |
+
"dataset_name": "wic",
|
| 217 |
+
"subset": null,
|
| 218 |
+
"prompt_id": "3503ead5-4fa5-4f77-95dc-f0c2ed3eecdc",
|
| 219 |
+
"prompt_jinja": "Does the word \"{{word}}\" have the same meaning in these two sentences?\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 220 |
+
"prompt_original_task": true,
|
| 221 |
+
"comment": "",
|
| 222 |
+
"acc_stderr": 0.019734259601993404
|
| 223 |
+
},
|
| 224 |
+
{
|
| 225 |
+
"task_name": "wic",
|
| 226 |
+
"prompt_name": "question-context-meaning",
|
| 227 |
+
"acc_norm": 0.5015673981191222,
|
| 228 |
+
"fixed_answer_choice_list": [
|
| 229 |
+
"No",
|
| 230 |
+
"Yes"
|
| 231 |
+
],
|
| 232 |
+
"dataset_path": "super_glue",
|
| 233 |
+
"dataset_name": "wic",
|
| 234 |
+
"subset": null,
|
| 235 |
+
"prompt_id": "3503ead5-4fa5-4f77-95dc-f0c2ed3eecdc",
|
| 236 |
+
"prompt_jinja": "Does the word \"{{word}}\" have the same meaning in these two sentences?\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 237 |
+
"prompt_original_task": true,
|
| 238 |
+
"comment": "",
|
| 239 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 240 |
+
},
|
| 241 |
+
{
|
| 242 |
+
"task_name": "wic",
|
| 243 |
+
"prompt_name": "question-context-meaning-with-label",
|
| 244 |
+
"acc": 0.5156739811912225,
|
| 245 |
+
"fixed_answer_choice_list": [
|
| 246 |
+
"No",
|
| 247 |
+
"Yes"
|
| 248 |
+
],
|
| 249 |
+
"dataset_path": "super_glue",
|
| 250 |
+
"dataset_name": "wic",
|
| 251 |
+
"subset": null,
|
| 252 |
+
"prompt_id": "14e73f39-a0d1-44c2-b9a4-4e48f9f1608e",
|
| 253 |
+
"prompt_jinja": "Does the word \"{{word}}\" have the same meaning in these two sentences? Yes, No?\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 254 |
+
"prompt_original_task": true,
|
| 255 |
+
"comment": "",
|
| 256 |
+
"acc_stderr": 0.019800984955347847
|
| 257 |
+
},
|
| 258 |
+
{
|
| 259 |
+
"task_name": "wic",
|
| 260 |
+
"prompt_name": "question-context-meaning-with-label",
|
| 261 |
+
"acc_norm": 0.5015673981191222,
|
| 262 |
+
"fixed_answer_choice_list": [
|
| 263 |
+
"No",
|
| 264 |
+
"Yes"
|
| 265 |
+
],
|
| 266 |
+
"dataset_path": "super_glue",
|
| 267 |
+
"dataset_name": "wic",
|
| 268 |
+
"subset": null,
|
| 269 |
+
"prompt_id": "14e73f39-a0d1-44c2-b9a4-4e48f9f1608e",
|
| 270 |
+
"prompt_jinja": "Does the word \"{{word}}\" have the same meaning in these two sentences? Yes, No?\n{{sentence1}}\n{{sentence2}}\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 271 |
+
"prompt_original_task": true,
|
| 272 |
+
"comment": "",
|
| 273 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 274 |
+
},
|
| 275 |
+
{
|
| 276 |
+
"task_name": "wic",
|
| 277 |
+
"prompt_name": "same_sense",
|
| 278 |
+
"acc": 0.5047021943573667,
|
| 279 |
+
"fixed_answer_choice_list": [
|
| 280 |
+
"No",
|
| 281 |
+
"Yes"
|
| 282 |
+
],
|
| 283 |
+
"dataset_path": "super_glue",
|
| 284 |
+
"dataset_name": "wic",
|
| 285 |
+
"subset": null,
|
| 286 |
+
"prompt_id": "ce8b5a93-1841-4897-84db-b100f1c84f4b",
|
| 287 |
+
"prompt_jinja": "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\n\nDetermine whether the word \"{{word}}\" is used in the same sense in both sentences. Yes or no?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 288 |
+
"prompt_original_task": true,
|
| 289 |
+
"comment": "",
|
| 290 |
+
"acc_stderr": 0.019809845219259763
|
| 291 |
+
},
|
| 292 |
+
{
|
| 293 |
+
"task_name": "wic",
|
| 294 |
+
"prompt_name": "same_sense",
|
| 295 |
+
"acc_norm": 0.5,
|
| 296 |
+
"fixed_answer_choice_list": [
|
| 297 |
+
"No",
|
| 298 |
+
"Yes"
|
| 299 |
+
],
|
| 300 |
+
"dataset_path": "super_glue",
|
| 301 |
+
"dataset_name": "wic",
|
| 302 |
+
"subset": null,
|
| 303 |
+
"prompt_id": "ce8b5a93-1841-4897-84db-b100f1c84f4b",
|
| 304 |
+
"prompt_jinja": "Sentence 1: {{sentence1}}\nSentence 2: {{sentence2}}\n\nDetermine whether the word \"{{word}}\" is used in the same sense in both sentences. Yes or no?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 305 |
+
"prompt_original_task": true,
|
| 306 |
+
"comment": "",
|
| 307 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 308 |
+
},
|
| 309 |
+
{
|
| 310 |
+
"task_name": "wic",
|
| 311 |
+
"prompt_name": "similar-sense",
|
| 312 |
+
"acc": 0.542319749216301,
|
| 313 |
+
"fixed_answer_choice_list": [
|
| 314 |
+
"No",
|
| 315 |
+
"Yes"
|
| 316 |
+
],
|
| 317 |
+
"dataset_path": "super_glue",
|
| 318 |
+
"dataset_name": "wic",
|
| 319 |
+
"subset": null,
|
| 320 |
+
"prompt_id": "f934a96d-fe4d-4075-aa47-5595b9a604c7",
|
| 321 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nSimilar sense of {{word}}?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 322 |
+
"prompt_original_task": true,
|
| 323 |
+
"comment": "",
|
| 324 |
+
"acc_stderr": 0.01973963328373276
|
| 325 |
+
},
|
| 326 |
+
{
|
| 327 |
+
"task_name": "wic",
|
| 328 |
+
"prompt_name": "similar-sense",
|
| 329 |
+
"acc_norm": 0.5,
|
| 330 |
+
"fixed_answer_choice_list": [
|
| 331 |
+
"No",
|
| 332 |
+
"Yes"
|
| 333 |
+
],
|
| 334 |
+
"dataset_path": "super_glue",
|
| 335 |
+
"dataset_name": "wic",
|
| 336 |
+
"subset": null,
|
| 337 |
+
"prompt_id": "f934a96d-fe4d-4075-aa47-5595b9a604c7",
|
| 338 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nSimilar sense of {{word}}?\n||| {% if label != -1%}\n{{answer_choices[label]}}\n{% endif %}",
|
| 339 |
+
"prompt_original_task": true,
|
| 340 |
+
"comment": "",
|
| 341 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 342 |
+
},
|
| 343 |
+
{
|
| 344 |
+
"task_name": "wsc",
|
| 345 |
+
"prompt_name": "GPT-3 Style",
|
| 346 |
+
"acc": 0.36538461538461536,
|
| 347 |
+
"fixed_answer_choice_list": [
|
| 348 |
+
"No",
|
| 349 |
+
"Yes"
|
| 350 |
+
],
|
| 351 |
+
"dataset_path": "super_glue",
|
| 352 |
+
"dataset_name": "wsc.fixed",
|
| 353 |
+
"subset": null,
|
| 354 |
+
"prompt_id": "7d377293-d043-4b6c-8ec1-d61eaf14ec67",
|
| 355 |
+
"prompt_jinja": "Passage: {{ text }} \n\nQuestion: In the passage above, does the pronoun \"{{ span2_text }}\" refer to {{ span1_text }}?\n\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 356 |
+
"prompt_original_task": true,
|
| 357 |
+
"comment": "",
|
| 358 |
+
"acc_stderr": 0.0474473339327792
|
| 359 |
+
},
|
| 360 |
+
{
|
| 361 |
+
"task_name": "wsc",
|
| 362 |
+
"prompt_name": "GPT-3 Style",
|
| 363 |
+
"acc_norm": 0.36538461538461536,
|
| 364 |
+
"fixed_answer_choice_list": [
|
| 365 |
+
"No",
|
| 366 |
+
"Yes"
|
| 367 |
+
],
|
| 368 |
+
"dataset_path": "super_glue",
|
| 369 |
+
"dataset_name": "wsc.fixed",
|
| 370 |
+
"subset": null,
|
| 371 |
+
"prompt_id": "7d377293-d043-4b6c-8ec1-d61eaf14ec67",
|
| 372 |
+
"prompt_jinja": "Passage: {{ text }} \n\nQuestion: In the passage above, does the pronoun \"{{ span2_text }}\" refer to {{ span1_text }}?\n\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 373 |
+
"prompt_original_task": true,
|
| 374 |
+
"comment": "",
|
| 375 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 376 |
+
},
|
| 377 |
+
{
|
| 378 |
+
"task_name": "wsc",
|
| 379 |
+
"prompt_name": "I think they mean",
|
| 380 |
+
"acc": 0.36538461538461536,
|
| 381 |
+
"fixed_answer_choice_list": [
|
| 382 |
+
"No",
|
| 383 |
+
"Yes"
|
| 384 |
+
],
|
| 385 |
+
"dataset_path": "super_glue",
|
| 386 |
+
"dataset_name": "wsc.fixed",
|
| 387 |
+
"subset": null,
|
| 388 |
+
"prompt_id": "4b3e29cc-ccb8-4e4c-a845-4935ca29cf34",
|
| 389 |
+
"prompt_jinja": "{{ text }} I think they mean \"{{ text.split(\" \")[span2_index:] | join(\" \") | replace(span2_text, span1_text) }}\" Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 390 |
+
"prompt_original_task": true,
|
| 391 |
+
"comment": "",
|
| 392 |
+
"acc_stderr": 0.0474473339327792
|
| 393 |
+
},
|
| 394 |
+
{
|
| 395 |
+
"task_name": "wsc",
|
| 396 |
+
"prompt_name": "I think they mean",
|
| 397 |
+
"acc_norm": 0.36538461538461536,
|
| 398 |
+
"fixed_answer_choice_list": [
|
| 399 |
+
"No",
|
| 400 |
+
"Yes"
|
| 401 |
+
],
|
| 402 |
+
"dataset_path": "super_glue",
|
| 403 |
+
"dataset_name": "wsc.fixed",
|
| 404 |
+
"subset": null,
|
| 405 |
+
"prompt_id": "4b3e29cc-ccb8-4e4c-a845-4935ca29cf34",
|
| 406 |
+
"prompt_jinja": "{{ text }} I think they mean \"{{ text.split(\" \")[span2_index:] | join(\" \") | replace(span2_text, span1_text) }}\" Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 407 |
+
"prompt_original_task": true,
|
| 408 |
+
"comment": "",
|
| 409 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 410 |
+
},
|
| 411 |
+
{
|
| 412 |
+
"task_name": "wsc",
|
| 413 |
+
"prompt_name": "Who or what is/are",
|
| 414 |
+
"acc": 0.40384615384615385,
|
| 415 |
+
"fixed_answer_choice_list": [
|
| 416 |
+
"No",
|
| 417 |
+
"Yes"
|
| 418 |
+
],
|
| 419 |
+
"dataset_path": "super_glue",
|
| 420 |
+
"dataset_name": "wsc.fixed",
|
| 421 |
+
"subset": null,
|
| 422 |
+
"prompt_id": "d88f3e21-42dc-49a5-924d-69b764a14816",
|
| 423 |
+
"prompt_jinja": "{{ text }} \n{% if span2_text.lower() == \"they\" or span2_text.lower() == \"them\" %}\nQuestion: Who or what are \"{{ span2_text.lower() }}\"? {{ span1_text }}?\n{% else %}\nQuestion: Who or what is \"{{ span2_text.lower() }}\"? Is it {{ span1_text }}?\n{% endif %}\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 424 |
+
"prompt_original_task": true,
|
| 425 |
+
"comment": "",
|
| 426 |
+
"acc_stderr": 0.048346889526540184
|
| 427 |
+
},
|
| 428 |
+
{
|
| 429 |
+
"task_name": "wsc",
|
| 430 |
+
"prompt_name": "Who or what is/are",
|
| 431 |
+
"acc_norm": 0.36538461538461536,
|
| 432 |
+
"fixed_answer_choice_list": [
|
| 433 |
+
"No",
|
| 434 |
+
"Yes"
|
| 435 |
+
],
|
| 436 |
+
"dataset_path": "super_glue",
|
| 437 |
+
"dataset_name": "wsc.fixed",
|
| 438 |
+
"subset": null,
|
| 439 |
+
"prompt_id": "d88f3e21-42dc-49a5-924d-69b764a14816",
|
| 440 |
+
"prompt_jinja": "{{ text }} \n{% if span2_text.lower() == \"they\" or span2_text.lower() == \"them\" %}\nQuestion: Who or what are \"{{ span2_text.lower() }}\"? {{ span1_text }}?\n{% else %}\nQuestion: Who or what is \"{{ span2_text.lower() }}\"? Is it {{ span1_text }}?\n{% endif %}\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 441 |
+
"prompt_original_task": true,
|
| 442 |
+
"comment": "",
|
| 443 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 444 |
+
},
|
| 445 |
+
{
|
| 446 |
+
"task_name": "wsc",
|
| 447 |
+
"prompt_name": "by p they mean",
|
| 448 |
+
"acc": 0.36538461538461536,
|
| 449 |
+
"fixed_answer_choice_list": [
|
| 450 |
+
"No",
|
| 451 |
+
"Yes"
|
| 452 |
+
],
|
| 453 |
+
"dataset_path": "super_glue",
|
| 454 |
+
"dataset_name": "wsc.fixed",
|
| 455 |
+
"subset": null,
|
| 456 |
+
"prompt_id": "23361c5d-b67f-4c2a-9da7-16301c55d0e1",
|
| 457 |
+
"prompt_jinja": "{{ text }} Here, by \"{{ span2_text }}\" they mean \"{{ span1_text }}\". Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 458 |
+
"prompt_original_task": true,
|
| 459 |
+
"comment": "",
|
| 460 |
+
"acc_stderr": 0.0474473339327792
|
| 461 |
+
},
|
| 462 |
+
{
|
| 463 |
+
"task_name": "wsc",
|
| 464 |
+
"prompt_name": "by p they mean",
|
| 465 |
+
"acc_norm": 0.36538461538461536,
|
| 466 |
+
"fixed_answer_choice_list": [
|
| 467 |
+
"No",
|
| 468 |
+
"Yes"
|
| 469 |
+
],
|
| 470 |
+
"dataset_path": "super_glue",
|
| 471 |
+
"dataset_name": "wsc.fixed",
|
| 472 |
+
"subset": null,
|
| 473 |
+
"prompt_id": "23361c5d-b67f-4c2a-9da7-16301c55d0e1",
|
| 474 |
+
"prompt_jinja": "{{ text }} Here, by \"{{ span2_text }}\" they mean \"{{ span1_text }}\". Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 475 |
+
"prompt_original_task": true,
|
| 476 |
+
"comment": "",
|
| 477 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 478 |
+
},
|
| 479 |
+
{
|
| 480 |
+
"task_name": "wsc",
|
| 481 |
+
"prompt_name": "does p stand for",
|
| 482 |
+
"acc": 0.375,
|
| 483 |
+
"fixed_answer_choice_list": [
|
| 484 |
+
"No",
|
| 485 |
+
"Yes"
|
| 486 |
+
],
|
| 487 |
+
"dataset_path": "super_glue",
|
| 488 |
+
"dataset_name": "wsc.fixed",
|
| 489 |
+
"subset": null,
|
| 490 |
+
"prompt_id": "7482d24f-cf45-4013-b82d-369489fc958b",
|
| 491 |
+
"prompt_jinja": "{{ text }} Here, does \"{{ span2_text.lower() }}\" stand for {{ span1_text }}? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 492 |
+
"prompt_original_task": true,
|
| 493 |
+
"comment": "",
|
| 494 |
+
"acc_stderr": 0.04770204856076104
|
| 495 |
+
},
|
| 496 |
+
{
|
| 497 |
+
"task_name": "wsc",
|
| 498 |
+
"prompt_name": "does p stand for",
|
| 499 |
+
"acc_norm": 0.36538461538461536,
|
| 500 |
+
"fixed_answer_choice_list": [
|
| 501 |
+
"No",
|
| 502 |
+
"Yes"
|
| 503 |
+
],
|
| 504 |
+
"dataset_path": "super_glue",
|
| 505 |
+
"dataset_name": "wsc.fixed",
|
| 506 |
+
"subset": null,
|
| 507 |
+
"prompt_id": "7482d24f-cf45-4013-b82d-369489fc958b",
|
| 508 |
+
"prompt_jinja": "{{ text }} Here, does \"{{ span2_text.lower() }}\" stand for {{ span1_text }}? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 509 |
+
"prompt_original_task": true,
|
| 510 |
+
"comment": "",
|
| 511 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 512 |
+
},
|
| 513 |
+
{
|
| 514 |
+
"task_name": "wsc",
|
| 515 |
+
"prompt_name": "does the pronoun refer to",
|
| 516 |
+
"acc": 0.5480769230769231,
|
| 517 |
+
"fixed_answer_choice_list": [
|
| 518 |
+
"No",
|
| 519 |
+
"Yes"
|
| 520 |
+
],
|
| 521 |
+
"dataset_path": "super_glue",
|
| 522 |
+
"dataset_name": "wsc.fixed",
|
| 523 |
+
"subset": null,
|
| 524 |
+
"prompt_id": "212fb8b1-8436-4f64-8f37-a9094fe029f4",
|
| 525 |
+
"prompt_jinja": "{{ text }} In the previous sentence, does the pronoun \"{{ span2_text.lower() }}\" refer to {{ span1_text }}? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 526 |
+
"prompt_original_task": true,
|
| 527 |
+
"comment": "",
|
| 528 |
+
"acc_stderr": 0.049038186969314335
|
| 529 |
+
},
|
| 530 |
+
{
|
| 531 |
+
"task_name": "wsc",
|
| 532 |
+
"prompt_name": "does the pronoun refer to",
|
| 533 |
+
"acc_norm": 0.36538461538461536,
|
| 534 |
+
"fixed_answer_choice_list": [
|
| 535 |
+
"No",
|
| 536 |
+
"Yes"
|
| 537 |
+
],
|
| 538 |
+
"dataset_path": "super_glue",
|
| 539 |
+
"dataset_name": "wsc.fixed",
|
| 540 |
+
"subset": null,
|
| 541 |
+
"prompt_id": "212fb8b1-8436-4f64-8f37-a9094fe029f4",
|
| 542 |
+
"prompt_jinja": "{{ text }} In the previous sentence, does the pronoun \"{{ span2_text.lower() }}\" refer to {{ span1_text }}? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 543 |
+
"prompt_original_task": true,
|
| 544 |
+
"comment": "",
|
| 545 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 546 |
+
},
|
| 547 |
+
{
|
| 548 |
+
"task_name": "wsc",
|
| 549 |
+
"prompt_name": "in other words",
|
| 550 |
+
"acc": 0.36538461538461536,
|
| 551 |
+
"fixed_answer_choice_list": [
|
| 552 |
+
"False",
|
| 553 |
+
"True"
|
| 554 |
+
],
|
| 555 |
+
"dataset_path": "super_glue",
|
| 556 |
+
"dataset_name": "wsc.fixed",
|
| 557 |
+
"subset": null,
|
| 558 |
+
"prompt_id": "2f17f18b-6daa-44ef-a2dd-dddaf04aec0e",
|
| 559 |
+
"prompt_jinja": "{{ text }} \n\nIn other words, {{ text.split(\" \")[span2_index:] | join(\" \") | replace(span2_text, span1_text) }} True or false? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 560 |
+
"prompt_original_task": true,
|
| 561 |
+
"comment": "",
|
| 562 |
+
"acc_stderr": 0.0474473339327792
|
| 563 |
+
},
|
| 564 |
+
{
|
| 565 |
+
"task_name": "wsc",
|
| 566 |
+
"prompt_name": "in other words",
|
| 567 |
+
"acc_norm": 0.5288461538461539,
|
| 568 |
+
"fixed_answer_choice_list": [
|
| 569 |
+
"False",
|
| 570 |
+
"True"
|
| 571 |
+
],
|
| 572 |
+
"dataset_path": "super_glue",
|
| 573 |
+
"dataset_name": "wsc.fixed",
|
| 574 |
+
"subset": null,
|
| 575 |
+
"prompt_id": "2f17f18b-6daa-44ef-a2dd-dddaf04aec0e",
|
| 576 |
+
"prompt_jinja": "{{ text }} \n\nIn other words, {{ text.split(\" \")[span2_index:] | join(\" \") | replace(span2_text, span1_text) }} True or false? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 577 |
+
"prompt_original_task": true,
|
| 578 |
+
"comment": "",
|
| 579 |
+
"acc_norm_stderr": 0.04918440626354964
|
| 580 |
+
},
|
| 581 |
+
{
|
| 582 |
+
"task_name": "wsc",
|
| 583 |
+
"prompt_name": "p is/are r",
|
| 584 |
+
"acc": 0.36538461538461536,
|
| 585 |
+
"fixed_answer_choice_list": [
|
| 586 |
+
"False",
|
| 587 |
+
"True"
|
| 588 |
+
],
|
| 589 |
+
"dataset_path": "super_glue",
|
| 590 |
+
"dataset_name": "wsc.fixed",
|
| 591 |
+
"subset": null,
|
| 592 |
+
"prompt_id": "87f97aa0-1fa9-4f0b-b8e6-89d3c1f19bd6",
|
| 593 |
+
"prompt_jinja": "Context: {{ text }} \n\n{% if span2_text.lower() == \"they\" or span2_text.lower() == \"them\" %}\nQuestion: \"{{ span2_text }}\" are {{ span1_text }}. True or false?\n{% else %}\nQuestion: \"{{ span2_text }}\" is {{ span1_text }}. True or false?\n{% endif %}\n\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 594 |
+
"prompt_original_task": true,
|
| 595 |
+
"comment": "",
|
| 596 |
+
"acc_stderr": 0.0474473339327792
|
| 597 |
+
},
|
| 598 |
+
{
|
| 599 |
+
"task_name": "wsc",
|
| 600 |
+
"prompt_name": "p is/are r",
|
| 601 |
+
"acc_norm": 0.34615384615384615,
|
| 602 |
+
"fixed_answer_choice_list": [
|
| 603 |
+
"False",
|
| 604 |
+
"True"
|
| 605 |
+
],
|
| 606 |
+
"dataset_path": "super_glue",
|
| 607 |
+
"dataset_name": "wsc.fixed",
|
| 608 |
+
"subset": null,
|
| 609 |
+
"prompt_id": "87f97aa0-1fa9-4f0b-b8e6-89d3c1f19bd6",
|
| 610 |
+
"prompt_jinja": "Context: {{ text }} \n\n{% if span2_text.lower() == \"they\" or span2_text.lower() == \"them\" %}\nQuestion: \"{{ span2_text }}\" are {{ span1_text }}. True or false?\n{% else %}\nQuestion: \"{{ span2_text }}\" is {{ span1_text }}. True or false?\n{% endif %}\n\nAnswer: ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 611 |
+
"prompt_original_task": true,
|
| 612 |
+
"comment": "",
|
| 613 |
+
"acc_norm_stderr": 0.04687634642174987
|
| 614 |
+
},
|
| 615 |
+
{
|
| 616 |
+
"task_name": "wsc",
|
| 617 |
+
"prompt_name": "replaced with",
|
| 618 |
+
"acc": 0.6153846153846154,
|
| 619 |
+
"fixed_answer_choice_list": [
|
| 620 |
+
"No",
|
| 621 |
+
"Yes"
|
| 622 |
+
],
|
| 623 |
+
"dataset_path": "super_glue",
|
| 624 |
+
"dataset_name": "wsc.fixed",
|
| 625 |
+
"subset": null,
|
| 626 |
+
"prompt_id": "809eacd0-2f6c-4e3a-b52a-57c783879d36",
|
| 627 |
+
"prompt_jinja": "{{ text }} In the previous sentence, can the pronoun \"{{ span2_text }}\" be replaced with \"{{ span1_text }}\"? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 628 |
+
"prompt_original_task": true,
|
| 629 |
+
"comment": "",
|
| 630 |
+
"acc_stderr": 0.047936688680750406
|
| 631 |
+
},
|
| 632 |
+
{
|
| 633 |
+
"task_name": "wsc",
|
| 634 |
+
"prompt_name": "replaced with",
|
| 635 |
+
"acc_norm": 0.36538461538461536,
|
| 636 |
+
"fixed_answer_choice_list": [
|
| 637 |
+
"No",
|
| 638 |
+
"Yes"
|
| 639 |
+
],
|
| 640 |
+
"dataset_path": "super_glue",
|
| 641 |
+
"dataset_name": "wsc.fixed",
|
| 642 |
+
"subset": null,
|
| 643 |
+
"prompt_id": "809eacd0-2f6c-4e3a-b52a-57c783879d36",
|
| 644 |
+
"prompt_jinja": "{{ text }} In the previous sentence, can the pronoun \"{{ span2_text }}\" be replaced with \"{{ span1_text }}\"? Yes or no? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 645 |
+
"prompt_original_task": true,
|
| 646 |
+
"comment": "",
|
| 647 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 648 |
+
},
|
| 649 |
+
{
|
| 650 |
+
"task_name": "wsc",
|
| 651 |
+
"prompt_name": "the pronoun refers to",
|
| 652 |
+
"acc": 0.36538461538461536,
|
| 653 |
+
"fixed_answer_choice_list": [
|
| 654 |
+
"False",
|
| 655 |
+
"True"
|
| 656 |
+
],
|
| 657 |
+
"dataset_path": "super_glue",
|
| 658 |
+
"dataset_name": "wsc.fixed",
|
| 659 |
+
"subset": null,
|
| 660 |
+
"prompt_id": "aae24b54-c3a7-4f69-8b77-f6dc115988f8",
|
| 661 |
+
"prompt_jinja": "{{ text }} \nIn the passage above, the pronoun \"{{ span2_text }}\" refers to {{ span1_text }}. True or false? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 662 |
+
"prompt_original_task": true,
|
| 663 |
+
"comment": "",
|
| 664 |
+
"acc_stderr": 0.0474473339327792
|
| 665 |
+
},
|
| 666 |
+
{
|
| 667 |
+
"task_name": "wsc",
|
| 668 |
+
"prompt_name": "the pronoun refers to",
|
| 669 |
+
"acc_norm": 0.5865384615384616,
|
| 670 |
+
"fixed_answer_choice_list": [
|
| 671 |
+
"False",
|
| 672 |
+
"True"
|
| 673 |
+
],
|
| 674 |
+
"dataset_path": "super_glue",
|
| 675 |
+
"dataset_name": "wsc.fixed",
|
| 676 |
+
"subset": null,
|
| 677 |
+
"prompt_id": "aae24b54-c3a7-4f69-8b77-f6dc115988f8",
|
| 678 |
+
"prompt_jinja": "{{ text }} \nIn the passage above, the pronoun \"{{ span2_text }}\" refers to {{ span1_text }}. True or false? ||| {% if label != -1 %}{{ answer_choices[label] }}{% endif %}",
|
| 679 |
+
"prompt_original_task": true,
|
| 680 |
+
"comment": "",
|
| 681 |
+
"acc_norm_stderr": 0.04852294969729053
|
| 682 |
+
},
|
| 683 |
+
{
|
| 684 |
+
"task_name": "wnli",
|
| 685 |
+
"prompt_name": "confident",
|
| 686 |
+
"acc": 0.43661971830985913,
|
| 687 |
+
"fixed_answer_choice_list": [
|
| 688 |
+
"not confident",
|
| 689 |
+
"very confident"
|
| 690 |
+
],
|
| 691 |
+
"dataset_path": "glue",
|
| 692 |
+
"dataset_name": "wnli",
|
| 693 |
+
"subset": null,
|
| 694 |
+
"prompt_id": "10c354ee-6f4e-4b04-91e1-29e999a8f3e7",
|
| 695 |
+
"prompt_jinja": "If it's true that\n{{sentence1}}\nhow {{\"confident\"}} should I be that\n{{sentence2}}\n{{\"very confident or not confident?\"}}\n|||\n{{answer_choices[label]}}",
|
| 696 |
+
"prompt_original_task": true,
|
| 697 |
+
"comment": "",
|
| 698 |
+
"acc_stderr": 0.0592793555841297
|
| 699 |
+
},
|
| 700 |
+
{
|
| 701 |
+
"task_name": "wnli",
|
| 702 |
+
"prompt_name": "confident",
|
| 703 |
+
"acc_norm": 0.43661971830985913,
|
| 704 |
+
"fixed_answer_choice_list": [
|
| 705 |
+
"not confident",
|
| 706 |
+
"very confident"
|
| 707 |
+
],
|
| 708 |
+
"dataset_path": "glue",
|
| 709 |
+
"dataset_name": "wnli",
|
| 710 |
+
"subset": null,
|
| 711 |
+
"prompt_id": "10c354ee-6f4e-4b04-91e1-29e999a8f3e7",
|
| 712 |
+
"prompt_jinja": "If it's true that\n{{sentence1}}\nhow {{\"confident\"}} should I be that\n{{sentence2}}\n{{\"very confident or not confident?\"}}\n|||\n{{answer_choices[label]}}",
|
| 713 |
+
"prompt_original_task": true,
|
| 714 |
+
"comment": "",
|
| 715 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 716 |
+
},
|
| 717 |
+
{
|
| 718 |
+
"task_name": "wnli",
|
| 719 |
+
"prompt_name": "entailment explained",
|
| 720 |
+
"acc": 0.39436619718309857,
|
| 721 |
+
"fixed_answer_choice_list": [
|
| 722 |
+
"no",
|
| 723 |
+
"yes"
|
| 724 |
+
],
|
| 725 |
+
"dataset_path": "glue",
|
| 726 |
+
"dataset_name": "wnli",
|
| 727 |
+
"subset": null,
|
| 728 |
+
"prompt_id": "3a0e46cb-0b96-4972-83f6-29a6c6a09ba9",
|
| 729 |
+
"prompt_jinja": "{{\"Entailment\"}} means that the second sentence follows from the first sentence. Are the following two sentences an example of entailment?\n{{sentence1}}\n{{sentence2}}\n|||\n{{answer_choices[label]}}",
|
| 730 |
+
"prompt_original_task": true,
|
| 731 |
+
"comment": "",
|
| 732 |
+
"acc_stderr": 0.058412510854444266
|
| 733 |
+
},
|
| 734 |
+
{
|
| 735 |
+
"task_name": "wnli",
|
| 736 |
+
"prompt_name": "entailment explained",
|
| 737 |
+
"acc_norm": 0.43661971830985913,
|
| 738 |
+
"fixed_answer_choice_list": [
|
| 739 |
+
"no",
|
| 740 |
+
"yes"
|
| 741 |
+
],
|
| 742 |
+
"dataset_path": "glue",
|
| 743 |
+
"dataset_name": "wnli",
|
| 744 |
+
"subset": null,
|
| 745 |
+
"prompt_id": "3a0e46cb-0b96-4972-83f6-29a6c6a09ba9",
|
| 746 |
+
"prompt_jinja": "{{\"Entailment\"}} means that the second sentence follows from the first sentence. Are the following two sentences an example of entailment?\n{{sentence1}}\n{{sentence2}}\n|||\n{{answer_choices[label]}}",
|
| 747 |
+
"prompt_original_task": true,
|
| 748 |
+
"comment": "",
|
| 749 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 750 |
+
},
|
| 751 |
+
{
|
| 752 |
+
"task_name": "wnli",
|
| 753 |
+
"prompt_name": "imply",
|
| 754 |
+
"acc": 0.4225352112676056,
|
| 755 |
+
"fixed_answer_choice_list": [
|
| 756 |
+
"no",
|
| 757 |
+
"yes"
|
| 758 |
+
],
|
| 759 |
+
"dataset_path": "glue",
|
| 760 |
+
"dataset_name": "wnli",
|
| 761 |
+
"subset": null,
|
| 762 |
+
"prompt_id": "a2ce492b-dfd0-4f04-bc44-70c7867ba231",
|
| 763 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nDoes the first sentence imply the second sentence?\n|||\n{{answer_choices[label]}}",
|
| 764 |
+
"prompt_original_task": true,
|
| 765 |
+
"comment": "",
|
| 766 |
+
"acc_stderr": 0.05903984205682581
|
| 767 |
+
},
|
| 768 |
+
{
|
| 769 |
+
"task_name": "wnli",
|
| 770 |
+
"prompt_name": "imply",
|
| 771 |
+
"acc_norm": 0.43661971830985913,
|
| 772 |
+
"fixed_answer_choice_list": [
|
| 773 |
+
"no",
|
| 774 |
+
"yes"
|
| 775 |
+
],
|
| 776 |
+
"dataset_path": "glue",
|
| 777 |
+
"dataset_name": "wnli",
|
| 778 |
+
"subset": null,
|
| 779 |
+
"prompt_id": "a2ce492b-dfd0-4f04-bc44-70c7867ba231",
|
| 780 |
+
"prompt_jinja": "{{sentence1}}\n{{sentence2}}\nDoes the first sentence imply the second sentence?\n|||\n{{answer_choices[label]}}",
|
| 781 |
+
"prompt_original_task": true,
|
| 782 |
+
"comment": "",
|
| 783 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 784 |
+
},
|
| 785 |
+
{
|
| 786 |
+
"task_name": "wnli",
|
| 787 |
+
"prompt_name": "justified",
|
| 788 |
+
"acc": 0.43661971830985913,
|
| 789 |
+
"fixed_answer_choice_list": [
|
| 790 |
+
"no",
|
| 791 |
+
"yes"
|
| 792 |
+
],
|
| 793 |
+
"dataset_path": "glue",
|
| 794 |
+
"dataset_name": "wnli",
|
| 795 |
+
"subset": null,
|
| 796 |
+
"prompt_id": "a244158a-a248-4e34-bef7-66e269dd0815",
|
| 797 |
+
"prompt_jinja": "Someone told me \"{{sentence1}}\" Now, I think that \"{{sentence2}}\" Am I justified in thinking this?\n|||\n{{answer_choices[label]}}",
|
| 798 |
+
"prompt_original_task": true,
|
| 799 |
+
"comment": "",
|
| 800 |
+
"acc_stderr": 0.0592793555841297
|
| 801 |
+
},
|
| 802 |
+
{
|
| 803 |
+
"task_name": "wnli",
|
| 804 |
+
"prompt_name": "justified",
|
| 805 |
+
"acc_norm": 0.43661971830985913,
|
| 806 |
+
"fixed_answer_choice_list": [
|
| 807 |
+
"no",
|
| 808 |
+
"yes"
|
| 809 |
+
],
|
| 810 |
+
"dataset_path": "glue",
|
| 811 |
+
"dataset_name": "wnli",
|
| 812 |
+
"subset": null,
|
| 813 |
+
"prompt_id": "a244158a-a248-4e34-bef7-66e269dd0815",
|
| 814 |
+
"prompt_jinja": "Someone told me \"{{sentence1}}\" Now, I think that \"{{sentence2}}\" Am I justified in thinking this?\n|||\n{{answer_choices[label]}}",
|
| 815 |
+
"prompt_original_task": true,
|
| 816 |
+
"comment": "",
|
| 817 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 818 |
+
},
|
| 819 |
+
{
|
| 820 |
+
"task_name": "wnli",
|
| 821 |
+
"prompt_name": "mean",
|
| 822 |
+
"acc": 0.6619718309859155,
|
| 823 |
+
"fixed_answer_choice_list": [
|
| 824 |
+
"no",
|
| 825 |
+
"yes"
|
| 826 |
+
],
|
| 827 |
+
"dataset_path": "glue",
|
| 828 |
+
"dataset_name": "wnli",
|
| 829 |
+
"subset": null,
|
| 830 |
+
"prompt_id": "75f89b05-5a81-401b-8a04-8239211a9a95",
|
| 831 |
+
"prompt_jinja": "Assume that the following is true:\n{{sentence1}}\nDoes this mean that \"{{sentence2}}\"?\n|||\n{{answer_choices[label]}}",
|
| 832 |
+
"prompt_original_task": true,
|
| 833 |
+
"comment": "",
|
| 834 |
+
"acc_stderr": 0.05653887739133513
|
| 835 |
+
},
|
| 836 |
+
{
|
| 837 |
+
"task_name": "wnli",
|
| 838 |
+
"prompt_name": "mean",
|
| 839 |
+
"acc_norm": 0.43661971830985913,
|
| 840 |
+
"fixed_answer_choice_list": [
|
| 841 |
+
"no",
|
| 842 |
+
"yes"
|
| 843 |
+
],
|
| 844 |
+
"dataset_path": "glue",
|
| 845 |
+
"dataset_name": "wnli",
|
| 846 |
+
"subset": null,
|
| 847 |
+
"prompt_id": "75f89b05-5a81-401b-8a04-8239211a9a95",
|
| 848 |
+
"prompt_jinja": "Assume that the following is true:\n{{sentence1}}\nDoes this mean that \"{{sentence2}}\"?\n|||\n{{answer_choices[label]}}",
|
| 849 |
+
"prompt_original_task": true,
|
| 850 |
+
"comment": "",
|
| 851 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 852 |
+
},
|
| 853 |
+
    {
      "task_name": "gsarti/flores_101_afr",
      "prompt_name": null,
      "word_perplexity": 139324.0466654445
    },
    {
      "task_name": "gsarti/flores_101_afr",
      "prompt_name": null,
      "byte_perplexity": 7.049422805555328
    },
    {
      "task_name": "gsarti/flores_101_afr",
      "prompt_name": null,
      "bits_per_byte": 2.8175051369933213
    },
    {
      "task_name": "gsarti/flores_101_amh",
      "prompt_name": null,
      "word_perplexity": 105036774.30501972
    },
    {
      "task_name": "gsarti/flores_101_amh",
      "prompt_name": null,
      "byte_perplexity": 4.172368790188039
    },
    {
      "task_name": "gsarti/flores_101_amh",
      "prompt_name": null,
      "bits_per_byte": 2.0608666814101815
    },
    {
      "task_name": "gsarti/flores_101_ara",
      "prompt_name": null,
      "word_perplexity": 674.8640314665696
    },
    {
      "task_name": "gsarti/flores_101_ara",
      "prompt_name": null,
      "byte_perplexity": 1.8400375612633983
    },
    {
      "task_name": "gsarti/flores_101_ara",
      "prompt_name": null,
      "bits_per_byte": 0.8797352167688847
    },
    {
      "task_name": "gsarti/flores_101_hye",
      "prompt_name": null,
      "word_perplexity": 99262887.01092263
    },
    {
      "task_name": "gsarti/flores_101_hye",
      "prompt_name": null,
      "byte_perplexity": 3.7481249397064547
    },
    {
      "task_name": "gsarti/flores_101_hye",
      "prompt_name": null,
      "bits_per_byte": 1.906169044483402
    },
    {
      "task_name": "gsarti/flores_101_asm",
      "prompt_name": null,
      "word_perplexity": 6763188828222.085
    },
    {
      "task_name": "gsarti/flores_101_asm",
      "prompt_name": null,
      "byte_perplexity": 5.497254736157445
    },
    {
      "task_name": "gsarti/flores_101_asm",
      "prompt_name": null,
      "bits_per_byte": 2.458711333673663
    },
    {
      "task_name": "gsarti/flores_101_ast",
      "prompt_name": null,
      "word_perplexity": 10657.272913539553
    },
    {
      "task_name": "gsarti/flores_101_ast",
      "prompt_name": null,
      "byte_perplexity": 4.260251728273795
    },
    {
      "task_name": "gsarti/flores_101_ast",
      "prompt_name": null,
      "bits_per_byte": 2.0909386784329675
    },
    {
      "task_name": "gsarti/flores_101_azj",
      "prompt_name": null,
      "word_perplexity": 45923924.18878753
    },
    {
      "task_name": "gsarti/flores_101_azj",
      "prompt_name": null,
      "byte_perplexity": 7.691396328945705
    },
    {
      "task_name": "gsarti/flores_101_azj",
      "prompt_name": null,
      "bits_per_byte": 2.9432455349850195
    },
    {
      "task_name": "gsarti/flores_101_bel",
      "prompt_name": null,
      "word_perplexity": 23935692.781315073
    },
    {
      "task_name": "gsarti/flores_101_bel",
      "prompt_name": null,
      "byte_perplexity": 3.7706591215465943
    },
    {
      "task_name": "gsarti/flores_101_bel",
      "prompt_name": null,
      "bits_per_byte": 1.914816732584341
    },
    {
      "task_name": "gsarti/flores_101_ben",
      "prompt_name": null,
      "word_perplexity": 2480418685142.412
    },
    {
      "task_name": "gsarti/flores_101_ben",
      "prompt_name": null,
      "byte_perplexity": 5.074281765515423
    },
    {
      "task_name": "gsarti/flores_101_ben",
      "prompt_name": null,
      "bits_per_byte": 2.3432036318231058
    },
    {
      "task_name": "gsarti/flores_101_bos",
      "prompt_name": null,
      "word_perplexity": 229622.13691086147
    },
    {
      "task_name": "gsarti/flores_101_bos",
      "prompt_name": null,
      "byte_perplexity": 6.343363734045183
    },
    {
      "task_name": "gsarti/flores_101_bos",
      "prompt_name": null,
      "bits_per_byte": 2.665248069942796
    },
    {
      "task_name": "gsarti/flores_101_bul",
      "prompt_name": null,
      "word_perplexity": 194851.13344620814
    },
    {
      "task_name": "gsarti/flores_101_bul",
      "prompt_name": null,
      "byte_perplexity": 2.8553687444403257
    },
    {
      "task_name": "gsarti/flores_101_bul",
      "prompt_name": null,
      "bits_per_byte": 1.5136770683283687
    },
    {
      "task_name": "gsarti/flores_101_mya",
      "prompt_name": null,
      "word_perplexity": 5.887577237013639e+18
    },
    {
      "task_name": "gsarti/flores_101_mya",
      "prompt_name": null,
      "byte_perplexity": 2.657561458464019
    },
    {
      "task_name": "gsarti/flores_101_mya",
      "prompt_name": null,
      "bits_per_byte": 1.4101030557435918
    },
    {
      "task_name": "gsarti/flores_101_cat",
      "prompt_name": null,
      "word_perplexity": 179.13123174533087
    },
    {
      "task_name": "gsarti/flores_101_cat",
      "prompt_name": null,
      "byte_perplexity": 2.358207169698056
    },
    {
      "task_name": "gsarti/flores_101_cat",
      "prompt_name": null,
      "bits_per_byte": 1.2376904653775254
    },
    {
      "task_name": "gsarti/flores_101_ceb",
      "prompt_name": null,
      "word_perplexity": 113330.67154113152
    },
    {
      "task_name": "gsarti/flores_101_ceb",
      "prompt_name": null,
      "byte_perplexity": 6.896481056329736
    },
    {
      "task_name": "gsarti/flores_101_ceb",
      "prompt_name": null,
      "bits_per_byte": 2.7858604115174295
    },
    {
      "task_name": "gsarti/flores_101_zho_simpl",
      "prompt_name": null,
      "word_perplexity": 1.0554528210220222e+21
    },
    {
      "task_name": "gsarti/flores_101_zho_simpl",
      "prompt_name": null,
      "byte_perplexity": 2.322457417595381
    },
    {
      "task_name": "gsarti/flores_101_zho_simpl",
      "prompt_name": null,
      "bits_per_byte": 1.2156521449449949
    },
    {
      "task_name": "gsarti/flores_101_zho_trad",
      "prompt_name": null,
      "word_perplexity": 4.787781515987923e+24
    },
    {
      "task_name": "gsarti/flores_101_zho_trad",
      "prompt_name": null,
      "byte_perplexity": 2.5709177552415134
    },
    {
      "task_name": "gsarti/flores_101_zho_trad",
      "prompt_name": null,
      "bits_per_byte": 1.3622834584784203
    },
    {
      "task_name": "gsarti/flores_101_hrv",
      "prompt_name": null,
      "word_perplexity": 307789.1462790266
    },
    {
      "task_name": "gsarti/flores_101_hrv",
      "prompt_name": null,
      "byte_perplexity": 6.50559790827845
    },
    {
      "task_name": "gsarti/flores_101_hrv",
      "prompt_name": null,
      "bits_per_byte": 2.7016816564307984
    },
    {
      "task_name": "gsarti/flores_101_ces",
      "prompt_name": null,
      "word_perplexity": 625101.1441414964
    },
    {
      "task_name": "gsarti/flores_101_ces",
      "prompt_name": null,
      "byte_perplexity": 6.126526835715164
    },
    {
      "task_name": "gsarti/flores_101_ces",
      "prompt_name": null,
      "bits_per_byte": 2.6150694333085327
    },
    {
      "task_name": "gsarti/flores_101_dan",
      "prompt_name": null,
      "word_perplexity": 71695.50336412797
    },
    {
      "task_name": "gsarti/flores_101_dan",
      "prompt_name": null,
      "byte_perplexity": 5.778786323448377
    },
    {
      "task_name": "gsarti/flores_101_dan",
      "prompt_name": null,
      "bits_per_byte": 2.5307665257708245
    },
    {
      "task_name": "gsarti/flores_101_nld",
      "prompt_name": null,
      "word_perplexity": 13951.877058430618
    },
    {
      "task_name": "gsarti/flores_101_nld",
      "prompt_name": null,
      "byte_perplexity": 4.535651709856251
    },
    {
      "task_name": "gsarti/flores_101_nld",
      "prompt_name": null,
      "bits_per_byte": 2.1813098607926804
    },
    {
      "task_name": "gsarti/flores_101_eng",
      "prompt_name": null,
      "word_perplexity": 75.56480997823662
    },
    {
      "task_name": "gsarti/flores_101_eng",
      "prompt_name": null,
      "byte_perplexity": 2.061283234268159
    },
    {
      "task_name": "gsarti/flores_101_eng",
      "prompt_name": null,
      "bits_per_byte": 1.0435427545613876
    },
    {
      "task_name": "gsarti/flores_101_est",
      "prompt_name": null,
      "word_perplexity": 92602633.82439691
    },
    {
      "task_name": "gsarti/flores_101_est",
      "prompt_name": null,
      "byte_perplexity": 10.131736127467489
    },
    {
      "task_name": "gsarti/flores_101_est",
      "prompt_name": null,
      "bits_per_byte": 3.340809503762674
    },
    {
      "task_name": "gsarti/flores_101_tgl",
      "prompt_name": null,
      "word_perplexity": 87554.31770184237
    },
    {
      "task_name": "gsarti/flores_101_tgl",
      "prompt_name": null,
      "byte_perplexity": 6.256957969905079
    },
    {
      "task_name": "gsarti/flores_101_tgl",
      "prompt_name": null,
      "bits_per_byte": 2.645461413001105
    },
    {
      "task_name": "gsarti/flores_101_fin",
      "prompt_name": null,
      "word_perplexity": 91621886.60145952
    },
    {
      "task_name": "gsarti/flores_101_fin",
      "prompt_name": null,
      "byte_perplexity": 7.5129644427067355
    },
    {
      "task_name": "gsarti/flores_101_fin",
      "prompt_name": null,
      "bits_per_byte": 2.9093822743068216
    },
    {
      "task_name": "gsarti/flores_101_fra",
      "prompt_name": null,
      "word_perplexity": 89.45884576931464
    },
    {
      "task_name": "gsarti/flores_101_fra",
      "prompt_name": null,
      "byte_perplexity": 2.0177390037335385
    },
    {
      "task_name": "gsarti/flores_101_fra",
      "prompt_name": null,
      "bits_per_byte": 1.0127395726746855
    },
    {
      "task_name": "gsarti/flores_101_ful",
      "prompt_name": null,
      "word_perplexity": 908715.1423017589
    },
    {
      "task_name": "gsarti/flores_101_ful",
      "prompt_name": null,
      "byte_perplexity": 11.810263420287875
    },
    {
      "task_name": "gsarti/flores_101_ful",
      "prompt_name": null,
      "bits_per_byte": 3.561969238361191
    },
    {
      "task_name": "gsarti/flores_101_glg",
      "prompt_name": null,
      "word_perplexity": 1537.3193913761668
    },
    {
      "task_name": "gsarti/flores_101_glg",
      "prompt_name": null,
      "byte_perplexity": 3.2214647330840154
    },
    {
      "task_name": "gsarti/flores_101_glg",
      "prompt_name": null,
      "bits_per_byte": 1.6877168009728167
    },
    {
      "task_name": "gsarti/flores_101_lug",
      "prompt_name": null,
      "word_perplexity": 32046806.791237485
    },
    {
      "task_name": "gsarti/flores_101_lug",
      "prompt_name": null,
      "byte_perplexity": 9.285708185212261
    },
    {
      "task_name": "gsarti/flores_101_lug",
      "prompt_name": null,
      "bits_per_byte": 3.2150119431528754
    },
    {
      "task_name": "gsarti/flores_101_kat",
      "prompt_name": null,
      "word_perplexity": 1133105340.614723
    },
    {
      "task_name": "gsarti/flores_101_kat",
      "prompt_name": null,
      "byte_perplexity": 2.5184571084900518
    },
    {
      "task_name": "gsarti/flores_101_kat",
      "prompt_name": null,
      "bits_per_byte": 1.3325401608568794
    },
    {
      "task_name": "gsarti/flores_101_deu",
      "prompt_name": null,
      "word_perplexity": 5647.282599404732
    },
    {
      "task_name": "gsarti/flores_101_deu",
      "prompt_name": null,
      "byte_perplexity": 3.361758059911202
    },
    {
      "task_name": "gsarti/flores_101_deu",
      "prompt_name": null,
      "bits_per_byte": 1.7492158999678582
    },
    {
      "task_name": "gsarti/flores_101_ell",
      "prompt_name": null,
      "word_perplexity": 102751.5248402687
    },
    {
      "task_name": "gsarti/flores_101_ell",
      "prompt_name": null,
      "byte_perplexity": 2.6139607239932805
    },
    {
      "task_name": "gsarti/flores_101_ell",
      "prompt_name": null,
      "bits_per_byte": 1.3862374641150543
    },
    {
      "task_name": "gsarti/flores_101_guj",
      "prompt_name": null,
      "word_perplexity": 133216198508.6925
    },
    {
      "task_name": "gsarti/flores_101_guj",
      "prompt_name": null,
      "byte_perplexity": 5.125904532570054
    },
    {
      "task_name": "gsarti/flores_101_guj",
      "prompt_name": null,
      "bits_per_byte": 2.357806609400009
    },
    {
      "task_name": "gsarti/flores_101_hau",
      "prompt_name": null,
      "word_perplexity": 730749.6449046461
    },
    {
      "task_name": "gsarti/flores_101_hau",
      "prompt_name": null,
      "byte_perplexity": 11.049458818357667
    },
    {
      "task_name": "gsarti/flores_101_hau",
      "prompt_name": null,
      "bits_per_byte": 3.4659038057537184
    },
    {
      "task_name": "gsarti/flores_101_heb",
      "prompt_name": null,
      "word_perplexity": 880255.4148832298
    },
    {
      "task_name": "gsarti/flores_101_heb",
      "prompt_name": null,
      "byte_perplexity": 3.7036842387723694
    },
    {
      "task_name": "gsarti/flores_101_heb",
      "prompt_name": null,
      "bits_per_byte": 1.8889611054621571
    },
    {
      "task_name": "gsarti/flores_101_hin",
      "prompt_name": null,
      "word_perplexity": 453226793.5348556
    },
    {
      "task_name": "gsarti/flores_101_hin",
      "prompt_name": null,
      "byte_perplexity": 4.581311639568996
    },
    {
      "task_name": "gsarti/flores_101_hin",
      "prompt_name": null,
      "bits_per_byte": 2.195760704215568
    },
    {
      "task_name": "gsarti/flores_101_hun",
      "prompt_name": null,
      "word_perplexity": 8545882.19823639
    },
    {
      "task_name": "gsarti/flores_101_hun",
      "prompt_name": null,
      "byte_perplexity": 7.19531655942431
    },
    {
      "task_name": "gsarti/flores_101_hun",
      "prompt_name": null,
      "bits_per_byte": 2.8470581600253615
    },
    {
      "task_name": "gsarti/flores_101_isl",
      "prompt_name": null,
      "word_perplexity": 3947458.536983725
    },
    {
      "task_name": "gsarti/flores_101_isl",
      "prompt_name": null,
      "byte_perplexity": 8.812045732299993
    },
    {
      "task_name": "gsarti/flores_101_isl",
      "prompt_name": null,
      "bits_per_byte": 3.1394769822824644
    },
    {
      "task_name": "gsarti/flores_101_ibo",
      "prompt_name": null,
      "word_perplexity": 99576.38125028457
    },
    {
      "task_name": "gsarti/flores_101_ibo",
      "prompt_name": null,
      "byte_perplexity": 6.06807351892086
    },
    {
      "task_name": "gsarti/flores_101_ibo",
      "prompt_name": null,
      "bits_per_byte": 2.6012385649422316
    },
    {
      "task_name": "gsarti/flores_101_ind",
      "prompt_name": null,
      "word_perplexity": 299.41864562936706
    },
    {
      "task_name": "gsarti/flores_101_ind",
      "prompt_name": null,
      "byte_perplexity": 2.2193428661828962
    },
    {
      "task_name": "gsarti/flores_101_ind",
      "prompt_name": null,
      "bits_per_byte": 1.1501325666473412
    },
    {
      "task_name": "gsarti/flores_101_gle",
      "prompt_name": null,
      "word_perplexity": 1548851.5929806433
    },
    {
      "task_name": "gsarti/flores_101_gle",
      "prompt_name": null,
      "byte_perplexity": 9.712259930753122
    },
    {
      "task_name": "gsarti/flores_101_gle",
      "prompt_name": null,
      "bits_per_byte": 3.2798070331865063
    },
    {
      "task_name": "gsarti/flores_101_ita",
      "prompt_name": null,
      "word_perplexity": 1951.0663459405935
    },
    {
      "task_name": "gsarti/flores_101_ita",
      "prompt_name": null,
      "byte_perplexity": 3.238337491305615
    },
    {
      "task_name": "gsarti/flores_101_ita",
      "prompt_name": null,
      "bits_per_byte": 1.695253347487448
    },
    {
      "task_name": "gsarti/flores_101_jpn",
      "prompt_name": null,
      "word_perplexity": 6.0024027118732196e+69
    },
    {
      "task_name": "gsarti/flores_101_jpn",
      "prompt_name": null,
      "byte_perplexity": 2.907038023970581
    },
    {
      "task_name": "gsarti/flores_101_jpn",
      "prompt_name": null,
      "bits_per_byte": 1.539549942005635
    },
    {
      "task_name": "gsarti/flores_101_jav",
      "prompt_name": null,
      "word_perplexity": 956961.3940329206
    },
    {
      "task_name": "gsarti/flores_101_jav",
      "prompt_name": null,
      "byte_perplexity": 7.460632752007581
    },
    {
      "task_name": "gsarti/flores_101_jav",
      "prompt_name": null,
      "bits_per_byte": 2.899297993680408
    },
    {
      "task_name": "gsarti/flores_101_kea",
      "prompt_name": null,
      "word_perplexity": 438558.0012817139
    },
    {
      "task_name": "gsarti/flores_101_kea",
      "prompt_name": null,
      "byte_perplexity": 9.281572608888562
    },
    {
      "task_name": "gsarti/flores_101_kea",
      "prompt_name": null,
      "bits_per_byte": 3.2143692668645976
    },
    {
      "task_name": "gsarti/flores_101_kam",
      "prompt_name": null,
      "word_perplexity": 4288601.196402131
    },
    {
      "task_name": "gsarti/flores_101_kam",
      "prompt_name": null,
      "byte_perplexity": 11.436917146974627
    },
    {
      "task_name": "gsarti/flores_101_kam",
      "prompt_name": null,
      "bits_per_byte": 3.515626316920499
    },
    {
      "task_name": "gsarti/flores_101_kan",
      "prompt_name": null,
      "word_perplexity": 5.3861539364992216e+16
    },
    {
      "task_name": "gsarti/flores_101_kan",
      "prompt_name": null,
      "byte_perplexity": 5.274956219477929
    },
    {
      "task_name": "gsarti/flores_101_kan",
      "prompt_name": null,
      "bits_per_byte": 2.3991591199422513
    },
    {
      "task_name": "gsarti/flores_101_kaz",
      "prompt_name": null,
      "word_perplexity": 89537342.10068764
    },
    {
      "task_name": "gsarti/flores_101_kaz",
      "prompt_name": null,
      "byte_perplexity": 3.5945005448756477
    },
    {
      "task_name": "gsarti/flores_101_kaz",
      "prompt_name": null,
      "bits_per_byte": 1.845791322405974
    }
  ],
"versions": {
|
| 1560 |
+
"wic+GPT-3-prompt": 0,
|
| 1561 |
+
"wic+GPT-3-prompt-with-label": 0,
|
| 1562 |
+
"wic+affirmation_true_or_false": 0,
|
| 1563 |
+
"wic+grammar_homework": 0,
|
| 1564 |
+
"wic+polysemous": 0,
|
| 1565 |
+
"wic+question-context": 0,
|
| 1566 |
+
"wic+question-context-meaning": 0,
|
| 1567 |
+
"wic+question-context-meaning-with-label": 0,
|
| 1568 |
+
"wic+same_sense": 0,
|
| 1569 |
+
"wic+similar-sense": 0,
|
| 1570 |
+
"wsc+GPT-3 Style": 0,
|
| 1571 |
+
"wsc+I think they mean": 0,
|
| 1572 |
+
"wsc+Who or what is/are": 0,
|
| 1573 |
+
"wsc+by p they mean": 0,
|
| 1574 |
+
"wsc+does p stand for": 0,
|
| 1575 |
+
"wsc+does the pronoun refer to": 0,
|
| 1576 |
+
"wsc+in other words": 0,
|
| 1577 |
+
"wsc+p is/are r": 0,
|
| 1578 |
+
"wsc+replaced with": 0,
|
| 1579 |
+
"wsc+the pronoun refers to": 0,
|
| 1580 |
+
"wnli+confident": 1,
|
| 1581 |
+
"wnli+entailment explained": 1,
|
| 1582 |
+
"wnli+imply": 1,
|
| 1583 |
+
"wnli+justified": 1,
|
| 1584 |
+
"wnli+mean": 1,
|
| 1585 |
+
"gsarti/flores_101_afr+null": 0,
|
| 1586 |
+
"gsarti/flores_101_amh+null": 0,
|
| 1587 |
+
"gsarti/flores_101_ara+null": 0,
|
| 1588 |
+
"gsarti/flores_101_hye+null": 0,
|
| 1589 |
+
"gsarti/flores_101_asm+null": 0,
|
| 1590 |
+
"gsarti/flores_101_ast+null": 0,
|
| 1591 |
+
"gsarti/flores_101_azj+null": 0,
|
| 1592 |
+
"gsarti/flores_101_bel+null": 0,
|
| 1593 |
+
"gsarti/flores_101_ben+null": 0,
|
| 1594 |
+
"gsarti/flores_101_bos+null": 0,
|
| 1595 |
+
"gsarti/flores_101_bul+null": 0,
|
| 1596 |
+
"gsarti/flores_101_mya+null": 0,
|
| 1597 |
+
"gsarti/flores_101_cat+null": 0,
|
| 1598 |
+
"gsarti/flores_101_ceb+null": 0,
|
| 1599 |
+
"gsarti/flores_101_zho_simpl+null": 0,
|
| 1600 |
+
"gsarti/flores_101_zho_trad+null": 0,
|
| 1601 |
+
"gsarti/flores_101_hrv+null": 0,
|
| 1602 |
+
"gsarti/flores_101_ces+null": 0,
|
| 1603 |
+
"gsarti/flores_101_dan+null": 0,
|
| 1604 |
+
"gsarti/flores_101_nld+null": 0,
|
| 1605 |
+
"gsarti/flores_101_eng+null": 0,
|
| 1606 |
+
"gsarti/flores_101_est+null": 0,
|
| 1607 |
+
"gsarti/flores_101_tgl+null": 0,
|
| 1608 |
+
"gsarti/flores_101_fin+null": 0,
|
| 1609 |
+
"gsarti/flores_101_fra+null": 0,
|
| 1610 |
+
"gsarti/flores_101_ful+null": 0,
|
| 1611 |
+
"gsarti/flores_101_glg+null": 0,
|
| 1612 |
+
"gsarti/flores_101_lug+null": 0,
|
| 1613 |
+
"gsarti/flores_101_kat+null": 0,
|
| 1614 |
+
"gsarti/flores_101_deu+null": 0,
|
| 1615 |
+
"gsarti/flores_101_ell+null": 0,
|
| 1616 |
+
"gsarti/flores_101_guj+null": 0,
|
| 1617 |
+
"gsarti/flores_101_hau+null": 0,
|
| 1618 |
+
"gsarti/flores_101_heb+null": 0,
|
| 1619 |
+
"gsarti/flores_101_hin+null": 0,
|
| 1620 |
+
"gsarti/flores_101_hun+null": 0,
|
| 1621 |
+
"gsarti/flores_101_isl+null": 0,
|
| 1622 |
+
"gsarti/flores_101_ibo+null": 0,
|
| 1623 |
+
"gsarti/flores_101_ind+null": 0,
|
| 1624 |
+
"gsarti/flores_101_gle+null": 0,
|
| 1625 |
+
"gsarti/flores_101_ita+null": 0,
|
| 1626 |
+
"gsarti/flores_101_jpn+null": 0,
|
| 1627 |
+
"gsarti/flores_101_jav+null": 0,
|
| 1628 |
+
"gsarti/flores_101_kea+null": 0,
|
| 1629 |
+
"gsarti/flores_101_kam+null": 0,
|
| 1630 |
+
"gsarti/flores_101_kan+null": 0,
|
| 1631 |
+
"gsarti/flores_101_kaz+null": 0
|
| 1632 |
+
},
|
| 1633 |
+
"table_results": {
|
| 1634 |
+
"wic+GPT-3-prompt": {
|
| 1635 |
+
"task_name": "wic",
|
| 1636 |
+
"prompt_name": "GPT-3-prompt",
|
| 1637 |
+
"acc": 0.5,
|
| 1638 |
+
"acc_stderr": 0.01981072129375818,
|
| 1639 |
+
"acc_norm": 0.5,
|
| 1640 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 1641 |
+
},
|
| 1642 |
+
"wic+GPT-3-prompt-with-label": {
|
| 1643 |
+
"task_name": "wic",
|
| 1644 |
+
"prompt_name": "GPT-3-prompt-with-label",
|
| 1645 |
+
"acc": 0.49216300940438873,
|
| 1646 |
+
"acc_stderr": 0.019808287657813832,
|
| 1647 |
+
"acc_norm": 0.5,
|
| 1648 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 1649 |
+
},
|
| 1650 |
+
"wic+affirmation_true_or_false": {
|
| 1651 |
+
"task_name": "wic",
|
| 1652 |
+
"prompt_name": "affirmation_true_or_false",
|
| 1653 |
+
"acc": 0.5,
|
| 1654 |
+
"acc_stderr": 0.01981072129375818,
|
| 1655 |
+
"acc_norm": 0.5078369905956113,
|
| 1656 |
+
"acc_norm_stderr": 0.019808287657813832
|
| 1657 |
+
},
|
| 1658 |
+
"wic+grammar_homework": {
|
| 1659 |
+
"task_name": "wic",
|
| 1660 |
+
"prompt_name": "grammar_homework",
|
| 1661 |
+
"acc": 0.5094043887147336,
|
| 1662 |
+
"acc_stderr": 0.019807216763271497,
|
| 1663 |
+
"acc_norm": 0.49843260188087773,
|
| 1664 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 1665 |
+
},
|
| 1666 |
+
"wic+polysemous": {
|
| 1667 |
+
"task_name": "wic",
|
| 1668 |
+
"prompt_name": "polysemous",
|
| 1669 |
+
"acc": 0.512539184952978,
|
| 1670 |
+
"acc_stderr": 0.019804490588592596,
|
| 1671 |
+
"acc_norm": 0.49843260188087773,
|
| 1672 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 1673 |
+
},
|
| 1674 |
+
"wic+question-context": {
|
| 1675 |
+
"task_name": "wic",
|
| 1676 |
+
"prompt_name": "question-context",
|
| 1677 |
+
"acc": 0.5266457680250783,
|
| 1678 |
+
"acc_stderr": 0.019782570188812167,
|
| 1679 |
+
"acc_norm": 0.5031347962382445,
|
| 1680 |
+
"acc_norm_stderr": 0.019810331932097542
|
| 1681 |
+
},
|
| 1682 |
+
"wic+question-context-meaning": {
|
| 1683 |
+
"task_name": "wic",
|
| 1684 |
+
"prompt_name": "question-context-meaning",
|
| 1685 |
+
"acc": 0.5438871473354232,
|
| 1686 |
+
"acc_stderr": 0.019734259601993404,
|
| 1687 |
+
"acc_norm": 0.5015673981191222,
|
| 1688 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 1689 |
+
},
|
| 1690 |
+
"wic+question-context-meaning-with-label": {
|
| 1691 |
+
"task_name": "wic",
|
| 1692 |
+
"prompt_name": "question-context-meaning-with-label",
|
| 1693 |
+
"acc": 0.5156739811912225,
|
| 1694 |
+
"acc_stderr": 0.019800984955347847,
|
| 1695 |
+
"acc_norm": 0.5015673981191222,
|
| 1696 |
+
"acc_norm_stderr": 0.019810623954060382
|
| 1697 |
+
},
|
| 1698 |
+
"wic+same_sense": {
|
| 1699 |
+
"task_name": "wic",
|
| 1700 |
+
"prompt_name": "same_sense",
|
| 1701 |
+
"acc": 0.5047021943573667,
|
| 1702 |
+
"acc_stderr": 0.019809845219259763,
|
| 1703 |
+
"acc_norm": 0.5,
|
| 1704 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 1705 |
+
},
|
| 1706 |
+
"wic+similar-sense": {
|
| 1707 |
+
"task_name": "wic",
|
| 1708 |
+
"prompt_name": "similar-sense",
|
| 1709 |
+
"acc": 0.542319749216301,
|
| 1710 |
+
"acc_stderr": 0.01973963328373276,
|
| 1711 |
+
"acc_norm": 0.5,
|
| 1712 |
+
"acc_norm_stderr": 0.01981072129375818
|
| 1713 |
+
},
|
| 1714 |
+
"wsc+GPT-3 Style": {
|
| 1715 |
+
"task_name": "wsc",
|
| 1716 |
+
"prompt_name": "GPT-3 Style",
|
| 1717 |
+
"acc": 0.36538461538461536,
|
| 1718 |
+
"acc_stderr": 0.0474473339327792,
|
| 1719 |
+
"acc_norm": 0.36538461538461536,
|
| 1720 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1721 |
+
},
|
| 1722 |
+
"wsc+I think they mean": {
|
| 1723 |
+
"task_name": "wsc",
|
| 1724 |
+
"prompt_name": "I think they mean",
|
| 1725 |
+
"acc": 0.36538461538461536,
|
| 1726 |
+
"acc_stderr": 0.0474473339327792,
|
| 1727 |
+
"acc_norm": 0.36538461538461536,
|
| 1728 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1729 |
+
},
|
| 1730 |
+
"wsc+Who or what is/are": {
|
| 1731 |
+
"task_name": "wsc",
|
| 1732 |
+
"prompt_name": "Who or what is/are",
|
| 1733 |
+
"acc": 0.40384615384615385,
|
| 1734 |
+
"acc_stderr": 0.048346889526540184,
|
| 1735 |
+
"acc_norm": 0.36538461538461536,
|
| 1736 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1737 |
+
},
|
| 1738 |
+
"wsc+by p they mean": {
|
| 1739 |
+
"task_name": "wsc",
|
| 1740 |
+
"prompt_name": "by p they mean",
|
| 1741 |
+
"acc": 0.36538461538461536,
|
| 1742 |
+
"acc_stderr": 0.0474473339327792,
|
| 1743 |
+
"acc_norm": 0.36538461538461536,
|
| 1744 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1745 |
+
},
|
| 1746 |
+
"wsc+does p stand for": {
|
| 1747 |
+
"task_name": "wsc",
|
| 1748 |
+
"prompt_name": "does p stand for",
|
| 1749 |
+
"acc": 0.375,
|
| 1750 |
+
"acc_stderr": 0.04770204856076104,
|
| 1751 |
+
"acc_norm": 0.36538461538461536,
|
| 1752 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1753 |
+
},
|
| 1754 |
+
"wsc+does the pronoun refer to": {
|
| 1755 |
+
"task_name": "wsc",
|
| 1756 |
+
"prompt_name": "does the pronoun refer to",
|
| 1757 |
+
"acc": 0.5480769230769231,
|
| 1758 |
+
"acc_stderr": 0.049038186969314335,
|
| 1759 |
+
"acc_norm": 0.36538461538461536,
|
| 1760 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1761 |
+
},
|
| 1762 |
+
"wsc+in other words": {
|
| 1763 |
+
"task_name": "wsc",
|
| 1764 |
+
"prompt_name": "in other words",
|
| 1765 |
+
"acc": 0.36538461538461536,
|
| 1766 |
+
"acc_stderr": 0.0474473339327792,
|
| 1767 |
+
"acc_norm": 0.5288461538461539,
|
| 1768 |
+
"acc_norm_stderr": 0.04918440626354964
|
| 1769 |
+
},
|
| 1770 |
+
"wsc+p is/are r": {
|
| 1771 |
+
"task_name": "wsc",
|
| 1772 |
+
"prompt_name": "p is/are r",
|
| 1773 |
+
"acc": 0.36538461538461536,
|
| 1774 |
+
"acc_stderr": 0.0474473339327792,
|
| 1775 |
+
"acc_norm": 0.34615384615384615,
|
| 1776 |
+
"acc_norm_stderr": 0.04687634642174987
|
| 1777 |
+
},
|
| 1778 |
+
"wsc+replaced with": {
|
| 1779 |
+
"task_name": "wsc",
|
| 1780 |
+
"prompt_name": "replaced with",
|
| 1781 |
+
"acc": 0.6153846153846154,
|
| 1782 |
+
"acc_stderr": 0.047936688680750406,
|
| 1783 |
+
"acc_norm": 0.36538461538461536,
|
| 1784 |
+
"acc_norm_stderr": 0.0474473339327792
|
| 1785 |
+
},
|
| 1786 |
+
"wsc+the pronoun refers to": {
|
| 1787 |
+
"task_name": "wsc",
|
| 1788 |
+
"prompt_name": "the pronoun refers to",
|
| 1789 |
+
"acc": 0.36538461538461536,
|
| 1790 |
+
"acc_stderr": 0.0474473339327792,
|
| 1791 |
+
"acc_norm": 0.5865384615384616,
|
| 1792 |
+
"acc_norm_stderr": 0.04852294969729053
|
| 1793 |
+
},
|
| 1794 |
+
"wnli+confident": {
|
| 1795 |
+
"task_name": "wnli",
|
| 1796 |
+
"prompt_name": "confident",
|
| 1797 |
+
"acc": 0.43661971830985913,
|
| 1798 |
+
"acc_stderr": 0.0592793555841297,
|
| 1799 |
+
"acc_norm": 0.43661971830985913,
|
| 1800 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 1801 |
+
},
|
| 1802 |
+
"wnli+entailment explained": {
|
| 1803 |
+
"task_name": "wnli",
|
| 1804 |
+
"prompt_name": "entailment explained",
|
| 1805 |
+
"acc": 0.39436619718309857,
|
| 1806 |
+
"acc_stderr": 0.058412510854444266,
|
| 1807 |
+
"acc_norm": 0.43661971830985913,
|
| 1808 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 1809 |
+
},
|
| 1810 |
+
"wnli+imply": {
|
| 1811 |
+
"task_name": "wnli",
|
| 1812 |
+
"prompt_name": "imply",
|
| 1813 |
+
"acc": 0.4225352112676056,
|
| 1814 |
+
"acc_stderr": 0.05903984205682581,
|
| 1815 |
+
"acc_norm": 0.43661971830985913,
|
| 1816 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 1817 |
+
},
|
| 1818 |
+
"wnli+justified": {
|
| 1819 |
+
"task_name": "wnli",
|
| 1820 |
+
"prompt_name": "justified",
|
| 1821 |
+
"acc": 0.43661971830985913,
|
| 1822 |
+
"acc_stderr": 0.0592793555841297,
|
| 1823 |
+
"acc_norm": 0.43661971830985913,
|
| 1824 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 1825 |
+
},
|
| 1826 |
+
"wnli+mean": {
|
| 1827 |
+
"task_name": "wnli",
|
| 1828 |
+
"prompt_name": "mean",
|
| 1829 |
+
"acc": 0.6619718309859155,
|
| 1830 |
+
"acc_stderr": 0.05653887739133513,
|
| 1831 |
+
"acc_norm": 0.43661971830985913,
|
| 1832 |
+
"acc_norm_stderr": 0.0592793555841297
|
| 1833 |
+
},
|
| 1834 |
+
"gsarti/flores_101_afr+null": {
|
| 1835 |
+
"task_name": "gsarti/flores_101_afr",
|
| 1836 |
+
"prompt_name": "null",
|
| 1837 |
+
"word_perplexity": 139324.0466654445,
|
| 1838 |
+
"byte_perplexity": 7.049422805555328,
|
| 1839 |
+
"bits_per_byte": 2.8175051369933213
|
| 1840 |
+
},
|
| 1841 |
+
"gsarti/flores_101_amh+null": {
|
| 1842 |
+
"task_name": "gsarti/flores_101_amh",
|
| 1843 |
+
"prompt_name": "null",
|
| 1844 |
+
"word_perplexity": 105036774.30501972,
|
| 1845 |
+
"byte_perplexity": 4.172368790188039,
|
| 1846 |
+
"bits_per_byte": 2.0608666814101815
|
| 1847 |
+
},
|
| 1848 |
+
"gsarti/flores_101_ara+null": {
|
| 1849 |
+
"task_name": "gsarti/flores_101_ara",
|
| 1850 |
+
"prompt_name": "null",
|
| 1851 |
+
"word_perplexity": 674.8640314665696,
|
| 1852 |
+
"byte_perplexity": 1.8400375612633983,
|
| 1853 |
+
"bits_per_byte": 0.8797352167688847
|
| 1854 |
+
},
|
| 1855 |
+
"gsarti/flores_101_hye+null": {
|
| 1856 |
+
"task_name": "gsarti/flores_101_hye",
|
| 1857 |
+
"prompt_name": "null",
|
| 1858 |
+
"word_perplexity": 99262887.01092263,
|
| 1859 |
+
"byte_perplexity": 3.7481249397064547,
|
| 1860 |
+
"bits_per_byte": 1.906169044483402
|
| 1861 |
+
},
|
| 1862 |
+
"gsarti/flores_101_asm+null": {
|
| 1863 |
+
"task_name": "gsarti/flores_101_asm",
|
| 1864 |
+
"prompt_name": "null",
|
| 1865 |
+
"word_perplexity": 6763188828222.085,
|
| 1866 |
+
"byte_perplexity": 5.497254736157445,
|
| 1867 |
+
"bits_per_byte": 2.458711333673663
|
| 1868 |
+
},
|
| 1869 |
+
"gsarti/flores_101_ast+null": {
|
| 1870 |
+
"task_name": "gsarti/flores_101_ast",
|
| 1871 |
+
"prompt_name": "null",
|
| 1872 |
+
"word_perplexity": 10657.272913539553,
|
| 1873 |
+
"byte_perplexity": 4.260251728273795,
|
| 1874 |
+
"bits_per_byte": 2.0909386784329675
|
| 1875 |
+
},
|
| 1876 |
+
"gsarti/flores_101_azj+null": {
|
| 1877 |
+
"task_name": "gsarti/flores_101_azj",
|
| 1878 |
+
"prompt_name": "null",
|
| 1879 |
+
"word_perplexity": 45923924.18878753,
|
| 1880 |
+
"byte_perplexity": 7.691396328945705,
|
| 1881 |
+
"bits_per_byte": 2.9432455349850195
|
| 1882 |
+
},
|
| 1883 |
+
"gsarti/flores_101_bel+null": {
|
| 1884 |
+
"task_name": "gsarti/flores_101_bel",
|
| 1885 |
+
"prompt_name": "null",
|
| 1886 |
+
"word_perplexity": 23935692.781315073,
|
| 1887 |
+
"byte_perplexity": 3.7706591215465943,
|
| 1888 |
+
"bits_per_byte": 1.914816732584341
|
| 1889 |
+
},
|
| 1890 |
+
"gsarti/flores_101_ben+null": {
|
| 1891 |
+
"task_name": "gsarti/flores_101_ben",
|
| 1892 |
+
"prompt_name": "null",
|
| 1893 |
+
"word_perplexity": 2480418685142.412,
|
| 1894 |
+
"byte_perplexity": 5.074281765515423,
|
| 1895 |
+
"bits_per_byte": 2.3432036318231058
|
| 1896 |
+
},
|
| 1897 |
+
"gsarti/flores_101_bos+null": {
|
| 1898 |
+
"task_name": "gsarti/flores_101_bos",
|
| 1899 |
+
"prompt_name": "null",
|
| 1900 |
+
"word_perplexity": 229622.13691086147,
|
| 1901 |
+
"byte_perplexity": 6.343363734045183,
|
| 1902 |
+
"bits_per_byte": 2.665248069942796
|
| 1903 |
+
},
|
| 1904 |
+
"gsarti/flores_101_bul+null": {
|
| 1905 |
+
"task_name": "gsarti/flores_101_bul",
|
| 1906 |
+
"prompt_name": "null",
|
| 1907 |
+
"word_perplexity": 194851.13344620814,
|
| 1908 |
+
"byte_perplexity": 2.8553687444403257,
|
| 1909 |
+
"bits_per_byte": 1.5136770683283687
|
| 1910 |
+
},
|
| 1911 |
+
"gsarti/flores_101_mya+null": {
|
| 1912 |
+
"task_name": "gsarti/flores_101_mya",
|
| 1913 |
+
"prompt_name": "null",
|
| 1914 |
+
"word_perplexity": 5.887577237013639e+18,
|
| 1915 |
+
"byte_perplexity": 2.657561458464019,
|
| 1916 |
+
"bits_per_byte": 1.4101030557435918
|
| 1917 |
+
},
|
| 1918 |
+
"gsarti/flores_101_cat+null": {
|
| 1919 |
+
"task_name": "gsarti/flores_101_cat",
|
| 1920 |
+
"prompt_name": "null",
|
| 1921 |
+
"word_perplexity": 179.13123174533087,
|
| 1922 |
+
"byte_perplexity": 2.358207169698056,
|
| 1923 |
+
"bits_per_byte": 1.2376904653775254
|
| 1924 |
+
},
|
| 1925 |
+
"gsarti/flores_101_ceb+null": {
|
| 1926 |
+
"task_name": "gsarti/flores_101_ceb",
|
| 1927 |
+
"prompt_name": "null",
|
| 1928 |
+
"word_perplexity": 113330.67154113152,
|
| 1929 |
+
"byte_perplexity": 6.896481056329736,
|
| 1930 |
+
"bits_per_byte": 2.7858604115174295
|
| 1931 |
+
},
|
| 1932 |
+
"gsarti/flores_101_zho_simpl+null": {
|
| 1933 |
+
"task_name": "gsarti/flores_101_zho_simpl",
|
| 1934 |
+
"prompt_name": "null",
|
| 1935 |
+
"word_perplexity": 1.0554528210220222e+21,
|
| 1936 |
+
"byte_perplexity": 2.322457417595381,
|
| 1937 |
+
"bits_per_byte": 1.2156521449449949
|
| 1938 |
+
},
|
| 1939 |
+
"gsarti/flores_101_zho_trad+null": {
|
| 1940 |
+
"task_name": "gsarti/flores_101_zho_trad",
|
| 1941 |
+
"prompt_name": "null",
|
| 1942 |
+
"word_perplexity": 4.787781515987923e+24,
|
| 1943 |
+
"byte_perplexity": 2.5709177552415134,
|
| 1944 |
+
"bits_per_byte": 1.3622834584784203
|
| 1945 |
+
},
|
| 1946 |
+
"gsarti/flores_101_hrv+null": {
|
| 1947 |
+
"task_name": "gsarti/flores_101_hrv",
|
| 1948 |
+
"prompt_name": "null",
|
| 1949 |
+
"word_perplexity": 307789.1462790266,
|
| 1950 |
+
"byte_perplexity": 6.50559790827845,
|
| 1951 |
+
"bits_per_byte": 2.7016816564307984
|
| 1952 |
+
},
|
| 1953 |
+
"gsarti/flores_101_ces+null": {
|
| 1954 |
+
"task_name": "gsarti/flores_101_ces",
|
| 1955 |
+
"prompt_name": "null",
|
| 1956 |
+
"word_perplexity": 625101.1441414964,
|
| 1957 |
+
"byte_perplexity": 6.126526835715164,
|
| 1958 |
+
"bits_per_byte": 2.6150694333085327
|
| 1959 |
+
},
|
| 1960 |
+
"gsarti/flores_101_dan+null": {
|
| 1961 |
+
"task_name": "gsarti/flores_101_dan",
|
| 1962 |
+
"prompt_name": "null",
|
| 1963 |
+
"word_perplexity": 71695.50336412797,
|
| 1964 |
+
"byte_perplexity": 5.778786323448377,
|
| 1965 |
+
"bits_per_byte": 2.5307665257708245
|
| 1966 |
+
},
|
| 1967 |
+
"gsarti/flores_101_nld+null": {
|
| 1968 |
+
"task_name": "gsarti/flores_101_nld",
|
| 1969 |
+
"prompt_name": "null",
|
| 1970 |
+
"word_perplexity": 13951.877058430618,
|
| 1971 |
+
"byte_perplexity": 4.535651709856251,
|
| 1972 |
+
"bits_per_byte": 2.1813098607926804
|
| 1973 |
+
},
|
| 1974 |
+
"gsarti/flores_101_eng+null": {
|
| 1975 |
+
"task_name": "gsarti/flores_101_eng",
|
| 1976 |
+
"prompt_name": "null",
|
| 1977 |
+
"word_perplexity": 75.56480997823662,
|
| 1978 |
+
"byte_perplexity": 2.061283234268159,
|
| 1979 |
+
"bits_per_byte": 1.0435427545613876
|
| 1980 |
+
},
|
| 1981 |
+
"gsarti/flores_101_est+null": {
|
| 1982 |
+
"task_name": "gsarti/flores_101_est",
|
| 1983 |
+
"prompt_name": "null",
|
| 1984 |
+
"word_perplexity": 92602633.82439691,
|
| 1985 |
+
"byte_perplexity": 10.131736127467489,
|
| 1986 |
+
"bits_per_byte": 3.340809503762674
|
| 1987 |
+
},
|
| 1988 |
+
"gsarti/flores_101_tgl+null": {
|
| 1989 |
+
"task_name": "gsarti/flores_101_tgl",
|
| 1990 |
+
"prompt_name": "null",
|
| 1991 |
+
"word_perplexity": 87554.31770184237,
|
| 1992 |
+
"byte_perplexity": 6.256957969905079,
|
| 1993 |
+
"bits_per_byte": 2.645461413001105
|
| 1994 |
+
},
|
| 1995 |
+
"gsarti/flores_101_fin+null": {
|
| 1996 |
+
"task_name": "gsarti/flores_101_fin",
|
| 1997 |
+
"prompt_name": "null",
|
| 1998 |
+
"word_perplexity": 91621886.60145952,
|
| 1999 |
+
"byte_perplexity": 7.5129644427067355,
|
| 2000 |
+
"bits_per_byte": 2.9093822743068216
|
| 2001 |
+
},
|
| 2002 |
+
"gsarti/flores_101_fra+null": {
|
| 2003 |
+
"task_name": "gsarti/flores_101_fra",
|
| 2004 |
+
"prompt_name": "null",
|
| 2005 |
+
"word_perplexity": 89.45884576931464,
|
| 2006 |
+
"byte_perplexity": 2.0177390037335385,
|
| 2007 |
+
"bits_per_byte": 1.0127395726746855
|
| 2008 |
+
},
|
| 2009 |
+
"gsarti/flores_101_ful+null": {
|
| 2010 |
+
"task_name": "gsarti/flores_101_ful",
|
| 2011 |
+
"prompt_name": "null",
|
| 2012 |
+
"word_perplexity": 908715.1423017589,
|
| 2013 |
+
"byte_perplexity": 11.810263420287875,
|
| 2014 |
+
"bits_per_byte": 3.561969238361191
|
| 2015 |
+
},
|
| 2016 |
+
"gsarti/flores_101_glg+null": {
|
| 2017 |
+
"task_name": "gsarti/flores_101_glg",
|
| 2018 |
+
"prompt_name": "null",
|
| 2019 |
+
"word_perplexity": 1537.3193913761668,
|
| 2020 |
+
"byte_perplexity": 3.2214647330840154,
|
| 2021 |
+
"bits_per_byte": 1.6877168009728167
|
| 2022 |
+
},
|
| 2023 |
+
"gsarti/flores_101_lug+null": {
|
| 2024 |
+
"task_name": "gsarti/flores_101_lug",
|
| 2025 |
+
"prompt_name": "null",
|
| 2026 |
+
"word_perplexity": 32046806.791237485,
|
| 2027 |
+
"byte_perplexity": 9.285708185212261,
|
| 2028 |
+
"bits_per_byte": 3.2150119431528754
|
| 2029 |
+
},
|
| 2030 |
+
"gsarti/flores_101_kat+null": {
|
| 2031 |
+
"task_name": "gsarti/flores_101_kat",
|
| 2032 |
+
"prompt_name": "null",
|
| 2033 |
+
"word_perplexity": 1133105340.614723,
|
| 2034 |
+
"byte_perplexity": 2.5184571084900518,
|
| 2035 |
+
"bits_per_byte": 1.3325401608568794
|
| 2036 |
+
},
|
| 2037 |
+
"gsarti/flores_101_deu+null": {
|
| 2038 |
+
"task_name": "gsarti/flores_101_deu",
|
| 2039 |
+
"prompt_name": "null",
|
| 2040 |
+
"word_perplexity": 5647.282599404732,
|
| 2041 |
+
"byte_perplexity": 3.361758059911202,
|
| 2042 |
+
"bits_per_byte": 1.7492158999678582
|
| 2043 |
+
},
|
| 2044 |
+
"gsarti/flores_101_ell+null": {
|
| 2045 |
+
"task_name": "gsarti/flores_101_ell",
|
| 2046 |
+
"prompt_name": "null",
|
| 2047 |
+
"word_perplexity": 102751.5248402687,
|
| 2048 |
+
"byte_perplexity": 2.6139607239932805,
|
| 2049 |
+
"bits_per_byte": 1.3862374641150543
|
| 2050 |
+
},
|
| 2051 |
+
"gsarti/flores_101_guj+null": {
|
| 2052 |
+
"task_name": "gsarti/flores_101_guj",
|
| 2053 |
+
"prompt_name": "null",
|
| 2054 |
+
"word_perplexity": 133216198508.6925,
|
| 2055 |
+
"byte_perplexity": 5.125904532570054,
|
| 2056 |
+
"bits_per_byte": 2.357806609400009
|
| 2057 |
+
},
|
| 2058 |
+
"gsarti/flores_101_hau+null": {
|
| 2059 |
+
"task_name": "gsarti/flores_101_hau",
|
| 2060 |
+
"prompt_name": "null",
|
| 2061 |
+
"word_perplexity": 730749.6449046461,
|
| 2062 |
+
"byte_perplexity": 11.049458818357667,
|
| 2063 |
+
"bits_per_byte": 3.4659038057537184
|
| 2064 |
+
},
|
| 2065 |
+
"gsarti/flores_101_heb+null": {
|
| 2066 |
+
"task_name": "gsarti/flores_101_heb",
|
| 2067 |
+
"prompt_name": "null",
|
| 2068 |
+
"word_perplexity": 880255.4148832298,
|
| 2069 |
+
"byte_perplexity": 3.7036842387723694,
|
| 2070 |
+
"bits_per_byte": 1.8889611054621571
|
| 2071 |
+
},
|
| 2072 |
+
"gsarti/flores_101_hin+null": {
|
| 2073 |
+
"task_name": "gsarti/flores_101_hin",
|
| 2074 |
+
"prompt_name": "null",
|
| 2075 |
+
"word_perplexity": 453226793.5348556,
|
| 2076 |
+
"byte_perplexity": 4.581311639568996,
|
| 2077 |
+
"bits_per_byte": 2.195760704215568
|
| 2078 |
+
},
|
| 2079 |
+
"gsarti/flores_101_hun+null": {
|
| 2080 |
+
"task_name": "gsarti/flores_101_hun",
|
| 2081 |
+
"prompt_name": "null",
|
| 2082 |
+
"word_perplexity": 8545882.19823639,
|
| 2083 |
+
"byte_perplexity": 7.19531655942431,
|
| 2084 |
+
"bits_per_byte": 2.8470581600253615
|
| 2085 |
+
},
|
| 2086 |
+
"gsarti/flores_101_isl+null": {
|
| 2087 |
+
"task_name": "gsarti/flores_101_isl",
|
| 2088 |
+
"prompt_name": "null",
|
| 2089 |
+
"word_perplexity": 3947458.536983725,
|
| 2090 |
+
"byte_perplexity": 8.812045732299993,
|
| 2091 |
+
"bits_per_byte": 3.1394769822824644
|
| 2092 |
+
},
|
| 2093 |
+
"gsarti/flores_101_ibo+null": {
|
| 2094 |
+
"task_name": "gsarti/flores_101_ibo",
|
| 2095 |
+
"prompt_name": "null",
|
| 2096 |
+
"word_perplexity": 99576.38125028457,
|
| 2097 |
+
"byte_perplexity": 6.06807351892086,
|
| 2098 |
+
"bits_per_byte": 2.6012385649422316
|
| 2099 |
+
},
|
| 2100 |
+
"gsarti/flores_101_ind+null": {
|
| 2101 |
+
"task_name": "gsarti/flores_101_ind",
|
| 2102 |
+
"prompt_name": "null",
|
| 2103 |
+
"word_perplexity": 299.41864562936706,
|
| 2104 |
+
"byte_perplexity": 2.2193428661828962,
|
| 2105 |
+
"bits_per_byte": 1.1501325666473412
|
| 2106 |
+
},
|
| 2107 |
+
"gsarti/flores_101_gle+null": {
|
| 2108 |
+
"task_name": "gsarti/flores_101_gle",
|
| 2109 |
+
"prompt_name": "null",
|
| 2110 |
+
"word_perplexity": 1548851.5929806433,
|
| 2111 |
+
"byte_perplexity": 9.712259930753122,
|
| 2112 |
+
"bits_per_byte": 3.2798070331865063
|
| 2113 |
+
},
|
| 2114 |
+
"gsarti/flores_101_ita+null": {
|
| 2115 |
+
"task_name": "gsarti/flores_101_ita",
|
| 2116 |
+
"prompt_name": "null",
|
| 2117 |
+
"word_perplexity": 1951.0663459405935,
|
| 2118 |
+
"byte_perplexity": 3.238337491305615,
|
| 2119 |
+
"bits_per_byte": 1.695253347487448
|
| 2120 |
+
},
|
| 2121 |
+
"gsarti/flores_101_jpn+null": {
|
| 2122 |
+
"task_name": "gsarti/flores_101_jpn",
|
| 2123 |
+
"prompt_name": "null",
|
| 2124 |
+
"word_perplexity": 6.0024027118732196e+69,
|
| 2125 |
+
"byte_perplexity": 2.907038023970581,
|
| 2126 |
+
"bits_per_byte": 1.539549942005635
|
| 2127 |
+
},
|
| 2128 |
+
"gsarti/flores_101_jav+null": {
|
| 2129 |
+
"task_name": "gsarti/flores_101_jav",
|
| 2130 |
+
"prompt_name": "null",
|
| 2131 |
+
"word_perplexity": 956961.3940329206,
|
| 2132 |
+
"byte_perplexity": 7.460632752007581,
|
| 2133 |
+
"bits_per_byte": 2.899297993680408
|
| 2134 |
+
},
|
| 2135 |
+
"gsarti/flores_101_kea+null": {
|
| 2136 |
+
"task_name": "gsarti/flores_101_kea",
|
| 2137 |
+
"prompt_name": "null",
|
| 2138 |
+
"word_perplexity": 438558.0012817139,
|
| 2139 |
+
"byte_perplexity": 9.281572608888562,
|
| 2140 |
+
"bits_per_byte": 3.2143692668645976
|
| 2141 |
+
},
|
| 2142 |
+
"gsarti/flores_101_kam+null": {
|
| 2143 |
+
"task_name": "gsarti/flores_101_kam",
|
| 2144 |
+
"prompt_name": "null",
|
| 2145 |
+
"word_perplexity": 4288601.196402131,
|
| 2146 |
+
"byte_perplexity": 11.436917146974627,
|
| 2147 |
+
"bits_per_byte": 3.515626316920499
|
| 2148 |
+
},
|
| 2149 |
+
"gsarti/flores_101_kan+null": {
|
| 2150 |
+
"task_name": "gsarti/flores_101_kan",
|
| 2151 |
+
"prompt_name": "null",
|
| 2152 |
+
"word_perplexity": 5.3861539364992216e+16,
|
| 2153 |
+
"byte_perplexity": 5.274956219477929,
|
| 2154 |
+
"bits_per_byte": 2.3991591199422513
|
| 2155 |
+
},
|
| 2156 |
+
"gsarti/flores_101_kaz+null": {
|
| 2157 |
+
"task_name": "gsarti/flores_101_kaz",
|
| 2158 |
+
"prompt_name": "null",
|
| 2159 |
+
"word_perplexity": 89537342.10068764,
|
| 2160 |
+
"byte_perplexity": 3.5945005448756477,
|
| 2161 |
+
"bits_per_byte": 1.845791322405974
|
| 2162 |
+
}
|
| 2163 |
+
},
|
| 2164 |
+
"config": {
|
| 2165 |
+
"adaptive_seq_len": true,
|
| 2166 |
+
"num_fewshot": 0,
|
| 2167 |
+
"bootstrap_iters": 100000
|
| 2168 |
+
}
|
| 2169 |
+
}
|
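
All of the result files added in this changeset share the layout visible above: a flat "results" list with one record per (task, prompt, metric), a "versions" map, a "table_results" map keyed by "task+prompt" that regroups the same numbers, and a "config" block. As a minimal sketch of how such a file could be tabulated with only the standard library (the path below is one of the files in this diff, shown purely as an example; any of the bsevalharness result files would work the same way):

import json

# Illustrative path -- point this at any of the result files added above.
path = "evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-10-03-25.json"

with open(path) as f:
    data = json.load(f)

# "table_results" already groups the flat "results" records by task+prompt,
# so one pass over it is enough to print a per-task metric summary.
for key, entry in sorted(data["table_results"].items()):
    metrics = {k: v for k, v in entry.items()
               if k not in ("task_name", "prompt_name")}
    line = ", ".join(f"{name}={value:.4g}" for name, value in metrics.items())
    print(f"{key}: {line}")
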
evaluation/results/tr11/bloom1b3/bslmevalfiles/tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-12-00-55.json
ADDED
@@ -0,0 +1,1255 @@
{
  "results": [
    {"task_name": "gsarti/flores_101_kor", "prompt_name": null, "word_perplexity": 1684949.6449262113},
    {"task_name": "gsarti/flores_101_kor", "prompt_name": null, "byte_perplexity": 4.065690303705374},
    {"task_name": "gsarti/flores_101_kor", "prompt_name": null, "bits_per_byte": 2.023500324792833},
    {"task_name": "gsarti/flores_101_kir", "prompt_name": null, "word_perplexity": 235337758.18519488},
    {"task_name": "gsarti/flores_101_kir", "prompt_name": null, "byte_perplexity": 3.8667573034119127},
    {"task_name": "gsarti/flores_101_kir", "prompt_name": null, "bits_per_byte": 1.9511242166700078},
    {"task_name": "gsarti/flores_101_lao", "prompt_name": null, "word_perplexity": 3.0817754157127624e+28},
    {"task_name": "gsarti/flores_101_lao", "prompt_name": null, "byte_perplexity": 3.1116396826339545},
    {"task_name": "gsarti/flores_101_lao", "prompt_name": null, "bits_per_byte": 1.6376750107826055},
    {"task_name": "gsarti/flores_101_lav", "prompt_name": null, "word_perplexity": 20692036.880855087},
    {"task_name": "gsarti/flores_101_lav", "prompt_name": null, "byte_perplexity": 8.431943399753028},
    {"task_name": "gsarti/flores_101_lav", "prompt_name": null, "bits_per_byte": 3.075865182775687},
    {"task_name": "gsarti/flores_101_lin", "prompt_name": null, "word_perplexity": 259077.7174090486},
    {"task_name": "gsarti/flores_101_lin", "prompt_name": null, "byte_perplexity": 8.10168498947524},
    {"task_name": "gsarti/flores_101_lin", "prompt_name": null, "bits_per_byte": 3.018221991102226},
    {"task_name": "gsarti/flores_101_lit", "prompt_name": null, "word_perplexity": 22011900.13997282},
    {"task_name": "gsarti/flores_101_lit", "prompt_name": null, "byte_perplexity": 8.297153789252596},
    {"task_name": "gsarti/flores_101_lit", "prompt_name": null, "bits_per_byte": 3.0526165270213905},
    {"task_name": "gsarti/flores_101_luo", "prompt_name": null, "word_perplexity": 1485111.1306447538},
    {"task_name": "gsarti/flores_101_luo", "prompt_name": null, "byte_perplexity": 12.202407052163576},
    {"task_name": "gsarti/flores_101_luo", "prompt_name": null, "bits_per_byte": 3.609093857404177},
    {"task_name": "gsarti/flores_101_ltz", "prompt_name": null, "word_perplexity": 6731220.931729273},
    {"task_name": "gsarti/flores_101_ltz", "prompt_name": null, "byte_perplexity": 9.453152958003827},
    {"task_name": "gsarti/flores_101_ltz", "prompt_name": null, "bits_per_byte": 3.2407955989852377},
    {"task_name": "gsarti/flores_101_mkd", "prompt_name": null, "word_perplexity": 513306.31562258815},
    {"task_name": "gsarti/flores_101_mkd", "prompt_name": null, "byte_perplexity": 3.11420755589491},
    {"task_name": "gsarti/flores_101_mkd", "prompt_name": null, "bits_per_byte": 1.6388651004482695},
    {"task_name": "gsarti/flores_101_msa", "prompt_name": null, "word_perplexity": 1188.7251531670374},
    {"task_name": "gsarti/flores_101_msa", "prompt_name": null, "byte_perplexity": 2.659096901190639},
    {"task_name": "gsarti/flores_101_msa", "prompt_name": null, "bits_per_byte": 1.4109363519680242},
    {"task_name": "gsarti/flores_101_mal", "prompt_name": null, "word_perplexity": 4.8990954217696134e+17},
    {"task_name": "gsarti/flores_101_mal", "prompt_name": null, "byte_perplexity": 4.465506197375413},
    {"task_name": "gsarti/flores_101_mal", "prompt_name": null, "bits_per_byte": 2.1588237245178132},
    {"task_name": "gsarti/flores_101_mlt", "prompt_name": null, "word_perplexity": 3271065298.9525104},
    {"task_name": "gsarti/flores_101_mlt", "prompt_name": null, "byte_perplexity": 16.164200382975334},
    {"task_name": "gsarti/flores_101_mlt", "prompt_name": null, "bits_per_byte": 4.014730236310589},
    {"task_name": "gsarti/flores_101_mri", "prompt_name": null, "word_perplexity": 42667.84366725716},
    {"task_name": "gsarti/flores_101_mri", "prompt_name": null, "byte_perplexity": 8.213330128288407},
    {"task_name": "gsarti/flores_101_mri", "prompt_name": null, "bits_per_byte": 3.037967287223778},
    {"task_name": "gsarti/flores_101_mar", "prompt_name": null, "word_perplexity": 53348101396468.1},
    {"task_name": "gsarti/flores_101_mar", "prompt_name": null, "byte_perplexity": 5.479577601103449},
    {"task_name": "gsarti/flores_101_mar", "prompt_name": null, "bits_per_byte": 2.454064685835334},
    {"task_name": "gsarti/flores_101_mon", "prompt_name": null, "word_perplexity": 11967156.496346941},
    {"task_name": "gsarti/flores_101_mon", "prompt_name": null, "byte_perplexity": 3.5723563966116956},
    {"task_name": "gsarti/flores_101_mon", "prompt_name": null, "bits_per_byte": 1.8368760183021453},
    {"task_name": "gsarti/flores_101_npi", "prompt_name": null, "word_perplexity": 7452421298650.788},
    {"task_name": "gsarti/flores_101_npi", "prompt_name": null, "byte_perplexity": 5.138638996619111},
    {"task_name": "gsarti/flores_101_npi", "prompt_name": null, "bits_per_byte": 2.361386302448311},
    {"task_name": "gsarti/flores_101_nso", "prompt_name": null, "word_perplexity": 133251.3907730927},
    {"task_name": "gsarti/flores_101_nso", "prompt_name": null, "byte_perplexity": 8.876839962509171},
    {"task_name": "gsarti/flores_101_nso", "prompt_name": null, "bits_per_byte": 3.150046187635368},
    {"task_name": "gsarti/flores_101_nob", "prompt_name": null, "word_perplexity": 64134.3587194621},
    {"task_name": "gsarti/flores_101_nob", "prompt_name": null, "byte_perplexity": 5.901843358131797},
    {"task_name": "gsarti/flores_101_nob", "prompt_name": null, "bits_per_byte": 2.561165630453858},
    {"task_name": "gsarti/flores_101_nya", "prompt_name": null, "word_perplexity": 13237249.320560299},
    {"task_name": "gsarti/flores_101_nya", "prompt_name": null, "byte_perplexity": 8.97654874419086},
    {"task_name": "gsarti/flores_101_nya", "prompt_name": null, "bits_per_byte": 3.166160871838487},
    {"task_name": "gsarti/flores_101_oci", "prompt_name": null, "word_perplexity": 29786.57326210068},
    {"task_name": "gsarti/flores_101_oci", "prompt_name": null, "byte_perplexity": 5.114108118049416},
    {"task_name": "gsarti/flores_101_oci", "prompt_name": null, "bits_per_byte": 2.3544826611123932},
    {"task_name": "gsarti/flores_101_ory", "prompt_name": null, "word_perplexity": 8232620282886.167},
    {"task_name": "gsarti/flores_101_ory", "prompt_name": null, "byte_perplexity": 5.086518347981296},
    {"task_name": "gsarti/flores_101_ory", "prompt_name": null, "bits_per_byte": 2.3466784891528936},
    {"task_name": "gsarti/flores_101_orm", "prompt_name": null, "word_perplexity": 1286222337.8393624},
    {"task_name": "gsarti/flores_101_orm", "prompt_name": null, "byte_perplexity": 13.414303089263644},
    {"task_name": "gsarti/flores_101_orm", "prompt_name": null, "bits_per_byte": 3.7457001993717243},
    {"task_name": "gsarti/flores_101_pus", "prompt_name": null, "word_perplexity": 200303.57214724104},
    {"task_name": "gsarti/flores_101_pus", "prompt_name": null, "byte_perplexity": 4.650458574106675},
    {"task_name": "gsarti/flores_101_pus", "prompt_name": null, "bits_per_byte": 2.2173729850313615},
    {"task_name": "gsarti/flores_101_fas", "prompt_name": null, "word_perplexity": 59965.98383842629},
    {"task_name": "gsarti/flores_101_fas", "prompt_name": null, "byte_perplexity": 3.1572599808371367},
    {"task_name": "gsarti/flores_101_fas", "prompt_name": null, "bits_per_byte": 1.6586730625582675},
    {"task_name": "gsarti/flores_101_pol", "prompt_name": null, "word_perplexity": 239703.75452947227},
    {"task_name": "gsarti/flores_101_pol", "prompt_name": null, "byte_perplexity": 5.165261846492578},
    {"task_name": "gsarti/flores_101_pol", "prompt_name": null, "bits_per_byte": 2.3688414865658434},
    {"task_name": "gsarti/flores_101_por", "prompt_name": null, "word_perplexity": 78.66129921108659},
    {"task_name": "gsarti/flores_101_por", "prompt_name": null, "byte_perplexity": 2.012150908931838},
    {"task_name": "gsarti/flores_101_por", "prompt_name": null, "bits_per_byte": 1.0087385096181816},
    {"task_name": "gsarti/flores_101_pan", "prompt_name": null, "word_perplexity": 2003582065.835696},
    {"task_name": "gsarti/flores_101_pan", "prompt_name": null, "byte_perplexity": 5.012603107956229},
    {"task_name": "gsarti/flores_101_pan", "prompt_name": null, "bits_per_byte": 2.3255600077385723},
    {"task_name": "gsarti/flores_101_ron", "prompt_name": null, "word_perplexity": 80490.92705368399},
    {"task_name": "gsarti/flores_101_ron", "prompt_name": null, "byte_perplexity": 5.603607947317877},
    {"task_name": "gsarti/flores_101_ron", "prompt_name": null, "bits_per_byte": 2.486356022105963},
    {"task_name": "gsarti/flores_101_rus", "prompt_name": null, "word_perplexity": 22038.65288574451},
    {"task_name": "gsarti/flores_101_rus", "prompt_name": null, "byte_perplexity": 2.1372096174466697},
    {"task_name": "gsarti/flores_101_rus", "prompt_name": null, "bits_per_byte": 1.095728414417906},
    {"task_name": "gsarti/flores_101_srp", "prompt_name": null, "word_perplexity": 359037.4163692842},
    {"task_name": "gsarti/flores_101_srp", "prompt_name": null, "byte_perplexity": 3.050738229673983},
    {"task_name": "gsarti/flores_101_srp", "prompt_name": null, "bits_per_byte": 1.6091583939601046},
    {"task_name": "gsarti/flores_101_sna", "prompt_name": null, "word_perplexity": 151658287.08006003},
    {"task_name": "gsarti/flores_101_sna", "prompt_name": null, "byte_perplexity": 9.361234419948593},
    {"task_name": "gsarti/flores_101_sna", "prompt_name": null, "bits_per_byte": 3.226698783453375},
    {"task_name": "gsarti/flores_101_snd", "prompt_name": null, "word_perplexity": 2195879.0537875695},
    {"task_name": "gsarti/flores_101_snd", "prompt_name": null, "byte_perplexity": 5.678399375652783},
    {"task_name": "gsarti/flores_101_snd", "prompt_name": null, "bits_per_byte": 2.505484320885354},
    {"task_name": "gsarti/flores_101_slk", "prompt_name": null, "word_perplexity": 1873211.2703176092},
    {"task_name": "gsarti/flores_101_slk", "prompt_name": null, "byte_perplexity": 7.294354718439043},
    {"task_name": "gsarti/flores_101_slk", "prompt_name": null, "bits_per_byte": 2.8667803584469502},
    {"task_name": "gsarti/flores_101_slv", "prompt_name": null, "word_perplexity": 609965.8362492598},
    {"task_name": "gsarti/flores_101_slv", "prompt_name": null, "byte_perplexity": 7.438107250941839},
    {"task_name": "gsarti/flores_101_slv", "prompt_name": null, "bits_per_byte": 2.894935550489075},
    {"task_name": "gsarti/flores_101_som", "prompt_name": null, "word_perplexity": 12921970.127169678},
    {"task_name": "gsarti/flores_101_som", "prompt_name": null, "byte_perplexity": 12.622705630414286},
    {"task_name": "gsarti/flores_101_som", "prompt_name": null, "bits_per_byte": 3.6579492747174616},
    {"task_name": "gsarti/flores_101_ckb", "prompt_name": null, "word_perplexity": 11104497.438038943},
    {"task_name": "gsarti/flores_101_ckb", "prompt_name": null, "byte_perplexity": 3.842852526862475},
    {"task_name": "gsarti/flores_101_ckb", "prompt_name": null, "bits_per_byte": 1.9421776126623524},
    {"task_name": "gsarti/flores_101_spa", "prompt_name": null, "word_perplexity": 55.14408503293887},
    {"task_name": "gsarti/flores_101_spa", "prompt_name": null, "byte_perplexity": 1.9240269109386998},
    {"task_name": "gsarti/flores_101_spa", "prompt_name": null, "bits_per_byte": 0.9441289779054047},
    {"task_name": "gsarti/flores_101_swh", "prompt_name": null, "word_perplexity": 6985.646204087442},
    {"task_name": "gsarti/flores_101_swh", "prompt_name": null, "byte_perplexity": 3.923430589092355},
    {"task_name": "gsarti/flores_101_swh", "prompt_name": null, "bits_per_byte": 1.9721156771582438},
    {"task_name": "gsarti/flores_101_swe", "prompt_name": null, "word_perplexity": 104567.9891705103},
    {"task_name": "gsarti/flores_101_swe", "prompt_name": null, "byte_perplexity": 5.634635291846611},
    {"task_name": "gsarti/flores_101_swe", "prompt_name": null, "bits_per_byte": 2.4943222333483153},
    {"task_name": "gsarti/flores_101_tgk", "prompt_name": null, "word_perplexity": 10003619.893239152},
    {"task_name": "gsarti/flores_101_tgk", "prompt_name": null, "byte_perplexity": 3.836804862794101},
    {"task_name": "gsarti/flores_101_tgk", "prompt_name": null, "bits_per_byte": 1.9399053923480125},
    {"task_name": "gsarti/flores_101_tam", "prompt_name": null, "word_perplexity": 4220234444737767.0},
    {"task_name": "gsarti/flores_101_tam", "prompt_name": null, "byte_perplexity": 4.286894531607389},
    {"task_name": "gsarti/flores_101_tam", "prompt_name": null, "bits_per_byte": 2.0999329236632325},
    {"task_name": "gsarti/flores_101_tel", "prompt_name": null, "word_perplexity": 7315913985648022.0},
    {"task_name": "gsarti/flores_101_tel", "prompt_name": null, "byte_perplexity": 5.852344181819556},
    {"task_name": "gsarti/flores_101_tel", "prompt_name": null, "bits_per_byte": 2.549014618212334},
    {"task_name": "gsarti/flores_101_tha", "prompt_name": null, "word_perplexity": 6.85384626099906e+32},
    {"task_name": "gsarti/flores_101_tha", "prompt_name": null, "byte_perplexity": 2.458737675753546},
    {"task_name": "gsarti/flores_101_tha", "prompt_name": null, "bits_per_byte": 1.2979178211163922},
    {"task_name": "gsarti/flores_101_tur", "prompt_name": null, "word_perplexity": 1230000.8194755162},
    {"task_name": "gsarti/flores_101_tur", "prompt_name": null, "byte_perplexity": 5.323529328304652},
    {"task_name": "gsarti/flores_101_tur", "prompt_name": null, "bits_per_byte": 2.4123830232149},
    {"task_name": "gsarti/flores_101_ukr", "prompt_name": null, "word_perplexity": 780615.9486315987},
    {"task_name": "gsarti/flores_101_ukr", "prompt_name": null, "byte_perplexity": 2.8843863497020608},
    {"task_name": "gsarti/flores_101_ukr", "prompt_name": null, "bits_per_byte": 1.5282644195953918},
    {"task_name": "gsarti/flores_101_umb", "prompt_name": null, "word_perplexity": 346118506.64866126},
    {"task_name": "gsarti/flores_101_umb", "prompt_name": null, "byte_perplexity": 13.088423907901921},
    {"task_name": "gsarti/flores_101_umb", "prompt_name": null, "bits_per_byte": 3.710219475046473},
    {"task_name": "gsarti/flores_101_urd", "prompt_name": null, "word_perplexity": 335.1943886252716},
    {"task_name": "gsarti/flores_101_urd", "prompt_name": null, "byte_perplexity": 2.010562039704537},
    {"task_name": "gsarti/flores_101_urd", "prompt_name": null, "bits_per_byte": 1.0075988539165108},
    {"task_name": "gsarti/flores_101_uzb", "prompt_name": null, "word_perplexity": 1248263505.2751954},
    {"task_name": "gsarti/flores_101_uzb", "prompt_name": null, "byte_perplexity": 12.980834294137205},
    {"task_name": "gsarti/flores_101_uzb", "prompt_name": null, "bits_per_byte": 3.69831120498359},
    {"task_name": "gsarti/flores_101_vie", "prompt_name": null, "word_perplexity": 33.51752264232948},
    {"task_name": "gsarti/flores_101_vie", "prompt_name": null, "byte_perplexity": 1.7976491760484148},
    {"task_name": "gsarti/flores_101_vie", "prompt_name": null, "bits_per_byte": 0.8461114961807352},
    {"task_name": "gsarti/flores_101_cym", "prompt_name": null, "word_perplexity": 5900331.966242436},
    {"task_name": "gsarti/flores_101_cym", "prompt_name": null, "byte_perplexity": 14.390369428021707},
    {"task_name": "gsarti/flores_101_cym", "prompt_name": null, "bits_per_byte": 3.8470317241534553},
    {"task_name": "gsarti/flores_101_wol", "prompt_name": null, "word_perplexity": 199684.7010180392},
    {"task_name": "gsarti/flores_101_wol", "prompt_name": null, "byte_perplexity": 10.072733993132132},
    {"task_name": "gsarti/flores_101_wol", "prompt_name": null, "bits_per_byte": 3.332383415073327},
    {"task_name": "gsarti/flores_101_xho", "prompt_name": null, "word_perplexity": 141017733.33017766},
    {"task_name": "gsarti/flores_101_xho", "prompt_name": null, "byte_perplexity": 8.241450154294917},
    {"task_name": "gsarti/flores_101_xho", "prompt_name": null, "bits_per_byte": 3.0428982143908727},
    {"task_name": "gsarti/flores_101_yor", "prompt_name": null, "word_perplexity": 171980.641422536},
    {"task_name": "gsarti/flores_101_yor", "prompt_name": null, "byte_perplexity": 6.165831615133067},
    {"task_name": "gsarti/flores_101_yor", "prompt_name": null, "bits_per_byte": 2.62429549091613},
    {"task_name": "gsarti/flores_101_zul", "prompt_name": null, "word_perplexity": 998742068.9481835},
    {"task_name": "gsarti/flores_101_zul", "prompt_name": null, "byte_perplexity": 9.202622963132773},
    {"task_name": "gsarti/flores_101_zul", "prompt_name": null, "bits_per_byte": 3.2020451216662975}
  ],
  "versions": {
    "gsarti/flores_101_kor+null": 0, "gsarti/flores_101_kir+null": 0, "gsarti/flores_101_lao+null": 0,
    "gsarti/flores_101_lav+null": 0, "gsarti/flores_101_lin+null": 0, "gsarti/flores_101_lit+null": 0,
    "gsarti/flores_101_luo+null": 0, "gsarti/flores_101_ltz+null": 0, "gsarti/flores_101_mkd+null": 0,
    "gsarti/flores_101_msa+null": 0, "gsarti/flores_101_mal+null": 0, "gsarti/flores_101_mlt+null": 0,
    "gsarti/flores_101_mri+null": 0, "gsarti/flores_101_mar+null": 0, "gsarti/flores_101_mon+null": 0,
    "gsarti/flores_101_npi+null": 0, "gsarti/flores_101_nso+null": 0, "gsarti/flores_101_nob+null": 0,
    "gsarti/flores_101_nya+null": 0, "gsarti/flores_101_oci+null": 0, "gsarti/flores_101_ory+null": 0,
    "gsarti/flores_101_orm+null": 0, "gsarti/flores_101_pus+null": 0, "gsarti/flores_101_fas+null": 0,
    "gsarti/flores_101_pol+null": 0, "gsarti/flores_101_por+null": 0, "gsarti/flores_101_pan+null": 0,
    "gsarti/flores_101_ron+null": 0, "gsarti/flores_101_rus+null": 0, "gsarti/flores_101_srp+null": 0,
    "gsarti/flores_101_sna+null": 0, "gsarti/flores_101_snd+null": 0, "gsarti/flores_101_slk+null": 0,
    "gsarti/flores_101_slv+null": 0, "gsarti/flores_101_som+null": 0, "gsarti/flores_101_ckb+null": 0,
    "gsarti/flores_101_spa+null": 0, "gsarti/flores_101_swh+null": 0, "gsarti/flores_101_swe+null": 0,
    "gsarti/flores_101_tgk+null": 0, "gsarti/flores_101_tam+null": 0, "gsarti/flores_101_tel+null": 0,
    "gsarti/flores_101_tha+null": 0, "gsarti/flores_101_tur+null": 0, "gsarti/flores_101_ukr+null": 0,
    "gsarti/flores_101_umb+null": 0, "gsarti/flores_101_urd+null": 0, "gsarti/flores_101_uzb+null": 0,
    "gsarti/flores_101_vie+null": 0, "gsarti/flores_101_cym+null": 0, "gsarti/flores_101_wol+null": 0,
    "gsarti/flores_101_xho+null": 0, "gsarti/flores_101_yor+null": 0, "gsarti/flores_101_zul+null": 0
  },
  "table_results": {
    "gsarti/flores_101_kor+null": {"task_name": "gsarti/flores_101_kor", "prompt_name": "null", "word_perplexity": 1684949.6449262113, "byte_perplexity": 4.065690303705374, "bits_per_byte": 2.023500324792833},
    "gsarti/flores_101_kir+null": {"task_name": "gsarti/flores_101_kir", "prompt_name": "null", "word_perplexity": 235337758.18519488, "byte_perplexity": 3.8667573034119127, "bits_per_byte": 1.9511242166700078},
    "gsarti/flores_101_lao+null": {"task_name": "gsarti/flores_101_lao", "prompt_name": "null", "word_perplexity": 3.0817754157127624e+28, "byte_perplexity": 3.1116396826339545, "bits_per_byte": 1.6376750107826055},
    "gsarti/flores_101_lav+null": {"task_name": "gsarti/flores_101_lav", "prompt_name": "null", "word_perplexity": 20692036.880855087, "byte_perplexity": 8.431943399753028, "bits_per_byte": 3.075865182775687},
    "gsarti/flores_101_lin+null": {"task_name": "gsarti/flores_101_lin", "prompt_name": "null", "word_perplexity": 259077.7174090486, "byte_perplexity": 8.10168498947524, "bits_per_byte": 3.018221991102226},
    "gsarti/flores_101_lit+null": {"task_name": "gsarti/flores_101_lit", "prompt_name": "null", "word_perplexity": 22011900.13997282, "byte_perplexity": 8.297153789252596, "bits_per_byte": 3.0526165270213905},
    "gsarti/flores_101_luo+null": {"task_name": "gsarti/flores_101_luo", "prompt_name": "null", "word_perplexity": 1485111.1306447538, "byte_perplexity": 12.202407052163576, "bits_per_byte": 3.609093857404177},
    "gsarti/flores_101_ltz+null": {"task_name": "gsarti/flores_101_ltz", "prompt_name": "null", "word_perplexity": 6731220.931729273, "byte_perplexity": 9.453152958003827, "bits_per_byte": 3.2407955989852377},
    "gsarti/flores_101_mkd+null": {"task_name": "gsarti/flores_101_mkd", "prompt_name": "null", "word_perplexity": 513306.31562258815, "byte_perplexity": 3.11420755589491, "bits_per_byte": 1.6388651004482695},
    "gsarti/flores_101_msa+null": {"task_name": "gsarti/flores_101_msa", "prompt_name": "null", "word_perplexity": 1188.7251531670374, "byte_perplexity": 2.659096901190639, "bits_per_byte": 1.4109363519680242},
    "gsarti/flores_101_mal+null": {"task_name": "gsarti/flores_101_mal", "prompt_name": "null", "word_perplexity": 4.8990954217696134e+17, "byte_perplexity": 4.465506197375413, "bits_per_byte": 2.1588237245178132},
    "gsarti/flores_101_mlt+null": {"task_name": "gsarti/flores_101_mlt", "prompt_name": "null", "word_perplexity": 3271065298.9525104, "byte_perplexity": 16.164200382975334, "bits_per_byte": 4.014730236310589},
    "gsarti/flores_101_mri+null": {"task_name": "gsarti/flores_101_mri", "prompt_name": "null", "word_perplexity": 42667.84366725716, "byte_perplexity": 8.213330128288407, "bits_per_byte": 3.037967287223778},
    "gsarti/flores_101_mar+null": {"task_name": "gsarti/flores_101_mar", "prompt_name": "null", "word_perplexity": 53348101396468.1, "byte_perplexity": 5.479577601103449, "bits_per_byte": 2.454064685835334},
    "gsarti/flores_101_mon+null": {"task_name": "gsarti/flores_101_mon", "prompt_name": "null", "word_perplexity": 11967156.496346941, "byte_perplexity": 3.5723563966116956, "bits_per_byte": 1.8368760183021453},
    "gsarti/flores_101_npi+null": {"task_name": "gsarti/flores_101_npi", "prompt_name": "null", "word_perplexity": 7452421298650.788, "byte_perplexity": 5.138638996619111, "bits_per_byte": 2.361386302448311},
    "gsarti/flores_101_nso+null": {"task_name": "gsarti/flores_101_nso", "prompt_name": "null", "word_perplexity": 133251.3907730927, "byte_perplexity": 8.876839962509171, "bits_per_byte": 3.150046187635368},
    "gsarti/flores_101_nob+null": {"task_name": "gsarti/flores_101_nob", "prompt_name": "null", "word_perplexity": 64134.3587194621, "byte_perplexity": 5.901843358131797, "bits_per_byte": 2.561165630453858},
    "gsarti/flores_101_nya+null": {"task_name": "gsarti/flores_101_nya", "prompt_name": "null", "word_perplexity": 13237249.320560299, "byte_perplexity": 8.97654874419086, "bits_per_byte": 3.166160871838487},
    "gsarti/flores_101_oci+null": {"task_name": "gsarti/flores_101_oci", "prompt_name": "null", "word_perplexity": 29786.57326210068, "byte_perplexity": 5.114108118049416, "bits_per_byte": 2.3544826611123932},
    "gsarti/flores_101_ory+null": {"task_name": "gsarti/flores_101_ory", "prompt_name": "null", "word_perplexity": 8232620282886.167, "byte_perplexity": 5.086518347981296, "bits_per_byte": 2.3466784891528936},
    "gsarti/flores_101_orm+null": {"task_name": "gsarti/flores_101_orm", "prompt_name": "null", "word_perplexity": 1286222337.8393624, "byte_perplexity": 13.414303089263644, "bits_per_byte": 3.7457001993717243},
    "gsarti/flores_101_pus+null": {"task_name": "gsarti/flores_101_pus", "prompt_name": "null", "word_perplexity": 200303.57214724104, "byte_perplexity": 4.650458574106675, "bits_per_byte": 2.2173729850313615},
    "gsarti/flores_101_fas+null": {"task_name": "gsarti/flores_101_fas", "prompt_name": "null", "word_perplexity": 59965.98383842629, "byte_perplexity": 3.1572599808371367, "bits_per_byte": 1.6586730625582675},
    "gsarti/flores_101_pol+null": {"task_name": "gsarti/flores_101_pol", "prompt_name": "null", "word_perplexity": 239703.75452947227, "byte_perplexity": 5.165261846492578, "bits_per_byte": 2.3688414865658434},
    "gsarti/flores_101_por+null": {"task_name": "gsarti/flores_101_por", "prompt_name": "null", "word_perplexity": 78.66129921108659, "byte_perplexity": 2.012150908931838, "bits_per_byte": 1.0087385096181816},
    "gsarti/flores_101_pan+null": {"task_name": "gsarti/flores_101_pan", "prompt_name": "null", "word_perplexity": 2003582065.835696, "byte_perplexity": 5.012603107956229, "bits_per_byte": 2.3255600077385723},
    "gsarti/flores_101_ron+null": {"task_name": "gsarti/flores_101_ron", "prompt_name": "null", "word_perplexity": 80490.92705368399, "byte_perplexity": 5.603607947317877, "bits_per_byte": 2.486356022105963},
    "gsarti/flores_101_rus+null": {"task_name": "gsarti/flores_101_rus", "prompt_name": "null", "word_perplexity": 22038.65288574451, "byte_perplexity": 2.1372096174466697, "bits_per_byte": 1.095728414417906},
    "gsarti/flores_101_srp+null": {"task_name": "gsarti/flores_101_srp", "prompt_name": "null", "word_perplexity": 359037.4163692842, "byte_perplexity": 3.050738229673983, "bits_per_byte": 1.6091583939601046},
    "gsarti/flores_101_sna+null": {"task_name": "gsarti/flores_101_sna", "prompt_name": "null", "word_perplexity": 151658287.08006003, "byte_perplexity": 9.361234419948593, "bits_per_byte": 3.226698783453375},
    "gsarti/flores_101_snd+null": {"task_name": "gsarti/flores_101_snd", "prompt_name": "null", "word_perplexity": 2195879.0537875695, "byte_perplexity": 5.678399375652783, "bits_per_byte": 2.505484320885354},
    "gsarti/flores_101_slk+null": {"task_name": "gsarti/flores_101_slk", "prompt_name": "null", "word_perplexity": 1873211.2703176092, "byte_perplexity": 7.294354718439043, "bits_per_byte": 2.8667803584469502},
    "gsarti/flores_101_slv+null": {"task_name": "gsarti/flores_101_slv", "prompt_name": "null", "word_perplexity": 609965.8362492598, "byte_perplexity": 7.438107250941839, "bits_per_byte": 2.894935550489075},
    "gsarti/flores_101_som+null": {"task_name": "gsarti/flores_101_som", "prompt_name": "null", "word_perplexity": 12921970.127169678, "byte_perplexity": 12.622705630414286, "bits_per_byte": 3.6579492747174616},
    "gsarti/flores_101_ckb+null": {"task_name": "gsarti/flores_101_ckb", "prompt_name": "null", "word_perplexity": 11104497.438038943, "byte_perplexity": 3.842852526862475, "bits_per_byte": 1.9421776126623524},
    "gsarti/flores_101_spa+null": {"task_name": "gsarti/flores_101_spa", "prompt_name": "null", "word_perplexity": 55.14408503293887, "byte_perplexity": 1.9240269109386998, "bits_per_byte": 0.9441289779054047},
    "gsarti/flores_101_swh+null": {"task_name": "gsarti/flores_101_swh", "prompt_name": "null", "word_perplexity": 6985.646204087442, "byte_perplexity": 3.923430589092355, "bits_per_byte": 1.9721156771582438},
    "gsarti/flores_101_swe+null": {"task_name": "gsarti/flores_101_swe", "prompt_name": "null", "word_perplexity": 104567.9891705103, "byte_perplexity": 5.634635291846611, "bits_per_byte": 2.4943222333483153},
    "gsarti/flores_101_tgk+null": {"task_name": "gsarti/flores_101_tgk", "prompt_name": "null", "word_perplexity": 10003619.893239152, "byte_perplexity": 3.836804862794101, "bits_per_byte": 1.9399053923480125},
    "gsarti/flores_101_tam+null": {"task_name": "gsarti/flores_101_tam", "prompt_name": "null", "word_perplexity": 4220234444737767.0, "byte_perplexity": 4.286894531607389, "bits_per_byte": 2.0999329236632325},
    "gsarti/flores_101_tel+null": {"task_name": "gsarti/flores_101_tel", "prompt_name": "null", "word_perplexity": 7315913985648022.0, "byte_perplexity": 5.852344181819556, "bits_per_byte": 2.549014618212334},
    "gsarti/flores_101_tha+null": {"task_name": "gsarti/flores_101_tha", "prompt_name": "null", "word_perplexity": 6.85384626099906e+32, "byte_perplexity": 2.458737675753546, "bits_per_byte": 1.2979178211163922},
    "gsarti/flores_101_tur+null": {"task_name": "gsarti/flores_101_tur", "prompt_name": "null", "word_perplexity": 1230000.8194755162, "byte_perplexity": 5.323529328304652, "bits_per_byte": 2.4123830232149},
    "gsarti/flores_101_ukr+null": {"task_name": "gsarti/flores_101_ukr", "prompt_name": "null", "word_perplexity": 780615.9486315987, "byte_perplexity": 2.8843863497020608, "bits_per_byte": 1.5282644195953918},
    "gsarti/flores_101_umb+null": {"task_name": "gsarti/flores_101_umb", "prompt_name": "null", "word_perplexity": 346118506.64866126, "byte_perplexity": 13.088423907901921, "bits_per_byte": 3.710219475046473},
    "gsarti/flores_101_urd+null": {"task_name": "gsarti/flores_101_urd", "prompt_name": "null", "word_perplexity": 335.1943886252716, "byte_perplexity": 2.010562039704537, "bits_per_byte": 1.0075988539165108},
    "gsarti/flores_101_uzb+null": {"task_name": "gsarti/flores_101_uzb", "prompt_name": "null", "word_perplexity": 1248263505.2751954, "byte_perplexity": 12.980834294137205, "bits_per_byte": 3.69831120498359},
    "gsarti/flores_101_vie+null": {"task_name": "gsarti/flores_101_vie", "prompt_name": "null", "word_perplexity": 33.51752264232948, "byte_perplexity": 1.7976491760484148, "bits_per_byte": 0.8461114961807352},
    "gsarti/flores_101_cym+null": {"task_name": "gsarti/flores_101_cym", "prompt_name": "null", "word_perplexity": 5900331.966242436, "byte_perplexity": 14.390369428021707, "bits_per_byte": 3.8470317241534553},
    "gsarti/flores_101_wol+null": {"task_name": "gsarti/flores_101_wol", "prompt_name": "null", "word_perplexity": 199684.7010180392, "byte_perplexity": 10.072733993132132, "bits_per_byte": 3.332383415073327},
    "gsarti/flores_101_xho+null": {"task_name": "gsarti/flores_101_xho", "prompt_name": "null", "word_perplexity": 141017733.33017766, "byte_perplexity": 8.241450154294917, "bits_per_byte": 3.0428982143908727},
    "gsarti/flores_101_yor+null": {"task_name": "gsarti/flores_101_yor", "prompt_name": "null", "word_perplexity": 171980.641422536, "byte_perplexity": 6.165831615133067, "bits_per_byte": 2.62429549091613},
    "gsarti/flores_101_zul+null": {"task_name": "gsarti/flores_101_zul", "prompt_name": "null", "word_perplexity": 998742068.9481835, "byte_perplexity": 9.202622963132773, "bits_per_byte": 3.2020451216662975}
  },
  "config": {
    "adaptive_seq_len": true,
    "num_fewshot": 0,
    "bootstrap_iters": 100000
  }
}
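The two byte-level metrics stored in these FLORES-101 files are redundant views of the same quantity, since bits_per_byte = log2(byte_perplexity). A minimal standalone check (a sketch, not part of the repo), using the Korean values from the file above:

import math

# gsarti/flores_101_kor, values copied from the results above
byte_perplexity = 4.065690303705374
print(math.log2(byte_perplexity))  # ~2.0235003, matching the stored bits_per_byte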
evaluation/results/tr11/bloom2b5/bslmeval.json
ADDED
The diff for this file is too large to render.
See raw diff
evaluation/results/tr11/bloom2b5/bslmevalfiles/concat.py
ADDED
@@ -0,0 +1,103 @@
import argparse
import json
import re
from pathlib import Path
from typing import Dict, List


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--results-dir", required=True, type=Path, help="Path to the list of results")
    parser.add_argument("--concatenate-output-file", required=True, type=Path, help="Path to store the final output file")
    return parser.parse_args()


MODEL = "tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step337250"
# MODEL = "global_step95000"
# NB: braces are doubled so that `\d{4}` etc. survive f-string interpolation.
# Both patterns are currently unused: the folder walk below only checks the .json suffix.
RESULTS_REGEX = re.compile(rf"(eai|bs)_results_lm-eval_{MODEL}_(\d{{4}}-\d{{2}}-\d{{2}}-\d{{2}}-\d{{2}}-\d{{2}})_backup\.json")
RESULTS_REGEX = re.compile(rf"{MODEL}_*.json")
# Example file name: tr11b-1b3-ml-bsevalharness-results_lm-eval_global_step340500_2022-07-14-10-03-25.json


def get_all_files_that_match_results_in_folder(root_folder: Path) -> List[Path]:
    """Recursively collect every .json file below `root_folder`."""
    json_files = []
    for folder in root_folder.iterdir():
        if folder.is_dir():
            json_files += get_all_files_that_match_results_in_folder(folder)
        else:
            # it's actually a file
            file = folder
            if not str(file.name).endswith("json"):
                continue
            json_files.append(file)
    return json_files


def sort_dict(dictionary: Dict) -> Dict:
    """Return a copy of `dictionary` with keys, nested dicts and lists sorted recursively."""
    results = {}
    for key, value in sorted(dictionary.items()):
        new_value = value
        if isinstance(value, dict):
            new_value = sort_dict(new_value)
        elif isinstance(value, list):
            new_value = sorted(value)
        results[key] = new_value
    return results


def main():
    args = get_args()

    # Get all json files
    json_files = get_all_files_that_match_results_in_folder(args.results_dir)
    print("GOT", json_files)

    # Merge all json files
    final_result = {
        "results": {},
        "versions": {}
    }
    for file in json_files:
        with open(file, "r") as fi:
            task_result = json.load(fi)

        # EleutherAI-harness dumps keep metrics under "results";
        # bigscience-harness dumps keep them under "table_results".
        prefix = "bs" if "bs" in file.name else "eai"
        # Assumes file names contain "global_step337250_" followed by a datetime stamp.
        datetime_string = file.name[file.name.index("global_step337250_") + len("global_step337250_"):].replace(".json", "")

        if prefix == "eai":
            results_key = "results"
        elif prefix == "bs":
            results_key = "table_results"
        else:
            raise ValueError(f"Unsupported key: {prefix}")

        for key, value in task_result[results_key].items():
            if key not in final_result["results"]:
                final_result["results"][key] = {
                    datetime_string: value
                }
            # else:
            #     assert datetime_string not in final_result["results"][key]
            #     final_result["results"][key][datetime_string] = value

        for key, value in task_result["versions"].items():
            final_result["versions"][key] = value

    # Sorting the dict makes the serialized output stable across runs.
    print(final_result)
    final_result = sort_dict(final_result)

    # Save result
    with open(args.concatenate_output_file, "w") as fo:
        json.dump(final_result, fo, indent=2)


if __name__ == "__main__":
    main()
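For reference, a minimal standalone sketch (not part of the repo) of what `sort_dict` above does: keys and nested containers are ordered recursively, which keeps the merged JSON stable between runs:

# Standalone re-statement of concat.py's sort_dict, for illustration only.
def sort_dict(dictionary):
    results = {}
    for key, value in sorted(dictionary.items()):
        if isinstance(value, dict):
            value = sort_dict(value)
        elif isinstance(value, list):
            value = sorted(value)
        results[key] = value
    return results

print(sort_dict({"b": [3, 1], "a": {"y": 2, "x": 1}}))
# -> {'a': {'x': 1, 'y': 2}, 'b': [1, 3]}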
evaluation/results/tr11/bloom2b5/bslmevalfiles/tr11c-2b5-ml-bsevalharness-results_lm-eval_global_step337250_2022-07-12-23-12-44.json
ADDED
The diff for this file is too large to render.
See raw diff
evaluation/results/tr11/bloom2b5/bslmevalfiles/tr11c-2b5-ml-evalharness-results_lm-eval_global_step337250_2022-07-13-09-55-04.json
ADDED
@@ -0,0 +1,172 @@
{
  "results": {
    "arc_challenge": {"acc": 0.27986348122866894, "acc_stderr": 0.013119040897725922, "acc_norm": 0.3054607508532423, "acc_norm_stderr": 0.013460080478002498},
    "arc_easy": {"acc": 0.5946969696969697, "acc_stderr": 0.010074093589739182, "acc_norm": 0.5324074074074074, "acc_norm_stderr": 0.010238210368801902},
    "boolq": {"acc": 0.6165137614678899, "acc_stderr": 0.008504304838837027},
    "copa": {"acc": 0.74, "acc_stderr": 0.04408440022768078},
    "headqa": {"acc": 0.26440554339897887, "acc_stderr": 0.008423643607316284, "acc_norm": 0.3099927060539752, "acc_norm_stderr": 0.008833810133604958},
    "hellaswag": {"acc": 0.41236805417247563, "acc_stderr": 0.004912547040132878, "acc_norm": 0.527185819557857, "acc_norm_stderr": 0.0049824003689396615},
    "lambada": {"ppl": 9.094305394880015, "ppl_stderr": 0.2651922806718523, "acc": 0.5181447700368718, "acc_stderr": 0.0069613892910728266},
    "logiqa": {"acc": 0.2073732718894009, "acc_stderr": 0.015902084913876333, "acc_norm": 0.29185867895545314, "acc_norm_stderr": 0.017831570553971925},
    "mathqa": {"acc": 0.24958123953098826, "acc_stderr": 0.007922429819042544, "acc_norm": 0.2492462311557789, "acc_norm_stderr": 0.007918877981680667},
    "mc_taco": {"em": 0.11936936936936937, "f1": 0.4957122298258418},
    "mrpc": {"acc": 0.5857843137254902, "acc_stderr": 0.02441658575130785, "f1": 0.6998223801065719, "f1_stderr": 0.021967079752819446},
    "multirc": {"acc": 0.012591815320041973, "acc_stderr": 0.0036138827653638874},
    "openbookqa": {"acc": 0.216, "acc_stderr": 0.01842190906141194, "acc_norm": 0.322, "acc_norm_stderr": 0.020916668330019882},
    "piqa": {"acc": 0.7078346028291621, "acc_stderr": 0.010610252174513661, "acc_norm": 0.705114254624592, "acc_norm_stderr": 0.010639030620156982},
    "prost": {"acc": 0.22683603757472245, "acc_stderr": 0.003059602302050251, "acc_norm": 0.26371690862510677, "acc_norm_stderr": 0.003219323004106053},
    "pubmedqa": {"acc": 0.616, "acc_stderr": 0.01538768276189707},
    "qnli": {"acc": 0.5072304594545122, "acc_stderr": 0.006764703129634549},
    "qqp": {"acc": 0.38211723967350975, "acc_stderr": 0.0024166004681771985, "f1": 0.5301408768597062, "f1_stderr": 0.002619199330934276},
    "race": {"acc": 0.3521531100478469, "acc_stderr": 0.014782629897202264},
    "rte": {"acc": 0.5631768953068592, "acc_stderr": 0.029855247390314945},
    "sciq": {"acc": 0.892, "acc_stderr": 0.009820001651345703, "acc_norm": 0.817, "acc_norm_stderr": 0.012233587399477823},
    "sst": {"acc": 0.49426605504587157, "acc_stderr": 0.01694073961990489},
    "triviaqa": {"acc": 0.041633518960487934, "acc_stderr": 0.0018780954895624524},
    "webqs": {"acc": 0.01673228346456693, "acc_stderr": 0.0028461549169432184},
    "wic": {"acc": 0.49843260188087773, "acc_stderr": 0.019810623954060382},
    "winogrande": {"acc": 0.5864246250986582, "acc_stderr": 0.013840971763195303},
    "wnli": {"acc": 0.4507042253521127, "acc_stderr": 0.05947027187737998},
    "wsc": {"acc": 0.375, "acc_stderr": 0.04770204856076104}
  },
  "versions": {
    "arc_challenge": 0, "arc_easy": 0, "boolq": 1, "copa": 0, "headqa": 0, "hellaswag": 0, "lambada": 0,
    "logiqa": 0, "mathqa": 0, "mc_taco": 0, "mrpc": 0, "multirc": 1, "openbookqa": 0, "piqa": 0,
    "prost": 0, "pubmedqa": 0, "qnli": 0, "qqp": 0, "race": 1, "rte": 0, "sciq": 0, "sst": 0,
    "triviaqa": 0, "webqs": 0, "wic": 0, "winogrande": 0, "wnli": 1, "wsc": 0
  }
}
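A hedged sketch (not part of the repo) of how the per-task accuracies can be pulled out of a results file with this {"results": ..., "versions": ...} shape; the path below is simply the file above and is just an example:

import json

# Example path; any *evalharness-results*.json file with the shape above works.
path = "tr11c-2b5-ml-evalharness-results_lm-eval_global_step337250_2022-07-13-09-55-04.json"
with open(path) as f:
    data = json.load(f)

for task, metrics in sorted(data["results"].items()):
    acc = metrics.get("acc")  # some tasks (e.g. mc_taco) report em/f1 instead
    if acc is not None:
        print(f"{task:15s} acc={acc:.4f}")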
evaluation/results/tr11/bloom2b5/humaneval_temp02.json ADDED
@@ -0,0 +1 @@
{"pass@1": 0.06478658536585366, "pass@10": 0.09537740748119838, "pass@100": 0.12348600494571815}

evaluation/results/tr11/bloom2b5/humaneval_temp06.json ADDED
@@ -0,0 +1 @@
{"pass@1": 0.04460365853658537, "pass@10": 0.11354616672373204, "pass@100": 0.1866822927112951}

evaluation/results/tr11/bloom2b5/humaneval_temp08.json ADDED
@@ -0,0 +1 @@
{"pass@1": 0.03411585365853658, "pass@10": 0.10355342714569304, "pass@100": 0.20427664212871136}
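These three files report BLOOM-2B5 HumanEval pass@k at sampling temperatures 0.2, 0.6, and 0.8; as usual, the lowest temperature gives the best pass@1 while the highest gives the best pass@100. pass@k is conventionally computed with the unbiased estimator from the HumanEval paper (Chen et al., 2021): with n samples per problem of which c pass, pass@k = 1 − C(n−c, k)/C(n, k). A small sketch of that estimator; the sample counts at the bottom are made up for illustration:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k).

    n: total samples generated per problem; c: samples passing the unit tests.
    """
    if n - c < k:
        return 1.0  # every size-k subset must contain a passing sample
    # Product form avoids huge binomials: C(n-c,k)/C(n,k) = prod_{i=n-c+1..n} (1 - k/i)
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# Hypothetical per-problem pass counts out of n=200 samples each:
per_problem_c = [3, 0, 17, 1]
print(np.mean([pass_at_k(200, c, k=10) for c in per_problem_c]))
```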
evaluation/results/tr11/bloom2b5/mdmeta.txt ADDED
@@ -0,0 +1,1540 @@
model-index:
- name: bloom
  results:
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: arc_challenge
      type: arc_challenge
    metrics:
    - name: acc
      type: acc
      value: 0.27986348122866894
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: arc_easy
      type: arc_easy
    metrics:
    - name: acc
      type: acc
      value: 0.5946969696969697
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: axb
      type: axb
    metrics:
    - name: acc
      type: acc
      value: 0.4433876811594203
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: axg
      type: axg
    metrics:
    - name: acc
      type: acc
      value: 0.5
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: boolq
      type: boolq
    metrics:
    - name: acc
      type: acc
      value: 0.6165137614678899
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: cb
      type: cb
    metrics:
    - name: acc
      type: acc
      value: 0.30357142857142855
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: cola
      type: cola
    metrics:
    - name: acc
      type: acc
      value: 0.610738255033557
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: copa
      type: copa
    metrics:
    - name: acc
      type: acc
      value: 0.63
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: crows_pairs_english
      type: crows_pairs_english
    metrics:
    - name: acc
      type: acc
      value: 0.4973166368515206
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: crows_pairs_french
      type: crows_pairs_french
    metrics:
    - name: acc
      type: acc
      value: 0.5032796660703638
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: diabla
      type: diabla
    metrics:
    - name: acc
      type: acc
      value: 0.28888308977035493
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_afr
      type: gsarti/flores_101_afr
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.500798737976343
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_amh
      type: gsarti/flores_101_amh
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.9726863338897145
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ara
      type: gsarti/flores_101_ara
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.8083841089875814
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_asm
      type: gsarti/flores_101_asm
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.699102962086425
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ast
      type: gsarti/flores_101_ast
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.9252047073429384
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_azj
      type: gsarti/flores_101_azj
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.942805054270002
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_bel
      type: gsarti/flores_101_bel
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.614136245847082
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ben
      type: gsarti/flores_101_ben
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.121491534300969
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_bos
      type: gsarti/flores_101_bos
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.653353469118798
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_bul
      type: gsarti/flores_101_bul
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.7014693938055068
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_cat
      type: gsarti/flores_101_cat
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.305190041967345
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ceb
      type: gsarti/flores_101_ceb
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.291000321323428
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ces
      type: gsarti/flores_101_ces
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.447322753586386
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ckb
      type: gsarti/flores_101_ckb
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.7255124939234765
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_cym
      type: gsarti/flores_101_cym
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 12.539424151448149
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_dan
      type: gsarti/flores_101_dan
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.183309001005672
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_deu
      type: gsarti/flores_101_deu
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.1180422286591347
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ell
      type: gsarti/flores_101_ell
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.467943456164706
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_eng
      type: gsarti/flores_101_eng
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.018740628193298
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_est
      type: gsarti/flores_101_est
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 9.11654425176368
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_fas
      type: gsarti/flores_101_fas
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.058009097116482
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_fin
      type: gsarti/flores_101_fin
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.847047959628553
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_fra
      type: gsarti/flores_101_fra
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.9975177011840075
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ful
      type: gsarti/flores_101_ful
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 11.465912731488828
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_gle
      type: gsarti/flores_101_gle
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.681491663539422
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_glg
      type: gsarti/flores_101_glg
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.029991089015508
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_guj
      type: gsarti/flores_101_guj
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.955224230286231
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_hau
      type: gsarti/flores_101_hau
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 10.758347356372159
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_heb
      type: gsarti/flores_101_heb
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.6004478129801667
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_hin
      type: gsarti/flores_101_hin
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.712530650588064
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_hrv
      type: gsarti/flores_101_hrv
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.822418943372185
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_hun
      type: gsarti/flores_101_hun
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.440482646965992
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_hye
      type: gsarti/flores_101_hye
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.657718918347166
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ibo
      type: gsarti/flores_101_ibo
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.564814003872672
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ind
      type: gsarti/flores_101_ind
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.1597101468869373
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_isl
      type: gsarti/flores_101_isl
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.082349269518136
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ita
      type: gsarti/flores_101_ita
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.9687591414176207
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_jav
      type: gsarti/flores_101_jav
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.0573805415708994
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_jpn
      type: gsarti/flores_101_jpn
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.7758864197116933
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kam
      type: gsarti/flores_101_kam
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 11.072949642861332
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kan
      type: gsarti/flores_101_kan
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.551730651007082
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kat
      type: gsarti/flores_101_kat
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.522630524283745
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kaz
      type: gsarti/flores_101_kaz
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.3901748516975574
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kea
      type: gsarti/flores_101_kea
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.918534182590863
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kir
      type: gsarti/flores_101_kir
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.729278369847201
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_kor
      type: gsarti/flores_101_kor
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.932884847226212
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_lao
      type: gsarti/flores_101_lao
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.9077314760849924
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_lav
      type: gsarti/flores_101_lav
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.777221919194806
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_lin
      type: gsarti/flores_101_lin
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.524842908050988
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_lit
      type: gsarti/flores_101_lit
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.369179434621725
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ltz
      type: gsarti/flores_101_ltz
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.801059747949214
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_lug
      type: gsarti/flores_101_lug
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.483203026364786
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_luo
      type: gsarti/flores_101_luo
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 11.975963093623681
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mal
      type: gsarti/flores_101_mal
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.615948455160037
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mar
      type: gsarti/flores_101_mar
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.483253482821379
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mkd
      type: gsarti/flores_101_mkd
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.9656732291754087
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mlt
      type: gsarti/flores_101_mlt
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 15.004773437665275
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mon
      type: gsarti/flores_101_mon
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.410598542315402
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mri
      type: gsarti/flores_101_mri
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.474035895661322
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_msa
      type: gsarti/flores_101_msa
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.5710001772665634
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_mya
      type: gsarti/flores_101_mya
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.413577969878331
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_nld
      type: gsarti/flores_101_nld
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.127831721885065
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_nob
      type: gsarti/flores_101_nob
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.402763169129877
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_npi
      type: gsarti/flores_101_npi
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.199342701937889
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_nso
      type: gsarti/flores_101_nso
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.154626800955667
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_nya
      type: gsarti/flores_101_nya
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.179860208369393
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_oci
      type: gsarti/flores_101_oci
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.8617357393685845
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_orm
      type: gsarti/flores_101_orm
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 12.911595421079408
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ory
      type: gsarti/flores_101_ory
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.189421861225964
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_pan
      type: gsarti/flores_101_pan
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.698477289331806
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_pol
      type: gsarti/flores_101_pol
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.625550458479643
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_por
      type: gsarti/flores_101_por
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.9754515986213523
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_pus
      type: gsarti/flores_101_pus
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.4963371422771585
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ron
      type: gsarti/flores_101_ron
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.965456830031304
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_rus
      type: gsarti/flores_101_rus
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.0498020542445303
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_slk
      type: gsarti/flores_101_slk
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.450822127057479
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_slv
      type: gsarti/flores_101_slv
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 6.620252120186232
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_sna
      type: gsarti/flores_101_sna
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.462166771382726
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_snd
      type: gsarti/flores_101_snd
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.466066951221973
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_som
      type: gsarti/flores_101_som
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 11.95918054093392
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_spa
      type: gsarti/flores_101_spa
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.8965140104323535
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_srp
      type: gsarti/flores_101_srp
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.871214785885079
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_swe
      type: gsarti/flores_101_swe
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.054972008155866
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_swh
      type: gsarti/flores_101_swh
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.6973091886730676
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tam
      type: gsarti/flores_101_tam
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.539493400469833
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tel
      type: gsarti/flores_101_tel
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.807499987508966
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tgk
      type: gsarti/flores_101_tgk
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 3.5994818827380426
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tgl
      type: gsarti/flores_101_tgl
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.667053833119858
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tha
      type: gsarti/flores_101_tha
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.365940201944242
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_tur
      type: gsarti/flores_101_tur
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 4.885014749844601
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_ukr
      type: gsarti/flores_101_ukr
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.7240934990288483
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_umb
      type: gsarti/flores_101_umb
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 12.766915508610673
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_urd
      type: gsarti/flores_101_urd
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.9797467071381232
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_uzb
      type: gsarti/flores_101_uzb
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 12.002337637722146
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_vie
      type: gsarti/flores_101_vie
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 1.76578415476397
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_wol
      type: gsarti/flores_101_wol
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 9.144285650306488
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_xho
      type: gsarti/flores_101_xho
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 7.403240538286952
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_yor
      type: gsarti/flores_101_yor
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 5.91272037551173
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_zho_simpl
      type: gsarti/flores_101_zho_simpl
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.2769070822768533
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_zho_trad
      type: gsarti/flores_101_zho_trad
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 2.5180582198242383
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: gsarti/flores_101_zul
      type: gsarti/flores_101_zul
    metrics:
    - name: byte_perplexity
      type: byte_perplexity
      value: 8.53353320693145
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: headqa
      type: headqa
    metrics:
    - name: acc
      type: acc
      value: 0.26440554339897887
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: hellaswag
      type: hellaswag
    metrics:
    - name: acc
      type: acc
      value: 0.41236805417247563
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: logiqa
      type: logiqa
    metrics:
    - name: acc
      type: acc
      value: 0.2073732718894009
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: mathqa
      type: mathqa
    metrics:
    - name: acc
      type: acc
      value: 0.24958123953098826
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: mc_taco
      type: mc_taco
    metrics:
    - name: em
      type: em
      value: 0.11936936936936937
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: mnli
      type: mnli
    metrics:
    - name: acc
      type: acc
      value: 0.35496688741721855
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: mnli_mismatched
      type: mnli_mismatched
    metrics:
    - name: acc
      type: acc
      value: 0.35211554109031734
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: mrpc
      type: mrpc
    metrics:
    - name: acc
      type: acc
      value: 0.5857843137254902
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: multirc
      type: multirc
    metrics:
    - name: acc
      type: acc
      value: 0.5375412541254125
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: openbookqa
      type: openbookqa
    metrics:
    - name: acc
      type: acc
      value: 0.216
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: piqa
      type: piqa
    metrics:
    - name: acc
      type: acc
      value: 0.7078346028291621
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: prost
      type: prost
    metrics:
    - name: acc
      type: acc
      value: 0.22683603757472245
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: pubmedqa
      type: pubmedqa
    metrics:
    - name: acc
      type: acc
      value: 0.616
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: qnli
      type: qnli
    metrics:
    - name: acc
      type: acc
      value: 0.5072304594545122
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: qqp
      type: qqp
    metrics:
    - name: acc
      type: acc
      value: 0.3842443729903537
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: race
      type: race
    metrics:
    - name: acc
      type: acc
      value: 0.3521531100478469
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: rte
      type: rte
    metrics:
    - name: acc
      type: acc
      value: 0.47653429602888087
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: sciq
      type: sciq
    metrics:
    - name: acc
      type: acc
      value: 0.892
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: sst
      type: sst
    metrics:
    - name: acc
      type: acc
      value: 0.5177752293577982
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: triviaqa
      type: triviaqa
    metrics:
    - name: acc
      type: acc
      value: 0.041633518960487934
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: tydiqa_primary
      type: tydiqa_primary
    metrics:
    - name: acc
      type: acc
      value: 0.3011337608795236
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: webqs
      type: webqs
    metrics:
    - name: acc
      type: acc
      value: 0.01673228346456693
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: wic
      type: wic
    metrics:
    - name: acc
      type: acc
      value: 0.5015673981191222
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: winogrande
      type: winogrande
    metrics:
    - name: acc
      type: acc
      value: 0.5864246250986582
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: wnli
      type: wnli
    metrics:
    - name: acc
      type: acc
      value: 0.471830985915493
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: wsc
      type: wsc
    metrics:
    - name: acc
      type: acc
      value: 0.4423076923076923
      verified: false
  - task:
      type: text-generation
      name: text generation
    dataset:
      name: humaneval
      type: humaneval
    metrics:
    - name: pass@1
      type: pass@1
      value: 0.15524390243902436
      verified: false
    - name: pass@10
      type: pass@10
      value: 0.3220367632383857
      verified: false
    - name: pass@100
      type: pass@100
      value: 0.5545431515723145
      verified: false
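mdmeta.txt is the `model-index` block used in Hugging Face model-card front matter: one entry per (task, dataset, metric), each marked `verified: false` since these are self-reported results. A hedged sketch of how such entries could be assembled from a per-task results dict; the `to_model_index` helper and its input are illustrative, not this repo's converter:

```python
import yaml  # pip install pyyaml

def to_model_index(model_name: str, results: dict) -> str:
    """Render {task: {metric: value}} as Hugging Face model-index YAML."""
    entries = []
    for task, metrics in results.items():
        for metric, value in metrics.items():
            entries.append({
                "task": {"type": "text-generation", "name": "text generation"},
                "dataset": {"name": task, "type": task},
                "metrics": [{"name": metric, "type": metric,
                             "value": value, "verified": False}],
            })
    return yaml.dump({"model-index": [{"name": model_name, "results": entries}]},
                     sort_keys=False)

# Illustrative input mirroring one of the entries above:
print(to_model_index("bloom", {"arc_challenge": {"acc": 0.27986348122866894}}))
```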
evaluation/results/tr11/bloom2b5/mdtable.txt
ADDED
@@ -0,0 +1,143 @@
+| Task | Language | Metric | BLOOM-2B5 |
+|:----|:----|:----|:----:|
+| arc_challenge | eng | acc ↑ | 0.28 |
+| arc_easy | eng | acc ↑ | 0.595 |
+| axb (Median of 10 prompts) | eng | acc ↑ | 0.443 |
+| axg (Median of 10 prompts) | eng | acc ↑ | 0.5 |
+| boolq (Median of 11 prompts) | eng | acc ↑ | 0.617 |
+| cb (Median of 15 prompts) | eng | acc ↑ | 0.304 |
+| cola (Median of 5 prompts) | eng | acc ↑ | 0.611 |
+| copa (Median of 9 prompts) | eng | acc ↑ | 0.63 |
+| crows_pairs_english (Median of 6 prompts) | eng | acc ↑ | 0.497 |
+| crows_pairs_french (Median of 7 prompts) | fra | acc ↑ | 0.503 |
+| diabla (Median of 2 prompts) | eng | acc ↑ | 0.289 |
+| gsarti/flores_101_afr | afr | byte_perplexity ↓ | 6.501 |
+| gsarti/flores_101_amh | amh | byte_perplexity ↓ | 3.973 |
+| gsarti/flores_101_ara | ara | byte_perplexity ↓ | 1.808 |
+| gsarti/flores_101_asm | asm | byte_perplexity ↓ | 5.699 |
+| gsarti/flores_101_ast | ast | byte_perplexity ↓ | 3.925 |
+| gsarti/flores_101_azj | azj | byte_perplexity ↓ | 6.943 |
+| gsarti/flores_101_bel | bel | byte_perplexity ↓ | 3.614 |
+| gsarti/flores_101_ben | ben | byte_perplexity ↓ | 5.121 |
+| gsarti/flores_101_bos | bos | byte_perplexity ↓ | 5.653 |
+| gsarti/flores_101_bul | bul | byte_perplexity ↓ | 2.701 |
+| gsarti/flores_101_cat | cat | byte_perplexity ↓ | 2.305 |
+| gsarti/flores_101_ceb | ceb | byte_perplexity ↓ | 6.291 |
+| gsarti/flores_101_ces | ces | byte_perplexity ↓ | 5.447 |
+| gsarti/flores_101_ckb | ckb | byte_perplexity ↓ | 3.726 |
+| gsarti/flores_101_cym | cym | byte_perplexity ↓ | 12.539 |
+| gsarti/flores_101_dan | dan | byte_perplexity ↓ | 5.183 |
+| gsarti/flores_101_deu | deu | byte_perplexity ↓ | 3.118 |
+| gsarti/flores_101_ell | ell | byte_perplexity ↓ | 2.468 |
+| gsarti/flores_101_eng | eng | byte_perplexity ↓ | 2.019 |
+| gsarti/flores_101_est | est | byte_perplexity ↓ | 9.117 |
+| gsarti/flores_101_fas | fas | byte_perplexity ↓ | 3.058 |
+| gsarti/flores_101_fin | fin | byte_perplexity ↓ | 6.847 |
+| gsarti/flores_101_fra | fra | byte_perplexity ↓ | 1.998 |
+| gsarti/flores_101_ful | ful | byte_perplexity ↓ | 11.466 |
+| gsarti/flores_101_gle | gle | byte_perplexity ↓ | 8.681 |
+| gsarti/flores_101_glg | glg | byte_perplexity ↓ | 3.03 |
+| gsarti/flores_101_guj | guj | byte_perplexity ↓ | 4.955 |
+| gsarti/flores_101_hau | hau | byte_perplexity ↓ | 10.758 |
+| gsarti/flores_101_heb | heb | byte_perplexity ↓ | 3.6 |
+| gsarti/flores_101_hin | hin | byte_perplexity ↓ | 4.713 |
+| gsarti/flores_101_hrv | hrv | byte_perplexity ↓ | 5.822 |
+| gsarti/flores_101_hun | hun | byte_perplexity ↓ | 6.44 |
+| gsarti/flores_101_hye | hye | byte_perplexity ↓ | 3.658 |
+| gsarti/flores_101_ibo | ibo | byte_perplexity ↓ | 5.565 |
+| gsarti/flores_101_ind | ind | byte_perplexity ↓ | 2.16 |
+| gsarti/flores_101_isl | isl | byte_perplexity ↓ | 8.082 |
+| gsarti/flores_101_ita | ita | byte_perplexity ↓ | 2.969 |
+| gsarti/flores_101_jav | jav | byte_perplexity ↓ | 7.057 |
+| gsarti/flores_101_jpn | jpn | byte_perplexity ↓ | 2.776 |
+| gsarti/flores_101_kam | kam | byte_perplexity ↓ | 11.073 |
+| gsarti/flores_101_kan | kan | byte_perplexity ↓ | 5.552 |
+| gsarti/flores_101_kat | kat | byte_perplexity ↓ | 2.523 |
+| gsarti/flores_101_kaz | kaz | byte_perplexity ↓ | 3.39 |
+| gsarti/flores_101_kea | kea | byte_perplexity ↓ | 8.919 |
+| gsarti/flores_101_kir | kir | byte_perplexity ↓ | 3.729 |
+| gsarti/flores_101_kor | kor | byte_perplexity ↓ | 3.933 |
+| gsarti/flores_101_lao | lao | byte_perplexity ↓ | 2.908 |
+| gsarti/flores_101_lav | lav | byte_perplexity ↓ | 7.777 |
+| gsarti/flores_101_lin | lin | byte_perplexity ↓ | 7.525 |
+| gsarti/flores_101_lit | lit | byte_perplexity ↓ | 7.369 |
+| gsarti/flores_101_ltz | ltz | byte_perplexity ↓ | 8.801 |
+| gsarti/flores_101_lug | lug | byte_perplexity ↓ | 8.483 |
+| gsarti/flores_101_luo | luo | byte_perplexity ↓ | 11.976 |
+| gsarti/flores_101_mal | mal | byte_perplexity ↓ | 4.616 |
+| gsarti/flores_101_mar | mar | byte_perplexity ↓ | 5.483 |
+| gsarti/flores_101_mkd | mkd | byte_perplexity ↓ | 2.966 |
+| gsarti/flores_101_mlt | mlt | byte_perplexity ↓ | 15.005 |
+| gsarti/flores_101_mon | mon | byte_perplexity ↓ | 3.411 |
+| gsarti/flores_101_mri | mri | byte_perplexity ↓ | 7.474 |
+| gsarti/flores_101_msa | msa | byte_perplexity ↓ | 2.571 |
+| gsarti/flores_101_mya | mya | byte_perplexity ↓ | 2.414 |
+| gsarti/flores_101_nld | nld | byte_perplexity ↓ | 4.128 |
+| gsarti/flores_101_nob | nob | byte_perplexity ↓ | 5.403 |
+| gsarti/flores_101_npi | npi | byte_perplexity ↓ | 5.199 |
+| gsarti/flores_101_nso | nso | byte_perplexity ↓ | 8.155 |
+| gsarti/flores_101_nya | nya | byte_perplexity ↓ | 8.18 |
+| gsarti/flores_101_oci | oci | byte_perplexity ↓ | 4.862 |
+| gsarti/flores_101_orm | orm | byte_perplexity ↓ | 12.912 |
+| gsarti/flores_101_ory | ory | byte_perplexity ↓ | 5.189 |
+| gsarti/flores_101_pan | pan | byte_perplexity ↓ | 4.698 |
+| gsarti/flores_101_pol | pol | byte_perplexity ↓ | 4.626 |
+| gsarti/flores_101_por | por | byte_perplexity ↓ | 1.975 |
+| gsarti/flores_101_pus | pus | byte_perplexity ↓ | 4.496 |
+| gsarti/flores_101_ron | ron | byte_perplexity ↓ | 4.965 |
+| gsarti/flores_101_rus | rus | byte_perplexity ↓ | 2.05 |
+| gsarti/flores_101_slk | slk | byte_perplexity ↓ | 6.451 |
+| gsarti/flores_101_slv | slv | byte_perplexity ↓ | 6.62 |
+| gsarti/flores_101_sna | sna | byte_perplexity ↓ | 8.462 |
+| gsarti/flores_101_snd | snd | byte_perplexity ↓ | 5.466 |
+| gsarti/flores_101_som | som | byte_perplexity ↓ | 11.959 |
+| gsarti/flores_101_spa | spa | byte_perplexity ↓ | 1.897 |
+| gsarti/flores_101_srp | srp | byte_perplexity ↓ | 2.871 |
+| gsarti/flores_101_swe | swe | byte_perplexity ↓ | 5.055 |
+| gsarti/flores_101_swh | swh | byte_perplexity ↓ | 3.697 |
+| gsarti/flores_101_tam | tam | byte_perplexity ↓ | 4.539 |
+| gsarti/flores_101_tel | tel | byte_perplexity ↓ | 5.807 |
+| gsarti/flores_101_tgk | tgk | byte_perplexity ↓ | 3.599 |
+| gsarti/flores_101_tgl | tgl | byte_perplexity ↓ | 5.667 |
+| gsarti/flores_101_tha | tha | byte_perplexity ↓ | 2.366 |
+| gsarti/flores_101_tur | tur | byte_perplexity ↓ | 4.885 |
+| gsarti/flores_101_ukr | ukr | byte_perplexity ↓ | 2.724 |
+| gsarti/flores_101_umb | umb | byte_perplexity ↓ | 12.767 |
+| gsarti/flores_101_urd | urd | byte_perplexity ↓ | 1.98 |
+| gsarti/flores_101_uzb | uzb | byte_perplexity ↓ | 12.002 |
+| gsarti/flores_101_vie | vie | byte_perplexity ↓ | 1.766 |
+| gsarti/flores_101_wol | wol | byte_perplexity ↓ | 9.144 |
+| gsarti/flores_101_xho | xho | byte_perplexity ↓ | 7.403 |
+| gsarti/flores_101_yor | yor | byte_perplexity ↓ | 5.913 |
+| gsarti/flores_101_zho_simpl | zho_simpl | byte_perplexity ↓ | 2.277 |
+| gsarti/flores_101_zho_trad | zho_trad | byte_perplexity ↓ | 2.518 |
+| gsarti/flores_101_zul | zul | byte_perplexity ↓ | 8.534 |
+| headqa | esp | acc ↑ | 0.264 |
+| hellaswag | eng | acc ↑ | 0.412 |
+| logiqa | eng | acc ↑ | 0.207 |
+| mathqa | eng | acc ↑ | 0.25 |
+| mc_taco | eng | em ↑ | 0.119 |
+| mnli (Median of 15 prompts) | eng | acc ↑ | 0.355 |
+| mnli_mismatched (Median of 15 prompts) | eng | acc ↑ | 0.352 |
+| mrpc | eng | acc ↑ | 0.586 |
+| multirc (Median of 11 prompts) | eng | acc ↑ | 0.538 |
+| openbookqa | eng | acc ↑ | 0.216 |
+| piqa | eng | acc ↑ | 0.708 |
+| prost | eng | acc ↑ | 0.227 |
+| pubmedqa | eng | acc ↑ | 0.616 |
+| qnli | eng | acc ↑ | 0.507 |
+| qqp (Median of 7 prompts) | eng | acc ↑ | 0.384 |
+| race | eng | acc ↑ | 0.352 |
+| rte (Median of 6 prompts) | eng | acc ↑ | 0.477 |
+| sciq | eng | acc ↑ | 0.892 |
+| sst (Median of 6 prompts) | eng | acc ↑ | 0.518 |
+| triviaqa | eng | acc ↑ | 0.042 |
+| tydiqa_primary (Median of 24 prompts) | eng | acc ↑ | 0.301 |
+| webqs | eng | acc ↑ | 0.017 |
+| wic (Median of 11 prompts) | eng | acc ↑ | 0.502 |
+| winogrande | eng | acc ↑ | 0.586 |
+| wnli (Median of 6 prompts) | eng | acc ↑ | 0.472 |
+| wsc (Median of 11 prompts) | eng | acc ↑ | 0.442 |
+| humaneval | python | pass@1 ↑ | 0.155 |
+| humaneval | python | pass@10 ↑ | 0.322 |
+| humaneval | python | pass@100 ↑ | 0.555 |
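A note on reading the table above: ↑ marks metrics where higher is better (acc, em, pass@k), ↓ marks byte perplexity, where lower is better, and "(Median of N prompts)" rows report the median over N promptsource templates. As a quick illustration, a shell one-liner (assuming the table is saved as mdtable.txt) that lists the ten FLORES languages with the lowest byte perplexity:

    grep 'flores_101' mdtable.txt | sort -t'|' -k5 -g | head -n 10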
evaluation/results/tr11/conversion/json_to_markdown.py
ADDED
@@ -0,0 +1,307 @@
+"""
+Table example:
+
+| Task | Language | Metric | BLOOM-176B | OPT-176B |
+|:--------|:-----------------|:------------------------|-------------:|------------:|
+| arc_challenge | eng | acc | 0.4112627986348123 | 0.4121160409556314 |
+
+
+Metadata example:
+
+model-index:
+- name: bart-large-cnn-samsum
+  results:
+  - task:
+      type: summarization
+      name: Summarization
+    dataset:
+      name: 'SAMSum Corpus: A Human-annotated Dialogue Dataset for Abstractive Summarization'
+      type: samsum
+    metrics:
+    - name: Validation ROGUE-1
+      type: rogue-1
+      value: 42.621
+    - name: Validation ROGUE-2
+      type: rogue-2
+      value: 21.9825
+    - name: Validation ROGUE-L
+      type: rogue-l
+      value: 33.034
+    - name: Test ROGUE-1
+      type: rogue-1
+      value: 41.3174
+    - name: Test ROGUE-2
+      type: rogue-2
+      value: 20.8716
+    - name: Test ROGUE-L
+      type: rogue-l
+      value: 32.1337
+  - task:
+      type: summarization
+      name: Summarization
+    dataset:
+      name: samsum
+      type: samsum
+      config: samsum
+      split: test
+    metrics:
+    - name: ROUGE-1
+      type: rouge
+      value: 41.3282
+      verified: true
+    - name: ROUGE-2
+      type: rouge
+      value: 20.8755
+      verified: true
+    - name: ROUGE-L
+      type: rouge
+      value: 32.1353
+      verified: true
+    - name: ROUGE-LSUM
+      type: rouge
+      value: 38.401
+      verified: true
+    - name: loss
+      type: loss
+      value: 1.4297215938568115
+      verified: true
+    - name: gen_len
+      type: gen_len
+      value: 60.0757
+      verified: true
+"""
+
+import json
+import statistics
+
+FILE_NAMES = ["bslmeval", "humaneval_temp02", "humaneval_temp06", "humaneval_temp08"]
+
+# Optionally subselect tasks
+SELECTED_LIST = [
+    "winogrande"
+]
+
+with open("bloom2b5/bslmeval.json", "r") as f:
+    bloom_bslmeval = json.load(f)
+
+with open("opt/bslmeval.json", "r") as f:
+    opt_bslmeval = json.load(f)
+
+
+
+results_formatted = {}
+for task_name in bloom_bslmeval["results"]:
+    #if task_name not in SELECTED_LIST:
+    #    continue
+    date_keys = list(bloom_bslmeval["results"][task_name].keys())
+    assert len(date_keys) == 1
+    metrics = bloom_bslmeval["results"][task_name][date_keys[0]]
+
+    lang = "eng"
+    if "gsarti/flores_101_" in task_name:
+        lang = task_name.replace("gsarti/flores_101_", "").replace("+null", "")
+    elif "lambada_mt_de" in task_name:
+        lang = "deu"
+    elif "lambada_mt_en" in task_name:
+        lang = "eng"
+    elif "lambada_mt_es" in task_name:
+        lang = "esp"
+    elif "lambada_mt_it" in task_name:
+        lang = "ita"
+    elif "lambada" == task_name:
+        continue
+    elif "crows_pairs_french" in task_name:
+        lang = "fra"
+    elif "headqa" == task_name:
+        lang = "esp"
+
+    if "acc" in metrics:
+        main_metric_name = "acc ↑"
+    elif "byte_perplexity" in metrics:
+        main_metric_name = "byte_perplexity ↓"
+    elif "pass@100" in metrics:
+        main_metric_name = "pass@100 ↑"
+    elif "em" in metrics:
+        main_metric_name = "em ↑"
+
+    date_keys_opt = list(opt_bslmeval["results"][task_name].keys())
+    score_opt = opt_bslmeval["results"][task_name][date_keys_opt[0]][main_metric_name[:-2]]
+
+    fin_task_name = metrics.get("task_name", task_name)
+
+    results_formatted.setdefault(fin_task_name, {})
+    results_formatted[fin_task_name].setdefault("prompts", [])
+    results_formatted[fin_task_name].setdefault("all_metrics", [])
+    results_formatted[fin_task_name].setdefault("main_metrics", [])
+
+    if "prompt_name" in metrics:
+        results_formatted[fin_task_name]["prompts"].append(metrics["prompt_name"])
+    results_formatted[fin_task_name]["name"] = fin_task_name
+    results_formatted[fin_task_name]["lang"] = lang
+    results_formatted[fin_task_name]["all_metrics"].append(metrics) # [{name: score}]
+    results_formatted[fin_task_name]["main_metrics"].append((main_metric_name, metrics[main_metric_name[:-2]], score_opt))
+    results_formatted[fin_task_name]["type"] = "text-generation"
+
+# Take Median of scores
+for k, v in results_formatted.items():
+    if "prompts" in v and len(v["prompts"]) > 1:
+        assert len(v["all_metrics"]) == len(v["main_metrics"])
+        num_scores = len(v["main_metrics"])
+
+        bloom_median = statistics.median([triplet[1] for triplet in v["main_metrics"]])
+        opt_median = statistics.median([triplet[2] for triplet in v["main_metrics"]])
+
+        results_formatted[k]["main_metrics"] = [(
+            v["main_metrics"][0][0],
+            bloom_median,
+            opt_median,
+        )]
+
+        results_formatted[k]["name"] = results_formatted[k]["name"] + f" (Median of {num_scores} prompts)"
+
+
+
+def keep_best_score(new_eval, old_eval):
+    for k, v in new_eval.items():
+        old_eval[k] = max(old_eval[k], v)
+    return old_eval
+
+for i, temp in enumerate(["02", "06", "08"]):
+    with open(f"bloom/humaneval_temp{temp}.json", "r") as f:
+        if i > 0:
+            keep_best_score(json.load(f), bloom_humaneval)
+        else:
+            bloom_humaneval = json.load(f)
+    with open(f"opt/humaneval_temp{temp}.json", "r") as f:
+        if i > 0:
+            keep_best_score(json.load(f), opt_humaneval)
+        else:
+            opt_humaneval = json.load(f)
+
+results_formatted["humaneval"] = {
+    "name": "humaneval",
+    "lang": "python",
+    "all_metrics": [bloom_humaneval], # [{name: score}]
+    "main_metrics": [(f"{name} ↑", score, opt_humaneval[name]) for name, score in bloom_humaneval.items()],
+    "type": "text-generation"
+}
+
+
+
+# Add multilingual average
+for k, v in results_formatted.items():
+    if "prompts" in v and len(v["prompts"]) > 1 and len(v["main_metrics"]) > 1:
+        assert len(v["all_metrics"]) == len(v["main_metrics"]), f"{k}, {len(v['all_metrics'])}, {len(v['main_metrics'])}"
+        num_scores = len(v["main_metrics"])
+
+        bloom_median = statistics.median([triplet[1] for triplet in v["main_metrics"]])
+        opt_median = statistics.median([triplet[2] for triplet in v["main_metrics"]])
+
+        results_formatted[k]["main_metrics"] = [(
+            v["main_metrics"][0][0],
+            bloom_median,
+            opt_median,
+        )]
+
+        results_formatted[k]["name"] = results_formatted[k]["name"] + f" (Median of {num_scores} prompts)"
+
+"""Optional aggregated statistics
+bloom_mean = statistics.mean([triplet[1] for k,v in results_formatted.items() for triplet in v["main_metrics"] if v["lang"] == "eng"])
+opt_mean = statistics.mean([triplet[2] for k,v in results_formatted.items() for triplet in v["main_metrics"] if v["lang"] == "eng"])
+
+results_formatted["mean_eng"] = {
+    "name": "mean_eng ↑",
+    "lang": "eng",
+    "all_metrics": [{"mean": bloom_mean}], # [{name: score}]
+    "main_metrics": [("mean", bloom_mean, opt_mean)],
+    "type": "text-generation"
+}
+
+bloom_mean = statistics.mean([triplet[1] for k,v in results_formatted.items() for triplet in v["main_metrics"] if "flores" in k])
+opt_mean = statistics.mean([triplet[2] for k,v in results_formatted.items() for triplet in v["main_metrics"] if "flores" in k])
+
+results_formatted["mean_multilingual"] = {
+    "name": "mean_multilingual (Flores) ↓",
+    "lang": "mul",
+    "all_metrics": [{"mean": bloom_mean}], # [{name: score}]
+    "main_metrics": [("mean", bloom_mean, opt_mean)],
+    "type": "text-generation"
+}
+
+main_metrics = ([triplet for k,v in results_formatted.items() for triplet in v["main_metrics"]])
+
+bloom_best_on, opt_best_on = 0,0
+for (name, bloom, opt) in main_metrics:
+    if name[:-2] in ["acc", "em"] or "pass" in name:
+        if bloom > opt:
+            bloom_best_on += 1
+        elif bloom < opt:
+            opt_best_on += 1
+    elif name[:-2] in ["byte_perplexity"]:
+        if bloom < opt:
+            bloom_best_on += 1
+        elif bloom > opt:
+            opt_best_on += 1
+"""
+### Markdown Table ###
+
+HEADER = "| Task | Language | Metric | BLOOM-350M | BLOOM-750M | BLOOM-1B3 | BLOOM-2B5 | BLOOM-6B3 | BLOOM-176B |"
+SEP = "|:----|:----|:----|:----:|"
+ONE_LINE = "| {} | {} | {} | {} |"
+
+TABLE_STRING = "\n".join([HEADER, SEP])
+
+for task_name, res_dict in results_formatted.items():
+    for (name, score, score_opt) in res_dict["main_metrics"]:
+        TABLE_STRING += "\n" + ONE_LINE.format(
+            res_dict["name"],
+            res_dict["lang"],
+            name,
+            round(score, 3),
+            round(score_opt, 3),
+        )
+
+with open("./mdtable.txt", "w") as f:
+    f.write(TABLE_STRING)
+
+
+
+### Metadata ###
+
+HEADER = "model-index:"
+MODEL = "- name: bloom"
+RES = "  results:"
+
+META_STRING = "\n".join([HEADER, MODEL, RES])
+
+ONE_TASK = "  - task:\n      type: {}\n      name: {}\n    dataset:\n      name: {}\n      type: {}\n    metrics:"
+ONE_METRIC = "    - name: {}\n      type: {}\n      value: {}\n      verified: false"
+
+for task_name, res_dict in results_formatted.items():
+    META_STRING += "\n" + ONE_TASK.format(
+        res_dict["type"],
+        res_dict["type"].replace("-", " "),
+        task_name,
+        task_name,
+    )
+    for (name, score, score_opt) in res_dict["main_metrics"]:
+        META_STRING += "\n" + ONE_METRIC.format(
+            name.split(" ")[0],
+            name.split(" ")[0],
+            score
+        )
+    """
+    for metrics in res_dict["all_metrics"]:
+        for metric_name, metric in metrics.items():
+            if isinstance(metric, str):
+                continue
+            META_STRING += "\n" + ONE_METRIC.format(
+                metric_name,
+                metric_name,
+                metric
+            )
+    """
+
+
+with open("./mdmeta.txt", "w") as f:
+    f.write(META_STRING)
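A hedged usage sketch for the conversion script above: it opens its inputs through relative paths (bloom2b5/bslmeval.json, opt/bslmeval.json, opt/humaneval_temp*.json), so it is presumably run from evaluation/results/tr11. Note it reads bloom/humaneval_temp*.json while this diff adds those files under bloom2b5/, so a symlink or a small path tweak may be needed:

    cd evaluation/results/tr11
    ln -s bloom2b5 bloom                   # only if the humaneval files live under bloom2b5/
    python conversion/json_to_markdown.py  # writes ./mdtable.txt and ./mdmeta.txt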
evaluation/results/tr11/opt/bslmeval.json
ADDED
The diff for this file is too large to render.
See raw diff
evaluation/results/tr11/opt/humaneval_temp06.json
ADDED
@@ -0,0 +1 @@
+{"pass@1": 3.0487804878048808e-05, "pass@10": 0.0003048780487804881, "pass@100": 0.003048780487804878}
evaluation/results/tr11/scripts/download_bsevalharness.py
ADDED
@@ -0,0 +1,21 @@
+# Downloads the specified tasks in the evaluation harness
+# This is particularly useful when running in environments where the GPU nodes
+# do not have internet access. This way we can pre-download them and use the cached data-set during evaluation.
+
+from lm_eval import tasks
+from lm_eval.tasks import ALL_TASKS
+import argparse
+import os
+
+
+parser = argparse.ArgumentParser(description='Download evaluation harness', allow_abbrev=False)
+parser.add_argument('--task_list', type=str, default = "all", help='Either "all" or comma separated list of tasks to download.')
+args = parser.parse_args()
+
+def main():
+    task_list = ALL_TASKS if args.task_list == 'all' else args.task_list.split(',')
+    tasks.get_task_dict_promptsource(task_list)
+
+if __name__ == '__main__':
+    main()
+
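A hedged usage sketch: run this once from inside the bigscience lm-evaluation-harness checkout on a node with internet access, so that later GPU jobs can rely on the populated HF_DATASETS_CACHE:

    python download_bsevalharness.py --task_list wnli,wic,boolq
    python download_bsevalharness.py                  # default --task_list all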
evaluation/results/tr11/scripts/run_bsevalharness_generation_6b3.slurm
ADDED
@@ -0,0 +1,101 @@
+#!/bin/bash
+#SBATCH --job-name=evaluate_t0
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
+#SBATCH --cpus-per-task=8            # number of cores per tasks
+#SBATCH --hint=nomultithread         # we get physical cores not logical
+#SBATCH --gres=gpu:1                 # number of gpus
+#SBATCH --constraint=a100
+#SBATCH --reservation=hug
+#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
+#SBATCH --output=%x-%j.out           # output file name
+#SBATCH --account=six@a100
+
+set -x -e
+
+source $six_ALL_CCFRWORK/start-tr13f-6B3-ml-t0
+conda activate muennighofflmevalgen
+
+echo "START TIME: $(date)"
+
+# defining the right environment variables
+export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
+export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
+export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
+export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
+export HF_DATASETS_OFFLINE=1
+export TRANSFORMERS_OFFLINE=1
+export TOKENIZERS_PARALLELISM=false
+
+# Converted transformer checkpoint
+MODEL_CKPT=/gpfsscratch/rech/six/commun/experiments/muennighoff/bloomckpt/6b3/bloom-7b1
+
+cd /gpfsscratch/rech/six/commun/experiments/muennighoff/bslmevalgeneration/lm-evaluation-harness
+
+# WMT19 ZH-EN does not work
+DATASETS_AND_CONFIGS=(
+GEM/wiki_lingua_en,en,"article_summary_en"
+GEM/wiki_lingua_en,en,"write_abstract_en"
+GEM/wiki_lingua_en,en,"summarize_above_en"
+GEM/wiki_lingua_en,en,"rephrase_en"
+GEM/wiki_lingua_en,en,"tldr_en"
+GEM/wiki_lingua_es,es,"article_summary_es"
+GEM/wiki_lingua_es,es,"write_abstract_es"
+GEM/wiki_lingua_es,es,"summarize_above_es"
+GEM/wiki_lingua_es,es,"rephrase_es"
+GEM/wiki_lingua_es,es,"tldr_es"
+GEM/wiki_lingua_fr,fr,"article_summary_fr"
+GEM/wiki_lingua_fr,fr,"write_abstract_fr"
+GEM/wiki_lingua_fr,fr,"summarize_above_fr"
+GEM/wiki_lingua_fr,fr,"rephrase_fr"
+GEM/wiki_lingua_fr,fr,"tldr_fr"
+GEM/wiki_lingua_hi,hi,"article_summary_hi"
+GEM/wiki_lingua_hi,hi,"write_abstract_hi"
+GEM/wiki_lingua_hi,hi,"summarize_above_hi"
+GEM/wiki_lingua_hi,hi,"rephrase_hi"
+GEM/wiki_lingua_hi,hi,"tldr_hi"
+GEM/wiki_lingua_id,id,"article_summary_id"
+GEM/wiki_lingua_id,id,"write_abstract_id"
+GEM/wiki_lingua_id,id,"summarize_above_id"
+GEM/wiki_lingua_id,id,"rephrase_id"
+GEM/wiki_lingua_id,id,"tldr_id"
+GEM/wiki_lingua_pt,pt,"article_summary_pt"
+GEM/wiki_lingua_pt,pt,"write_abstract_pt"
+GEM/wiki_lingua_pt,pt,"summarize_above_pt"
+GEM/wiki_lingua_pt,pt,"rephrase_pt"
+GEM/wiki_lingua_pt,pt,"tldr_pt"
+GEM/wiki_lingua_vi,vi,"article_summary_vi"
+GEM/wiki_lingua_vi,vi,"write_abstract_vi"
+GEM/wiki_lingua_vi,vi,"summarize_above_vi"
+GEM/wiki_lingua_vi,vi,"rephrase_vi"
+GEM/wiki_lingua_vi,vi,"tldr_vi"
+)
+
+#GEM/wiki_lingua_ar,ar,"article_summary_ar"
+#GEM/wiki_lingua_ar,ar,"write_abstract_ar"
+#GEM/wiki_lingua_ar,ar,"summarize_above_ar"
+#GEM/wiki_lingua_ar,ar,"rephrase_ar"
+#GEM/wiki_lingua_ar,ar,"tldr_ar"
+#GEM/wiki_lingua_zh,zh,"article_summary_zh"
+#GEM/wiki_lingua_zh,zh,"write_abstract_zh"
+#GEM/wiki_lingua_zh,zh,"summarize_above_zh"
+#GEM/wiki_lingua_zh,zh,"rephrase_zh"
+#GEM/wiki_lingua_zh,zh,"tldr_zh"
+
+DATASET_AND_CONFIG=${DATASETS_AND_CONFIGS[$SLURM_ARRAY_TASK_ID]}
+echo $DATASET_AND_CONFIG
+
+IFS=',' read dataset_name lang template_name <<< "${DATASET_AND_CONFIG}"
+
+# Use this fork of lm-eval: https://github.com/bigscience-workshop/lm-evaluation-harness/pull/109
+python main.py \
+    --model_api_name 'hf-causal' \
+    --model_args pretrained=$MODEL_CKPT,use_accelerate=True,tokenizer=$MODEL_CKPT,dtype=float16 \
+    --device cuda \
+    --batch_size 16 \
+    --no_tracking \
+    --task_name $dataset_name \
+    --template_names $template_name \
+    --bootstrap_iters 10
+
+echo "END TIME: $(date)"
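The script above indexes DATASETS_AND_CONFIGS with SLURM_ARRAY_TASK_ID but carries no #SBATCH --array directive, so it is presumably submitted as a job array. With the 35 active entries (7 languages times 5 prompts), a plausible invocation is:

    sbatch --array=0-34 run_bsevalharness_generation_6b3.slurm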
evaluation/results/tr11/scripts/run_bsevalharness_tr11-176b-ml.slurm
ADDED
@@ -0,0 +1,122 @@
+#!/bin/bash
+#SBATCH --job-name=run_bsevalharness-tr11-176b-ml
+#SBATCH --partition=gpu_p5
+#SBATCH --constraint=a100
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
+#SBATCH --cpus-per-task=64           # number of cores per tasks
+#SBATCH --hint=nomultithread         # we get physical cores not logical
+#SBATCH --gres=gpu:8                 # number of gpus
+#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
+#SBATCH --output=%x-%j.out           # output file name
+#SBATCH --account=six@a100
+#SBATCH --reservation=hug
+
+
+set -x -e
+
+source $six_ALL_CCFRWORK/start-muennighofflmeval
+
+echo "START TIME: $(date)"
+
+# a unique identifier for the current eval, ideally corresponding to the model name
+VARIANT="tr11-176b-ml-bsevalharness"
+
+
+CHECKPOINT_PATH=$six_ALL_CCFRSCRATCH/checkpoints/tr11-176B-ml/checkpoints/main/global_step90000
+MEGATRON_DEEPSPEED_REPO=$six_ALL_CCFRSCRATCH/commun/experiments/muennighoff/megdsbslmeval/Megatron-DeepSpeed
+export HF_DATASETS_OFFLINE=1
+export TRANSFORMERS_OFFLINE=1
+
+export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
+export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
+export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
+export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
+
+cd $MEGATRON_DEEPSPEED_REPO
+
+TOKENIZER_NAME_OR_PATH=bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k-whitespace-and-eos-regex-alpha-v3-dedup-lines-articles
+
+PP_SIZE=8
+TP_SIZE=1
+SEQ_LEN=2048
+
+# different from the training MICRO_BATCH_SIZE - no optim memory, so can do bigger BS
+# make as big as it can fit into gpu w/o OOM, but not too close to 100%
+EVAL_MICRO_BATCH_SIZE=1
+
+#dummy arguments to make megatron happy.
+MEGATRON_REQUIRED_ARGS=" \
+    --num-layers -1 \
+    --hidden-size -1 \
+    --num-attention-heads -1 \
+    --seq-length -1 \
+    --max-position-embeddings -1 \
+"
+
+
+ZERO_STAGE=0
+
+config_json="./ds_config.json"
+
+# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
+cat <<EOT > $config_json
+{
+    "train_micro_batch_size_per_gpu": 1,
+    "train_batch_size": 1,
+    "gradient_clipping": 1.0,
+    "zero_optimization": {
+        "stage": $ZERO_STAGE
+    },
+    "bf16": {
+        "enabled": true
+    },
+    "steps_per_print": 2000,
+    "wall_clock_breakdown": false
+}
+EOT
+
+
+CMD="./tasks/eval_harness/evaluate_bsevalharness.py \
+    --load $CHECKPOINT_PATH \
+    --results_path $VARIANT-results.json \
+    --tensor-model-parallel-size $TP_SIZE \
+    --pipeline-model-parallel-size $PP_SIZE \
+    --tokenizer-type PretrainedFromHF \
+    --tokenizer-name-or-path $TOKENIZER_NAME_OR_PATH \
+    --micro-batch-size $EVAL_MICRO_BATCH_SIZE \
+    --no-load-optim \
+    --no-load-rng \
+    --bf16 \
+    --inference \
+    --seq-length $SEQ_LEN \
+    --task_list wnli \
+    --deepspeed \
+    --deepspeed_config ds_config.json \
+    --intermed_results \
+    --adaptive_seq_len \
+    --micro_bs_multiplier 16 \
+    --offloadearly \
+    $MEGATRON_REQUIRED_ARGS \
+    "
+
+GPUS_PER_NODE=8
+NNODES=$SLURM_NNODES
+MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
+MASTER_PORT=6000
+export LAUNCHER="python -u -m torch.distributed.run \
+    --nproc_per_node $GPUS_PER_NODE \
+    --nnodes $NNODES \
+    --rdzv_endpoint $MASTER_ADDR:$MASTER_PORT \
+    --rdzv_backend c10d \
+    --max_restarts 0 \
+    --tee 3 \
+    "
+
+export CUDA_LAUNCH_BLOCKING=1
+
+echo $LAUNCHER $CMD
+
+export PYTHONPATH=$MEGATRON_DEEPSPEED_REPO
+
+$LAUNCHER $CMD 2>&1 | tee $VARIANT-eval-harness.log
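Unlike the generation script, this one evaluates a fixed --task_list (here only wnli), so a single submission should suffice; the 176B checkpoint is spread over the node's 8 GPUs with pipeline parallelism (PP_SIZE=8, TP_SIZE=1):

    sbatch run_bsevalharness_tr11-176b-ml.slurm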
evaluation/results/tr11/scripts/run_bsevalharness_tr11b-1b3-ml.slurm
ADDED
@@ -0,0 +1,122 @@
+#!/bin/bash
+#SBATCH --job-name=run_bsevalharness-tr11b-1b3-ml
+#SBATCH --partition=gpu_p5
+#SBATCH --constraint=a100
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
+#SBATCH --cpus-per-task=8            # number of cores per tasks
+#SBATCH --hint=nomultithread         # we get physical cores not logical
+#SBATCH --gres=gpu:1                 # number of gpus
+#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
+#SBATCH --output=%x-%j.out           # output file name
+#SBATCH --account=six@a100
+#SBATCH --reservation=hug
+
+
+set -x -e
+
+source $six_ALL_CCFRWORK/start-muennighofflmeval
+
+echo "START TIME: $(date)"
+
+# a unique identifier for the current eval, ideally corresponding to the model name
+VARIANT="tr11b-1b3-ml-bsevalharness"
+
+
+CHECKPOINT_PATH=$six_ALL_CCFRSCRATCH/checkpoints/tr11b-1B3-ml/checkpoints/main/global_step340500
+MEGATRON_DEEPSPEED_REPO=$six_ALL_CCFRSCRATCH/commun/experiments/muennighoff/megdsbslmeval/Megatron-DeepSpeed
+export HF_DATASETS_OFFLINE=1
+export TRANSFORMERS_OFFLINE=1
+
+export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
+export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasetseval
+export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
+export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
+export TOKENIZERS_PARALLELISM=false
+
+cd $MEGATRON_DEEPSPEED_REPO
+
+TOKENIZER_NAME_OR_PATH=bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k-whitespace-and-eos-regex-alpha-v3-dedup-lines-articles
+
+PP_SIZE=1
+TP_SIZE=1
+SEQ_LEN=2048
+
+# different from the training MICRO_BATCH_SIZE - no optim memory, so can do bigger BS
+# make as big as it can fit into gpu w/o OOM, but not too close to 100%
+EVAL_MICRO_BATCH_SIZE=1
+
+#dummy arguments to make megatron happy.
+MEGATRON_REQUIRED_ARGS=" \
+    --num-layers -1 \
+    --hidden-size -1 \
+    --num-attention-heads -1 \
+    --seq-length -1 \
+    --max-position-embeddings -1 \
+"
+
+
+ZERO_STAGE=0
+
+config_json="./ds_config.json"
+
+# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
+cat <<EOT > $config_json
+{
+    "train_micro_batch_size_per_gpu": 1,
+    "train_batch_size": 1,
+    "gradient_clipping": 1.0,
+    "zero_optimization": {
+        "stage": $ZERO_STAGE
+    },
+    "bf16": {
+        "enabled": false
+    },
+    "steps_per_print": 2000,
+    "wall_clock_breakdown": false
+}
+EOT
+
+
+CMD="./tasks/eval_harness/evaluate_bsevalharness.py \
+    --load $CHECKPOINT_PATH \
+    --results_path $VARIANT-results.json \
+    --tensor-model-parallel-size $TP_SIZE \
+    --pipeline-model-parallel-size $PP_SIZE \
+    --tokenizer-type PretrainedFromHF \
+    --tokenizer-name-or-path $TOKENIZER_NAME_OR_PATH \
+    --micro-batch-size $EVAL_MICRO_BATCH_SIZE \
+    --no-load-optim \
+    --no-load-rng \
+    --inference \
+    --seq-length $SEQ_LEN \
+    --task_list axb,axg,boolq,cb,cola,copa,crows_pairs_english,crows_pairs_french,diabla,e2e_nlg_cleaned,mnli,mnli_mismatched,multirc,piaf,qqp,rte,sst,tydiqa_primary,tydiqa_secondary,wic,wsc,wnli,wino_bias_type1_anti,wino_bias_type1_pro,wino_bias_type2_anti,wino_bias_type2_pro,xquad_ar,xquad_en,gsarti/flores_101_afr,gsarti/flores_101_amh,gsarti/flores_101_ara,gsarti/flores_101_hye,gsarti/flores_101_asm,gsarti/flores_101_ast,gsarti/flores_101_azj,gsarti/flores_101_bel,gsarti/flores_101_ben,gsarti/flores_101_bos,gsarti/flores_101_bul,gsarti/flores_101_mya,gsarti/flores_101_cat,gsarti/flores_101_ceb,gsarti/flores_101_zho_simpl,gsarti/flores_101_zho_trad,gsarti/flores_101_hrv,gsarti/flores_101_ces,gsarti/flores_101_dan,gsarti/flores_101_nld,gsarti/flores_101_eng,gsarti/flores_101_est,gsarti/flores_101_tgl,gsarti/flores_101_fin,gsarti/flores_101_fra,gsarti/flores_101_ful,gsarti/flores_101_glg,gsarti/flores_101_lug,gsarti/flores_101_kat,gsarti/flores_101_deu,gsarti/flores_101_ell,gsarti/flores_101_guj,gsarti/flores_101_hau,gsarti/flores_101_heb,gsarti/flores_101_hin,gsarti/flores_101_hun,gsarti/flores_101_isl,gsarti/flores_101_ibo,gsarti/flores_101_ind,gsarti/flores_101_gle,gsarti/flores_101_ita,gsarti/flores_101_jpn,gsarti/flores_101_jav,gsarti/flores_101_kea,gsarti/flores_101_kam,gsarti/flores_101_kan,gsarti/flores_101_kaz,gsarti/flores_101_khm,gsarti/flores_101_kor,gsarti/flores_101_kir,gsarti/flores_101_lao,gsarti/flores_101_lav,gsarti/flores_101_lin,gsarti/flores_101_lit,gsarti/flores_101_luo,gsarti/flores_101_ltz,gsarti/flores_101_mkd,gsarti/flores_101_msa,gsarti/flores_101_mal,gsarti/flores_101_mlt,gsarti/flores_101_mri,gsarti/flores_101_mar,gsarti/flores_101_mon,gsarti/flores_101_npi,gsarti/flores_101_nso,gsarti/flores_101_nob,gsarti/flores_101_nya,gsarti/flores_101_oci,gsarti/flores_101_ory,gsarti/flores_101_orm,gsarti/flores_101_pus,gsarti/flores_101_fas,gsarti/flores_101_pol,gsarti/flores_101_por,gsarti/flores_101_pan,gsarti/flores_101_ron,gsarti/flores_101_rus,gsarti/flores_101_srp,gsarti/flores_101_sna,gsarti/flores_101_snd,gsarti/flores_101_slk,gsarti/flores_101_slv,gsarti/flores_101_som,gsarti/flores_101_ckb,gsarti/flores_101_spa,gsarti/flores_101_swh,gsarti/flores_101_swe,gsarti/flores_101_tgk,gsarti/flores_101_tam,gsarti/flores_101_tel,gsarti/flores_101_tha,gsarti/flores_101_tur,gsarti/flores_101_ukr,gsarti/flores_101_umb,gsarti/flores_101_urd,gsarti/flores_101_uzb,gsarti/flores_101_vie,gsarti/flores_101_cym,gsarti/flores_101_wol,gsarti/flores_101_xho,gsarti/flores_101_yor,gsarti/flores_101_zul \
+    --eval_fp32 \
+    --deepspeed \
+    --deepspeed_config ds_config.json \
+    --intermed_results \
+    --adaptive_seq_len \
+    --micro_bs_multiplier 8 \
+    $MEGATRON_REQUIRED_ARGS \
+    "
+
+GPUS_PER_NODE=1
+NNODES=$SLURM_NNODES
+MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
+MASTER_PORT=6000
+export LAUNCHER="python -u -m torch.distributed.run \
+    --nproc_per_node $GPUS_PER_NODE \
+    --nnodes $NNODES \
+    --rdzv_endpoint $MASTER_ADDR:$MASTER_PORT \
+    --rdzv_backend c10d \
+    --max_restarts 0 \
+    --tee 3 \
+    "
+
+export CUDA_LAUNCH_BLOCKING=1
+
+echo $LAUNCHER $CMD
+
+export PYTHONPATH=$MEGATRON_DEEPSPEED_REPO
+
+$LAUNCHER $CMD 2>&1 | tee $VARIANT-eval-harness.log
evaluation/results/tr11/scripts/run_bsevalharness_tr11d-750m-ml.slurm
ADDED
@@ -0,0 +1,120 @@
+#!/bin/bash
+#SBATCH --job-name=run_bsevalharness-tr11d-760m-ml
+#SBATCH --constraint=v100-32g
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
+#SBATCH --cpus-per-task=10           # number of cores per tasks
+#SBATCH --hint=nomultithread         # we get physical cores not logical
+#SBATCH --gres=gpu:1                 # number of gpus
+#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
+#SBATCH --output=%x-%j.out           # output file name
+#SBATCH --account=six@v100
+
+
+set -x -e
+
+source $six_ALL_CCFRWORK/start-muennighofflmeval
+
+echo "START TIME: $(date)"
+
+# a unique identifier for the current eval, ideally corresponding to the model name
+VARIANT="tr11d-760m-ml-bsevalharness"
+
+
+CHECKPOINT_PATH=$six_ALL_CCFRSCRATCH/checkpoints/tr11d-760M-ml/checkpoints/main/global_step660750
+MEGATRON_DEEPSPEED_REPO=$six_ALL_CCFRSCRATCH/commun/experiments/muennighoff/bslmeval/Megatron-DeepSpeed
+export HF_DATASETS_OFFLINE=1
+export TRANSFORMERS_OFFLINE=1
+
+export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
+export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
+export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
+export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
+export TOKENIZERS_PARALLELISM=false
+
+cd $MEGATRON_DEEPSPEED_REPO
+
+TOKENIZER_NAME_OR_PATH=bigscience-catalogue-data-dev/byte-level-bpe-tokenizer-no-norm-250k-whitespace-and-eos-regex-alpha-v3-dedup-lines-articles
+
+PP_SIZE=1
+TP_SIZE=1
+SEQ_LEN=2048
+
+# different from the training MICRO_BATCH_SIZE - no optim memory, so can do bigger BS
+# make as big as it can fit into gpu w/o OOM, but not too close to 100%
+EVAL_MICRO_BATCH_SIZE=1
+
+#dummy arguments to make megatron happy.
+MEGATRON_REQUIRED_ARGS=" \
+    --num-layers -1 \
+    --hidden-size -1 \
+    --num-attention-heads -1 \
+    --seq-length -1 \
+    --max-position-embeddings -1 \
+"
+
+
+ZERO_STAGE=0
+
+config_json="./ds_config.json"
+
+# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size()
+cat <<EOT > $config_json
+{
+    "train_micro_batch_size_per_gpu": 1,
+    "train_batch_size": 1,
+    "gradient_clipping": 1.0,
+    "zero_optimization": {
+        "stage": $ZERO_STAGE
+    },
+    "bf16": {
+        "enabled": false
+    },
+    "steps_per_print": 2000,
+    "wall_clock_breakdown": false
+}
+EOT
+
+
+CMD="./tasks/eval_harness/evaluate_bsevalharness.py \
+    --load $CHECKPOINT_PATH \
+    --results_path $VARIANT-results.json \
+    --tensor-model-parallel-size $TP_SIZE \
+    --pipeline-model-parallel-size $PP_SIZE \
+    --tokenizer-type PretrainedFromHF \
+    --tokenizer-name-or-path $TOKENIZER_NAME_OR_PATH \
+    --micro-batch-size $EVAL_MICRO_BATCH_SIZE \
+    --no-load-optim \
+    --no-load-rng \
+    --inference \
+    --seq-length $SEQ_LEN \
+    --task_list axb,axg,boolq,cb,cola,copa,crows_pairs_english,crows_pairs_french,diabla,e2e_nlg_cleaned,mnli,mnli_mismatched,multirc,piaf,qqp,rte,sst,tydiqa_primary,tydiqa_secondary,wic,wsc,wnli,wino_bias_type1_anti,wino_bias_type1_pro,wino_bias_type2_anti,wino_bias_type2_pro,xquad_ar,xquad_en,gsarti/flores_101_afr,gsarti/flores_101_amh,gsarti/flores_101_ara,gsarti/flores_101_hye,gsarti/flores_101_asm,gsarti/flores_101_ast,gsarti/flores_101_azj,gsarti/flores_101_bel,gsarti/flores_101_ben,gsarti/flores_101_bos,gsarti/flores_101_bul,gsarti/flores_101_mya,gsarti/flores_101_cat,gsarti/flores_101_ceb,gsarti/flores_101_zho_simpl,gsarti/flores_101_zho_trad,gsarti/flores_101_hrv,gsarti/flores_101_ces,gsarti/flores_101_dan,gsarti/flores_101_nld,gsarti/flores_101_eng,gsarti/flores_101_est,gsarti/flores_101_tgl,gsarti/flores_101_fin,gsarti/flores_101_fra,gsarti/flores_101_ful,gsarti/flores_101_glg,gsarti/flores_101_lug,gsarti/flores_101_kat,gsarti/flores_101_deu,gsarti/flores_101_ell,gsarti/flores_101_guj,gsarti/flores_101_hau,gsarti/flores_101_heb,gsarti/flores_101_hin,gsarti/flores_101_hun,gsarti/flores_101_isl,gsarti/flores_101_ibo,gsarti/flores_101_ind,gsarti/flores_101_gle,gsarti/flores_101_ita,gsarti/flores_101_jpn,gsarti/flores_101_jav,gsarti/flores_101_kea,gsarti/flores_101_kam,gsarti/flores_101_kan,gsarti/flores_101_kaz,gsarti/flores_101_khm,gsarti/flores_101_kor,gsarti/flores_101_kir,gsarti/flores_101_lao,gsarti/flores_101_lav,gsarti/flores_101_lin,gsarti/flores_101_lit,gsarti/flores_101_luo,gsarti/flores_101_ltz,gsarti/flores_101_mkd,gsarti/flores_101_msa,gsarti/flores_101_mal,gsarti/flores_101_mlt,gsarti/flores_101_mri,gsarti/flores_101_mar,gsarti/flores_101_mon,gsarti/flores_101_npi,gsarti/flores_101_nso,gsarti/flores_101_nob,gsarti/flores_101_nya,gsarti/flores_101_oci,gsarti/flores_101_ory,gsarti/flores_101_orm,gsarti/flores_101_pus,gsarti/flores_101_fas,gsarti/flores_101_pol,gsarti/flores_101_por,gsarti/flores_101_pan,gsarti/flores_101_ron,gsarti/flores_101_rus,gsarti/flores_101_srp,gsarti/flores_101_sna,gsarti/flores_101_snd,gsarti/flores_101_slk,gsarti/flores_101_slv,gsarti/flores_101_som,gsarti/flores_101_ckb,gsarti/flores_101_spa,gsarti/flores_101_swh,gsarti/flores_101_swe,gsarti/flores_101_tgk,gsarti/flores_101_tam,gsarti/flores_101_tel,gsarti/flores_101_tha,gsarti/flores_101_tur,gsarti/flores_101_ukr,gsarti/flores_101_umb,gsarti/flores_101_urd,gsarti/flores_101_uzb,gsarti/flores_101_vie,gsarti/flores_101_cym,gsarti/flores_101_wol,gsarti/flores_101_xho,gsarti/flores_101_yor,gsarti/flores_101_zul \
+    --eval_fp32 \
+    --deepspeed \
+    --deepspeed_config ds_config.json \
+    --intermed_results \
+    --adaptive_seq_len \
+    --micro_bs_multiplier 4 \
+    $MEGATRON_REQUIRED_ARGS \
+    "
+
+GPUS_PER_NODE=1
+NNODES=$SLURM_NNODES
+MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1)
+MASTER_PORT=6002
+export LAUNCHER="python -u -m torch.distributed.run \
+    --nproc_per_node $GPUS_PER_NODE \
+    --nnodes $NNODES \
+    --rdzv_endpoint $MASTER_ADDR:$MASTER_PORT \
+    --rdzv_backend c10d \
+    --max_restarts 0 \
+    --tee 3 \
+    "
+
+export CUDA_LAUNCH_BLOCKING=1
+
+echo $LAUNCHER $CMD
+
+export PYTHONPATH=$MEGATRON_DEEPSPEED_REPO
+
+$LAUNCHER $CMD 2>&1 | tee $VARIANT-eval-harness.log
evaluation/results/tr11/scripts/run_trevalharness_176b.slurm
ADDED
@@ -0,0 +1,60 @@
+#!/bin/bash
+#SBATCH --job-name=run_trevalharness-176b
+#SBATCH --nodes=1
+#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
+#SBATCH --cpus-per-task=64           # number of cores per tasks
+#SBATCH --hint=nomultithread         # we get physical cores not logical
+#SBATCH --gres=gpu:8                 # number of gpus
+#SBATCH --constraint=a100
+#SBATCH --reservation=hug
+#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
+#SBATCH --output=%x-%j.out           # output file name
+#SBATCH --account=six@a100
+
+set -x -e
+
+source $six_ALL_CCFRWORK/start-tr13f-6B3-ml-t0
+#conda activate muennighofflmevalgen
+conda activate thomas_t_zero_evaluation
+
+echo "START TIME: $(date)"
+
+# defining the right environment variables
+export TRANSFORMERS_CACHE=$six_ALL_CCFRWORK/models
+export HF_DATASETS_CACHE=$six_ALL_CCFRWORK/datasets
+export HF_MODULES_CACHE=$six_ALL_CCFRWORK/modules
+export HF_METRICS_CACHE=$six_ALL_CCFRWORK/metrics
+export HF_DATASETS_OFFLINE=1
+export TRANSFORMERS_OFFLINE=1
+export TOKENIZERS_PARALLELISM=false
+
+# Converted transformer checkpoint
+MODEL_CKPT=/gpfsscratch/rech/six/commun/uan68tv-model-conversion/bloom
+
+cd /gpfsscratch/rech/six/commun/commun/experiments/muennighoff/bslmevaltransformers/lm-evaluation-harness
+
+
+DATASETS_AND_CONFIGS=(
+arc_challenge
+arc_easy
+)
+#,arc_easy,boolq,copa,headqa,hellaswag,lambada,logiqa,mathqa,mc_taco,mrpc,multirc,openbookqa,piqa,prost,pubmedqa,qnli,qqp,race,rte,sciq,sst,triviaqa,webqs,wic,winogrande,wnli,wsc
+
+DATASET_AND_CONFIG=${DATASETS_AND_CONFIGS[$SLURM_ARRAY_TASK_ID]}
+echo $DATASET_AND_CONFIG
+IFS=',' read dataset_name <<< "${DATASET_AND_CONFIG}"
+
+# Use this fork of lm-eval: https://github.com/bigscience-workshop/lm-evaluation-harness/pull/109
+python main.py \
+    --model gpt2 \
+    --model_args pretrained=$MODEL_CKPT \
+    --use_accelerate \
+    --max_memory_per_gpu "50GB" \
+    --batch_size 16 \
+    --tasks $dataset_name \
+    --output_path $dataset_name.json \
+    --skip_tokenizer \
+    --no_cache \
+    --dtype=bfloat16
+
+echo "END TIME: $(date)"
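This script also selects its dataset via SLURM_ARRAY_TASK_ID, with two entries currently enabled (arc_challenge, arc_easy), so a plausible submission is:

    sbatch --array=0-1 run_trevalharness_176b.slurm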
evaluation/results/tr12/tr12a-1B3-oscar-en-filtered_agg.json
ADDED
The diff for this file is too large to render.
See raw diff

evaluation/results/tr12/tr12b-1B3-oscar-en-filtered-dedup_agg.json
ADDED
The diff for this file is too large to render.
See raw diff
evaluation/results/tr13/merge_all_json.py
ADDED
|
@@ -0,0 +1,97 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
"""
Saves a merged.json file in the provided directory
python merge_all_json.py DIRECTORY
"""

import json
import os
from pathlib import Path
import sys
from typing import Dict


def find_all_json(root_dir: Path):
    if root_dir.is_file():
        if root_dir.name.endswith(".json"):
            return [root_dir]
        else:
            return []

    all_jsons = []
    for path in root_dir.iterdir():
        all_jsons += find_all_json(path)
    return all_jsons

def sort_dict(dictionary: Dict) -> Dict:
    results = {}

    for key, value in sorted(dictionary.items(), key=lambda item: item[0]):
        new_value = value

        if isinstance(value, dict):
            new_value = sort_dict(new_value)
        elif isinstance(value, list):
            new_value = sorted(value)

        results[key] = new_value

    return results

def main():
    # find all json files in the directory
    root_dir = Path(sys.argv[1])
    out_path = os.path.join(root_dir, "merged.json")
    if os.path.exists(out_path):
        os.remove(out_path)

    all_jsons = find_all_json(root_dir)
    # merge
    results = {}
    for json_file in all_jsons:
        with open(json_file, "r") as fi:
            data = json.load(fi)

        if str(json_file.name).startswith("slim"):
            print(f"Parsing {json_file} as bigscience/lm-eval-harness file.")
            for dic in data["results"]:
                key = dic["task_name"]
                # Same dataset but not really comparable
                if "en-fr" in dic["prompt_name"]:
                    key += "_en-fr"
                elif "fr-en" in dic["prompt_name"]:
                    key += "_fr-en"
                elif "hi-en" in dic["prompt_name"]:
                    key += "_hi-en"
                elif "en-hi" in dic["prompt_name"]:
                    key += "_en-hi"
                sub_key = dic["prompt_name"]
                results.setdefault(key, {})
                results[key].setdefault(sub_key, {})
                results[key][sub_key] = {
                    **results[key][sub_key],
                    **{subk: subv for subk, subv in dic.items() if type(subv) in [int, float]}
                }
        elif str(json_file.name).startswith("agg"):
            print(f"Skipping {json_file} from bigscience/lm-eval-harness.")
            continue
        else:
            print(f"Parsing {json_file} as bigscience/t-zero file.")
            key = f"{data['dataset_name']}_{data['dataset_config_name']}"
            if key in results:
                assert data["template_name"] not in results[key]
                results[key][data["template_name"]] = data
            else:
                results[key] = {
                    data["template_name"]: data
                }

    # sort
    sorted_results = sort_dict(results)

    # write
    with open(out_path, "w") as fo:
        json.dump(sorted_results, fo)


if __name__ == "__main__":
    main()
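For orientation, a minimal sketch of how the resulting merged.json can be consumed downstream; the file path and printed fields are illustrative, but the nesting (task key, then prompt/template name, then metrics) follows the merging logic above. Note that bigscience/lm-eval-harness entries are stored as flat numeric dicts, while bigscience/t-zero entries keep their full record including an "evaluation" sub-dict.

import json

with open("merged.json") as f:  # produced by merge_all_json.py
    merged = json.load(f)

for task, prompts in merged.items():
    for prompt_name, entry in prompts.items():
        # t-zero records keep metrics under "evaluation";
        # slim lm-eval records are already flat numeric dicts.
        metrics = entry.get("evaluation", entry)
        for name, value in metrics.items():
            if isinstance(value, (int, float)):
                print(f"{task} | {prompt_name} | {name} = {value}")
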
evaluation/results/tr13/plot_results.py
ADDED
@@ -0,0 +1,230 @@
import csv
import json
import re
import subprocess
from argparse import ArgumentParser

import matplotlib.pyplot as plt
from pathlib import Path

import numpy as np

"""
Plot results per (dataset_name, dataset_config_name).
"""


def get_args():
    parser = ArgumentParser()
    parser.add_argument("--json_paths", nargs="+", type=str, help="Json files to plot together", required=True)
    parser.add_argument("--t0_csv_path", type=str, help="T0 eval results path")
    args = parser.parse_args()

    return args

def load_t0_results(csv_path):
    with open(csv_path, "r") as f:
        return list(csv.DictReader(f))

def load_json(json_path):
    with open(json_path, "r") as fi:
        return json.load(fi)

def get_experiment_name(filename: str):
    name = re.sub(r"_([0-9]*)$", r" [\1]", filename)
    name = name.replace("span_corruption", "SC")
    name = re.sub(r"^enc_dec", "ED", name)
    name = re.sub(r"^nc_dec", "NCD", name)
    name = re.sub(r"^c_dec", "CD", name)
    name = name.replace("full_lm", "FLM")
    name = name.replace("prefix_lm", "PLM")
    name = re.sub(r"t0_adapt_([0-9]+)", r"T0(\1)", name)
    if name[:3] == "CD_":
        name = re.sub(r"lm_adapt_([0-9]+)", r"FLM(\1)", name)
        name = re.sub(r"t0_adapt_nc_([0-9]+)", r"T0 AS NC (\1)", name)
        name = re.sub(r"nc_sc_([0-9]+)", r"SC as NC(\1)", name)
        name = re.sub(r"nc_t0_([0-9]+)", r"T0 as NC(\1)", name)
    elif name[:4] == "NCD_" or name[:3] == "ED_":
        if "flm_adapt" in name:
            name = re.sub(r"flm_adapt_([0-9]+)", r"FLM AS CD(\1)", name)
        else:
            name = re.sub(r"lm_adapt_([0-9]+)", r"PLM(\1)", name)
    else:
        raise NotImplementedError
    name = name.replace("_", " + ")
    return name

TASKS = {
    # T0 evaluation
    "super_glue_copa": ("COPA", 0.5),
    "anli_dev_r1": ("ANLI R1", 1/3),
    "anli_dev_r2": ("ANLI R2", 1/3),
    "anli_dev_r3": ("ANLI R3", 1/3),
    "super_glue_cb": ("CB", 1/3),
    "super_glue_rte": ("RTE", 0.5),
    "super_glue_wsc.fixed": ("WSC", 0.5),
    "winogrande_winogrande_xl": ("Winogrande", 0.5),
    "super_glue_wic": ("WiC", 0.5),
    "hellaswag": ("HellaSwag", 0.25),
    "story_cloze_2016": ("StoryCloze", 0.5),

    # XNLI evaluation
    "xnli_ar": ("XNLI ar (en prompts)", 1/3),
    "xnli_bg": ("XNLI bg (en prompts)", 1/3),
    "xnli_de": ("XNLI de (en prompts)", 1/3),
    "xnli_el": ("XNLI el (en prompts)", 1/3),
    "xnli_en": ("XNLI en (en prompts)", 1/3),
    "xnli_es": ("XNLI es (en prompts)", 1/3),
    "xnli_fr": ("XNLI fr (en prompts)", 1/3),
    "xnli_hi": ("XNLI hi (en prompts)", 1/3),
    "xnli_ru": ("XNLI ru (en prompts)", 1/3),
    "xnli_sw": ("XNLI sw (en prompts)", 1/3),
    "xnli_th": ("XNLI th (en prompts)", 1/3),
    "xnli_tr": ("XNLI tr (en prompts)", 1/3),
    "xnli_ur": ("XNLI ur (en prompts)", 1/3),
    "xnli_vi": ("XNLI vi (en prompts)", 1/3),
    "xnli_zh": ("XNLI zh (en prompts)", 1/3),
}

def plot(mtf_data, t0_data):
    args = get_args()

    assert len(TASKS) == 26
    fig, axs = plt.subplots(3, 9, figsize=(20, 5))
    axs = axs.flatten()

    task_min_score = {}
    task_max_score = {}
    task_median_score = {}
    for n, (task, (task_name, random_baseline)) in enumerate(TASKS.items()):
        # Normalising names
        mtf_task = task
        t0_task = task
        if task.startswith("anli_dev_r"):
            t0_task = re.sub("dev_", "", task)
        elif task == "hellaswag":
            mtf_task = "hellaswag_None"

        t5lm_scores = [float(r["score"]) for r in t0_data
                       if r["runs"] == "xxl-lm-d4-091621"
                       and r["dataset_name"] == t0_task
                       and r["metric_name"] == "accuracy (Rank)"
                       and r["score"]]
        t0_scores = [float(r["score"]) for r in t0_data
                     if r["runs"] == "xxl-lm-d4-091621-512"
                     and r["dataset_name"] == t0_task
                     and r["metric_name"] == "accuracy (Rank)"
                     and r["score"]]

        mtf_scores = [
            (
                name,
                [100 * value["evaluation"]["accuracy"] for prompt, value in data[mtf_task].items()]
                if mtf_task in data else
                []
            )
            for name, data in mtf_data.items()
        ]

        all_experiment_scores_with_name = [("T5 + LM", t5lm_scores), ("T0", t0_scores), *mtf_scores]
        # Plot
        axs[n].axhline(100 * random_baseline, 0, len(all_experiment_scores_with_name), label="Random")
        for i, (exp_name, scores) in enumerate(all_experiment_scores_with_name):
            axs[n].scatter([i] * len(scores), scores, s=50, alpha=0.4, label=exp_name)
        axs[n].set_title(task_name, fontsize=8)

        # # Gather median values
        # task_min_score[task] = [("Random", 100 * random_baseline)] + [(exp_name, np.min(scores)) for (exp_name, scores) in all_experiment_scores_with_name]
        # task_max_score[task] = [("Random", 100 * random_baseline)] + [(exp_name, np.max(scores)) for (exp_name, scores) in all_experiment_scores_with_name]
        # task_median_score[task] = [("Random", 100 * random_baseline)] + [(exp_name, np.median(scores)) for (exp_name, scores) in all_experiment_scores_with_name]

    last_ax_id = len(TASKS) - 1
    axs[last_ax_id].legend(bbox_to_anchor=(1, 1), loc="upper left")
    for ax in axs[last_ax_id + 1:]:
        ax.set_visible(False)

    # if args.aggregated_results:
    #     # ====== Plot agregated values =======
    #     fig, axs = plt.subplots(1, 3, figsize=(20, 8))
    #     axs = axs.flatten()
    #     last_ax_id=0
    #     experiment_names = [elt[0] for elt in next(iter(task_median_score.values()))]
    #
    #     def plot_scores_with_name(median_score_with_name, max_score, min_score, ax, title):
    #         assert len(median_score_with_name) == len(max_score) and len(median_score_with_name) == len(min_score)
    #         ax.axhline(
    #             median_score_with_name[0][1],
    #             0, len(median_score_with_name) - 1,
    #             label=median_score_with_name[0][0]
    #         )
    #         for i, ((name, median_score), max_score, min_score) in enumerate(zip(median_score_with_name[1:], max_score[1:], min_score[1:])):
    #             ax.errorbar(
    #                 i, median_score, ((median_score - min_score,), (max_score - median_score,)),
    #                 fmt="o", elinewidth=1, label=name)
    #         ax.set_title(title)
    #
    #     def get_average_normalised_score(task_scores):
    #         normalised_scores = []
    #         for scores_with_name in task_scores.values():
    #             random_name, random_baseline = scores_with_name[0]
    #             assert random_name == "Random"
    #             normalised_scores_per_task = [(scores - random_baseline) / (100 - random_baseline) for _, scores in
    #                                           scores_with_name]
    #             normalised_scores.append(normalised_scores_per_task)
    #         return np.mean(normalised_scores, axis=0)
    #
    #     def get_average_score(task_scores):
    #         return np.mean(
    #             [[scores for _, scores in scores_with_name] for scores_with_name in task_scores.values()], axis=0)
    #
    #     # Plot average task score
    #     average_task_median_score = get_average_score(task_median_score)
    #     assert len(experiment_names) == len(average_task_median_score)
    #     average_task_media_score_with_name = list(zip(experiment_names, average_task_median_score))
    #     del average_task_median_score
    #     plot_scores_with_name(
    #         median_score_with_name=average_task_media_score_with_name,
    #         max_score=get_average_score(task_max_score),
    #         min_score=get_average_score(task_min_score),
    #         ax=axs[last_ax_id],
    #         title=f"Average of task median scores"
    #     )
    #     last_ax_id += 1
    #
    #     # Plot average of task median normalised scores `normalised_score = (score - random) / (1 - random)`
    #     average_task_normalised_median_score = get_average_normalised_score(task_median_score)
    #     assert len(experiment_names) == len(average_task_normalised_median_score)
    #     average_task_normalised_median_score_with_name = list(
    #         zip(experiment_names, average_task_normalised_median_score))
    #     del average_task_normalised_median_score
    #     plot_scores_with_name(
    #         median_score_with_name=average_task_normalised_median_score_with_name,
    #         max_score=get_average_normalised_score(task_max_score),
    #         min_score=get_average_normalised_score(task_min_score),
    #         ax=axs[last_ax_id],
    #         title=f"Average of task normalised median scores"
    #     )
    #     last_ax_id += 1
    #
    #     axs[last_ax_id -1].legend(bbox_to_anchor=(1, 1), loc="upper left")
    #     for ax in axs[last_ax_id:]:
    #         ax.set_visible(False)


def main():
    args = get_args()

    # Load results
    t0_data = load_t0_results(args.t0_csv_path)
    mtf_data = {
        re.sub(".json", "", json_path): load_json(json_path)
        for json_path in args.json_paths
    }

    plot(mtf_data, t0_data)

    plt.show()
    print("Finished")

if __name__ == "__main__":
    main()
evaluation/results/tr13/results_to_csv.py
ADDED
@@ -0,0 +1,72 @@
#!/usr/bin/env python

# this script converts a merged results json of the form:
#
# "super_glue_rte": {
#   "<prompt name>": {
#     "evaluation": {"accuracy": 0.55, ...},
#     ...
#   },
# },
#
# (or generation results with a top-level "bleu" per prompt)
# into a format expected by a spreadsheet, which is:
#
# dataset         prompt         metric    value
# super_glue_rte  <prompt name>  accuracy  xxx
# super_glue_rte  median         accuracy  xxx
#
# usage:
# results_to_csv.py results.json


import sys
import statistics
import json
import io
import csv

results_file = sys.argv[1]

csv_file = results_file.replace("json", "csv")

print(f"Converting {results_file} to {csv_file}")

with io.open(results_file, 'r', encoding='utf-8') as f:
    raw_results = json.load(f)

results = {}
for ds_name, v in sorted(raw_results.items()):
    results[ds_name.split("/")[-1]] = v

with io.open(csv_file, 'w', encoding='utf-8') as f:

    writer = csv.writer(f)
    writer.writerow(["dataset", "prompt", "metric", "value"])
    medians = []
    for ds_name, v in sorted(results.items()):
        acc_scores, bleu_scores, rouge2_fmeasure = [], [], []
        for prompt_name, res in sorted(v.items()):
            # T0 Eval
            if "evaluation" in res:
                for metric, value in sorted(res["evaluation"].items()):
                    writer.writerow([ds_name, prompt_name, metric, value])
                    if metric == "accuracy":
                        acc_scores.append(value)
            # LM Eval Harness Generation
            elif "bleu" in res:
                # Make sure BLEU is 0-1 not 0-100
                writer.writerow([ds_name, prompt_name, "bleu", res["bleu"] / 100])
                bleu_scores.append(res["bleu"] / 100)

        if acc_scores:
            median = statistics.median(acc_scores)
            medians.append(median)
            writer.writerow([ds_name, "median", "accuracy", median])
        elif bleu_scores:
            median = statistics.median(bleu_scores)
            medians.append(median)
            writer.writerow([ds_name, "median", "bleu", median])
    if medians:
        writer.writerow(["multiple", "average", "multiple", statistics.mean(medians)])
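For reference, a hypothetical results.json covering the two record shapes the script branches on (T0-eval entries carrying an "evaluation" dict, and generation entries carrying a top-level "bleu" score); the dataset names and numbers here are made up for illustration.

import json

results = {
    "super_glue_rte": {
        "can we infer": {"evaluation": {"accuracy": 0.55}},
        "GPT-3 style": {"evaluation": {"accuracy": 0.52}},
    },
    "wmt14_fr_en_en-fr": {
        "a_good_translation": {"bleu": 12.3},
    },
}
with open("results.json", "w") as f:
    json.dump(results, f)

# python results_to_csv.py results.json  ->  writes results.csv with
# per-prompt rows, a per-dataset median row, and a final average-of-medians row.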
evaluation/results/tr13/tzeroeval/evaluate_t0_v100.slurm
ADDED
@@ -0,0 +1,751 @@
#!/bin/bash
#SBATCH --job-name=evaluate_t0
#SBATCH --constraint=v100-32g
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10           # number of cores per tasks
#SBATCH --hint=nomultithread         # we get physical cores not logical
#SBATCH --gres=gpu:1                 # number of gpus
#SBATCH --time 20:00:00              # maximum execution time (HH:MM:SS)
#SBATCH --output=%x-%j.out           # output file name
#SBATCH --account=six@v100
#SBATCH --array=0-164

# VALIDATION:
# --array=0-168

# L1
# --array=0-169

# L2
# --array=0-84

# MT L1
# --array=0-69

# MT L2
# --array=0-89

# XNLIMTHT:
# --array=0-79

set -x -e

source $six_ALL_CCFRWORK/start-py38-pt111
conda activate thomas_t_zero_evaluation

CHECKPOINT_PATH=/gpfsscratch/rech/six/commun/experiments/muennighoff/bloomckpt/350m/bloom-560m

WORKDIR=/gpfswork/rech/six/commun/code/tr13f-6B3-ml-t0

pushd $WORKDIR

OUTPUT_DIR=$CHECKPOINT_PATH/evaluation
mkdir -p $OUTPUT_DIR

# Validation
DATASETS_AND_CONFIGS_VAL=(
head_qa,en,en,"multiple_choice_q_and_a_index_with_context_en",validation
head_qa,en,en,"multiple_choice_q_and_a_en",validation
head_qa,en,en,"multiple_choice_q_and_a_index_en",validation
head_qa,en,en,"multiple_choice_a_and_q_with_context_en",validation
head_qa,en,en,"multiple_choice_a_and_q_en",validation
head_qa,es,en,"multiple_choice_q_and_a_index_with_context_en",validation
head_qa,es,en,"multiple_choice_q_and_a_en",validation
head_qa,es,en,"multiple_choice_q_and_a_index_en",validation
head_qa,es,en,"multiple_choice_a_and_q_with_context_en",validation
head_qa,es,en,"multiple_choice_a_and_q_en",validation
climate_fever,None,None,"first_evidence_and_claim_itemization",test
climate_fever,None,None,"claim_and_all_supporting_evidences",test
climate_fever,None,None,"fifth_evidence_and_claim_itemization",test
climate_fever,None,None,"third_evidence_claim_pair",test
climate_fever,None,None,"second_evidence_and_claim_itemization",test
codah,codah,None,"interrogative_instruction_after_sentence_and_choices",train
codah,codah,None,"affirmative_instruction_before_sentence_and_choices",train
codah,codah,None,"affirmative_instruction_after_sentence_and_choices",train
aqua_rat,raw,None,"select_the_best_option",validation
aqua_rat,raw,None,"answer_quiz",validation
aqua_rat,raw,None,"Answer questions from options",validation
commonsense_qa,None,None,"answer_given_question_without_options",validation
commonsense_qa,None,None,"question_answering",validation
commonsense_qa,None,None,"most_suitable_answer",validation
amazon_reviews_multi,en,en,"prompt_title_to_star",validation
amazon_reviews_multi,en,en,"prompt_review_to_star",validation
amazon_reviews_multi,en,en,"prompt_body_title_to_star",validation
amazon_reviews_multi,zh,en,"prompt_title_to_star",validation
amazon_reviews_multi,zh,en,"prompt_review_to_star",validation
amazon_reviews_multi,zh,en,"prompt_body_title_to_star",validation
amazon_reviews_multi,fr,en,"prompt_title_to_star",validation
amazon_reviews_multi,fr,en,"prompt_review_to_star",validation
amazon_reviews_multi,fr,en,"prompt_body_title_to_star",validation
amazon_reviews_multi,es,en,"prompt_title_to_star",validation
amazon_reviews_multi,es,en,"prompt_review_to_star",validation
amazon_reviews_multi,es,en,"prompt_body_title_to_star",validation
art,None,None,"choose_hypothesis_options",validation
art,None,None,"choose_hypothesis_believable",validation
art,None,None,"choose_hypothesis",validation
art,None,None,"choose_hypothesis_desc",validation
art,None,None,"choose_hypothesis_likely",validation
banking77,None,None,"help_page_topic",test
banking77,None,None,"direct_to_which_department",test
banking77,None,None,"rephrase_as_banking_term",test
blbooksgenre,title_genre_classifiction,None,"multi-choice",train
blbooksgenre,title_genre_classifiction,None,"premise_context_first",train
blbooksgenre,title_genre_classifiction,None,"classify",train
blimp,adjunct_island,None,"grammatical_between_1_2",train
blimp,adjunct_island,None,"grammatical_between_A_B",train
blimp,adjunct_island,None,"grammatical_which_one_1_2",train
blimp,adjunct_island,None,"single_sentence_bad_yes_no",train
blimp,adjunct_island,None,"single_sentence_good_yes_no",train
conv_ai_3,None,None,"clarification_needed",validation
conv_ai_3,None,None,"score_give_number",validation
conv_ai_3,None,None,"ambiguous",validation
conv_ai_3,None,None,"directly_answer",validation
conv_ai_3,None,None,"score_how_much",validation
craigslist_bargains,None,None,"good deal for seller no list price implicit",validation
craigslist_bargains,None,None,"good deal for seller no list price",validation
craigslist_bargains,None,None,"good deal for seller",validation
craigslist_bargains,None,None,"best deal",validation
ecthr_cases,alleged-violation-prediction,None,"implicit_advice_number",validation
ecthr_cases,alleged-violation-prediction,None,"ecthr_alleged_articles_declaration_at_end",validation
ecthr_cases,alleged-violation-prediction,None,"ecthr_alleged_articles_question_at_start",validation
ecthr_cases,alleged-violation-prediction,None,"implicit_judgment_paragraph",validation
ecthr_cases,alleged-violation-prediction,None,"confirm number of violated articles",validation
emo,None,None,"persons_describe",validation
emo,None,None,"final_message",validation
emo,None,None,"what_emotion_do_you_think",validation
emo,None,None,"emotional_state",validation
emo,None,None,"dialogue_between",validation
emotion,None,None,"choose_the_best_emotion_label",test
emotion,None,None,"reply_with_emoation_label",test
emotion,None,None,"answer_with_class_label",test
emotion,None,None,"answer_question_with_emotion_label",test
financial_phrasebank,sentences_allagree,None,"share_price_option",train
financial_phrasebank,sentences_allagree,None,"sentiment",train
financial_phrasebank,sentences_allagree,None,"word_comes_to_mind",train
financial_phrasebank,sentences_allagree,None,"complementary_industries",train
financial_phrasebank,sentences_allagree,None,"bullish_neutral_bearish",train
glue,cola,None,"Make sense yes no",validation
glue,cola,None,"is_this_correct",validation
glue,cola,None,"editing",validation
glue,cola,None,"Following sentence acceptable",validation
glue,cola,None,"Previous sentence acceptable",validation
glue,sst2,None,"positive negative after",validation
glue,sst2,None,"review",validation
glue,sst2,None,"said",validation
glue,sst2,None,"following positive negative",validation
glue,sst2,None,"happy or mad",validation
health_fact,None,None,"claim_veracity_classification_after_reading_I_believe",validation
health_fact,None,None,"claim_explanation_classification",validation
health_fact,None,None,"claim_veracity_classification_tell_me",validation
hlgd,None,None,"is_same_event_with_time_interrogative_related",validation
hlgd,None,None,"is_same_event_interrogative_talk",validation
hlgd,None,None,"is_same_event_with_time_interrogative_talk",validation
hlgd,None,None,"is_same_event_refer",validation
hlgd,None,None,"is_same_event_editor_asks",validation
hyperpartisan_news_detection,byarticle,None,"consider_does_it_follow_a_hyperpartisan_argumentation",train
hyperpartisan_news_detection,byarticle,None,"follows_hyperpartisan_argumentation",train
hyperpartisan_news_detection,byarticle,None,"consume_with_caution",train
hyperpartisan_news_detection,byarticle,None,"extreme_left_wing_or_right_wing",train
hyperpartisan_news_detection,byarticle,None,"consider_it_exhibits_extreme_one_sidedness",train
liar,None,None,"Given statement guess category",validation
lince,sa_spaeng,None,"original poster expressed sentiment",validation
lince,sa_spaeng,None,"sentiment trying to express",validation
lince,sa_spaeng,None,"express sentiment",validation
lince,sa_spaeng,None,"negation template",validation
lince,sa_spaeng,None,"the author seem",validation
math_qa,None,None,"choose_correct_og",test
math_qa,None,None,"pick_the_correct",test
math_qa,None,None,"first_choice_then_problem",test
math_qa,None,None,"problem_set_type",test
math_qa,None,None,"gre_problem",test
movie_rationales,None,None,"Standard binary sentiment analysis",validation
movie_rationales,None,None,"Evidences sentiment classification",validation
movie_rationales,None,None,"Evidences + review",validation
movie_rationales,None,None,"Generate evidences and sentiment",validation
mwsc,None,None,"in-the-sentence-question-first",validation
mwsc,None,None,"what-think",validation
mwsc,None,None,"in-the-sentence",validation
mwsc,None,None,"options-or",validation
mwsc,None,None,"is-correct",validation
poem_sentiment,None,None,"positive_or_negative_sentiment_variation_2",validation
poem_sentiment,None,None,"question_answer_format",validation
poem_sentiment,None,None,"guess_sentiment_without_options_variation_1",validation
poem_sentiment,None,None,"positive_or_negative_sentiment_variation_1",validation
poem_sentiment,None,None,"most_appropriate_sentiment",validation
onestop_english,None,None,"esl_context",train
onestop_english,None,None,"ara_context",train
onestop_english,None,None,"determine_reading_level_from_the_first_three_sentences",train
onestop_english,None,None,"esl_variation",train
onestop_english,None,None,"assess",train
pubmed_qa,pqa_labeled,None,"Long Answer to Final Decision",train
pubmed_qa,pqa_labeled,None,"Question Answering (Short)",train
riddle_sense,None,None,"most_suitable_answer",validation
riddle_sense,None,None,"answer_given_question_without_options",validation
riddle_sense,None,None,"question_to_answer_index",validation
riddle_sense,None,None,"question_answering",validation
scicite,None,None,"Classify intent w/section (select choice)",validation
scicite,None,None,"Classify intent (choices first)",validation
scicite,None,None,"Classify intent (select choice)",validation
scicite,None,None,"Classify intent",validation
scicite,None,None,"can_describe",validation
selqa,answer_selection_analysis,None,"is-he-talking-about",validation
selqa,answer_selection_analysis,None,"would-make-sense-qu-rand",validation
selqa,answer_selection_analysis,None,"make-sense-rand",validation
selqa,answer_selection_analysis,None,"which-answer-1st-vs-random",validation
snips_built_in_intents,None,None,"voice_intent",train
snips_built_in_intents,None,None,"categorize_query",train
snips_built_in_intents,None,None,"intent_query",train
snips_built_in_intents,None,None,"categorize_query_brief",train
snips_built_in_intents,None,None,"query_intent",train
)

DATASETS_AND_CONFIGS_L1=(
super_glue,copa,None,"best_option",validation
super_glue,copa,None,"C1 or C2? premise, so/because…",validation
super_glue,copa,None,"i_am_hesitating",validation
super_glue,copa,None,"cause_effect",validation
super_glue,copa,None,"plausible_alternatives",validation
super_glue,rte,None,"MNLI crowdsource",validation
super_glue,rte,None,"GPT-3 style",validation
super_glue,rte,None,"does it follow that",validation
super_glue,rte,None,"should assume",validation
super_glue,rte,None,"guaranteed true",validation
anli,dev_r1,None,"guaranteed/possible/impossible",dev_r1
anli,dev_r1,None,"MNLI crowdsource",dev_r1
anli,dev_r1,None,"GPT-3 style",dev_r1
anli,dev_r1,None,"justified in saying",dev_r1
anli,dev_r1,None,"can we infer",dev_r1
anli,dev_r2,None,"guaranteed/possible/impossible",dev_r2
anli,dev_r2,None,"MNLI crowdsource",dev_r2
anli,dev_r2,None,"GPT-3 style",dev_r2
anli,dev_r2,None,"justified in saying",dev_r2
anli,dev_r2,None,"can we infer",dev_r2
anli,dev_r3,None,"guaranteed/possible/impossible",dev_r3
anli,dev_r3,None,"MNLI crowdsource",dev_r3
anli,dev_r3,None,"GPT-3 style",dev_r3
anli,dev_r3,None,"justified in saying",dev_r3
anli,dev_r3,None,"can we infer",dev_r3
super_glue,cb,None,"guaranteed/possible/impossible",validation
super_glue,cb,None,"MNLI crowdsource",validation
super_glue,cb,None,"GPT-3 style",validation
super_glue,cb,None,"justified in saying",validation
super_glue,cb,None,"can we infer",validation
winogrande,winogrande_xl,None,"underscore refer to",validation
winogrande,winogrande_xl,None,"Replace",validation
winogrande,winogrande_xl,None,"stand for",validation
winogrande,winogrande_xl,None,"does underscore refer to",validation
winogrande,winogrande_xl,None,"True or False",validation
story_cloze,2016,None,"Story Continuation and Options",validation
story_cloze,2016,None,"Answer Given options",validation
story_cloze,2016,None,"Novel Correct Ending",validation
story_cloze,2016,None,"Generate Ending",validation
story_cloze,2016,None,"Choose Story Ending",validation
Muennighoff/xstory_cloze,ar,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,ar,en,"Answer Given options",validation
Muennighoff/xstory_cloze,ar,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,ar,en,"Generate Ending",validation
Muennighoff/xstory_cloze,ar,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,es,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,es,en,"Answer Given options",validation
Muennighoff/xstory_cloze,es,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,es,en,"Generate Ending",validation
Muennighoff/xstory_cloze,es,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,eu,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,eu,en,"Answer Given options",validation
Muennighoff/xstory_cloze,eu,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,eu,en,"Generate Ending",validation
Muennighoff/xstory_cloze,eu,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,id,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,id,en,"Answer Given options",validation
Muennighoff/xstory_cloze,id,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,id,en,"Generate Ending",validation
Muennighoff/xstory_cloze,id,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,hi,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,hi,en,"Answer Given options",validation
Muennighoff/xstory_cloze,hi,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,hi,en,"Generate Ending",validation
Muennighoff/xstory_cloze,hi,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,sw,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,sw,en,"Answer Given options",validation
Muennighoff/xstory_cloze,sw,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,sw,en,"Generate Ending",validation
Muennighoff/xstory_cloze,sw,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,te,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,te,en,"Answer Given options",validation
Muennighoff/xstory_cloze,te,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,te,en,"Generate Ending",validation
Muennighoff/xstory_cloze,te,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,zh,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,zh,en,"Answer Given options",validation
Muennighoff/xstory_cloze,zh,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,zh,en,"Generate Ending",validation
Muennighoff/xstory_cloze,zh,en,"Choose Story Ending",validation
xnli,ar,en,"guaranteed/possible/impossible",validation
xnli,ar,en,"MNLI crowdsource",validation
xnli,ar,en,"GPT-3 style",validation
xnli,ar,en,"justified in saying",validation
xnli,ar,en,"can we infer",validation
xnli,en,en,"guaranteed/possible/impossible",validation
xnli,en,en,"MNLI crowdsource",validation
xnli,en,en,"GPT-3 style",validation
xnli,en,en,"justified in saying",validation
xnli,en,en,"can we infer",validation
xnli,es,en,"guaranteed/possible/impossible",validation
xnli,es,en,"MNLI crowdsource",validation
xnli,es,en,"GPT-3 style",validation
xnli,es,en,"justified in saying",validation
xnli,es,en,"can we infer",validation
xnli,fr,en,"guaranteed/possible/impossible",validation
xnli,fr,en,"MNLI crowdsource",validation
xnli,fr,en,"GPT-3 style",validation
xnli,fr,en,"justified in saying",validation
xnli,fr,en,"can we infer",validation
xnli,hi,en,"guaranteed/possible/impossible",validation
xnli,hi,en,"MNLI crowdsource",validation
xnli,hi,en,"GPT-3 style",validation
xnli,hi,en,"justified in saying",validation
xnli,hi,en,"can we infer",validation
xnli,sw,en,"guaranteed/possible/impossible",validation
xnli,sw,en,"MNLI crowdsource",validation
xnli,sw,en,"GPT-3 style",validation
xnli,sw,en,"justified in saying",validation
xnli,sw,en,"can we infer",validation
xnli,ur,en,"guaranteed/possible/impossible",validation
xnli,ur,en,"MNLI crowdsource",validation
xnli,ur,en,"GPT-3 style",validation
xnli,ur,en,"justified in saying",validation
xnli,ur,en,"can we infer",validation
xnli,vi,en,"guaranteed/possible/impossible",validation
xnli,vi,en,"MNLI crowdsource",validation
xnli,vi,en,"GPT-3 style",validation
xnli,vi,en,"justified in saying",validation
xnli,vi,en,"can we infer",validation
xnli,zh,en,"guaranteed/possible/impossible",validation
xnli,zh,en,"MNLI crowdsource",validation
xnli,zh,en,"GPT-3 style",validation
xnli,zh,en,"justified in saying",validation
xnli,zh,en,"can we infer",validation
xcopa,id,en,"best_option",validation
xcopa,id,en,"C1 or C2? premise, so/because…",validation
xcopa,id,en,"i_am_hesitating",validation
xcopa,id,en,"cause_effect",validation
xcopa,id,en,"plausible_alternatives",validation
xcopa,sw,en,"best_option",validation
xcopa,sw,en,"C1 or C2? premise, so/because…",validation
xcopa,sw,en,"i_am_hesitating",validation
xcopa,sw,en,"cause_effect",validation
xcopa,sw,en,"plausible_alternatives",validation
xcopa,ta,en,"best_option",validation
xcopa,ta,en,"C1 or C2? premise, so/because…",validation
xcopa,ta,en,"i_am_hesitating",validation
xcopa,ta,en,"cause_effect",validation
xcopa,ta,en,"plausible_alternatives",validation
xcopa,vi,en,"best_option",validation
xcopa,vi,en,"C1 or C2? premise, so/because…",validation
xcopa,vi,en,"i_am_hesitating",validation
xcopa,vi,en,"cause_effect",validation
xcopa,vi,en,"plausible_alternatives",validation
xcopa,zh,en,"best_option",validation
xcopa,zh,en,"C1 or C2? premise, so/because…",validation
xcopa,zh,en,"i_am_hesitating",validation
xcopa,zh,en,"cause_effect",validation
xcopa,zh,en,"plausible_alternatives",validation
Muennighoff/xwinograd,en,en,"underscore refer to",test
Muennighoff/xwinograd,en,en,"Replace",test
Muennighoff/xwinograd,en,en,"stand for",test
Muennighoff/xwinograd,en,en,"does underscore refer to",test
Muennighoff/xwinograd,en,en,"True or False",test
Muennighoff/xwinograd,fr,en,"underscore refer to",test
Muennighoff/xwinograd,fr,en,"Replace",test
Muennighoff/xwinograd,fr,en,"stand for",test
Muennighoff/xwinograd,fr,en,"does underscore refer to",test
Muennighoff/xwinograd,fr,en,"True or False",test
Muennighoff/xwinograd,pt,en,"underscore refer to",test
Muennighoff/xwinograd,pt,en,"Replace",test
Muennighoff/xwinograd,pt,en,"stand for",test
Muennighoff/xwinograd,pt,en,"does underscore refer to",test
Muennighoff/xwinograd,pt,en,"True or False",test
Muennighoff/xwinograd,zh,en,"underscore refer to",test
Muennighoff/xwinograd,zh,en,"Replace",test
Muennighoff/xwinograd,zh,en,"stand for",test
Muennighoff/xwinograd,zh,en,"does underscore refer to",test
Muennighoff/xwinograd,zh,en,"True or False",test
)

DATASETS_AND_CONFIGS_L2=(
Muennighoff/xstory_cloze,ru,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,ru,en,"Answer Given options",validation
Muennighoff/xstory_cloze,ru,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,ru,en,"Generate Ending",validation
Muennighoff/xstory_cloze,ru,en,"Choose Story Ending",validation
Muennighoff/xstory_cloze,my,en,"Story Continuation and Options",validation
Muennighoff/xstory_cloze,my,en,"Answer Given options",validation
Muennighoff/xstory_cloze,my,en,"Novel Correct Ending",validation
Muennighoff/xstory_cloze,my,en,"Generate Ending",validation
Muennighoff/xstory_cloze,my,en,"Choose Story Ending",validation
xnli,bg,en,"guaranteed/possible/impossible",validation
xnli,bg,en,"MNLI crowdsource",validation
xnli,bg,en,"GPT-3 style",validation
xnli,bg,en,"justified in saying",validation
xnli,bg,en,"can we infer",validation
xnli,de,en,"guaranteed/possible/impossible",validation
xnli,de,en,"MNLI crowdsource",validation
xnli,de,en,"GPT-3 style",validation
xnli,de,en,"justified in saying",validation
xnli,de,en,"can we infer",validation
xnli,el,en,"guaranteed/possible/impossible",validation
xnli,el,en,"MNLI crowdsource",validation
xnli,el,en,"GPT-3 style",validation
xnli,el,en,"justified in saying",validation
xnli,el,en,"can we infer",validation
xnli,ru,en,"guaranteed/possible/impossible",validation
xnli,ru,en,"MNLI crowdsource",validation
xnli,ru,en,"GPT-3 style",validation
xnli,ru,en,"justified in saying",validation
xnli,ru,en,"can we infer",validation
xnli,th,en,"guaranteed/possible/impossible",validation
xnli,th,en,"MNLI crowdsource",validation
xnli,th,en,"GPT-3 style",validation
xnli,th,en,"justified in saying",validation
xnli,th,en,"can we infer",validation
xnli,tr,en,"guaranteed/possible/impossible",validation
xnli,tr,en,"MNLI crowdsource",validation
xnli,tr,en,"GPT-3 style",validation
xnli,tr,en,"justified in saying",validation
xnli,tr,en,"can we infer",validation
Muennighoff/xwinograd,ru,en,"underscore refer to",test
Muennighoff/xwinograd,ru,en,"Replace",test
Muennighoff/xwinograd,ru,en,"stand for",test
Muennighoff/xwinograd,ru,en,"does underscore refer to",test
Muennighoff/xwinograd,ru,en,"True or False",test
Muennighoff/xwinograd,jp,en,"underscore refer to",test
Muennighoff/xwinograd,jp,en,"Replace",test
Muennighoff/xwinograd,jp,en,"stand for",test
Muennighoff/xwinograd,jp,en,"does underscore refer to",test
Muennighoff/xwinograd,jp,en,"True or False",test
xcopa,et,en,"best_option",validation
xcopa,et,en,"C1 or C2? premise, so/because…",validation
xcopa,et,en,"i_am_hesitating",validation
xcopa,et,en,"cause_effect",validation
xcopa,et,en,"plausible_alternatives",validation
xcopa,ht,en,"best_option",validation
xcopa,ht,en,"C1 or C2? premise, so/because…",validation
xcopa,ht,en,"i_am_hesitating",validation
xcopa,ht,en,"cause_effect",validation
xcopa,ht,en,"plausible_alternatives",validation
xcopa,it,en,"best_option",validation
xcopa,it,en,"C1 or C2? premise, so/because…",validation
xcopa,it,en,"i_am_hesitating",validation
xcopa,it,en,"cause_effect",validation
xcopa,it,en,"plausible_alternatives",validation
xcopa,qu,en,"best_option",validation
xcopa,qu,en,"C1 or C2? premise, so/because…",validation
xcopa,qu,en,"i_am_hesitating",validation
xcopa,qu,en,"cause_effect",validation
xcopa,qu,en,"plausible_alternatives",validation
xcopa,th,en,"best_option",validation
xcopa,th,en,"C1 or C2? premise, so/because…",validation
xcopa,th,en,"i_am_hesitating",validation
xcopa,th,en,"cause_effect",validation
xcopa,th,en,"plausible_alternatives",validation
xcopa,tr,en,"best_option",validation
xcopa,tr,en,"C1 or C2? premise, so/because…",validation
xcopa,tr,en,"i_am_hesitating",validation
xcopa,tr,en,"cause_effect",validation
xcopa,tr,en,"plausible_alternatives",validation
)

DATASETS_AND_CONFIGS_MT_L1=(
Muennighoff/xstory_cloze,ar,ar,"Story Continuation and Options_armt",validation
Muennighoff/xstory_cloze,ar,ar,"Answer Given options_armt",validation
Muennighoff/xstory_cloze,ar,ar,"Novel Correct Ending_armt",validation
Muennighoff/xstory_cloze,ar,ar,"Generate Ending_armt",validation
Muennighoff/xstory_cloze,ar,ar,"Choose Story Ending_armt",validation
Muennighoff/xstory_cloze,es,es,"Story Continuation and Options_esmt",validation
Muennighoff/xstory_cloze,es,es,"Answer Given options_esmt",validation
Muennighoff/xstory_cloze,es,es,"Novel Correct Ending_esmt",validation
Muennighoff/xstory_cloze,es,es,"Generate Ending_esmt",validation
Muennighoff/xstory_cloze,es,es,"Choose Story Ending_esmt",validation
Muennighoff/xstory_cloze,eu,eu,"Story Continuation and Options_eumt",validation
Muennighoff/xstory_cloze,eu,eu,"Answer Given options_eumt",validation
Muennighoff/xstory_cloze,eu,eu,"Novel Correct Ending_eumt",validation
Muennighoff/xstory_cloze,eu,eu,"Generate Ending_eumt",validation
Muennighoff/xstory_cloze,eu,eu,"Choose Story Ending_eumt",validation
Muennighoff/xstory_cloze,id,id,"Story Continuation and Options_idmt",validation
Muennighoff/xstory_cloze,id,id,"Answer Given options_idmt",validation
Muennighoff/xstory_cloze,id,id,"Novel Correct Ending_idmt",validation
Muennighoff/xstory_cloze,id,id,"Generate Ending_idmt",validation
Muennighoff/xstory_cloze,id,id,"Choose Story Ending_idmt",validation
Muennighoff/xstory_cloze,hi,hi,"Story Continuation and Options_himt",validation
Muennighoff/xstory_cloze,hi,hi,"Answer Given options_himt",validation
Muennighoff/xstory_cloze,hi,hi,"Novel Correct Ending_himt",validation
Muennighoff/xstory_cloze,hi,hi,"Generate Ending_himt",validation
Muennighoff/xstory_cloze,hi,hi,"Choose Story Ending_himt",validation
Muennighoff/xstory_cloze,sw,sw,"Story Continuation and Options_swmt",validation
Muennighoff/xstory_cloze,sw,sw,"Answer Given options_swmt",validation
Muennighoff/xstory_cloze,sw,sw,"Novel Correct Ending_swmt",validation
Muennighoff/xstory_cloze,sw,sw,"Generate Ending_swmt",validation
Muennighoff/xstory_cloze,sw,sw,"Choose Story Ending_swmt",validation
Muennighoff/xstory_cloze,te,te,"Story Continuation and Options_temt",validation
Muennighoff/xstory_cloze,te,te,"Answer Given options_temt",validation
Muennighoff/xstory_cloze,te,te,"Novel Correct Ending_temt",validation
Muennighoff/xstory_cloze,te,te,"Generate Ending_temt",validation
Muennighoff/xstory_cloze,te,te,"Choose Story Ending_temt",validation
Muennighoff/xstory_cloze,zh,zh,"Story Continuation and Options_zhmt",validation
Muennighoff/xstory_cloze,zh,zh,"Answer Given options_zhmt",validation
Muennighoff/xstory_cloze,zh,zh,"Novel Correct Ending_zhmt",validation
Muennighoff/xstory_cloze,zh,zh,"Generate Ending_zhmt",validation
Muennighoff/xstory_cloze,zh,zh,"Choose Story Ending_zhmt",validation
Muennighoff/xwinograd,fr,fr,"underscore refer to_frmt",test
Muennighoff/xwinograd,fr,fr,"Replace_frmt",test
Muennighoff/xwinograd,fr,fr,"stand for_frmt",test
Muennighoff/xwinograd,fr,fr,"does underscore refer to_frmt",test
Muennighoff/xwinograd,fr,fr,"True or False_frmt",test
Muennighoff/xwinograd,pt,pt,"underscore refer to_ptmt",test
Muennighoff/xwinograd,pt,pt,"Replace_ptmt",test
Muennighoff/xwinograd,pt,pt,"stand for_ptmt",test
Muennighoff/xwinograd,pt,pt,"does underscore refer to_ptmt",test
Muennighoff/xwinograd,pt,pt,"True or False_ptmt",test
Muennighoff/xwinograd,zh,zh,"underscore refer to_zhmt",test
Muennighoff/xwinograd,zh,zh,"Replace_zhmt",test
Muennighoff/xwinograd,zh,zh,"stand for_zhmt",test
Muennighoff/xwinograd,zh,zh,"does underscore refer to_zhmt",test
Muennighoff/xwinograd,zh,zh,"True or False_zhmt",test
xcopa,id,id,"best_option_idmt",validation
xcopa,id,id,"C1 or C2? premise_idmt",validation
xcopa,id,id,"i_am_hesitating_idmt",validation
xcopa,id,id,"cause_effect_idmt",validation
xcopa,id,id,"plausible_alternatives_idmt",validation
xcopa,sw,sw,"best_option_swmt",validation
xcopa,sw,sw,"C1 or C2? premise_swmt",validation
xcopa,sw,sw,"i_am_hesitating_swmt",validation
xcopa,sw,sw,"cause_effect_swmt",validation
xcopa,sw,sw,"plausible_alternatives_swmt",validation
xcopa,ta,ta,"best_option_tamt",validation
xcopa,ta,ta,"C1 or C2? premise_tamt",validation
xcopa,ta,ta,"i_am_hesitating_tamt",validation
xcopa,ta,ta,"cause_effect_tamt",validation
xcopa,ta,ta,"plausible_alternatives_tamt",validation
xcopa,vi,vi,"best_option_vimt",validation
xcopa,vi,vi,"C1 or C2? premise_vimt",validation
xcopa,vi,vi,"i_am_hesitating_vimt",validation
xcopa,vi,vi,"cause_effect_vimt",validation
xcopa,vi,vi,"plausible_alternatives_vimt",validation
xcopa,zh,zh,"best_option_zhmt",validation
xcopa,zh,zh,"C1 or C2? premise_zhmt",validation
xcopa,zh,zh,"i_am_hesitating_zhmt",validation
xcopa,zh,zh,"cause_effect_zhmt",validation
xcopa,zh,zh,"plausible_alternatives_zhmt",validation
)

DATASETS_AND_CONFIGS_ZHHT=(
Muennighoff/xstory_cloze,zh,zh,"Story Continuation and Options_zhht",validation
Muennighoff/xstory_cloze,zh,zh,"Answer Given options_zhht",validation
Muennighoff/xstory_cloze,zh,zh,"Novel Correct Ending_zhht",validation
Muennighoff/xstory_cloze,zh,zh,"Generate Ending_zhht",validation
Muennighoff/xstory_cloze,zh,zh,"Choose Story Ending_zhht",validation
Muennighoff/xwinograd,zh,zh,"underscore refer to_zhht",test
Muennighoff/xwinograd,zh,zh,"Replace_zhht",test
Muennighoff/xwinograd,zh,zh,"stand for_zhht",test
Muennighoff/xwinograd,zh,zh,"does underscore refer to_zhht",test
Muennighoff/xwinograd,zh,zh,"True or False_zhht",test
xcopa,zh,zh,"best_option_zhht",validation
xcopa,zh,zh,"C1 or C2? premise_zhht",validation
xcopa,zh,zh,"i_am_hesitating_zhht",validation
xcopa,zh,zh,"cause_effect_zhht",validation
xcopa,zh,zh,"plausible_alternatives_zhht",validation
)

DATASETS_AND_CONFIGS_XNLIHTMT=(
xnli,ar,ar,"guaranteed/possible/impossible_arht",validation
xnli,ar,ar,"MNLI crowdsource_arht",validation
xnli,ar,ar,"GPT-3 style_arht",validation
xnli,ar,ar,"justified in saying_arht",validation
xnli,ar,ar,"can we infer_arht",validation
xnli,ar,ar,"guaranteed/possible/impossible_armt",validation
xnli,ar,ar,"MNLI crowdsource_armt",validation
xnli,ar,ar,"GPT-3 style_armt",validation
xnli,ar,ar,"justified in saying_armt",validation
xnli,ar,ar,"can we infer_armt",validation
xnli,es,es,"guaranteed/possible/impossible_esht",validation
xnli,es,es,"MNLI crowdsource_esht",validation
xnli,es,es,"GPT-3 style_esht",validation
xnli,es,es,"justified in saying_esht",validation
xnli,es,es,"can we infer_esht",validation
xnli,es,es,"guaranteed/possible/impossible_esmt",validation
xnli,es,es,"MNLI crowdsource_esmt",validation
xnli,es,es,"GPT-3 style_esmt",validation
xnli,es,es,"justified in saying_esmt",validation
xnli,es,es,"can we infer_esmt",validation
xnli,fr,fr,"guaranteed/possible/impossible_frht",validation
xnli,fr,fr,"MNLI crowdsource_frht",validation
xnli,fr,fr,"GPT-3 style_frht",validation
xnli,fr,fr,"justified in saying_frht",validation
xnli,fr,fr,"can we infer_frht",validation
xnli,fr,fr,"guaranteed/possible/impossible_frmt",validation
xnli,fr,fr,"MNLI crowdsource_frmt",validation
xnli,fr,fr,"GPT-3 style_frmt",validation
xnli,fr,fr,"justified in saying_frmt",validation
xnli,fr,fr,"can we infer_frmt",validation
xnli,hi,hi,"guaranteed/possible/impossible_hiht",validation
xnli,hi,hi,"MNLI crowdsource_hiht",validation
xnli,hi,hi,"GPT-3 style_hiht",validation
xnli,hi,hi,"justified in saying_hiht",validation
xnli,hi,hi,"can we infer_hiht",validation
xnli,hi,hi,"guaranteed/possible/impossible_himt",validation
xnli,hi,hi,"MNLI crowdsource_himt",validation
xnli,hi,hi,"GPT-3 style_himt",validation
xnli,hi,hi,"justified in saying_himt",validation
xnli,hi,hi,"can we infer_himt",validation
xnli,ur,ur,"guaranteed/possible/impossible_urht",validation
xnli,ur,ur,"MNLI crowdsource_urht",validation
xnli,ur,ur,"GPT-3 style_urht",validation
xnli,ur,ur,"justified in saying_urht",validation
xnli,ur,ur,"can we infer_urht",validation
xnli,ur,ur,"guaranteed/possible/impossible_urmt",validation
xnli,ur,ur,"MNLI crowdsource_urmt",validation
xnli,ur,ur,"GPT-3 style_urmt",validation
xnli,ur,ur,"justified in saying_urmt",validation
xnli,ur,ur,"can we infer_urmt",validation
xnli,sw,sw,"guaranteed/possible/impossible_swht",validation
xnli,sw,sw,"MNLI crowdsource_swht",validation
xnli,sw,sw,"GPT-3 style_swht",validation
xnli,sw,sw,"justified in saying_swht",validation
xnli,sw,sw,"can we infer_swht",validation
xnli,sw,sw,"guaranteed/possible/impossible_swmt",validation
xnli,sw,sw,"MNLI crowdsource_swmt",validation
xnli,sw,sw,"GPT-3 style_swmt",validation
xnli,sw,sw,"justified in saying_swmt",validation
xnli,sw,sw,"can we infer_swmt",validation
xnli,vi,vi,"guaranteed/possible/impossible_viht",validation
xnli,vi,vi,"MNLI crowdsource_viht",validation
xnli,vi,vi,"GPT-3 style_viht",validation
|
| 624 |
+
xnli,vi,vi,"justified in saying_viht",validation
|
| 625 |
+
xnli,vi,vi,"can we infer_viht",validation
|
| 626 |
+
xnli,vi,vi,"guaranteed/possible/impossible_vimt",validation
|
| 627 |
+
xnli,vi,vi,"MNLI crowdsource_vimt",validation
|
| 628 |
+
xnli,vi,vi,"GPT-3 style_vimt",validation
|
| 629 |
+
xnli,vi,vi,"justified in saying_vimt",validation
|
| 630 |
+
xnli,vi,vi,"can we infer_vimt",validation
|
| 631 |
+
xnli,zh,zh,"guaranteed/possible/impossible_zhht",validation
|
| 632 |
+
xnli,zh,zh,"MNLI crowdsource_zhht",validation
|
| 633 |
+
xnli,zh,zh,"GPT-3 style_zhht",validation
|
| 634 |
+
xnli,zh,zh,"justified in saying_zhht",validation
|
| 635 |
+
xnli,zh,zh,"can we infer_zhht",validation
|
| 636 |
+
xnli,zh,zh,"guaranteed/possible/impossible_zhmt",validation
|
| 637 |
+
xnli,zh,zh,"MNLI crowdsource_zhmt",validation
|
| 638 |
+
xnli,zh,zh,"GPT-3 style_zhmt",validation
|
| 639 |
+
xnli,zh,zh,"justified in saying_zhmt",validation
|
| 640 |
+
xnli,zh,zh,"can we infer_zhmt",validation
|
| 641 |
+
)
|
| 642 |
+
|
| 643 |
+
DATASETS_AND_CONFIGS_MT_L2=(
|
| 644 |
+
Muennighoff/xstory_cloze,my,my,"Story Continuation and Options_mymt",validation
|
| 645 |
+
Muennighoff/xstory_cloze,my,my,"Answer Given options_mymt",validation
|
| 646 |
+
Muennighoff/xstory_cloze,my,my,"Novel Correct Ending_mymt",validation
|
| 647 |
+
Muennighoff/xstory_cloze,my,my,"Generate Ending_mymt",validation
|
| 648 |
+
Muennighoff/xstory_cloze,my,my,"Choose Story Ending_mymt",validation
|
| 649 |
+
Muennighoff/xstory_cloze,ru,ru,"Story Continuation and Options_rumt",validation
|
| 650 |
+
Muennighoff/xstory_cloze,ru,ru,"Answer Given options_rumt",validation
|
| 651 |
+
Muennighoff/xstory_cloze,ru,ru,"Novel Correct Ending_rumt",validation
|
| 652 |
+
Muennighoff/xstory_cloze,ru,ru,"Generate Ending_rumt",validation
|
| 653 |
+
Muennighoff/xstory_cloze,ru,ru,"Choose Story Ending_rumt",validation
|
| 654 |
+
Muennighoff/xstory_cloze,sw,sw,"Story Continuation and Options_swmt",validation
|
| 655 |
+
Muennighoff/xstory_cloze,sw,sw,"Answer Given options_swmt",validation
|
| 656 |
+
Muennighoff/xstory_cloze,sw,sw,"Novel Correct Ending_swmt",validation
|
| 657 |
+
Muennighoff/xstory_cloze,sw,sw,"Generate Ending_swmt",validation
|
| 658 |
+
Muennighoff/xstory_cloze,sw,sw,"Choose Story Ending_swmt",validation
|
| 659 |
+
Muennighoff/xstory_cloze,te,te,"Story Continuation and Options_temt",validation
|
| 660 |
+
Muennighoff/xstory_cloze,te,te,"Answer Given options_temt",validation
|
| 661 |
+
Muennighoff/xstory_cloze,te,te,"Novel Correct Ending_temt",validation
|
| 662 |
+
Muennighoff/xstory_cloze,te,te,"Generate Ending_temt",validation
|
| 663 |
+
Muennighoff/xstory_cloze,te,te,"Choose Story Ending_temt",validation
|
| 664 |
+
Muennighoff/xwinograd,jp,jp,"underscore refer to_jpmt",test
|
| 665 |
+
Muennighoff/xwinograd,jp,jp,"Replace_jpmt",test
|
| 666 |
+
Muennighoff/xwinograd,jp,jp,"stand for_jpmt",test
|
| 667 |
+
Muennighoff/xwinograd,jp,jp,"does underscore refer to_jpmt",test
|
| 668 |
+
Muennighoff/xwinograd,jp,jp,"True or False_jpmt",test
|
| 669 |
+
Muennighoff/xwinograd,ru,ru,"underscore refer to_rumt",test
|
| 670 |
+
Muennighoff/xwinograd,ru,ru,"Replace_rumt",test
|
| 671 |
+
Muennighoff/xwinograd,ru,ru,"stand for_rumt",test
|
| 672 |
+
Muennighoff/xwinograd,ru,ru,"does underscore refer to_rumt",test
|
| 673 |
+
Muennighoff/xwinograd,ru,ru,"True or False_rumt",test
|
| 674 |
+
xcopa,et,et,"best_option_etmt",validation
|
| 675 |
+
xcopa,et,et,"C1 or C2? premise_etmt",validation
|
| 676 |
+
xcopa,et,et,"i_am_hesitating_etmt",validation
|
| 677 |
+
xcopa,et,et,"cause_effect_etmt",validation
|
| 678 |
+
xcopa,et,et,"plausible_alternatives_etmt",validation
|
| 679 |
+
xcopa,ht,ht,"best_option_htmt",validation
|
| 680 |
+
xcopa,ht,ht,"C1 or C2? premise_htmt",validation
|
| 681 |
+
xcopa,ht,ht,"i_am_hesitating_htmt",validation
|
| 682 |
+
xcopa,ht,ht,"cause_effect_htmt",validation
|
| 683 |
+
xcopa,ht,ht,"plausible_alternatives_htmt",validation
|
| 684 |
+
xcopa,it,it,"best_option_itmt",validation
|
| 685 |
+
xcopa,it,it,"C1 or C2? premise_itmt",validation
|
| 686 |
+
xcopa,it,it,"i_am_hesitating_itmt",validation
|
| 687 |
+
xcopa,it,it,"cause_effect_itmt",validation
|
| 688 |
+
xcopa,it,it,"plausible_alternatives_itmt",validation
|
| 689 |
+
xcopa,qu,qu,"best_option_qumt",validation
|
| 690 |
+
xcopa,qu,qu,"C1 or C2? premise_qumt",validation
|
| 691 |
+
xcopa,qu,qu,"i_am_hesitating_qumt",validation
|
| 692 |
+
xcopa,qu,qu,"cause_effect_qumt",validation
|
| 693 |
+
xcopa,qu,qu,"plausible_alternatives_qumt",validation
|
| 694 |
+
xcopa,th,th,"best_option_thmt",validation
|
| 695 |
+
xcopa,th,th,"C1 or C2? premise_thmt",validation
|
| 696 |
+
xcopa,th,th,"i_am_hesitating_thmt",validation
|
| 697 |
+
xcopa,th,th,"cause_effect_thmt",validation
|
| 698 |
+
xcopa,th,th,"plausible_alternatives_thmt",validation
|
| 699 |
+
xcopa,tr,tr,"best_option_trmt",validation
|
| 700 |
+
xcopa,tr,tr,"C1 or C2? premise_trmt",validation
|
| 701 |
+
xcopa,tr,tr,"i_am_hesitating_trmt",validation
|
| 702 |
+
xcopa,tr,tr,"cause_effect_trmt",validation
|
| 703 |
+
xcopa,tr,tr,"plausible_alternatives_trmt",validation
|
| 704 |
+
xnli,bg,bg,"guaranteed/possible/impossible_bgmt",validation
|
| 705 |
+
xnli,bg,bg,"MNLI crowdsource_bgmt",validation
|
| 706 |
+
xnli,bg,bg,"GPT-3 style_bgmt",validation
|
| 707 |
+
xnli,bg,bg,"justified in saying_bgmt",validation
|
| 708 |
+
xnli,bg,bg,"can we infer_bgmt",validation
|
| 709 |
+
xnli,de,de,"guaranteed/possible/impossible_demt",validation
|
| 710 |
+
xnli,de,de,"MNLI crowdsource_demt",validation
|
| 711 |
+
xnli,de,de,"GPT-3 style_demt",validation
|
| 712 |
+
xnli,de,de,"justified in saying_demt",validation
|
| 713 |
+
xnli,de,de,"can we infer_demt",validation
|
| 714 |
+
xnli,el,el,"guaranteed/possible/impossible_elmt",validation
|
| 715 |
+
xnli,el,el,"MNLI crowdsource_elmt",validation
|
| 716 |
+
xnli,el,el,"GPT-3 style_elmt",validation
|
| 717 |
+
xnli,el,el,"justified in saying_elmt",validation
|
| 718 |
+
xnli,el,el,"can we infer_elmt",validation
|
| 719 |
+
xnli,ru,ru,"guaranteed/possible/impossible_rumt",validation
|
| 720 |
+
xnli,ru,ru,"MNLI crowdsource_rumt",validation
|
| 721 |
+
xnli,ru,ru,"GPT-3 style_rumt",validation
|
| 722 |
+
xnli,ru,ru,"justified in saying_rumt",validation
|
| 723 |
+
xnli,ru,ru,"can we infer_rumt",validation
|
| 724 |
+
xnli,th,th,"guaranteed/possible/impossible_thmt",validation
|
| 725 |
+
xnli,th,th,"MNLI crowdsource_thmt",validation
|
| 726 |
+
xnli,th,th,"GPT-3 style_thmt",validation
|
| 727 |
+
xnli,th,th,"justified in saying_thmt",validation
|
| 728 |
+
xnli,th,th,"can we infer_thmt",validation
|
| 729 |
+
xnli,tr,tr,"guaranteed/possible/impossible_trmt",validation
|
| 730 |
+
xnli,tr,tr,"MNLI crowdsource_trmt",validation
|
| 731 |
+
xnli,tr,tr,"GPT-3 style_trmt",validation
|
| 732 |
+
xnli,tr,tr,"justified in saying_trmt",validation
|
| 733 |
+
xnli,tr,tr,"can we infer_trmt",validation
|
| 734 |
+
)
|
| 735 |
+
|
| 736 |
+
DATASET_AND_CONFIG=${DATASETS_AND_CONFIGS_L1[$SLURM_ARRAY_TASK_ID]}
|
| 737 |
+
echo $ARGUMENT
|
| 738 |
+
|
| 739 |
+
# Run T0 evaluation
|
| 740 |
+
# For PrefixLM add --prefixlm
|
| 741 |
+
IFS=',' read dataset_name dataset_config_name template_config_name template_name <<< "${DATASET_AND_CONFIG}"
|
| 742 |
+
python t-zero/evaluation/run_eval.py \
|
| 743 |
+
--dataset_name $dataset_name \
|
| 744 |
+
--dataset_config_name $dataset_config_name \
|
| 745 |
+
--template_config_name $template_config_name \
|
| 746 |
+
--template_name "$template_name" \
|
| 747 |
+
--model_name_or_path $CHECKPOINT_PATH \
|
| 748 |
+
--output_dir $OUTPUT_DIR \
|
| 749 |
+
--per_device_eval_batch_size 8 \
|
| 750 |
+
--max_length 2048 \
|
| 751 |
+
--dtype float16
|
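Each SLURM array task resolves exactly one `dataset,config,template_config,template,split` entry from the bash arrays above via `$SLURM_ARRAY_TASK_ID` and splits it on commas with `IFS=',' read`. As a minimal, purely illustrative Python sketch of that selection step (the two sample entries are copied from the arrays above; bash quote removal means the stored entries carry no literal quote characters):

```python
# Illustrative sketch of the per-task selection done by the slurm script above.
DATASETS_AND_CONFIGS_L1 = [
    "Muennighoff/xwinograd,fr,fr,True or False_frmt,test",
    "xcopa,id,id,best_option_idmt,validation",
    # ... one entry per (dataset, template) evaluation run
]

def select_task(task_id: int):
    # $SLURM_ARRAY_TASK_ID picks the entry; IFS=',' read splits it on commas.
    entry = DATASETS_AND_CONFIGS_L1[task_id]
    dataset, config, template_config, template, split = entry.split(",")
    return dataset, config, template_config, template, split

print(select_task(1))  # ('xcopa', 'id', 'id', 'best_option_idmt', 'validation')
```

Submitting the job with an array range covering the chosen list (for example `sbatch --array=0-35`, one index per entry) then fans out one `run_eval.py` invocation per dataset/template pair.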
evaluation/results/tr3/README.md
ADDED
@@ -0,0 +1 @@
We're interested in understanding when zero-shot capabilities appear.
evaluation/results/tr3/plot_task_solve_graph.py
ADDED
@@ -0,0 +1,133 @@
import json
import os
from argparse import ArgumentParser

import numpy as np
from matplotlib import pyplot as plt


def get_args():
    parser = ArgumentParser()
    parser.add_argument('--input-files', type=lambda s: s.split(','), required=True, help='Comma-separated list of input files that hold all evaluation metrics')
    return parser.parse_args()

# TODO: fill it up
RANDOM_BASELINE = {
    "arc_challenge_acc": 0.2502,  # Source: https://arxiv.org/pdf/1803.05457.pdf table 6
    "arc_easy_acc": 0.2502,  # Source: https://arxiv.org/pdf/1803.05457.pdf table 6
    "boolq_acc": 0.5,
    "copa_acc": 0.5,
    "headqa_acc": 0.25,  # TODO: painful, as some questions have 4 options, some have 5, and nobody reports a random baseline
    "hellaswag_acc": 0.25,
    "lambada_acc": 0.,  # Safe to say that random models won't perform well at all.
    "logiqa_acc": 0.25,
    "mathqa_acc": 0.25,  # TODO: painful, as some questions have 4 options, some have 5, and nobody reports a random baseline
    "mrpc_acc": 0.5,
    "multirc_acc": 0.,  # TODO: couldn't figure out the random baseline
    "openbookqa_acc": 0.25,
    "piqa_acc": 0.5,
    "prost_acc": 0.25,
    "pubmedqa_acc": 1/3,
    "qnli_acc": 0.5,
    "qqp_acc": 0.5,
    "race_acc": 0.25,  # Source: https://arxiv.org/pdf/1704.04683.pdf table 5
    "rte_acc": 0.5,
    "sciq_acc": 0.25,
    "sst_acc": 0.5,
    "triviaqa_acc": 0.,
    "webqs_acc": 0.,
    "wic_acc": 0.5,
    "winogrande_acc": 0.5,
    "wnli_acc": 0.5,
    "wsc_acc": 0.5
}

def normalise_scores(scores_per_task):
    # Rescale each score so the random baseline maps to 0 and a perfect score to 1.
    normalised_scores = {}
    for key, value in scores_per_task.items():
        # We assume the baseline exists, otherwise we need to figure out what the random baseline is
        normalised_scores[key] = (value - RANDOM_BASELINE[key]) / (1. - RANDOM_BASELINE[key])
    return normalised_scores

def main():
    args = get_args()

    final = {}
    for input_file in args.input_files:
        assert os.path.basename(input_file).endswith("_agg.json")
        experiment_name = os.path.basename(input_file).split("_agg.json")[0]
        with open(input_file, "r") as fi:
            final[experiment_name] = json.load(fi)

    # Keep only the token counts that all experiments have in common
    matching_tokens = set(next(iter(final.values()))["tokens"])
    for experiment_name, experiment in final.items():
        tokens = experiment["tokens"]
        matching_tokens = matching_tokens & set(tokens)
        # Make sure we don't override existing data
        assert "token2checkpoint_step" not in experiment
        experiment["token2checkpoint_step"] = {token: ckpt_step for token, ckpt_step in zip(tokens, experiment["checkpoints"])}
        # Make sure we don't override existing data
        assert "token2id" not in experiment
        experiment["token2id"] = {token: _id for _id, token in enumerate(tokens)}
    matching_tokens = sorted(matching_tokens)
    print(f"Plotting only for tokens in {matching_tokens}")

    plots_per_keys = {}

    for token in matching_tokens:
        for experiment_name, experiment in final.items():
            _id = experiment["token2id"][token]
            scores_per_task = {
                "Average_acc": {
                    f"{evaluation_name}_{metric_name}": metric[_id]
                    for evaluation_name, evaluation in experiment["results"].items()
                    for metric_name, metric in evaluation.items()
                    if metric_name == "acc"
                },
                # "Average": {
                #     metric_name: values[i]
                #     for evaluation_name in final["results"][experiment_name]
                #     for metric_name, values in final["results"][experiment_name][evaluation_name].items()
                #     if metric_name[-7:] != "_stderr"
                # }
            }

            # Build one figure per (key, token) pair
            for key in scores_per_task:
                if key not in plots_per_keys:
                    plots_per_keys[key] = {}

                plot_per_token = plots_per_keys[key]
                if token in plot_per_token:
                    continue

                fig = plt.figure()
                ax = fig.add_subplot(1, 1, 1)
                ax.set_title(f"{key} - Number of tokens seen: {token}")
                plot_per_token[token] = ax

            # Plot one step curve per experiment
            for key in plots_per_keys:
                scores = scores_per_task[key]
                ax = plots_per_keys[key][token]

                # Normalise scores against the random baselines
                normalised_scores = normalise_scores(scores)

                # Sort scores from smallest to biggest
                sorted_scores = sorted(normalised_scores.values())

                # Fraction of tasks whose normalised score is at least each threshold
                y = np.arange(len(sorted_scores), 0, -1) / len(sorted_scores)

                ax.step(x=sorted_scores, y=y, label=experiment_name)

    for plots in plots_per_keys.values():
        assert len(plots) == len(matching_tokens)
        for ax in plots.values():
            ax.legend()
    plt.show()

if __name__ == "__main__":
    main()
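For intuition, `normalise_scores` rescales each raw accuracy so that the task's random baseline maps to 0 and a perfect score maps to 1. A minimal worked example of that arithmetic (the numbers are illustrative, not taken from any result file):

```python
# Illustrative only: rescale a raw accuracy against its random baseline.
def normalise(score: float, baseline: float) -> float:
    return (score - baseline) / (1.0 - baseline)

# A 4-way multiple-choice task with a 25% random baseline:
print(normalise(0.40, 0.25))  # 0.2 -> the model covers 20% of the headroom
# A binary task with a 50% random baseline:
print(normalise(0.40, 0.50))  # -0.2 -> below random
```

Negative values flag below-random performance, which the step plots then show as curve mass left of zero.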
evaluation/results/tr3/switch_tokenizer_to_t5_for_tr3e.sh
ADDED
@@ -0,0 +1,6 @@
export GIT_LFS_SKIP_SMUDGE=1
git clone https://huggingface.co/bigscience/tr3e-1B3-c4-checkpoints
cd tr3e-1B3-c4-checkpoints
$six_ALL_CCFRWORK/code/bigscience/tools/hub-sync.py --repo-path . --patterns '*bogus*'
git branch -a | sort -V | perl -lne 'm|(global_step\d+)| && print qx[git checkout $1; perl -pi -e "s|\\"tokenizer_class\\": null|\\"tokenizer_class\\": \\"T5Tokenizer\\"|" config.json; git commit -m "Fix tokenizer_class to use T5 tokenizer" .; git push --set-upstream origin $1]'
export GIT_LFS_SKIP_SMUDGE=0
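The perl one-liner is dense; here is a rough sketch of the same per-branch edit in Python (assuming a local clone with `git` on PATH and a `config.json` present on every `global_step*` branch; this mirrors the script's intent rather than being a drop-in replacement):

```python
import json
import re
import subprocess

# Collect the global_stepNNN branches the perl one-liner iterates over.
branches = subprocess.run(["git", "branch", "-a"],
                          capture_output=True, text=True, check=True).stdout
steps = sorted(set(re.findall(r"global_step\d+", branches)),
               key=lambda name: int(name[len("global_step"):]))

for branch in steps:
    subprocess.run(["git", "checkout", branch], check=True)
    with open("config.json") as f:
        config = json.load(f)
    config["tokenizer_class"] = "T5Tokenizer"  # replaces the null value
    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)
    subprocess.run(["git", "commit", "-m",
                    "Fix tokenizer_class to use T5 tokenizer", "config.json"], check=True)
    subprocess.run(["git", "push", "--set-upstream", "origin", branch], check=True)
```

Setting `GIT_LFS_SKIP_SMUDGE=1` around the clone, as the script does, avoids downloading the large LFS-tracked weights just to patch a small JSON file on each branch.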
evaluation/results/tr3/tr3e-1B3-c4-checkpoints_agg.json
ADDED
|
@@ -0,0 +1,3084 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
{
|
| 2 |
+
"tokens": [
|
| 3 |
+
10044178432,
|
| 4 |
+
11617042432,
|
| 5 |
+
14762770432,
|
| 6 |
+
16335634432,
|
| 7 |
+
17908498432,
|
| 8 |
+
21054226432,
|
| 9 |
+
22627090432,
|
| 10 |
+
25772818432,
|
| 11 |
+
30491410432,
|
| 12 |
+
35210002432,
|
| 13 |
+
36782866432,
|
| 14 |
+
41501458432,
|
| 15 |
+
44647186432,
|
| 16 |
+
46220050432,
|
| 17 |
+
49365778432,
|
| 18 |
+
50938642432,
|
| 19 |
+
54084370432,
|
| 20 |
+
55657234432,
|
| 21 |
+
57230098432,
|
| 22 |
+
65094418432,
|
| 23 |
+
66667282432,
|
| 24 |
+
68240146432,
|
| 25 |
+
77677330432,
|
| 26 |
+
79250194432,
|
| 27 |
+
80823058432,
|
| 28 |
+
82395922432,
|
| 29 |
+
87114514432,
|
| 30 |
+
91833106432,
|
| 31 |
+
98124562432,
|
| 32 |
+
99697426432,
|
| 33 |
+
101270290432,
|
| 34 |
+
105988882432,
|
| 35 |
+
110707474432,
|
| 36 |
+
112280338432
|
| 37 |
+
],
|
| 38 |
+
"checkpoints": [
|
| 39 |
+
19500,
|
| 40 |
+
21000,
|
| 41 |
+
24000,
|
| 42 |
+
25500,
|
| 43 |
+
27000,
|
| 44 |
+
30000,
|
| 45 |
+
31500,
|
| 46 |
+
34500,
|
| 47 |
+
39000,
|
| 48 |
+
43500,
|
| 49 |
+
45000,
|
| 50 |
+
49500,
|
| 51 |
+
52500,
|
| 52 |
+
54000,
|
| 53 |
+
57000,
|
| 54 |
+
58500,
|
| 55 |
+
61500,
|
| 56 |
+
63000,
|
| 57 |
+
64500,
|
| 58 |
+
72000,
|
| 59 |
+
73500,
|
| 60 |
+
75000,
|
| 61 |
+
84000,
|
| 62 |
+
85500,
|
| 63 |
+
87000,
|
| 64 |
+
88500,
|
| 65 |
+
93000,
|
| 66 |
+
97500,
|
| 67 |
+
103500,
|
| 68 |
+
105000,
|
| 69 |
+
106500,
|
| 70 |
+
111000,
|
| 71 |
+
115500,
|
| 72 |
+
117000
|
| 73 |
+
],
|
| 74 |
+
"results": {
|
| 75 |
+
"arc_challenge": {
|
| 76 |
+
"acc": [
|
| 77 |
+
0.19197952218430034,
|
| 78 |
+
0.19795221843003413,
|
| 79 |
+
0.20392491467576793,
|
| 80 |
+
0.2030716723549488,
|
| 81 |
+
0.21075085324232082,
|
| 82 |
+
0.2175767918088737,
|
| 83 |
+
0.2030716723549488,
|
| 84 |
+
0.2098976109215017,
|
| 85 |
+
0.22610921501706485,
|
| 86 |
+
0.22440273037542663,
|
| 87 |
+
0.22696245733788395,
|
| 88 |
+
0.2226962457337884,
|
| 89 |
+
0.22098976109215018,
|
| 90 |
+
0.22610921501706485,
|
| 91 |
+
0.23037542662116042,
|
| 92 |
+
0.22610921501706485,
|
| 93 |
+
0.22525597269624573,
|
| 94 |
+
0.22440273037542663,
|
| 95 |
+
0.23293515358361774,
|
| 96 |
+
0.23464163822525597,
|
| 97 |
+
0.23037542662116042,
|
| 98 |
+
0.23464163822525597,
|
| 99 |
+
0.23720136518771331,
|
| 100 |
+
0.2354948805460751,
|
| 101 |
+
0.2363481228668942,
|
| 102 |
+
0.22866894197952217,
|
| 103 |
+
0.23976109215017063,
|
| 104 |
+
0.25170648464163825,
|
| 105 |
+
0.23122866894197952,
|
| 106 |
+
0.2295221843003413,
|
| 107 |
+
0.23720136518771331,
|
| 108 |
+
0.23976109215017063,
|
| 109 |
+
0.2440273037542662,
|
| 110 |
+
0.2431740614334471
|
| 111 |
+
],
|
| 112 |
+
"acc_stderr": [
|
| 113 |
+
0.011509598906598112,
|
| 114 |
+
0.011643990971573407,
|
| 115 |
+
0.011774262478702256,
|
| 116 |
+
0.011755899303705582,
|
| 117 |
+
0.01191827175485218,
|
| 118 |
+
0.012057262020972504,
|
| 119 |
+
0.011755899303705582,
|
| 120 |
+
0.011900548748047442,
|
| 121 |
+
0.012224202097063286,
|
| 122 |
+
0.012191404938603836,
|
| 123 |
+
0.01224049153613287,
|
| 124 |
+
0.012158314774829926,
|
| 125 |
+
0.012124929206818258,
|
| 126 |
+
0.012224202097063293,
|
| 127 |
+
0.01230492841874761,
|
| 128 |
+
0.012224202097063288,
|
| 129 |
+
0.012207839995407317,
|
| 130 |
+
0.01219140493860384,
|
| 131 |
+
0.012352507042617393,
|
| 132 |
+
0.012383873560768671,
|
| 133 |
+
0.01230492841874761,
|
| 134 |
+
0.012383873560768675,
|
| 135 |
+
0.01243039982926084,
|
| 136 |
+
0.012399451855004752,
|
| 137 |
+
0.01241496052430183,
|
| 138 |
+
0.012272853582540807,
|
| 139 |
+
0.012476304127453949,
|
| 140 |
+
0.012682496334042961,
|
| 141 |
+
0.012320858834772274,
|
| 142 |
+
0.012288926760890788,
|
| 143 |
+
0.012430399829260844,
|
| 144 |
+
0.012476304127453947,
|
| 145 |
+
0.012551447627856255,
|
| 146 |
+
0.012536554144587087
|
| 147 |
+
],
|
| 148 |
+
"acc_norm": [
|
| 149 |
+
0.24829351535836178,
|
| 150 |
+
0.24658703071672355,
|
| 151 |
+
0.25341296928327645,
|
| 152 |
+
0.2508532423208191,
|
| 153 |
+
0.2508532423208191,
|
| 154 |
+
0.25170648464163825,
|
| 155 |
+
0.2508532423208191,
|
| 156 |
+
0.2627986348122867,
|
| 157 |
+
0.2619453924914676,
|
| 158 |
+
0.24914675767918087,
|
| 159 |
+
0.257679180887372,
|
| 160 |
+
0.2627986348122867,
|
| 161 |
+
0.2696245733788396,
|
| 162 |
+
0.2636518771331058,
|
| 163 |
+
0.27047781569965873,
|
| 164 |
+
0.2713310580204778,
|
| 165 |
+
0.2619453924914676,
|
| 166 |
+
0.2619453924914676,
|
| 167 |
+
0.26535836177474403,
|
| 168 |
+
0.26706484641638223,
|
| 169 |
+
0.2687713310580205,
|
| 170 |
+
0.2713310580204778,
|
| 171 |
+
0.2773037542662116,
|
| 172 |
+
0.2858361774744027,
|
| 173 |
+
0.28754266211604096,
|
| 174 |
+
0.28071672354948807,
|
| 175 |
+
0.2790102389078498,
|
| 176 |
+
0.2841296928327645,
|
| 177 |
+
0.2713310580204778,
|
| 178 |
+
0.26535836177474403,
|
| 179 |
+
0.27559726962457337,
|
| 180 |
+
0.28242320819112626,
|
| 181 |
+
0.27474402730375425,
|
| 182 |
+
0.2738907849829352
|
| 183 |
+
],
|
| 184 |
+
"acc_norm_stderr": [
|
| 185 |
+
0.01262491286808976,
|
| 186 |
+
0.01259572626879013,
|
| 187 |
+
0.012710896778378607,
|
| 188 |
+
0.012668198621315433,
|
| 189 |
+
0.01266819862131543,
|
| 190 |
+
0.012682496334042967,
|
| 191 |
+
0.012668198621315433,
|
| 192 |
+
0.012862523175351335,
|
| 193 |
+
0.012849054826858112,
|
| 194 |
+
0.012639407111926433,
|
| 195 |
+
0.012780770562768402,
|
| 196 |
+
0.012862523175351333,
|
| 197 |
+
0.012968040686869154,
|
| 198 |
+
0.01287592915129705,
|
| 199 |
+
0.012980954547659554,
|
| 200 |
+
0.012993807727545792,
|
| 201 |
+
0.012849054826858114,
|
| 202 |
+
0.012849054826858114,
|
| 203 |
+
0.012902554762313966,
|
| 204 |
+
0.012928933196496345,
|
| 205 |
+
0.012955065963710686,
|
| 206 |
+
0.01299380772754579,
|
| 207 |
+
0.013082095839059376,
|
| 208 |
+
0.013203196088537369,
|
| 209 |
+
0.01322671905626613,
|
| 210 |
+
0.013131238126975584,
|
| 211 |
+
0.013106784883601338,
|
| 212 |
+
0.013179442447653886,
|
| 213 |
+
0.012993807727545789,
|
| 214 |
+
0.012902554762313967,
|
| 215 |
+
0.013057169655761838,
|
| 216 |
+
0.013155456884097225,
|
| 217 |
+
0.013044617212771227,
|
| 218 |
+
0.013032004972989505
|
| 219 |
+
]
|
| 220 |
+
},
|
| 221 |
+
"arc_easy": {
|
| 222 |
+
"acc": [
|
| 223 |
+
0.4713804713804714,
|
| 224 |
+
0.48947811447811446,
|
| 225 |
+
0.4978956228956229,
|
| 226 |
+
0.4936868686868687,
|
| 227 |
+
0.4936868686868687,
|
| 228 |
+
0.5008417508417509,
|
| 229 |
+
0.49915824915824913,
|
| 230 |
+
0.494949494949495,
|
| 231 |
+
0.5105218855218855,
|
| 232 |
+
0.523989898989899,
|
| 233 |
+
0.5277777777777778,
|
| 234 |
+
0.5277777777777778,
|
| 235 |
+
0.5218855218855218,
|
| 236 |
+
0.5252525252525253,
|
| 237 |
+
0.5273569023569024,
|
| 238 |
+
0.5286195286195287,
|
| 239 |
+
0.5269360269360269,
|
| 240 |
+
0.5332491582491582,
|
| 241 |
+
0.5281986531986532,
|
| 242 |
+
0.5311447811447811,
|
| 243 |
+
0.5408249158249159,
|
| 244 |
+
0.5412457912457912,
|
| 245 |
+
0.5412457912457912,
|
| 246 |
+
0.5391414141414141,
|
| 247 |
+
0.5505050505050505,
|
| 248 |
+
0.5467171717171717,
|
| 249 |
+
0.555976430976431,
|
| 250 |
+
0.5593434343434344,
|
| 251 |
+
0.5547138047138047,
|
| 252 |
+
0.5576599326599326,
|
| 253 |
+
0.5622895622895623,
|
| 254 |
+
0.553030303030303,
|
| 255 |
+
0.5652356902356902,
|
| 256 |
+
0.5614478114478114
|
| 257 |
+
],
|
| 258 |
+
"acc_stderr": [
|
| 259 |
+
0.01024296261792719,
|
| 260 |
+
0.010257511546488227,
|
| 261 |
+
0.01025969265153704,
|
| 262 |
+
0.01025896566804443,
|
| 263 |
+
0.010258965668044432,
|
| 264 |
+
0.01025976898181524,
|
| 265 |
+
0.010259768981815234,
|
| 266 |
+
0.010259260102565861,
|
| 267 |
+
0.01025751154648823,
|
| 268 |
+
0.010247967392742686,
|
| 269 |
+
0.010243938285881118,
|
| 270 |
+
0.010243938285881118,
|
| 271 |
+
0.010249950427234157,
|
| 272 |
+
0.010246690042583852,
|
| 273 |
+
0.010244415164390527,
|
| 274 |
+
0.010242962617927197,
|
| 275 |
+
0.0102448847406201,
|
| 276 |
+
0.010237073872130738,
|
| 277 |
+
0.010243454104071783,
|
| 278 |
+
0.010239860250021741,
|
| 279 |
+
0.010225526906982602,
|
| 280 |
+
0.010224815730255816,
|
| 281 |
+
0.010224815730255818,
|
| 282 |
+
0.010228298200766128,
|
| 283 |
+
0.010207308833916032,
|
| 284 |
+
0.01021490151673162,
|
| 285 |
+
0.010195285580783956,
|
| 286 |
+
0.010187264635711984,
|
| 287 |
+
0.01019817113787387,
|
| 288 |
+
0.010191334444220856,
|
| 289 |
+
0.010179856486006902,
|
| 290 |
+
0.010201914927791671,
|
| 291 |
+
0.010172083670402787,
|
| 292 |
+
0.010182010275471116
|
| 293 |
+
],
|
| 294 |
+
"acc_norm": [
|
| 295 |
+
0.4297138047138047,
|
| 296 |
+
0.4356060606060606,
|
| 297 |
+
0.44065656565656564,
|
| 298 |
+
0.44612794612794615,
|
| 299 |
+
0.4541245791245791,
|
| 300 |
+
0.4494949494949495,
|
| 301 |
+
0.4452861952861953,
|
| 302 |
+
0.44654882154882153,
|
| 303 |
+
0.4642255892255892,
|
| 304 |
+
0.46675084175084175,
|
| 305 |
+
0.47095959595959597,
|
| 306 |
+
0.47264309764309764,
|
| 307 |
+
0.4701178451178451,
|
| 308 |
+
0.48653198653198654,
|
| 309 |
+
0.4781144781144781,
|
| 310 |
+
0.4713804713804714,
|
| 311 |
+
0.4722222222222222,
|
| 312 |
+
0.48947811447811446,
|
| 313 |
+
0.47853535353535354,
|
| 314 |
+
0.4831649831649832,
|
| 315 |
+
0.4797979797979798,
|
| 316 |
+
0.4819023569023569,
|
| 317 |
+
0.4819023569023569,
|
| 318 |
+
0.4831649831649832,
|
| 319 |
+
0.4962121212121212,
|
| 320 |
+
0.49537037037037035,
|
| 321 |
+
0.5,
|
| 322 |
+
0.49873737373737376,
|
| 323 |
+
0.502104377104377,
|
| 324 |
+
0.4978956228956229,
|
| 325 |
+
0.49537037037037035,
|
| 326 |
+
0.5012626262626263,
|
| 327 |
+
0.49873737373737376,
|
| 328 |
+
0.5033670033670034
|
| 329 |
+
],
|
| 330 |
+
"acc_norm_stderr": [
|
| 331 |
+
0.010157908005763676,
|
| 332 |
+
0.010174341733665219,
|
| 333 |
+
0.010187264635711978,
|
| 334 |
+
0.01020005782876501,
|
| 335 |
+
0.010216507710244096,
|
| 336 |
+
0.010207308833916046,
|
| 337 |
+
0.010198171137873857,
|
| 338 |
+
0.010200990076245326,
|
| 339 |
+
0.01023348870972655,
|
| 340 |
+
0.010237073872130745,
|
| 341 |
+
0.010242463826395626,
|
| 342 |
+
0.010244415164390541,
|
| 343 |
+
0.010241444322886427,
|
| 344 |
+
0.010256060854840748,
|
| 345 |
+
0.01024995042723415,
|
| 346 |
+
0.010242962617927181,
|
| 347 |
+
0.010243938285881118,
|
| 348 |
+
0.010257511546488228,
|
| 349 |
+
0.010250325159456663,
|
| 350 |
+
0.010253966261288898,
|
| 351 |
+
0.010251405621305368,
|
| 352 |
+
0.010253060653479177,
|
| 353 |
+
0.010253060653479177,
|
| 354 |
+
0.010253966261288898,
|
| 355 |
+
0.010259489101351842,
|
| 356 |
+
0.010259343705889734,
|
| 357 |
+
0.01025978352085154,
|
| 358 |
+
0.010259750807991153,
|
| 359 |
+
0.010259692651537032,
|
| 360 |
+
0.010259692651537042,
|
| 361 |
+
0.010259343705889733,
|
| 362 |
+
0.010259750807991061,
|
| 363 |
+
0.010259750807991155,
|
| 364 |
+
0.01025955089379893
|
| 365 |
+
]
|
| 366 |
+
},
|
| 367 |
+
"boolq": {
|
| 368 |
+
"acc": [
|
| 369 |
+
0.5856269113149847,
|
| 370 |
+
0.6165137614678899,
|
| 371 |
+
0.6033639143730887,
|
| 372 |
+
0.6012232415902141,
|
| 373 |
+
0.5896024464831804,
|
| 374 |
+
0.5513761467889908,
|
| 375 |
+
0.5318042813455658,
|
| 376 |
+
0.5688073394495413,
|
| 377 |
+
0.5431192660550459,
|
| 378 |
+
0.5351681957186545,
|
| 379 |
+
0.5807339449541284,
|
| 380 |
+
0.5834862385321101,
|
| 381 |
+
0.6030581039755352,
|
| 382 |
+
0.5770642201834862,
|
| 383 |
+
0.5409785932721712,
|
| 384 |
+
0.6107033639143731,
|
| 385 |
+
0.5510703363914373,
|
| 386 |
+
0.536085626911315,
|
| 387 |
+
0.6021406727828746,
|
| 388 |
+
0.5192660550458715,
|
| 389 |
+
0.5654434250764526,
|
| 390 |
+
0.5516819571865443,
|
| 391 |
+
0.5477064220183486,
|
| 392 |
+
0.5345565749235474,
|
| 393 |
+
0.5507645259938838,
|
| 394 |
+
0.5180428134556575,
|
| 395 |
+
0.5342507645259938,
|
| 396 |
+
0.5293577981651376,
|
| 397 |
+
0.5266055045871559,
|
| 398 |
+
0.5850152905198777,
|
| 399 |
+
0.5755351681957187,
|
| 400 |
+
0.5403669724770642,
|
| 401 |
+
0.5694189602446483,
|
| 402 |
+
0.554434250764526
|
| 403 |
+
],
|
| 404 |
+
"acc_stderr": [
|
| 405 |
+
0.00861586377642113,
|
| 406 |
+
0.008504304838837027,
|
| 407 |
+
0.008556148582031997,
|
| 408 |
+
0.00856397398772991,
|
| 409 |
+
0.008603488048617523,
|
| 410 |
+
0.008698767182005268,
|
| 411 |
+
0.008727345583419184,
|
| 412 |
+
0.008661853128165595,
|
| 413 |
+
0.008712475433089477,
|
| 414 |
+
0.008723396352960192,
|
| 415 |
+
0.00863030207099909,
|
| 416 |
+
0.008622288020674003,
|
| 417 |
+
0.00855727696467513,
|
| 418 |
+
0.008640558744656426,
|
| 419 |
+
0.008715635308774413,
|
| 420 |
+
0.008528016290984541,
|
| 421 |
+
0.008699318031464162,
|
| 422 |
+
0.00872225010207808,
|
| 423 |
+
0.008560641169303369,
|
| 424 |
+
0.008738560570551961,
|
| 425 |
+
0.008669824006668013,
|
| 426 |
+
0.008698213008694267,
|
| 427 |
+
0.008705158179072315,
|
| 428 |
+
0.008724144040604813,
|
| 429 |
+
0.008699865557703648,
|
| 430 |
+
0.008739359336700274,
|
| 431 |
+
0.008724512941821092,
|
| 432 |
+
0.008729967580199222,
|
| 433 |
+
0.008732665775847746,
|
| 434 |
+
0.008617716361921567,
|
| 435 |
+
0.008644688121685503,
|
| 436 |
+
0.008716508381476017,
|
| 437 |
+
0.008660360145988744,
|
| 438 |
+
0.008693075769447138
|
| 439 |
+
]
|
| 440 |
+
},
|
| 441 |
+
"copa": {
|
| 442 |
+
"acc": [
|
| 443 |
+
0.71,
|
| 444 |
+
0.72,
|
| 445 |
+
0.71,
|
| 446 |
+
0.69,
|
| 447 |
+
0.69,
|
| 448 |
+
0.71,
|
| 449 |
+
0.73,
|
| 450 |
+
0.69,
|
| 451 |
+
0.7,
|
| 452 |
+
0.7,
|
| 453 |
+
0.69,
|
| 454 |
+
0.75,
|
| 455 |
+
0.69,
|
| 456 |
+
0.7,
|
| 457 |
+
0.73,
|
| 458 |
+
0.74,
|
| 459 |
+
0.69,
|
| 460 |
+
0.7,
|
| 461 |
+
0.69,
|
| 462 |
+
0.73,
|
| 463 |
+
0.67,
|
| 464 |
+
0.71,
|
| 465 |
+
0.66,
|
| 466 |
+
0.67,
|
| 467 |
+
0.68,
|
| 468 |
+
0.71,
|
| 469 |
+
0.69,
|
| 470 |
+
0.7,
|
| 471 |
+
0.69,
|
| 472 |
+
0.71,
|
| 473 |
+
0.67,
|
| 474 |
+
0.69,
|
| 475 |
+
0.7,
|
| 476 |
+
0.7
|
| 477 |
+
],
|
| 478 |
+
"acc_stderr": [
|
| 479 |
+
0.04560480215720683,
|
| 480 |
+
0.04512608598542127,
|
| 481 |
+
0.04560480215720684,
|
| 482 |
+
0.04648231987117316,
|
| 483 |
+
0.04648231987117316,
|
| 484 |
+
0.045604802157206845,
|
| 485 |
+
0.044619604333847394,
|
| 486 |
+
0.04648231987117316,
|
| 487 |
+
0.046056618647183814,
|
| 488 |
+
0.046056618647183814,
|
| 489 |
+
0.04648231987117316,
|
| 490 |
+
0.04351941398892446,
|
| 491 |
+
0.04648231987117316,
|
| 492 |
+
0.046056618647183814,
|
| 493 |
+
0.04461960433384741,
|
| 494 |
+
0.0440844002276808,
|
| 495 |
+
0.04648231987117316,
|
| 496 |
+
0.046056618647183814,
|
| 497 |
+
0.04648231987117316,
|
| 498 |
+
0.044619604333847394,
|
| 499 |
+
0.047258156262526066,
|
| 500 |
+
0.04560480215720684,
|
| 501 |
+
0.04760952285695238,
|
| 502 |
+
0.04725815626252607,
|
| 503 |
+
0.046882617226215034,
|
| 504 |
+
0.04560480215720684,
|
| 505 |
+
0.04648231987117316,
|
| 506 |
+
0.046056618647183814,
|
| 507 |
+
0.04648231987117316,
|
| 508 |
+
0.04560480215720683,
|
| 509 |
+
0.047258156262526066,
|
| 510 |
+
0.04648231987117316,
|
| 511 |
+
0.046056618647183814,
|
| 512 |
+
0.046056618647183814
|
| 513 |
+
]
|
| 514 |
+
},
|
| 515 |
+
"headqa_en": {
|
| 516 |
+
"acc": [
|
| 517 |
+
0.23085339168490154,
|
| 518 |
+
0.24106491611962072,
|
| 519 |
+
0.2323121808898614,
|
| 520 |
+
0.23304157549234136,
|
| 521 |
+
0.23413566739606126,
|
| 522 |
+
0.237417943107221,
|
| 523 |
+
0.23960612691466082,
|
| 524 |
+
0.24070021881838075,
|
| 525 |
+
0.237417943107221,
|
| 526 |
+
0.2461706783369803,
|
| 527 |
+
0.24070021881838075,
|
| 528 |
+
0.24544128373450036,
|
| 529 |
+
0.24544128373450036,
|
| 530 |
+
0.24179431072210067,
|
| 531 |
+
0.24653537563822028,
|
| 532 |
+
0.23158278628738146,
|
| 533 |
+
0.23705324580598103,
|
| 534 |
+
0.2461706783369803,
|
| 535 |
+
0.24690007293946026,
|
| 536 |
+
0.2447118891320204,
|
| 537 |
+
0.25091174325309995,
|
| 538 |
+
0.24908825674690008,
|
| 539 |
+
0.2439824945295405,
|
| 540 |
+
0.24507658643326038,
|
| 541 |
+
0.24945295404814005,
|
| 542 |
+
0.2461706783369803,
|
| 543 |
+
0.24981765134938003,
|
| 544 |
+
0.25419401896425964,
|
            0.24981765134938003, 0.25455871626549964, 0.2549234135667396, 0.24945295404814005, 0.25309992706053974, 0.24762946754194018
        ],
        "acc_stderr": [0.00804855982758665, 0.008169863520957039, 0.008066289373760265, 0.008075103495030473, 0.00808826167279805, 0.008127285992179082, 0.008152930613263026, 0.008165642499601123, 0.008127285992179082, 0.008228111277828357, 0.008165642499601137, 0.008219886279844553, 0.00821988627984455, 0.008178281228165185, 0.008232211853559124, 0.008057441521692892, 0.008122983109676263, 0.008228111277828357, 0.008236304496286385, 0.008211629406841454, 0.008280803335771757, 0.00826069441827071, 0.00820334056257037, 0.00821576183371828, 0.00826473185835768, 0.008228111277828357, 0.008268761458717196, 0.008316509290190666, 0.008268761458717196, 0.008320438000609576, 0.008324359027712818, 0.008264731858357677, 0.008304676949891692, 0.008244466029964781],
        "acc_norm": [0.2687819110138585, 0.2727935813274982, 0.27972283005105764, 0.27315827862873815, 0.2811816192560175, 0.27935813274981763, 0.2830051057622174, 0.28373450036469733, 0.2830051057622174, 0.29029905178701676, 0.2895696571845368, 0.2895696571845368, 0.29722830051057625, 0.2899343544857768, 0.2895696571845368, 0.29285193289569655, 0.29576951130561635, 0.29431072210065645, 0.2990517870167761, 0.29722830051057625, 0.2946754194018964, 0.29576951130561635, 0.29175784099197666, 0.2964989059080963, 0.2964989059080963, 0.29795769511305614, 0.2964989059080963, 0.300145878920496, 0.29832239241429614, 0.29978118161925604, 0.29832239241429614, 0.3012399708242159, 0.30306345733041573, 0.3012399708242159],
        "acc_norm_stderr": [0.00846776826280965, 0.008507293334608307, 0.008573521943240946, 0.008510843212471874, 0.008587139792141176, 0.008570099944976721, 0.008604004902114399, 0.008610702250036304, 0.008604004902114396, 0.008669738206463492, 0.008663288140722399, 0.008663288140722397, 0.008729667320745451, 0.008666516573158855, 0.008663288140722392, 0.008692099896939174, 0.008717251898361419, 0.008704729577762882, 0.008745036966349153, 0.008729667320745456, 0.00870787020477325, 0.008717251898361426, 0.008682556899491154, 0.008723472943212272, 0.008723472943212273, 0.008735835087689374, 0.008723472943212272, 0.008754179286225806, 0.008738909009807233, 0.008751138452362178, 0.008738909009807234, 0.00876326223372493, 0.008778269040959834, 0.00876326223372493]
    },
    "hellaswag": {
        "acc": [0.328918542123083, 0.33320055765783707, 0.3405696076478789, 0.3445528779127664, 0.34485162318263296, 0.35022903804023103, 0.3567018522206732, 0.3577972515435172, 0.36347341167098185, 0.3730332603067118, 0.3736307508464449, 0.37711611232822145, 0.37880900219079866, 0.3798048197570205, 0.3867755427205736, 0.385381398127863, 0.386476797450707, 0.38926508663612824, 0.3915554670384386, 0.3966341366261701, 0.3965345548695479, 0.3963353913563035, 0.40579565823541125, 0.4039036048595897, 0.40509858593905596, 0.40440151364270066, 0.40908185620394344, 0.4118701453893647, 0.4148575980880303, 0.4161521609241187, 0.4186417048396734, 0.41894045010953995, 0.42113124875522806, 0.4219279028082055],
        "acc_stderr": [0.004688601416815189, 0.0047039423467622596, 0.004729322613301549, 0.004742510354777905, 0.0047434845283466625, 0.004760666311146298, 0.004780467270911765, 0.004783723798286501, 0.004800164434233259, 0.004826224784850442, 0.004827786289074841, 0.004836738514051329, 0.004840990593494684, 0.004843462545943492, 0.0048601620763309705, 0.004856906473719392, 0.004859467984155266, 0.004865871290143345, 0.004871005939407469, 0.004881990487628917, 0.004881780399499138, 0.004881359589149001, 0.004900417982582058, 0.004896757857022552, 0.004899078300184252, 0.004897728370737249, 0.004906595857916756, 0.004911659884506146, 0.004916905095810846, 0.004919120169394336, 0.004923281841828513, 0.0049237725818484885, 0.004927314729433556, 0.004928578106026369],
        "acc_norm": [0.39026090420235016, 0.40001991635132444, 0.41037641904003186, 0.41565425214100776, 0.41983668591913964, 0.4311890061740689, 0.4358693487353117, 0.44523003385779725, 0.4552877912766381, 0.4702250547699661, 0.4735112527384983, 0.4805815574586736, 0.4832702648874726, 0.48665604461262696, 0.4894443337980482, 0.49432383987253536, 0.4978092013543119, 0.49970125473013344, 0.5053774148575981, 0.5126468830910177, 0.5134435371439953, 0.5147380999800837, 0.526090420235013, 0.5266879107747461, 0.5281816371240788, 0.5294761999601673, 0.536247759410476, 0.5393347938657638, 0.5451105357498506, 0.5438159729137622, 0.548496315475005, 0.5497908783110934, 0.5500896235809599, 0.550687114120693],
        "acc_norm_stderr": [0.004868117598481943, 0.004889007921214699, 0.004908967278222497, 0.004918272352137552, 0.004925233680511588, 0.004942302768002104, 0.004948567856373873, 0.004959754882055469, 0.004969790407117549, 0.004980926198798972, 0.004982774293927776, 0.004986016938678531, 0.0049869875089287126, 0.004988004122536502, 0.004988669343786959, 0.004989459871609184, 0.004989733513319102, 0.004989780520782243, 0.004989492828168531, 0.0049881849883452855, 0.004987977492042154, 0.0049876132636781775, 0.004982983592459194, 0.004982668452118946, 0.004981849291299644, 0.004981103157940437, 0.004976651989757642, 0.004974316807920411, 0.004969431900874306, 0.004970585328297624, 0.0049662550892124275, 0.004964979120927572, 0.004964679845918427, 0.004964075870120337]
    },
    "lambada": {
        "ppl": [32.621324227429184, 30.639591263041808, 27.824475015249064, 25.537821610539932, 23.497946335169004, 23.1004453640144, 24.36489982385264, 21.443992832210707, 21.19387768776711, 17.763182400833088, 19.773001152615144, 17.92660146185445, 16.677594695767798, 16.65763704756145, 16.40772738868533, 15.551082895412318, 17.14911063173112, 16.314018680134257, 15.297408445296128, 14.193282998851707, 14.650645874912932, 14.327229571268942, 13.514555687409516, 13.881934420349538, 13.735370217866647, 14.06969071816386, 12.815627673068203, 12.554895986642721, 12.97184974584759, 12.322450143856624, 11.807064551326473, 12.648077956981256, 11.965421508455707, 12.065662868384443],
        "ppl_stderr": [1.1963587903700155, 1.0792434257051169, 0.9803173395443245, 0.8883002174180411, 0.8111754484638396, 0.7877352894334106, 0.8192584690276606, 0.7176552509710284, 0.7047940272111838, 0.5744060989196327, 0.6377795946534752, 0.5789048479873562, 0.5271189458009388, 0.5330204917365942, 0.5166008147645302, 0.4936826799464582, 0.5367165367715473, 0.5145317352139375, 0.4789339173617679, 0.4462796491467827, 0.4547061383498668, 0.4486615578291165, 0.4163325695298929, 0.42130569367413345, 0.4169434900832809, 0.42676326043093105, 0.3845479402613268, 0.377945452172566, 0.3829051970997864, 0.3633677304997388, 0.35032874343527404, 0.3774394704766126, 0.35372531708658533, 0.3559930542996243],
        "acc": [0.32699398408693964, 0.3332039588589171, 0.3483407723656123, 0.35008732777023094, 0.37046380749078206, 0.37182223947215215, 0.3570735493887056, 0.3824956336114885, 0.3791965845138754, 0.4020958664855424, 0.3898699786532117, 0.4061711624296526, 0.4164564331457403, 0.42363671647583934, 0.42635358043857946, 0.4281001358431981, 0.41024645837376283, 0.42829419755482245, 0.43450417232679994, 0.4539103434892296, 0.4389675916941587, 0.44284882592664465, 0.4527459732194838, 0.44886473898699786, 0.4477003687172521, 0.43877352998253444, 0.4601203182612071, 0.4630312439355715, 0.4572093925868426, 0.4702115272656705, 0.4803027362701339, 0.46089656510770427, 0.47137589753541626, 0.4694352804191733],
        "acc_stderr": [0.006535689740487129, 0.006566949181820453, 0.006637805195772816, 0.006645501658657036, 0.006728144610304269, 0.006733192522297656, 0.0066753118561223325, 0.0067708833250532535, 0.006759605180095818, 0.00683113164830145, 0.006794901529888733, 0.006842223524282646, 0.006868050870202006, 0.00688425617620753, 0.006889999234952311, 0.0068935789269446044, 0.006852827058720169, 0.0068939712541951454, 0.006905955107492335, 0.006936319475444729, 0.006913886988887271, 0.0069203227037583125, 0.006934798617263737, 0.00692945241479083, 0.006927765449003239, 0.006913553944132543, 0.006943785077347287, 0.006946910914142773, 0.006940420862895478, 0.006953604103874042, 0.006960570207731852, 0.006944641928135856, 0.00695455329137302, 0.006952950213860608]
    },
    "logiqa": {
        "acc": [0.21351766513056836, 0.2073732718894009, 0.21812596006144394, 0.22119815668202766, 0.22580645161290322, 0.21812596006144394, 0.22119815668202766, 0.23809523809523808, 0.21658986175115208, 0.21812596006144394, 0.20890937019969277, 0.22887864823348694, 0.23348694316436253, 0.22119815668202766, 0.23655913978494625, 0.22119815668202766, 0.22119815668202766, 0.21812596006144394, 0.20890937019969277, 0.20430107526881722, 0.22580645161290322, 0.20583717357910905, 0.21505376344086022, 0.21658986175115208, 0.21044546850998463, 0.2119815668202765, 0.1966205837173579, 0.22119815668202766, 0.2073732718894009, 0.2012288786482335, 0.20890937019969277, 0.21044546850998463, 0.20430107526881722, 0.20583717357910905],
        "acc_stderr": [0.016073287529685204, 0.015902084913876336, 0.016198149258419323, 0.016279743532401667, 0.016399713788445076, 0.01619814925841932, 0.016279743532401664, 0.016705867034419633, 0.016156860583178303, 0.01619814925841932, 0.015945399396423914, 0.016478107276313273, 0.016593362460570887, 0.016279743532401657, 0.016668667667174196, 0.016279743532401664, 0.01627974353240166, 0.016198149258419316, 0.015945399396423907, 0.015814411436934704, 0.01639971378844507, 0.01585842321932389, 0.01611524086412918, 0.016156860583178306, 0.015988369488888748, 0.016030997960619395, 0.015588996601449462, 0.016279743532401664, 0.015902084913876333, 0.015725325827428208, 0.015945399396423917, 0.015988369488888755, 0.015814411436934704, 0.01585842321932389],
        "acc_norm": [0.26574500768049153, 0.27342549923195086, 0.26574500768049153, 0.27956989247311825, 0.27956989247311825, 0.29339477726574503, 0.29339477726574503, 0.2995391705069124, 0.2749615975422427, 0.2903225806451613, 0.2749615975422427, 0.29339477726574503, 0.29339477726574503, 0.2964669738863287, 0.29185867895545314, 0.2857142857142857, 0.282642089093702, 0.2903225806451613, 0.2903225806451613, 0.29493087557603687, 0.28417818740399386, 0.28110599078341014, 0.2964669738863287, 0.28110599078341014, 0.29185867895545314, 0.27956989247311825, 0.2626728110599078, 0.28110599078341014, 0.2764976958525346, 0.2764976958525346, 0.2780337941628264, 0.27342549923195086, 0.2672811059907834, 0.2672811059907834],
        "acc_norm_stderr": [0.017326040808935694, 0.01748247454768128, 0.017326040808935694, 0.017602909186822453, 0.017602909186822453, 0.017859032704399497, 0.017859032704399497, 0.01796644118858794, 0.01751297178222521, 0.017803862148538005, 0.017512971782225217, 0.017859032704399497, 0.0178590327043995, 0.017913222760382742, 0.01783157055397193, 0.01771924779845829, 0.017661585370360618, 0.017803862148538, 0.017803862148538005, 0.017886249734104378, 0.017690542680190765, 0.017632374626460008, 0.017913222760382742, 0.017632374626460008, 0.017831570553971932, 0.017602909186822453, 0.017261598347857544, 0.017632374626460008, 0.017543209075825204, 0.017543209075825204, 0.017573187770282717, 0.01748247454768128, 0.0173578586224101, 0.017357858622410096]
    },
    "mathqa": {
        "acc": [0.21608040201005024, 0.21708542713567838, 0.21708542713567838, 0.2150753768844221, 0.21574539363484088, 0.22144053601340033, 0.2254606365159129, 0.22110552763819097, 0.22948073701842547, 0.22278056951423786, 0.22914572864321608, 0.22646566164154103, 0.2338358458961474, 0.23115577889447236, 0.22680067001675042, 0.2271356783919598, 0.2241206030150754, 0.2234505862646566, 0.2234505862646566, 0.22244556113902847, 0.23082077051926297, 0.23182579564489111, 0.22981574539363483, 0.22914572864321608, 0.2254606365159129, 0.22814070351758794, 0.2284757118927973, 0.2288107202680067, 0.22948073701842547, 0.23886097152428812, 0.23484087102177553, 0.2324958123953099, 0.23618090452261306, 0.23283082077051925],
        "acc_stderr": [0.007534319642738904, 0.007546978526071601, 0.007546978526071604, 0.007521594451353452, 0.007530085296403079, 0.007601075507352047, 0.007649934243740963, 0.0075969575822193375, 0.00769777936094425, 0.007617475572803636, 0.007693830518376545, 0.007661989801224798, 0.007748489498007528, 0.007717420163974325, 0.007665994295006107, 0.007669991794420069, 0.007633761575437846, 0.0076256327861774775, 0.007625632786177477, 0.007613386278535906, 0.007713505756203997, 0.00772522842349705, 0.0077017212954290535, 0.007693830518376543, 0.007649934243740954, 0.007681942435552283, 0.0076859120663839145, 0.007689874757083945, 0.00769777936094425, 0.007805580078648699, 0.007760028457552943, 0.0077330093441520245, 0.0077753193787470495, 0.00773688957819094],
        "acc_norm": [0.21775544388609716, 0.21273031825795644, 0.2201005025125628, 0.21641541038525963, 0.22144053601340033, 0.22914572864321608, 0.22479061976549414, 0.22144053601340033, 0.2321608040201005, 0.22814070351758794, 0.22981574539363483, 0.22780569514237856, 0.23618090452261306, 0.2304857621440536, 0.22445561139028475, 0.22445561139028475, 0.22646566164154103, 0.223785594639866, 0.2221105527638191, 0.2284757118927973, 0.22680067001675042, 0.22948073701842547, 0.22512562814070353, 0.2204355108877722, 0.22110552763819097, 0.2254606365159129, 0.22177554438860972, 0.2254606365159129, 0.2271356783919598, 0.2355108877721943, 0.23082077051926297, 0.2288107202680067, 0.23182579564489111, 0.22747068676716917],
        "acc_norm_stderr": [0.007555381108481066, 0.007491642572152824, 0.007584560639169464, 0.007538546621546404, 0.0076010755073520515, 0.007693830518376545, 0.00764186203129024, 0.007601075507352056, 0.007729122296015981, 0.007681942435552285, 0.00770172129542905, 0.007677965853825286, 0.00777531937874705, 0.007709584482517441, 0.007637815339398026, 0.007637815339398025, 0.007661989801224808, 0.007629700728135998, 0.007609289843903929, 0.00768591206638392, 0.0076659942950061, 0.00769777936094425, 0.007645901662342707, 0.007588700159870971, 0.007596957582219341, 0.007649934243740954, 0.0076051862573707244, 0.007649934243740947, 0.007669991794420072, 0.007767687364650975, 0.00771350575620399, 0.00768987475708395, 0.007725228423497048, 0.007673982310396806]
    },
    "mc_taco": {
        "em": [0.12912912912912913, 0.1388888888888889, 0.1493993993993994, 0.11636636636636637, 0.12837837837837837, 0.12987987987987987, 0.1493993993993994, 0.1313813813813814, 0.13063063063063063, 0.12312312312312312, 0.12987987987987987, 0.11411411411411411, 0.11486486486486487, 0.12312312312312312, 0.11936936936936937, 0.11936936936936937, 0.12162162162162163, 0.11786786786786786, 0.11636636636636637, 0.11861861861861862, 0.12162162162162163, 0.12687687687687688, 0.17117117117117117, 0.15090090090090091, 0.13063063063063063, 0.1388888888888889, 0.1478978978978979, 0.14114114114114115, 0.1554054054054054, 0.12237237237237238, 0.1539039039039039, 0.15990990990990991, 0.16891891891891891, 0.1539039039039039],
        "f1": [0.4021729676444149, 0.4022397887099957, 0.37740379193628765, 0.47054069659985776, 0.46284733584753573, 0.41591149221178986, 0.3949692061289406, 0.4086179718515041, 0.4056594213517856, 0.38940661702521023, 0.39943950866019834, 0.4205400663772147, 0.42344749732706344, 0.3896984381226329, 0.4041954945176726, 0.42927400028777213, 0.4382474479710931, 0.43636761307666894, 0.4495246629559176, 0.4008632720310986, 0.4058269917796999, 0.376548661267549, 0.339709364680583, 0.38536103552491885, 0.420145230882812, 0.39474670362737724, 0.3776497674201943, 0.36598753863625705, 0.39653325268030004, 0.4290818848041062, 0.37543526244898084, 0.353530340469302, 0.3416786896638351, 0.360502391792038]
    },
    "mrpc": {
        "acc": [0.6666666666666666, 0.6617647058823529, 0.553921568627451, 0.6838235294117647, 0.5980392156862745, 0.6225490196078431, 0.38235294117647056, 0.6642156862745098, 0.6715686274509803, 0.5882352941176471, 0.6568627450980392, 0.6764705882352942, 0.6838235294117647, 0.5637254901960784, 0.6617647058823529, 0.6838235294117647, 0.6862745098039216, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6813725490196079, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647, 0.6838235294117647],
        "acc_stderr": [0.023366654574426104, 0.023451145303506664, 0.02463953717560257, 0.023048336668420204, 0.024302976642371545, 0.02402812325398081, 0.024088247338244422, 0.023409253319707175, 0.023279321215449105, 0.024395116363488303, 0.023532824020694145, 0.023189113109403536, 0.023048336668420204, 0.02458196247982223, 0.023451145303506667, 0.023048336668420204, 0.022999936277943434, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023095996571841474, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204, 0.023048336668420204],
        "f1": [0.7957957957957957, 0.7934131736526946, 0.662962962962963, 0.8122270742358079, 0.7328990228013029, 0.7450331125827814, 0.3076923076923077, 0.7946026986506746, 0.7987987987987989, 0.7113402061855671, 0.7852760736196319, 0.807017543859649, 0.8122270742358079, 0.6920415224913494, 0.7915407854984895, 0.8122270742358079, 0.8134110787172011, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8104956268221574, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079, 0.8122270742358079],
        "f1_stderr": [0.017147631300581046, 0.017229072458670926, 0.02353871767052677, 0.01624762253426993, 0.020166702517416132, 0.019918715933978474, 0.03147922057444835, 0.017207203201259926, 0.017020792687975135, 0.021339308018119365, 0.01776754583831411, 0.016499561526275235, 0.01624762253426993, 0.021920314852868432, 0.017366384073219637, 0.01624762253426993, 0.016223847184253872, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.016320294270046228, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993, 0.01624762253426993]
    },
    "multirc": {
        "acc": [0.02728226652675761, 0.016789087093389297, 0.026232948583420776, 0.023084994753410283, 0.024134312696747113, 0.025183630640083946, 0.022035676810073453, 0.017838405036726127, 0.015739769150052464, 0.022035676810073453, 0.016789087093389297, 0.011542497376705142, 0.016789087093389297, 0.01049317943336831, 0.022035676810073453, 0.015739769150052464, 0.024134312696747113, 0.026232948583420776, 0.023084994753410283, 0.02098635886673662, 0.017838405036726127, 0.025183630640083946, 0.016789087093389297, 0.023084994753410283, 0.025183630640083946, 0.022035676810073453, 0.02728226652675761, 0.02938090241343127, 0.024134312696747113, 0.02098635886673662, 0.022035676810073453, 0.01993704092339979, 0.023084994753410283, 0.017838405036726127],
        "acc_stderr": [0.0052797719723249505, 0.004164073742672125, 0.005180034087040346, 0.004867150842341557, 0.004973865274017642, 0.005078109986764367, 0.004757800511976072, 0.0042899379467109065, 0.004033997956595782, 0.004757800511976068, 0.004164073742672123, 0.0034618673209271646, 0.004164073742672121, 0.0033025125109889778, 0.004757800511976066, 0.0040339979565957845, 0.004973865274017642, 0.005180034087040334, 0.004867150842341551, 0.004645628152687091, 0.0042899379467109195, 0.005078109986764365, 0.004164073742672123, 0.004867150842341575, 0.005078109986764367, 0.004757800511976068, 0.005279771972324952, 0.005473164573473352, 0.004973865274017642, 0.004645628152687106, 0.004757800511976089, 0.0045304241507769785, 0.004867150842341557, 0.0042899379467109065]
    },
    "openbookqa": {
        "acc": [0.186, 0.192, 0.186, 0.194, 0.2, 0.182, 0.19, 0.184, 0.19, 0.208, 0.214, 0.19, 0.214, 0.216, 0.2, 0.21, 0.218, 0.212, 0.218, 0.232, 0.214, 0.214, 0.212, 0.226, 0.22, 0.22, 0.212, 0.224, 0.21, 0.214, 0.214, 0.212, 0.206, 0.22],
        "acc_stderr": [0.017418806780583943, 0.017632180454360987, 0.01741880678058395, 0.017701827855304626, 0.017906459241433848, 0.01727277329773045, 0.017561800410758985, 0.01734617478175285, 0.01756180041075898, 0.018169542221229892, 0.018359797502387035, 0.017561800410758985, 0.018359797502387025, 0.018421909061411938, 0.01790645924143384, 0.018233620865305916, 0.018483378223178866, 0.01829703700401389, 0.018483378223178866, 0.018896193591952038, 0.01835979750238703, 0.018359797502387025, 0.018297037004013885, 0.018722956449139933, 0.01854421137582033, 0.01854421137582033, 0.018297037004013885, 0.0186639944647108, 0.018233620865305916, 0.018359797502387025, 0.01835979750238703, 0.018297037004013885, 0.018104794037333564, 0.01854421137582033],
        "acc_norm": [0.296, 0.314, 0.316, 0.298, 0.318, 0.304, 0.31, 0.31, 0.322, 0.324, 0.314, 0.322, 0.314, 0.312, 0.308, 0.318, 0.336, 0.32, 0.33, 0.334, 0.328, 0.33, 0.326, 0.336, 0.328, 0.322, 0.336, 0.332, 0.328, 0.33, 0.342, 0.342, 0.338, 0.336],
        "acc_norm_stderr": [0.020435342091896135, 0.020776701920308997, 0.02081235951585586, 0.02047511809298897, 0.02084757162081401, 0.020591649571224932, 0.020704041021724795, 0.020704041021724802, 0.020916668330019886, 0.020950557312477452, 0.020776701920308997, 0.020916668330019882, 0.020776701920308997, 0.02074059653648807, 0.0206670329874661, 0.02084757162081401, 0.02114479142504885, 0.02088234048876181, 0.021049612166134792, 0.02111349234774374, 0.021017027165175492, 0.021049612166134796, 0.02098400956239357, 0.02114479142504885, 0.02101702716517549, 0.02091666833001988, 0.021144791425048846, 0.021081766571222856, 0.02101702716517549, 0.0210496121661348, 0.021236147199899268, 0.02123614719989926, 0.02117566569520941, 0.021144791425048843]
    },
    "piqa": {
        "acc": [0.6681175190424374, 0.676278563656148, 0.6735582154515778, 0.6882480957562568, 0.6964091403699674, 0.6991294885745375, 0.6958650707290533, 0.6926006528835691, 0.7034820457018498, 0.7121871599564744, 0.7105549510337323, 0.6996735582154516, 0.705114254624592, 0.7116430903155604, 0.7176278563656148, 0.719804134929271, 0.7083786724700761, 0.7094668117519043, 0.7149075081610446, 0.7219804134929271, 0.7225244831338411, 0.7181719260065288, 0.7241566920565833, 0.7279651795429815, 0.7181719260065288, 0.7252448313384113, 0.7285092491838956, 0.7247007616974973, 0.7236126224156693, 0.7252448313384113, 0.7257889009793254, 0.7312295973884657, 0.7306855277475517, 0.7323177366702938],
        "acc_stderr": [0.010986617776361595, 0.010916765010708778, 0.010940467046177302, 0.010807431424873674, 0.010728079893076354, 0.010700745724145973, 0.010733493335721319, 0.01076560250693907, 0.01065607892266115, 0.01056325038305919, 0.0105810147406756, 0.010695225308183136, 0.010639030620157003, 0.010569190399220644, 0.010502821668555377, 0.010478122015577086, 0.010604441527428789, 0.010592765034696538, 0.010533270588738937, 0.010453117358332814, 0.01044681828103995, 0.01049667523125817, 0.010427805502729115, 0.010382763786247383, 0.010496675231258159, 0.010415033676676039, 0.010376251176596135, 0.01042142927736953, 0.010434162388275615, 0.010415033676676037, 0.010408618664933382, 0.010343392940090011, 0.01035000407058876, 0.010330111189370429],
        "acc_norm": [0.6692056583242655, 0.6800870511425462, 0.6866158868335147, 0.690968443960827, 0.6953210010881393, 0.705114254624592, 0.6969532100108814, 0.6893362350380848, 0.6964091403699674, 0.70620239390642, 0.7089227421109902, 0.7127312295973884, 0.7132752992383025, 0.7067464635473341, 0.7154515778019587, 0.719804134929271, 0.7083786724700761, 0.7159956474428727, 0.7143634385201306, 0.7187159956474428, 0.7159956474428727, 0.7149075081610446, 0.7268770402611534, 0.7257889009793254, 0.7187159956474428, 0.7225244831338411, 0.721436343852013, 0.7295973884657236, 0.73449401523395, 0.7290533188248096, 0.7290533188248096, 0.7312295973884657, 0.7323177366702938, 0.7301414581066377],
        "acc_norm_stderr": [0.010977520584714432, 0.010882873582092063, 0.010822829929195475, 0.010781419464406979, 0.010738889044325161, 0.010639030620156982, 0.010722648689531501, 0.010797078933727673, 0.01072807989307637, 0.010627574080514821, 0.010598612490942613, 0.010557291761528633, 0.010551314503108084, 0.010621818421101931, 0.010527218464130626, 0.010478122015577091, 0.010604441527428794, 0.01052114754245421, 0.01053930394866191, 0.010490509832327423, 0.010521147542454206, 0.010533270588738944, 0.01039573026445326, 0.010408618664933384, 0.010490509832327423, 0.010446818281039943, 0.01045939723596515, 0.010363167031620778, 0.01030330865302443, 0.010369718937426846, 0.010369718937426846, 0.01034339294009, 0.010330111189370422, 0.010356595421852195]
    },
    "prost": {
        "acc": [0.2493061485909479, 0.2504269854824936, 0.2487724167378309, 0.23825789923142612, 0.2410866780529462, 0.254803586678053, 0.22213919726729292, 0.2420473953885568, 0.23687019641332194, 0.2538428693424424, 0.2568317677198975, 0.25491033304867633, 0.24338172502134928, 0.21776259607173357, 0.22833048676345005, 0.23030529461998292, 0.25250853970964987, 0.23921861656703672, 0.2432216054654142, 0.25464346712211783, 0.25453672075149447, 0.24295473953885569, 0.2432216054654142, 0.2475982066609735, 0.24642399658411615, 0.26473099914602904, 0.24263450042698548, 0.24423569598633646, 0.2409265584970111, 0.25816609735269, 0.25117421007685736, 0.2576857386848847, 0.24914602903501282, 0.24343509820666098],
        "acc_stderr": [0.00316061120513981, 0.0031653423305601216, 0.0031583483352019054, 0.003112438544855754, 0.0031250419092430427, 0.0031835472332089883, 0.003036943372805099, 0.0031292797011103143, 0.003106186793355417, 0.0031795875093253087, 0.0031918398325104934, 0.003183985943444664, 0.0031351299519621185, 0.003015324686271857, 0.003066696332961817, 0.0030759860532235048, 0.003174053949219311, 0.0031167400155043606, 0.003134430099234369, 0.0031828886961875777, 0.0031824493569084624, 0.0031332623600737837, 0.0031344300992343687, 0.0031533473322617645, 0.0031483150100985297, 0.0032232847900636728, 0.003131858896197636, 0.0031388525013045987, 0.003124333518746473, 0.003197246309267525, 0.0031684807322240834, 0.0031953044576644046, 0.0031599330195551533, 0.003135363104499404],
        "acc_norm": [0.328298462852263, 0.32557643040136636, 0.3356639624252775, 0.3315542271562767, 0.32392186165670367, 0.32872544833475664, 0.305935098206661, 0.3111656703672075, 0.3116994022203245, 0.3066289496157131, 0.29147096498719044, 0.3125533731853117, 0.3050811272416738, 0.29638129803586677, 0.2951537147736977, 0.2982493595217763, 0.28992314261315116, 0.30721605465414176, 0.29019000853970967, 0.3042805294619983, 0.30433390264731, 0.3023590947907771, 0.2959543125533732, 0.28746797608881297, 0.30187873612297184, 0.29163108454312553, 0.283198121263877, 0.29072374039282667, 0.28133005977796754, 0.29051024765157984, 0.304867634500427, 0.3012916310845431, 0.29803586678052946, 0.2931789069171648],
        "acc_norm_stderr": [0.003430802730181418, 0.003423465847311869, 0.003450002546997551, 0.0034394066494682273, 0.0034189419545341843, 0.003431941734648863, 0.0033665715177206906, 0.003382411025820202, 0.003383998869984893, 0.003368701899628916, 0.00332008824256844, 0.003386528533102034, 0.0033639371705738324, 0.00333631644639685, 0.0033323030120676355, 0.0033423684254697845, 0.003314875885238456, 0.0033704975238698504, 0.003315777903539071, 0.003361455078233852, 0.0033616209246963803, 0.0033554489667174123, 0.0033349237526995677, 0.0033065118306722747, 0.0033539365778799115, 0.0033206247870422095, 0.003291682228120563, 0.0033175777669730644, 0.0032850800782541142, 0.0033168584889086148, 0.0033632764530011133, 0.0033520821863254496, 0.0033416801465602436, 0.003325785707384978]
    },
    "pubmedqa": {
        "acc": [0.549, 0.551, 0.553, 0.543, 0.554, 0.551, 0.54, 0.566, 0.532, 0.547, 0.553, 0.553, 0.554, 0.554, 0.551, 0.553, 0.518, 0.569, 0.561, 0.554, 0.571, 0.567, 0.556, 0.554, 0.557, 0.56, 0.567, 0.551, 0.592, 0.568, 0.584, 0.577, 0.572, 0.573],
        "acc_stderr": [0.01574315237958553, 0.01573679276875201, 0.015730176046009084, 0.015760691590136388, 0.015726771166750354, 0.015736792768752013, 0.015768596914394372, 0.015680876566375058, 0.015786868759359023, 0.01574925518997758, 0.015730176046009084, 0.015730176046009084, 0.015726771166750357, 0.015726771166750357, 0.015736792768752016, 0.015730176046009074, 0.015809045699406728, 0.015667944488173498, 0.015701131345400767, 0.015726771166750357, 0.01565899754787024, 0.015676630912181334, 0.01571976816340209, 0.015726771166750357, 0.015716169953204105, 0.01570498795436179, 0.015676630912181334, 0.01573679276875202, 0.015549205052920676, 0.015672320237336206, 0.015594460144140603, 0.015630589090476342, 0.015654426245029267, 0.01564978964446221]
    },
    "qnli": {
        "acc": [0.4946000366099213, 0.49130514369394107, 0.49313563975837454, 0.49368478857770454, 0.4995423759838916, 0.49569833424858134, 0.49441698700347797, 0.49807797913234486, 0.4925864909390445, 0.4938678381841479, 0.4865458539264141, 0.49203734211971445, 0.48215266337177376, 0.48416620904265056, 0.4706205381658429, 0.4935017389712612, 0.4962474830679114, 0.4883763499908475, 0.4933186893648179, 0.48416620904265056, 0.4953322350356947, 0.4918542925132711, 0.4805052169137836, 0.4850814570748673, 0.4914881933003844, 0.48288486179754714, 0.4805052169137836, 0.49313563975837454, 0.4894746476295076, 0.4946000366099213, 0.4962474830679114, 0.47537982793336997, 0.4876441515650741, 0.47611202635914335],
        "acc_stderr": [0.006765015986877456, 0.006764387537235329, 0.006764772956998407, 0.006764870895462486, 0.006765407718154766, 0.006765160168388145, 0.006764988782474208, 0.006765360566516982, 0.006764666855395084, 0.00676490172764847, 0.00676296083958267, 0.006764552590269392, 0.006761099240467566, 0.006762017403107074, 0.006753721287612181, 0.006764839156300604, 0.006765220016415222, 0.006763582165762024, 0.006764806510150307, 0.006762017403107078, 0.006765115735419823, 0.006764512687707302, 0.0067602662538435235, 0.006762398422143383, 0.006764430161206517, 0.00676144583429495, 0.0067602662538435235, 0.006764772956998408, 0.006763911400147894, 0.006765015986877456, 0.006765220016415222, 0.006757203828148094, 0.006763344526576797, 0.006757684976820108]
    },
    "qqp": {
        "acc": [0.3689586940390799, 0.3707395498392283, 0.37373237694781103, 0.36883502349740294, 0.3706900816225575, 0.3874350729656196, 0.4314370516942864, 0.37447440019787287, 0.4231511254019293, 0.5152114766262677, 0.3971061093247588, 0.37291615137274303, 0.3710116250309176, 0.38560474894880037, 0.39426168686618845, 0.3685134800890428, 0.37229779866435814, 0.3689586940390799, 0.36816720257234725, 0.37264407618105366, 0.3716299777393025, 0.3771209497897601, 0.40591145189215927, 0.3950531783329211, 0.3763047242146921, 0.3961167449913431, 0.38852337373237694, 0.4348008904279001, 0.41214444719267873, 0.37506801879792234, 0.375859510264655, 0.4701953994558496, 0.38933959930744494, 0.39581993569131835],
        "acc_stderr": [0.0023997791094649353, 0.0024021668964538355, 0.0024061009348923077, 0.0023996119887763337, 0.002402101042054807, 0.0024228639636974035, 0.0024632103306330196, 0.0024070610826455647, 0.002457153428253151, 0.002485549574839818, 0.0024334768895015566, 0.0024050377892805078, 0.002402528613044342, 0.002420742596818517, 0.002430459060708425, 0.0023991766825629196, 0.0024042274998397057, 0.0023997791094649353, 0.002398706610614498, 0.002404681780107917, 0.0024033476604236013, 0.002410436482711466, 0.0024422760062499348, 0.002431307445812769, 0.0024094036442049794, 0.002432436966054659, 0.00242410823184199, 0.0024654684380438145, 0.002448011982492277, 0.002407824852792694, 0.0024088372076944186, 0.0024822787571501504, 0.0024250330861270287, 0.0024321229611206923],
        "f1": [0.5381510110244202, 0.5373943085735067, 0.5340277522176009, 0.5371807893209518, 0.5338829348722177, 0.4918126975007181, 0.47788856837849497, 0.530937013131538, 0.48814853831972604, 0.32245575221238937, 0.5310064841359937, 0.5318956444674212, 0.5373335273997526, 0.5208518189884649, 0.5240223898002021, 0.5383099151883398, 0.5369991972560753, 0.5379335325545594, 0.5381903642773208, 0.5390877703071052, 0.5386193995968255, 0.5376806006866038, 0.5229308598327607, 0.5327270643078216, 0.5346227668684482, 0.5334327046188537, 0.5335999698147379, 0.5161249338274219, 0.5257507732215903, 0.5393618960802188, 0.5389534458817511, 0.4056933577492925, 0.5371651388185891, 0.5300240500240501],
        "f1_stderr": [0.0025577823728247986, 0.002563280778519078, 0.0025839608679808037, 0.0025608581105371125, 0.0025785682687840157, 0.0027782843904196664, 0.002938261815444539, 0.002594018333054568, 0.0028704450198037677, 0.003560844348119353, 0.002639996507059888, 0.002590826221790599, 0.0025636708058552485, 0.002658744427925077, 0.002665736396626755, 0.002555361722256689, 0.0025680352075939613, 0.002558651400570049, 0.002555265048161791, 0.0025602698460986846, 0.0025592147389062883, 0.0025739993062683804, 0.0026914961106665495, 0.002632044832508472, 0.0025853740078140013, 0.0026315499753008817, 0.0026137422838319498, 0.002793800310089035, 0.0027021839440523185, 0.0025657758651322906, 0.0025666135054784717, 0.0032483176858197032, 0.002603509335340955, 0.00265136623076688]
    },
    "race": {
        "acc": [0.291866028708134, 0.2937799043062201, 0.2966507177033493, 0.2985645933014354, 0.29952153110047847, 0.3062200956937799, 0.3090909090909091, 0.31004784688995213, 0.31100478468899523, 0.3062200956937799, 0.2976076555023923, 0.29569377990430623, 0.3119617224880383, 0.31483253588516746, 0.30239234449760766, 0.3090909090909091, 0.3167464114832536, 0.30526315789473685, 0.31770334928229665, 0.30813397129186604, 0.3282296650717703, 0.30526315789473685, 0.31483253588516746, 0.32727272727272727, 0.31004784688995213, 0.33014354066985646, 0.32057416267942584, 0.3320574162679426, 0.3339712918660287, 0.33588516746411484, 0.3282296650717703, 0.3349282296650718, 0.33588516746411484, 0.33014354066985646],
        "acc_stderr": [0.014070166598769293, 0.01409713403021856, 0.014137023394252782, 0.014163244242725774, 0.01417624366981322, 0.014265186459328803, 0.014302215587018911, 0.01431441479114949, 0.014326542383166063, 0.014265186459328807, 0.014150170885906206, 0.01412380156073491, 0.01433859854477742, 0.014374340239175165, 0.014214800395178306, 0.014302215587018916, 0.014397814139910625, 0.014252698955501603, 0.014409445442050079, 0.014289944587370715, 0.014532792620129664, 0.014252698955501603, 0.014374340239175163, 0.014521924541567924, 0.014314414791149494, 0.014554323633246916, 0.014443918794282801, 0.01457558212954591, 0.01459656929970973, 0.014617286312430693, 0.014532792620129664, 0.014606961503556257, 0.014617286312430684, 0.014554323633246916]
    },
    "rte": {
        "acc": [0.5306859205776173, 0.5379061371841155, 0.5487364620938628, 0.5379061371841155, 0.5379061371841155, 0.5306859205776173, 0.5415162454873647, 0.5342960288808665, 0.5740072202166066, 0.4981949458483754, 0.5415162454873647, 0.5126353790613718, 0.5306859205776173, 0.5306859205776173, 0.5306859205776173, 0.5306859205776173, 0.5270758122743683, 0.5018050541516246, 0.5090252707581228, 0.555956678700361, 0.48375451263537905, 0.5342960288808665, 0.51985559566787, 0.4981949458483754, 0.5270758122743683, 0.5270758122743683, 0.5054151624548736, 0.516245487364621, 0.516245487364621, 0.5306859205776173, 0.51985559566787, 0.5306859205776173, 0.4981949458483754, 0.5018050541516246],
        "acc_stderr": [0.030039730592197812, 0.030009848912529113, 0.029953149241808946, 0.030009848912529117, 0.030009848912529113, 0.03003973059219781, 0.029992535385373314, 0.030025579819366422, 0.02976495674177765, 0.030096267148976633, 0.029992535385373314, 0.030086851767188564, 0.03003973059219781, 0.030039730592197812, 0.03003973059219781, 0.030039730592197812, 0.030052303463143706, 0.030096267148976626, 0.030091559826331334, 0.029907396333795987, 0.030080573208738064, 0.030025579819366426, 0.030072723167317184, 0.030096267148976633, 0.030052303463143706, 0.030052303463143706, 0.030094698123239966, 0.030080573208738064, 0.030080573208738064, 0.030039730592197812, 0.030072723167317184, 0.030039730592197812, 0.030096267148976633, 0.030096267148976626]
    },
    "sciq": {
        "acc": [0.752, 0.765, 0.761, 0.773, 0.767, 0.768, 0.771, 0.771, 0.789, 0.777, 0.773, 0.79, 0.794, 0.793, 0.803, 0.795, 0.799, 0.806, 0.802, 0.798, 0.791, 0.813, 0.817, 0.822, 0.808, 0.817, 0.814, 0.817, 0.825, 0.825, 0.826, 0.817, 0.812, 0.825],
        "acc_stderr": [0.013663187134877654, 0.013414729030247123, 0.01349300044693759, 0.013253174964763921, 0.013374972519220074, 0.013354937452281564, 0.0132941993266136, 0.013294199326613606, 0.01290913032104209, 0.013169830843425694, 0.013253174964763902, 0.012886662332274545, 0.01279561361278655, 0.012818553557843991, 0.012583693787968118, 0.012772554096113116, 0.012679107214617326, 0.012510816141264357, 0.01260773393417531, 0.012702651587655133, 0.012864077288499339, 0.012336254828074133, 0.012233587399477821, 0.01210216767618359, 0.012461592646659983, 0.012233587399477823, 0.012310790208412789, 0.01223358739947782, 0.012021627157731975, 0.012021627157731975, 0.011994493230973426, 0.012233587399477825, 0.012361586015103756, 0.012021627157731975],
        "acc_norm": [0.656, 0.674, 0.664, 0.679, 0.678, 0.689, 0.684, 0.682, 0.702, 0.692, 0.694, 0.692, 0.706, 0.707, 0.706, 0.712, 0.717, 0.74, 0.717, 0.716, 0.717, 0.72, 0.73, 0.724, 0.707, 0.729, 0.738, 0.73, 0.757, 0.746, 0.747, 0.747, 0.74, 0.747],
        "acc_norm_stderr": [0.015029633724408943, 0.014830507204541049, 0.014944140233795027, 0.014770821817934644, 0.014782913600996655, 0.014645596385722695, 0.014709193056057104, 0.0147340793093119, 0.01447084674113472, 0.014606483127342763, 0.014580006055436967, 0.014606483127342763, 0.014414290540008208, 0.014399942998441275, 0.01441429054000821, 0.014326941797231561, 0.014251810906481737, 0.013877773329774166, 0.014251810906481735, 0.014267009061031313, 0.014251810906481742, 0.014205696104091493, 0.014046255632633913, 0.014142984975740668, 0.014399942998441268, 0.014062601350986186, 0.01391220865102135, 0.014046255632633915, 0.013569640199177446, 0.01377220656516854,
| 2559 |
+
0.01375427861358708,
|
| 2560 |
+
0.01375427861358708,
|
| 2561 |
+
0.013877773329774166,
|
| 2562 |
+
0.01375427861358708
|
| 2563 |
+
]
|
| 2564 |
+
},
|
| 2565 |
+
"sst": {
|
| 2566 |
+
"acc": [
|
| 2567 |
+
0.5814220183486238,
|
| 2568 |
+
0.7098623853211009,
|
| 2569 |
+
0.5298165137614679,
|
| 2570 |
+
0.6559633027522935,
|
| 2571 |
+
0.518348623853211,
|
| 2572 |
+
0.5711009174311926,
|
| 2573 |
+
0.555045871559633,
|
| 2574 |
+
0.5263761467889908,
|
| 2575 |
+
0.6754587155963303,
|
| 2576 |
+
0.6444954128440367,
|
| 2577 |
+
0.6892201834862385,
|
| 2578 |
+
0.5149082568807339,
|
| 2579 |
+
0.5080275229357798,
|
| 2580 |
+
0.6112385321100917,
|
| 2581 |
+
0.5263761467889908,
|
| 2582 |
+
0.551605504587156,
|
| 2583 |
+
0.6788990825688074,
|
| 2584 |
+
0.5103211009174312,
|
| 2585 |
+
0.5217889908256881,
|
| 2586 |
+
0.6662844036697247,
|
| 2587 |
+
0.6788990825688074,
|
| 2588 |
+
0.6181192660550459,
|
| 2589 |
+
0.6938073394495413,
|
| 2590 |
+
0.5080275229357798,
|
| 2591 |
+
0.533256880733945,
|
| 2592 |
+
0.6972477064220184,
|
| 2593 |
+
0.7247706422018348,
|
| 2594 |
+
0.588302752293578,
|
| 2595 |
+
0.6112385321100917,
|
| 2596 |
+
0.6330275229357798,
|
| 2597 |
+
0.5126146788990825,
|
| 2598 |
+
0.661697247706422,
|
| 2599 |
+
0.6295871559633027,
|
| 2600 |
+
0.6754587155963303
|
| 2601 |
+
],
|
| 2602 |
+
"acc_stderr": [
|
| 2603 |
+
0.016715710826534457,
|
| 2604 |
+
0.015377297714201989,
|
| 2605 |
+
0.01691170341531885,
|
| 2606 |
+
0.01609656024306282,
|
| 2607 |
+
0.01693044215061337,
|
| 2608 |
+
0.016769685197040893,
|
| 2609 |
+
0.016838871437903056,
|
| 2610 |
+
0.016918264333564144,
|
| 2611 |
+
0.015864460317721044,
|
| 2612 |
+
0.01621897641479828,
|
| 2613 |
+
0.015681814742502808,
|
| 2614 |
+
0.0169343211533256,
|
| 2615 |
+
0.016939670044361786,
|
| 2616 |
+
0.016517255666657737,
|
| 2617 |
+
0.016918264333564144,
|
| 2618 |
+
0.016851375435599603,
|
| 2619 |
+
0.01582028513171376,
|
| 2620 |
+
0.016938243838576613,
|
| 2621 |
+
0.016925759411718252,
|
| 2622 |
+
0.015977506328949537,
|
| 2623 |
+
0.01582028513171376,
|
| 2624 |
+
0.016462316115268005,
|
| 2625 |
+
0.015617364822952463,
|
| 2626 |
+
0.016939670044361782,
|
| 2627 |
+
0.01690433608610159,
|
| 2628 |
+
0.015567833948853487,
|
| 2629 |
+
0.01513347269702534,
|
| 2630 |
+
0.016675556815472843,
|
| 2631 |
+
0.016517255666657737,
|
| 2632 |
+
0.016331232646350478,
|
| 2633 |
+
0.016936460912455,
|
| 2634 |
+
0.016031470201950025,
|
| 2635 |
+
0.01636296008359423,
|
| 2636 |
+
0.01586446031772106
|
| 2637 |
+
]
|
| 2638 |
+
},
|
| 2639 |
+
"triviaqa": {
|
| 2640 |
+
"acc": [
|
| 2641 |
+
0.010607265977194379,
|
| 2642 |
+
0.01608768673207814,
|
| 2643 |
+
0.014319809069212411,
|
| 2644 |
+
0.013524264120922832,
|
| 2645 |
+
0.015910898965791568,
|
| 2646 |
+
0.017767170511800583,
|
| 2647 |
+
0.01918147264209317,
|
| 2648 |
+
0.01104923539291081,
|
| 2649 |
+
0.02015380535666932,
|
| 2650 |
+
0.02112613807124547,
|
| 2651 |
+
0.01582250508264828,
|
| 2652 |
+
0.021921683019535048,
|
| 2653 |
+
0.023689560682400777,
|
| 2654 |
+
0.02890479978785468,
|
| 2655 |
+
0.024485105630690358,
|
| 2656 |
+
0.022805621850967912,
|
| 2657 |
+
0.024043136214973924,
|
| 2658 |
+
0.021037744188102184,
|
| 2659 |
+
0.02139131972067533,
|
| 2660 |
+
0.024750287280120215,
|
| 2661 |
+
0.027313709891275524,
|
| 2662 |
+
0.022805621850967912,
|
| 2663 |
+
0.027048528241845664,
|
| 2664 |
+
0.027048528241845664,
|
| 2665 |
+
0.026341377176699373,
|
| 2666 |
+
0.023689560682400777,
|
| 2667 |
+
0.028639618138424822,
|
| 2668 |
+
0.028639618138424822,
|
| 2669 |
+
0.03261734287987271,
|
| 2670 |
+
0.02970034473614426,
|
| 2671 |
+
0.030937859100150268,
|
| 2672 |
+
0.03146822239900999,
|
| 2673 |
+
0.02916998143728454,
|
| 2674 |
+
0.02740210377441881
|
| 2675 |
+
],
|
| 2676 |
+
"acc_stderr": [
|
| 2677 |
+
0.0009631998128991687,
|
| 2678 |
+
0.001182919796828757,
|
| 2679 |
+
0.0011170353826515254,
|
| 2680 |
+
0.001086001255568268,
|
| 2681 |
+
0.001176507965063248,
|
| 2682 |
+
0.0012420716800281026,
|
| 2683 |
+
0.0012896314201776976,
|
| 2684 |
+
0.0009828420973063668,
|
| 2685 |
+
0.0013212584775471477,
|
| 2686 |
+
0.0013520841592435343,
|
| 2687 |
+
0.001173288026337696,
|
| 2688 |
+
0.0013767467634740556,
|
| 2689 |
+
0.0014298904703392034,
|
| 2690 |
+
0.0015752380305831285,
|
| 2691 |
+
0.0014531091754911747,
|
| 2692 |
+
0.0014035947693080207,
|
| 2693 |
+
0.0014402609030575888,
|
| 2694 |
+
0.0013493134847357554,
|
| 2695 |
+
0.0013603592781843991,
|
| 2696 |
+
0.001460758221854218,
|
| 2697 |
+
0.0015325231556834482,
|
| 2698 |
+
0.0014035947693080207,
|
| 2699 |
+
0.001525273451547976,
|
| 2700 |
+
0.0015252734515479667,
|
| 2701 |
+
0.001505750088713862,
|
| 2702 |
+
0.0014298904703392223,
|
| 2703 |
+
0.0015682095939512912,
|
| 2704 |
+
0.001568209593951297,
|
| 2705 |
+
0.0016701433163813651,
|
| 2706 |
+
0.0015961142885210066,
|
| 2707 |
+
0.001627988166902511,
|
| 2708 |
+
0.0016414336956661968,
|
| 2709 |
+
0.0015822313175962376,
|
| 2710 |
+
0.001534931214542274
|
| 2711 |
+
]
|
| 2712 |
+
},
|
| 2713 |
+
"webqs": {
|
| 2714 |
+
"acc": [
|
| 2715 |
+
0.0,
|
| 2716 |
+
0.006889763779527559,
|
| 2717 |
+
0.007874015748031496,
|
| 2718 |
+
0.003937007874015748,
|
| 2719 |
+
0.004921259842519685,
|
| 2720 |
+
0.008858267716535433,
|
| 2721 |
+
0.00984251968503937,
|
| 2722 |
+
0.0024606299212598425,
|
| 2723 |
+
0.0034448818897637795,
|
| 2724 |
+
0.008366141732283465,
|
| 2725 |
+
0.0014763779527559055,
|
| 2726 |
+
0.008858267716535433,
|
| 2727 |
+
0.009350393700787402,
|
| 2728 |
+
0.009350393700787402,
|
| 2729 |
+
0.004921259842519685,
|
| 2730 |
+
0.0024606299212598425,
|
| 2731 |
+
0.0063976377952755905,
|
| 2732 |
+
0.0024606299212598425,
|
| 2733 |
+
0.001968503937007874,
|
| 2734 |
+
0.004921259842519685,
|
| 2735 |
+
0.003937007874015748,
|
| 2736 |
+
0.004921259842519685,
|
| 2737 |
+
0.009350393700787402,
|
| 2738 |
+
0.003937007874015748,
|
| 2739 |
+
0.009350393700787402,
|
| 2740 |
+
0.005905511811023622,
|
| 2741 |
+
0.0063976377952755905,
|
| 2742 |
+
0.011811023622047244,
|
| 2743 |
+
0.00984251968503937,
|
| 2744 |
+
0.012303149606299213,
|
| 2745 |
+
0.008858267716535433,
|
| 2746 |
+
0.012795275590551181,
|
| 2747 |
+
0.01033464566929134,
|
| 2748 |
+
0.011811023622047244
|
| 2749 |
+
],
|
| 2750 |
+
"acc_stderr": [
|
| 2751 |
+
0.0,
|
| 2752 |
+
0.0018354642646372231,
|
| 2753 |
+
0.001961221248568131,
|
| 2754 |
+
0.0013895416930409105,
|
| 2755 |
+
0.00155278708527343,
|
| 2756 |
+
0.00207915717045096,
|
| 2757 |
+
0.0021905356257242614,
|
| 2758 |
+
0.0010993429893341362,
|
| 2759 |
+
0.0013001182915028248,
|
| 2760 |
+
0.00202107914449692,
|
| 2761 |
+
0.0008519674166442085,
|
| 2762 |
+
0.002079157170450959,
|
| 2763 |
+
0.0021356005429823527,
|
| 2764 |
+
0.002135600542982353,
|
| 2765 |
+
0.0015527870852734501,
|
| 2766 |
+
0.0010993429893341488,
|
| 2767 |
+
0.0017691357975492758,
|
| 2768 |
+
0.0010993429893341395,
|
| 2769 |
+
0.0009835247781804428,
|
| 2770 |
+
0.0015527870852734482,
|
| 2771 |
+
0.0013895416930409096,
|
| 2772 |
+
0.0015527870852734614,
|
| 2773 |
+
0.002135600542982358,
|
| 2774 |
+
0.0013895416930409094,
|
| 2775 |
+
0.002135600542982355,
|
| 2776 |
+
0.001700151576246189,
|
| 2777 |
+
0.0017691357975492708,
|
| 2778 |
+
0.0023972250639872437,
|
| 2779 |
+
0.0021905356257242545,
|
| 2780 |
+
0.002446048282219444,
|
| 2781 |
+
0.002079157170450964,
|
| 2782 |
+
0.0024938680596856277,
|
| 2783 |
+
0.0022440731905576695,
|
| 2784 |
+
0.0023972250639872545
|
| 2785 |
+
]
|
| 2786 |
+
},
|
| 2787 |
+
"wic": {
|
| 2788 |
+
"acc": [
|
| 2789 |
+
0.48119122257053293,
|
| 2790 |
+
0.5047021943573667,
|
| 2791 |
+
0.46865203761755486,
|
| 2792 |
+
0.4952978056426332,
|
| 2793 |
+
0.5,
|
| 2794 |
+
0.4843260188087774,
|
| 2795 |
+
0.4608150470219436,
|
| 2796 |
+
0.48746081504702193,
|
| 2797 |
+
0.49686520376175547,
|
| 2798 |
+
0.47648902821316613,
|
| 2799 |
+
0.5015673981191222,
|
| 2800 |
+
0.49843260188087773,
|
| 2801 |
+
0.48746081504702193,
|
| 2802 |
+
0.5015673981191222,
|
| 2803 |
+
0.48589341692789967,
|
| 2804 |
+
0.5,
|
| 2805 |
+
0.4890282131661442,
|
| 2806 |
+
0.5015673981191222,
|
| 2807 |
+
0.5,
|
| 2808 |
+
0.493730407523511,
|
| 2809 |
+
0.5,
|
| 2810 |
+
0.49059561128526646,
|
| 2811 |
+
0.4843260188087774,
|
| 2812 |
+
0.5,
|
| 2813 |
+
0.49843260188087773,
|
| 2814 |
+
0.5031347962382445,
|
| 2815 |
+
0.4952978056426332,
|
| 2816 |
+
0.4702194357366771,
|
| 2817 |
+
0.49843260188087773,
|
| 2818 |
+
0.5,
|
| 2819 |
+
0.5,
|
| 2820 |
+
0.49686520376175547,
|
| 2821 |
+
0.49216300940438873,
|
| 2822 |
+
0.4952978056426332
|
| 2823 |
+
],
|
| 2824 |
+
"acc_stderr": [
|
| 2825 |
+
0.019796699449453867,
|
| 2826 |
+
0.01980984521925977,
|
| 2827 |
+
0.019771747172942295,
|
| 2828 |
+
0.01980984521925977,
|
| 2829 |
+
0.01981072129375818,
|
| 2830 |
+
0.019800984955347854,
|
| 2831 |
+
0.01974979043110035,
|
| 2832 |
+
0.01980449058859259,
|
| 2833 |
+
0.01981033193209755,
|
| 2834 |
+
0.019788807795837516,
|
| 2835 |
+
0.019810623954060382,
|
| 2836 |
+
0.019810623954060382,
|
| 2837 |
+
0.01980449058859259,
|
| 2838 |
+
0.019810623954060382,
|
| 2839 |
+
0.01980283522800584,
|
| 2840 |
+
0.01981072129375818,
|
| 2841 |
+
0.01980595108597941,
|
| 2842 |
+
0.019810623954060382,
|
| 2843 |
+
0.01981072129375818,
|
| 2844 |
+
0.019809163801196513,
|
| 2845 |
+
0.01981072129375818,
|
| 2846 |
+
0.0198072167632715,
|
| 2847 |
+
0.01980098495534785,
|
| 2848 |
+
0.01981072129375818,
|
| 2849 |
+
0.019810623954060382,
|
| 2850 |
+
0.019810331932097542,
|
| 2851 |
+
0.01980984521925977,
|
| 2852 |
+
0.019775550529171206,
|
| 2853 |
+
0.019810623954060382,
|
| 2854 |
+
0.01981072129375818,
|
| 2855 |
+
0.01981072129375818,
|
| 2856 |
+
0.01981033193209754,
|
| 2857 |
+
0.01980828765781383,
|
| 2858 |
+
0.01980984521925977
|
| 2859 |
+
]
|
| 2860 |
+
},
|
| 2861 |
+
"winogrande": {
|
| 2862 |
+
"acc": [
|
| 2863 |
+
0.4996053670086819,
|
| 2864 |
+
0.5138121546961326,
|
| 2865 |
+
0.5082872928176796,
|
| 2866 |
+
0.5098658247829518,
|
| 2867 |
+
0.510655090765588,
|
| 2868 |
+
0.5090765588003157,
|
| 2869 |
+
0.5248618784530387,
|
| 2870 |
+
0.5280189423835833,
|
| 2871 |
+
0.5288082083662194,
|
| 2872 |
+
0.5445935280189423,
|
| 2873 |
+
0.5469613259668509,
|
| 2874 |
+
0.5327545382794001,
|
| 2875 |
+
0.5406471981057617,
|
| 2876 |
+
0.5461720599842147,
|
| 2877 |
+
0.5359116022099447,
|
| 2878 |
+
0.5469613259668509,
|
| 2879 |
+
0.5422257300710339,
|
| 2880 |
+
0.5461720599842147,
|
| 2881 |
+
0.5493291239147593,
|
| 2882 |
+
0.5603788476716653,
|
| 2883 |
+
0.55327545382794,
|
| 2884 |
+
0.5509076558800315,
|
| 2885 |
+
0.5595895816890292,
|
| 2886 |
+
0.5477505919494869,
|
| 2887 |
+
0.5485398579321231,
|
| 2888 |
+
0.5548539857932123,
|
| 2889 |
+
0.5627466456195738,
|
| 2890 |
+
0.5572217837411207,
|
| 2891 |
+
0.5706393054459353,
|
| 2892 |
+
0.5627466456195738,
|
| 2893 |
+
0.56353591160221,
|
| 2894 |
+
0.56353591160221,
|
| 2895 |
+
0.55327545382794,
|
| 2896 |
+
0.5643251775848461
|
| 2897 |
+
],
|
| 2898 |
+
"acc_stderr": [
|
| 2899 |
+
0.014052481306049516,
|
| 2900 |
+
0.014047122916440415,
|
| 2901 |
+
0.014050555322824189,
|
| 2902 |
+
0.014049749833367596,
|
| 2903 |
+
0.014049294536290403,
|
| 2904 |
+
0.014050170094497704,
|
| 2905 |
+
0.01403510288362775,
|
| 2906 |
+
0.014030404213405786,
|
| 2907 |
+
0.014029141615909622,
|
| 2908 |
+
0.013996485037729794,
|
| 2909 |
+
0.013990366632148104,
|
| 2910 |
+
0.014022300570434134,
|
| 2911 |
+
0.014005973823825131,
|
| 2912 |
+
0.013992441563707074,
|
| 2913 |
+
0.01401619343395831,
|
| 2914 |
+
0.0139903666321481,
|
| 2915 |
+
0.01400228450442244,
|
| 2916 |
+
0.013992441563707068,
|
| 2917 |
+
0.01398392886904024,
|
| 2918 |
+
0.013949649776015692,
|
| 2919 |
+
0.0139724883716167,
|
| 2920 |
+
0.013979459389140844,
|
| 2921 |
+
0.013952330311915603,
|
| 2922 |
+
0.013988256216606012,
|
| 2923 |
+
0.01398611030101776,
|
| 2924 |
+
0.013967662954355486,
|
| 2925 |
+
0.01394139331069592,
|
| 2926 |
+
0.013960157350784985,
|
| 2927 |
+
0.013911537499969165,
|
| 2928 |
+
0.013941393310695922,
|
| 2929 |
+
0.013938569465677024,
|
| 2930 |
+
0.013938569465677028,
|
| 2931 |
+
0.013972488371616692,
|
| 2932 |
+
0.013935709739615713
|
| 2933 |
+
]
|
| 2934 |
+
},
|
| 2935 |
+
"wnli": {
|
| 2936 |
+
"acc": [
|
| 2937 |
+
0.4507042253521127,
|
| 2938 |
+
0.4507042253521127,
|
| 2939 |
+
0.4647887323943662,
|
| 2940 |
+
0.4507042253521127,
|
| 2941 |
+
0.4507042253521127,
|
| 2942 |
+
0.39436619718309857,
|
| 2943 |
+
0.4084507042253521,
|
| 2944 |
+
0.49295774647887325,
|
| 2945 |
+
0.43661971830985913,
|
| 2946 |
+
0.4507042253521127,
|
| 2947 |
+
0.5070422535211268,
|
| 2948 |
+
0.4507042253521127,
|
| 2949 |
+
0.5070422535211268,
|
| 2950 |
+
0.43661971830985913,
|
| 2951 |
+
0.49295774647887325,
|
| 2952 |
+
0.4507042253521127,
|
| 2953 |
+
0.4788732394366197,
|
| 2954 |
+
0.4647887323943662,
|
| 2955 |
+
0.4507042253521127,
|
| 2956 |
+
0.5492957746478874,
|
| 2957 |
+
0.4647887323943662,
|
| 2958 |
+
0.4507042253521127,
|
| 2959 |
+
0.43661971830985913,
|
| 2960 |
+
0.5492957746478874,
|
| 2961 |
+
0.49295774647887325,
|
| 2962 |
+
0.4647887323943662,
|
| 2963 |
+
0.5492957746478874,
|
| 2964 |
+
0.49295774647887325,
|
| 2965 |
+
0.43661971830985913,
|
| 2966 |
+
0.43661971830985913,
|
| 2967 |
+
0.4507042253521127,
|
| 2968 |
+
0.5492957746478874,
|
| 2969 |
+
0.5352112676056338,
|
| 2970 |
+
0.5352112676056338
|
| 2971 |
+
],
|
| 2972 |
+
"acc_stderr": [
|
| 2973 |
+
0.05947027187737998,
|
| 2974 |
+
0.05947027187737998,
|
| 2975 |
+
0.05961305784972239,
|
| 2976 |
+
0.05947027187737998,
|
| 2977 |
+
0.05947027187737998,
|
| 2978 |
+
0.05841251085444427,
|
| 2979 |
+
0.05875113694257524,
|
| 2980 |
+
0.059755502635482904,
|
| 2981 |
+
0.0592793555841297,
|
| 2982 |
+
0.05947027187737998,
|
| 2983 |
+
0.05975550263548289,
|
| 2984 |
+
0.05947027187737998,
|
| 2985 |
+
0.05975550263548289,
|
| 2986 |
+
0.0592793555841297,
|
| 2987 |
+
0.05975550263548289,
|
| 2988 |
+
0.05947027187737998,
|
| 2989 |
+
0.05970805879899504,
|
| 2990 |
+
0.0596130578497224,
|
| 2991 |
+
0.05947027187737998,
|
| 2992 |
+
0.05947027187737999,
|
| 2993 |
+
0.0596130578497224,
|
| 2994 |
+
0.05947027187737998,
|
| 2995 |
+
0.0592793555841297,
|
| 2996 |
+
0.05947027187737999,
|
| 2997 |
+
0.05975550263548289,
|
| 2998 |
+
0.0596130578497224,
|
| 2999 |
+
0.05947027187737999,
|
| 3000 |
+
0.059755502635482904,
|
| 3001 |
+
0.0592793555841297,
|
| 3002 |
+
0.0592793555841297,
|
| 3003 |
+
0.05947027187737999,
|
| 3004 |
+
0.05947027187737999,
|
| 3005 |
+
0.0596130578497224,
|
| 3006 |
+
0.0596130578497224
|
| 3007 |
+
]
|
| 3008 |
+
},
|
| 3009 |
+
"wsc": {
|
| 3010 |
+
"acc": [
|
| 3011 |
+
0.375,
|
| 3012 |
+
0.375,
|
| 3013 |
+
0.5,
|
| 3014 |
+
0.40384615384615385,
|
| 3015 |
+
0.3557692307692308,
|
| 3016 |
+
0.5096153846153846,
|
| 3017 |
+
0.5769230769230769,
|
| 3018 |
+
0.46153846153846156,
|
| 3019 |
+
0.6057692307692307,
|
| 3020 |
+
0.5576923076923077,
|
| 3021 |
+
0.46153846153846156,
|
| 3022 |
+
0.36538461538461536,
|
| 3023 |
+
0.5192307692307693,
|
| 3024 |
+
0.4519230769230769,
|
| 3025 |
+
0.5192307692307693,
|
| 3026 |
+
0.36538461538461536,
|
| 3027 |
+
0.41346153846153844,
|
| 3028 |
+
0.375,
|
| 3029 |
+
0.36538461538461536,
|
| 3030 |
+
0.36538461538461536,
|
| 3031 |
+
0.40384615384615385,
|
| 3032 |
+
0.5192307692307693,
|
| 3033 |
+
0.5384615384615384,
|
| 3034 |
+
0.4326923076923077,
|
| 3035 |
+
0.4519230769230769,
|
| 3036 |
+
0.3942307692307692,
|
| 3037 |
+
0.4326923076923077,
|
| 3038 |
+
0.5769230769230769,
|
| 3039 |
+
0.4230769230769231,
|
| 3040 |
+
0.38461538461538464,
|
| 3041 |
+
0.4423076923076923,
|
| 3042 |
+
0.5769230769230769,
|
| 3043 |
+
0.5961538461538461,
|
| 3044 |
+
0.5384615384615384
|
| 3045 |
+
],
|
| 3046 |
+
"acc_stderr": [
|
| 3047 |
+
0.04770204856076104,
|
| 3048 |
+
0.04770204856076104,
|
| 3049 |
+
0.04926646390821466,
|
| 3050 |
+
0.04834688952654018,
|
| 3051 |
+
0.04717221961050337,
|
| 3052 |
+
0.04925735314273531,
|
| 3053 |
+
0.04867993747918684,
|
| 3054 |
+
0.04912048887947826,
|
| 3055 |
+
0.04815154775990711,
|
| 3056 |
+
0.04893740777701,
|
| 3057 |
+
0.04912048887947827,
|
| 3058 |
+
0.0474473339327792,
|
| 3059 |
+
0.049230010729780505,
|
| 3060 |
+
0.04903818696931432,
|
| 3061 |
+
0.049230010729780505,
|
| 3062 |
+
0.0474473339327792,
|
| 3063 |
+
0.04852294969729053,
|
| 3064 |
+
0.04770204856076104,
|
| 3065 |
+
0.0474473339327792,
|
| 3066 |
+
0.0474473339327792,
|
| 3067 |
+
0.04834688952654018,
|
| 3068 |
+
0.049230010729780505,
|
| 3069 |
+
0.04912048887947826,
|
| 3070 |
+
0.04881803687006195,
|
| 3071 |
+
0.049038186969314335,
|
| 3072 |
+
0.04815154775990711,
|
| 3073 |
+
0.048818036870061955,
|
| 3074 |
+
0.04867993747918684,
|
| 3075 |
+
0.048679937479186836,
|
| 3076 |
+
0.0479366886807504,
|
| 3077 |
+
0.04893740777701,
|
| 3078 |
+
0.04867993747918684,
|
| 3079 |
+
0.048346889526540184,
|
| 3080 |
+
0.04912048887947828
|
| 3081 |
+
]
|
| 3082 |
+
}
|
| 3083 |
+
}
|
| 3084 |
+
}
|
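For orientation, each task block in the aggregated file above holds one score per evaluated checkpoint, aligned index-for-index with the file's top-level "tokens" and "checkpoints" lists. A minimal sketch (not part of the repo) of consuming such a file, assuming the {"tokens": [...], "checkpoints": [...], "results": {task: {metric: [...]}}} layout written by evaluation/utilities/convert_results_to_json.py below; the file path is a placeholder:

import json

# Hypothetical path to one of the aggregated result files.
with open("results/some-experiment_agg.json") as f:
    agg = json.load(f)

# One accuracy value per evaluated checkpoint, aligned with agg["tokens"].
for tokens, acc in zip(agg["tokens"], agg["results"]["sciq"]["acc"]):
    print(f"{tokens:>16,} tokens -> sciq acc {acc:.3f}")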
evaluation/results/tr3/tr3m-1B3-pile-checkpoints_agg.json
ADDED
The diff for this file is too large to render. See raw diff.
evaluation/utilities/convert_results_to_json.py
ADDED
@@ -0,0 +1,111 @@
import json
import math
import os
from argparse import ArgumentParser
from os import listdir
from os.path import isfile

def get_args():
    parser = ArgumentParser()
    # --experiments tr3d-1B3-oscar-checkpoints,tr3e-1B3-c4-checkpoints,tr3m-1B3-pile-checkpoints
    parser.add_argument('--experiment', type=str, required=True,
                        help='Experiment we want to download.')
    parser.add_argument('--result-dir', type=str, required=True,
                        help='Result directory containing all results, and to store aggregated json results.')
    parser.add_argument('--batch-size', type=int, default=512,
                        help='Experiment training batch size.')
    parser.add_argument('--sequence_length', type=int, default=2048,
                        help='Experiment training sequence length.')
    parser.add_argument('--rampup-batch-size', type=lambda s: tuple(int(item) for item in s.split(',')), default=(32, 32, 2_000_000),
                        help='Experiment training batch size rampup.')
    return parser.parse_args()

def checkpoint_step_to_tokens(checkpoint_step, args) -> int:
    def fn(checkpoint_step) -> int:
        if not hasattr(checkpoint_step_to_tokens, "CACHE"):
            checkpoint_step_to_tokens.CACHE = {}

        BATCH_SIZE = args.batch_size
        SEQUENCE_LENGTH = args.sequence_length
        # Linear increase in terms of samples.
        RAMPUP_BATCH_SIZE = args.rampup_batch_size

        # Compute RAMPUP checkpoint_step
        if not hasattr(checkpoint_step_to_tokens, "RAMPUP_OFFSET"):
            initial_batch_size, increment_batch_size, sample_limit_for_rampup = RAMPUP_BATCH_SIZE
            number_of_increments = (BATCH_SIZE - initial_batch_size) // increment_batch_size
            assert (BATCH_SIZE - initial_batch_size) % increment_batch_size == 0

            offset_step = 0
            start_sample = 0
            for incr in range(number_of_increments):
                batch_size = initial_batch_size + incr * increment_batch_size
                end_sample = int(math.ceil((incr + 1) * sample_limit_for_rampup / number_of_increments))
                number_of_step_per_increment = int(math.ceil((end_sample - start_sample) / batch_size))
                checkpoint_step_to_tokens.CACHE.update({
                    offset_step + i: (start_sample + i * batch_size) * SEQUENCE_LENGTH
                    for i in range(number_of_step_per_increment)
                })
                offset_step += number_of_step_per_increment
                start_sample += number_of_step_per_increment * batch_size

            checkpoint_step_to_tokens.CACHE[offset_step] = start_sample * SEQUENCE_LENGTH
            checkpoint_step_to_tokens.RAMPUP_OFFSET = offset_step

        if checkpoint_step in checkpoint_step_to_tokens.CACHE:
            return checkpoint_step_to_tokens.CACHE[checkpoint_step]

        number_steps_after_rampup = checkpoint_step - checkpoint_step_to_tokens.RAMPUP_OFFSET
        assert number_steps_after_rampup >= 0

        slope = BATCH_SIZE * SEQUENCE_LENGTH

        checkpoint_step_to_tokens.CACHE[checkpoint_step] = \
            checkpoint_step_to_tokens.CACHE[checkpoint_step_to_tokens.RAMPUP_OFFSET] + \
            slope * number_steps_after_rampup
        return checkpoint_step_to_tokens.CACHE[checkpoint_step]
    return fn(checkpoint_step)

def main():
    args = get_args()
    result_dir = args.result_dir
    experiment = args.experiment

    results_file_per_checkpoint = [
        file
        for file in listdir(result_dir)
        if isfile(os.path.join(result_dir, file)) and file.startswith(experiment)
    ]
    checkpoint_steps = sorted([int(file.split("_")[-1].split(".json")[0]) for file in results_file_per_checkpoint])
    absolute_paths = [f"{result_dir}/{experiment}_{checkpoint_step}.json" for checkpoint_step in checkpoint_steps]
    # format = "{EXPERIMENT_NAME}_{CHECKPOINT_STEP}.json"
    tokens = [checkpoint_step_to_tokens(checkpoint_step, args) for checkpoint_step in checkpoint_steps]

    result_json = {}
    for absolute_path in absolute_paths:
        with open(absolute_path, 'r') as fi:
            results = json.load(fi)["results"]

        for task in results:
            if task not in result_json:
                result_json[task] = {}

            for metric in results[task]:
                if metric not in result_json[task]:
                    result_json[task][metric] = []

                result_json[task][metric].append(results[task][metric])

    # check
    for task in result_json:
        assert len(tokens) == len(checkpoint_steps)
        for metric in result_json[task]:
            assert len(result_json[task][metric]) == len(checkpoint_steps)

    output_path = os.path.join(result_dir, f"{experiment}_agg.json")
    print(f"Printing results to {output_path}")
    with open(output_path, 'w') as fo:
        json.dump({"tokens": tokens, "checkpoints": checkpoint_steps, "results": result_json}, fo, indent=2)

if __name__ == "__main__":
    main()
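The batch-size rampup bookkeeping is the subtle part of this script: the batch grows in fixed increments, each increment covering an equal share of the rampup sample budget, and only afterwards does each step consume a constant number of tokens. A standalone sketch of the same arithmetic (not the repo's code), assuming the script's defaults of batch size 512, sequence length 2048 and a (32, 32, 2_000_000) rampup:

import math

def tokens_at_step(step, batch_size=512, seq_len=2048, rampup=(32, 32, 2_000_000)):
    # Replay the rampup: batch size grows from 32 by 32 until it reaches 512,
    # each increment covering an equal share of the first 2M samples.
    init, incr, sample_limit = rampup
    n_increments = (batch_size - init) // incr
    offset_step, start_sample = 0, 0
    for i in range(n_increments):
        bs = init + i * incr
        end_sample = math.ceil((i + 1) * sample_limit / n_increments)
        n_steps = math.ceil((end_sample - start_sample) / bs)
        if step < offset_step + n_steps:
            return (start_sample + (step - offset_step) * bs) * seq_len
        offset_step += n_steps
        start_sample += n_steps * bs
    # Past the rampup, every step consumes batch_size * seq_len tokens.
    return (start_sample + (step - offset_step) * batch_size) * seq_len

# e.g. tokens_at_step(19500) gives the token count the aggregation script
# would associate with checkpoint global_step19500.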
evaluation/utilities/download_all_models.py
ADDED
@@ -0,0 +1,47 @@
from argparse import ArgumentParser
from multiprocessing import Pool

from requests import HTTPError
from transformers import AutoModel, AutoTokenizer

def get_args():
    parser = ArgumentParser()
    # --experiments bigscience/tr3d-1B3-oscar-checkpoints,bigscience/tr3e-1B3-c4-checkpoints,bigscience/tr3m-1B3-pile-checkpoints
    parser.add_argument('--experiments', type=lambda s: s.split(','), required=True, help='Experiments we want to download.')
    # --steps 19500,28500,37500,48000,57000,66000,76500,85500,94500,105000,114000
    parser.add_argument('--steps', type=lambda s: [int(item) for item in s.split(',')], required=True, help='Steps we should download the model checkpoints')
    return parser.parse_args()

def _load_model(pretrain: str, revision: str):
    try:
        AutoModel.from_pretrained(pretrain, revision=revision)
        AutoTokenizer.from_pretrained(pretrain, revision=revision)
        return f"Loaded: {{pretrain:{pretrain}, revision:{revision}}}"
    except HTTPError:
        return f"Failed to load: {{pretrain:{pretrain}, revision:{revision}}}"

def load_model(kwargs):
    return _load_model(**kwargs)

def main():
    args = get_args()
    pretrains = args.experiments
    steps = args.steps
    revisions = [f"global_step{step}" for step in steps]

    # with Pool(10) as pool:
    #     results = pool.imap(
    #         load_model,
    #         [{"pretrain": pretrain, "revision": revision} for pretrain in pretrains for revision in revisions],
    #         chunksize=1
    #     )
    #
    #     for result in results:
    #         print(result)

    for kwargs in [{"pretrain": pretrain, "revision": revision} for pretrain in pretrains for revision in revisions]:
        print(load_model(kwargs))

if __name__ == "__main__":
    main()
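The `global_step{step}` strings are Hub revision names, so a single checkpoint can also be fetched directly without this script. A one-off sketch, using one of the experiment names from the comment above (the step is arbitrary):

from transformers import AutoModel, AutoTokenizer

# Each training checkpoint lives on its own "global_step..." revision.
repo = "bigscience/tr3e-1B3-c4-checkpoints"
model = AutoModel.from_pretrained(repo, revision="global_step19500")
tokenizer = AutoTokenizer.from_pretrained(repo, revision="global_step19500")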
evaluation/utilities/download_all_models.slurm
ADDED
@@ -0,0 +1,26 @@
#!/bin/bash
#SBATCH --job-name=download_all_models
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1          # crucial - only 1 task per dist per node!
#SBATCH --cpus-per-task=10           # number of cores per tasks
#SBATCH --hint=nomultithread         # we get physical cores not logical
#SBATCH --time 10:00:00              # maximum execution time (HH:MM:SS)
#SBATCH --output=logs/%x.out         # output file name
#SBATCH --account=six@gpu
#SBATCH --partition=compil

set -x -e

source $six_ALL_CCFRWORK/start-prod
conda activate thomas_lm_eval

# TODO: replace with local fork of bigscience
BIGSCIENCE_REPO=$WORK/code/big_science/bigscience/evaluation/results/tr3

pushd $BIGSCIENCE_REPO

# TODO: replace with experiment / steps
EXPERIMENTS=bigscience/tr3d-1B3-oscar-checkpoints,bigscience/tr3e-1B3-c4-checkpoints,bigscience/tr3m-1B3-pile-checkpoints
STEPS=$(python -c "print(\",\".join([str(i) for i in range(19500, 118500, 1500)]))")

python download_all_models.py --experiments $EXPERIMENTS --steps $STEPS
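For reference, the inline `python -c` above just builds the comma-separated `--steps` argument; the equivalent Python expression is:

steps = ",".join(str(i) for i in range(19500, 118500, 1500))
# "19500,21000,22500,...,117000": 66 checkpoint steps, 1500 steps apart.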
evaluation/utilities/export_results_through_training_to_wandb.py
ADDED
@@ -0,0 +1,86 @@
import os

import numpy as np
import wandb
import json
import argparse

RANDOM_BASELINE = {
    "arc_challenge": 0.2502,  # Source: https://arxiv.org/pdf/1803.05457.pdf table 6
    "arc_easy": 0.2502,  # Source: https://arxiv.org/pdf/1803.05457.pdf table 6
    "boolq": 0.5,
    "copa": 0.5,
    "headqa_en": 0.25,
    "hellaswag": 0.25,
    "lambada": 0.,  # Safe to say that random models won't perform well at all.
    "logiqa": 0.25,
    "mathqa": (4360 * 1 / 5 - (4475 - 4360) * 1 / 4) / 4475,
    "mrpc": 0.5,
    "multirc": 0.,  # TODO: I couldn't figure it out
    "openbookqa": 0.25,
    "piqa": 0.5,
    "prost": 0.25,
    "pubmedqa": 1 / 3,
    "qnli": 0.5,
    "qqp": 0.5,
    "race": 0.25,  # Source: https://arxiv.org/pdf/1704.04683.pdf table 5
    "rte": 0.5,
    "sciq": 0.25,
    "sst": 0.5,
    "triviaqa": 0.,
    "webqs": 0.,
    "wic": 0.5,
    "winogrande": 0.5,
    "wnli": 0.5,
    "wsc": 0.5
}

def normalise(score, task):
    return (score - RANDOM_BASELINE[task]) / (1. - RANDOM_BASELINE[task])

def parse_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input_files", type=lambda s: s.split(','), required=True)
    parser.add_argument("--all_tasks", action="store_true")
    parser.add_argument("--naive_average", action="store_true")
    parser.add_argument("--acc_average", action="store_true")
    parser.add_argument("--normalised_acc_average", action="store_true")
    return parser.parse_args()

def main():
    args = parse_args()
    for input_file in args.input_files:
        assert os.path.basename(input_file).endswith("_agg.json")
        experiment_name = os.path.basename(input_file).split("_agg.json")[0]
        with open(input_file, "r") as fi:
            experiment = json.load(fi)

        results = experiment["results"]
        tokens = experiment["tokens"]
        run = wandb.init(project="bigscience-tr3-evaluation-through-training", entity="timerobber", name=experiment_name,
                         reinit=True)
        for i, n_tokens in enumerate(tokens):
            all_values = []
            acc_average = []
            normalised_acc_average = []
            for task, task_results in results.items():
                values = None
                for metric, values in task_results.items():
                    if args.all_tasks:
                        wandb.log({f"{task}_{metric}": values[i], "tokens": tokens[i]})
                    if "stderr" not in metric and "ppl" not in metric:
                        all_values.append(values[i])
                        if metric == "acc":
                            acc_average.append(values[i])
                            normalised_acc_average.append(normalise(values[i], task))
            if args.naive_average:
                wandb.log({f"naive_average": np.mean(all_values), "tokens": tokens[i]})
            if args.acc_average:
                wandb.log({f"acc_average": np.mean(acc_average), "tokens": tokens[i]})
            if args.normalised_acc_average:
                wandb.log({f"normalised_acc_average": np.mean(normalised_acc_average), "tokens": tokens[i]})

        run.finish()

if __name__ == "__main__":
    main()
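As a worked example of `normalise` above: it rescales accuracy so that the task's random baseline maps to 0 and a perfect score maps to 1. A self-contained check with hypothetical numbers:

baseline = 0.25          # e.g. sciq, a 4-way multiple-choice task
score = 0.625
normalised = (score - baseline) / (1.0 - baseline)
print(normalised)        # 0.5: halfway between chance and perfect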
evaluation/utilities/find_checkpoints_at_token_intervals.py
ADDED
@@ -0,0 +1,27 @@
import datasets
import json

steps_vs_samples = datasets.load_dataset("csv", data_files="run-.-tag-steps-vs-samples_y=steps,x=samples.csv")["train"]

slope = (steps_vs_samples[-1]["Step"] - steps_vs_samples[-2]["Step"]) / (
        steps_vs_samples[-1]["Value"] - steps_vs_samples[-2]["Value"])
offset = steps_vs_samples[-1]["Step"] - steps_vs_samples[-1]["Value"] * slope

token_interval = 1e10
step_interval = 1500
tokens_per_sample = 2048
token_count = token_interval

output_checkpoints = []

for item in steps_vs_samples:
    if item["Step"] * tokens_per_sample > token_count:
        token_count += token_interval
        step = step_interval * (item['Value'] // step_interval)
        tokens = tokens_per_sample * (slope * (step_interval * (item['Value'] // step_interval)) + offset)
        print(f"step: {step}")
        print(f"tokens at that step: {tokens}")
        output_checkpoints.append({"step": step, "tokens": tokens})


json.dump(output_checkpoints, open("steps_to_evaluate_with_tokens.json", "w"))
evaluation/utilities/plot_all_eval.py
ADDED
@@ -0,0 +1,45 @@
import json
import os
from argparse import ArgumentParser

from matplotlib import pyplot as plt


def get_args():
    parser = ArgumentParser()
    parser.add_argument('--input-files', type=lambda s: s.split(','), required=True, help='Input files that hold all evaluation metrics')
    return parser.parse_args()

def main():
    args = get_args()

    plots = {}  # {"{EVALUATION}_{METRIC}": plt.figure}
    for input_file in args.input_files:
        assert os.path.basename(input_file).endswith("_agg.json")
        experiment_name = os.path.basename(input_file).split("_agg.json")[0]
        with open(input_file, "r") as fi:
            experiment = json.load(fi)

        tokens = experiment["tokens"]
        for evaluation_name, evaluation in experiment["results"].items():
            for metric_name, metric in evaluation.items():
                key = f"{evaluation_name}_{metric_name}"
                if key[-7:] == "_stderr":
                    continue

                if key not in plots:
                    plot = plt.figure(len(plots))
                    plot = plot.add_subplot(1, 1, 1)
                    plot.set_title(key)
                    plots[key] = plot

                plot = plots[key]

                plot.plot(tokens, metric, label=experiment_name)

    for plot in plots.values():
        plot.legend()
    plt.show()

if __name__ == "__main__":
    main()
jz/.gitignore
ADDED
@@ -0,0 +1,133 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# IPython
profile_default/
ipython_config.py

# pyenv
.python-version

# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock

# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/

# Celery stuff
celerybeat-schedule
celerybeat.pid

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.dmypy.json
dmypy.json

# Pyre type checker
.pyre/

# Slurm job output and error
*.err
*.out
jz/.gitmodules
ADDED
@@ -0,0 +1,3 @@
[submodule "lm-evaluation-harness"]
	path = lm-evaluation-harness
	url = https://github.com/huggingface/lm-evaluation-harness.git
jz/README.md
ADDED
@@ -0,0 +1,27 @@
# jay-z

Jean Zay aka JZ, pronounced "Jay-Z"

This section of the repo is all about how things are done on JZ.

Main documents:

- [Compute Resources](./compute-resources.md)
- [JZ Specs](./hpc-specs.md)
- [Framework-specific notes](./frameworks/)
- [Model-specific Instructions](./archs/)

Code:
- [Work Env and Setup](./envs/README.md)
- [SLURM scripts](./scripts/)
- [Config files](./configs/)

Tools:
- [SLURM HowTo](./slurm/)
- [Various Tools](./tools/)

General JZ Docs:

- HF Internal: https://github.com/huggingface/conf/wiki/JZ
- Official: http://www.idris.fr/eng/jean-zay/
- Collaborative doc: https://jean-zay-doc.readthedocs.io/en/latest/