sha | text | id | tags | created_at | metadata | last_modified | arxiv | languages | tags_str | text_str | text_lists | processed_texts | tokens_length | input_texts
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
12b6d35dd29a10d2cb9f9e44b2c0666414f90dc1
|
# Dataset Card for "autotree_pmlb_letter_sgosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
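As a minimal loading sketch (the repository id, feature names, and split sizes below are taken from this row's id and `dataset_info` metadata; nothing else is assumed):
```python
from datasets import load_dataset

# Load the train split (10,000 examples per the dataset_info metadata);
# a same-sized "validation" split is also available.
ds = load_dataset("yzhuang/autotree_pmlb_letter_sgosdt_l256_d3_sd0", split="train")

# Per the metadata, each example carries an int64 "id", nested float32
# sequences ("input_x", "input_y", "status", "split_threshold"), a
# float64 "rtg" sequence, and an int64 "split_dimension" sequence.
print(ds.features)
print(ds[0]["id"])
```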
|
yzhuang/autotree_pmlb_letter_sgosdt_l256_d3_sd0
|
[
"region:us"
] |
2023-09-01T07:56:29+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 523118976, "num_examples": 10000}, {"name": "validation", "num_bytes": 523120000, "num_examples": 10000}], "download_size": 61880916, "dataset_size": 1046238976}}
|
2023-09-01T07:56:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_pmlb_letter_sgosdt_l256_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_pmlb_letter_sgosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_pmlb_letter_sgosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
[
6,
31
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_pmlb_letter_sgosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
1699f8a2eef308efd4aa9207b5151f7c95f11f20
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco",
"harness_winogrande_5",
split="train")
```
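To discover which configurations are available (one per evaluated task, plus the aggregated "results" configuration), you can also query the repository directly; a short sketch using the `datasets` helper:
```python
from datasets import get_dataset_config_names

# One configuration per evaluated task, plus "results".
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco"
)
print(len(configs), configs[:5])
```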
## Latest results
These are the [latest results from run 2023-09-17T20:27:23.554125](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco/blob/main/results_2023-09-17T20-27-23.554125.json) (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in its task-specific configuration and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.2488464765100671,
"em_stderr": 0.004427614016278926,
"f1": 0.28849937080536914,
"f1_stderr": 0.00442953185165108,
"acc": 0.372010258662628,
"acc_stderr": 0.00929094831305589
},
"harness|drop|3": {
"em": 0.2488464765100671,
"em_stderr": 0.004427614016278926,
"f1": 0.28849937080536914,
"f1_stderr": 0.00442953185165108
},
"harness|gsm8k|5": {
"acc": 0.04473085670962851,
"acc_stderr": 0.005693886131407047
},
"harness|winogrande|5": {
"acc": 0.6992896606156275,
"acc_stderr": 0.012888010494704732
}
}
```
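To pull these aggregated numbers programmatically, a sketch that loads the "results" configuration (its "latest" split, per the configuration list in this repository's metadata, points at the most recent run):
```python
from datasets import load_dataset

# Aggregated metrics across all evaluated tasks; "latest" selects the
# newest run recorded in the repository.
results = load_dataset(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco",
    "results",
    split="latest",
)
print(results[0])
```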
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco
|
[
"region:us"
] |
2023-09-01T08:01:17+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-mctaco](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-17T20:27:23.554125](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-mctaco/blob/main/results_2023-09-17T20-27-23.554125.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2488464765100671,\n \"em_stderr\": 0.004427614016278926,\n \"f1\": 0.28849937080536914,\n \"f1_stderr\": 0.00442953185165108,\n \"acc\": 0.372010258662628,\n \"acc_stderr\": 0.00929094831305589\n },\n \"harness|drop|3\": {\n \"em\": 0.2488464765100671,\n \"em_stderr\": 0.004427614016278926,\n \"f1\": 0.28849937080536914,\n \"f1_stderr\": 0.00442953185165108\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04473085670962851,\n \"acc_stderr\": 0.005693886131407047\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6992896606156275,\n \"acc_stderr\": 0.012888010494704732\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-mctaco", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|arc:challenge|25_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|arc:challenge|25_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_17T20_27_23.554125", "path": ["**/details_harness|drop|3_2023-09-17T20-27-23.554125.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-17T20-27-23.554125.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_17T20_27_23.554125", "path": ["**/details_harness|gsm8k|5_2023-09-17T20-27-23.554125.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-17T20-27-23.554125.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": 
["**/details_harness|hellaswag|10_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hellaswag|10_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:00:53.100273.parquet", 
"**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T09:00:53.100273.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:03:24.370765.parquet", 
"**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:03:24.370765.parquet", 
"**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:03:24.370765.parquet", 
"**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T09:03:24.370765.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": 
["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T09:03:24.370765.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-sociology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T09:03:24.370765.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T09:03:24.370765.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_17T20_27_23.554125", "path": ["**/details_harness|winogrande|5_2023-09-17T20-27-23.554125.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-17T20-27-23.554125.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T09_00_53.100273", "path": ["results_2023-09-01T09:00:53.100273.parquet"]}, {"split": "2023_09_01T09_03_24.370765", "path": ["results_2023-09-01T09:03:24.370765.parquet"]}, {"split": "2023_09_17T20_27_23.554125", "path": ["results_2023-09-17T20-27-23.554125.parquet"]}, {"split": "latest", "path": ["results_2023-09-17T20-27-23.554125.parquet"]}]}]}
|
2023-09-17T19:27:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-17T20:27:23.554125 (note that there might be results for other tasks in the repository if successive evals didn't cover the same tasks; you can find each in its task-specific configuration and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T20:27:23.554125(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-17T20:27:23.554125(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-mctaco## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-mctaco on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-17T20:27:23.554125(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ba3ef806166aa5b525ed5696f17a12ede79960c0
|
# Dataset Card for "dark_granger"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mickume/dark_granger
|
[
"region:us"
] |
2023-09-01T08:09:53+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 160190011, "num_examples": 925620}], "download_size": 99596348, "dataset_size": 160190011}}
|
2023-10-31T11:03:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dark_granger"
More Information needed
|
[
"# Dataset Card for \"dark_granger\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dark_granger\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dark_granger\"\n\nMore Information needed"
] |
f4bbafac5d2d95757840936213313decd9449d2e
|
# Dataset Card for "rick_and_morty_text_to_image"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
qwertyaditya/rick_and_morty_text_to_image
|
[
"region:us"
] |
2023-09-01T08:18:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3791351.0, "num_examples": 40}], "download_size": 3456089, "dataset_size": 3791351.0}}
|
2023-09-01T08:31:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rick_and_morty_text_to_image"
More Information needed
|
[
"# Dataset Card for \"rick_and_morty_text_to_image\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rick_and_morty_text_to_image\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rick_and_morty_text_to_image\"\n\nMore Information needed"
] |
3c092a5f2eecf4993a1f37de9482948286e7cac9
|
# Dataset Card for "srbd1_segmented2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Lancelot53/srbd1_segmented2
|
[
"region:us"
] |
2023-09-01T08:25:34+00:00
|
{"dataset_info": {"features": [{"name": "html", "dtype": "string"}, {"name": "response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1452582, "num_examples": 1508}], "download_size": 405675, "dataset_size": 1452582}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T08:25:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "srbd1_segmented2"
More Information needed
|
[
"# Dataset Card for \"srbd1_segmented2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"srbd1_segmented2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"srbd1_segmented2\"\n\nMore Information needed"
] |
4c918c3e2bd96dfda634afe307093e298a9d3aa4
|
This is a Non-IID split version of [databricks/databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k).
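A minimal sketch of loading it with the `datasets` library (the split and column names follow the dataset config recorded below; how examples are partitioned across clients is not documented here):

```python
from datasets import load_dataset

# The config declares "train" and "test" parquet splits with
# instruction / context / response / category columns.
dolly_niid = load_dataset("fedml/databricks-dolly-15k-niid")

print(dolly_niid["train"][0]["instruction"])
print(dolly_niid["test"].num_rows)
```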
|
fedml/databricks-dolly-15k-niid
|
[
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-09-01T08:51:54+00:00
|
{"language": ["en"], "license": "cc-by-sa-3.0", "size_categories": ["10K<n<100K"], "configs": [{"config_name": "default", "default": true, "data_files": [{"split": "train", "path": "train.parquet"}, {"split": "test", "path": "test.parquet"}]}], "dataset_info": {"config_name": "default", "features": [{"name": "instruction", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "category", "dtype": "string"}]}}
|
2023-09-05T11:03:26+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us
|
This is a Non-IID split version of databricks/databricks-dolly-15k.
|
[] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us \n"
] |
[
33
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-sa-3.0 #region-us \n"
] |
cc2a593691f05888a3895be54f68950baa938be3
|
**Product name** - [MarineFlex Ultra](https://marineflex-ultra-reviews.jimdosite.com/)
**Category** - Dietary supplement, Flexibility, Mobility.
**Benefits** - Treats Joint Pain
**Dosage** - Take 2 pills every day
**Availability** - [Online](https://www.healthsupplement24x7.com/get-marineflex-ultra)
**Official Website** - [https://www.healthsupplement24x7.com/get-marineflex-ultra](https://www.healthsupplement24x7.com/get-marineflex-ultra)
With the help of [Marine Flex Ultra](https://pdfhost.io/v/oUE.LN6TI_MarineFlex_Ultra_New_Update_2023_Reduce_Joint_Pain_Boosting_Flexibility_Mobility_Faster), people can restore their youthful mobility and flexibility and resume participating in their favorite activities. The joint support formula has nutrients that relieve pain and soothe inflammation and swelling. It improves physical function and reduces joint discomfort. [MarineFlex Ultra](https://www.ivoox.com/marineflex-ultra-new-update-2023-reduce-joint-pain-audios-mp3_rf_115267571_1.html) supports healthy inflammatory response and enhances the production of synovial fluid. The fluid nourishes and lubricates the cartilage and joints.
### _**[Visit MarineFlex Ultra Official Website Here](https://www.healthsupplement24x7.com/get-marineflex-ultra)**_
**What is MarineFlex Ultra?**
-----------------------------
[MarineFlex Ultra](https://healthsupplements24x7.blogspot.com/2023/08/marineflex-ultra.html) helps you move better and have stronger bones. It helps with stiffness, aching, and swelling of joints. The supplement works to make your joints healthy and last longer.
[MarineFlex Ultra](https://soundcloud.com/marine-flex-ultra/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster) helps fix the main problem of joint decay that happens when you get older without causing side effects. It works well by making more joint jello, lowering inflammation, and feeding and oiling the joints.
**How Does It Work?**
---------------------
[MarineFlex Ultra](https://www.townscript.com/e/marineflex-ultra-323231) is made from a unique blend of necessary ingredients and systemic proteolytic proteins. Your body develops distinct structures made up of proteins at the location of pain, along with white blood cells that treat the damage. Nonetheless, after the healing process is completed, the sticky tissues obstruct the flow of red blood cells, which deliver oxygen to all bodily areas.
**Benefits of Marine Flex Ultra**
---------------------------------
### Reduces Joint Pain and Discomfort
One of the primary benefits of Marine Flex Ultra is reducing joint pain and discomfort. The ingredients work together to target the underlying causes of pain, providing relief and improving overall joint health.
### Supports Joint Health and Mobility
[Marine Flex Ultra](https://groups.google.com/g/marineflex-ultra-pills/c/u9UnGYu1Zr8) helps support joint health by promoting synovial fluid production. This fluid cushions and lubricates the joints, leading to improved mobility and decreased joint stiffness.
### Promotes a Healthy Inflammatory Response
Chronic inflammation causes joint pain and discomfort. Marine Flex Ultra contains ingredients that support a healthy inflammatory response, helping to alleviate issues caused by inflammation.
### Enhances Bone Marrow Function
Bone marrow plays a crucial role in joint health, as it produces cells that contribute to the maintenance and repair of joint tissues. Marine Flex Ultra supports bone marrow function, promoting overall joint health.
### **Increase circulation**
Marine Flex Ultra improves blood flow and the delivery of nutrients and oxygen to the joints and other parts of the body.
### Strengthen Bones
The ingredients in Marine Flex Ultra support bone and muscle strength by preventing fractures.
### _**[Order Your Supply Of Marine Flex Ultra Now And Start Enjoying The Benefits!](https://www.healthsupplement24x7.com/get-marineflex-ultra)**_
**Ingredients of MarineFlex Ultra!**
------------------------------------
Here are all of the active ingredients in MarineFlex Ultra and how they work, according to Dr. Kahn and the official MarineFlex Ultra website:
**Green Lipped Mussel**:– Green lipped mussel, also known as Perna canaliculus, is a rare marine organism that only grows in the clean and pristine waters off the coast of New Zealand. This mussel gives Marine Flex Ultra its name, because the primary active ingredient comes from the ocean. Green lipped mussels are rich with omega-3 fatty acids, including proven inflammation fighters like DHA and EPA.
**Boswellia Serrata**:– Boswellia serrata extract comes from a tree native to India. It has been used in traditional Indian medicine (Ayurveda) for centuries as a general health and wellness aid. Today, we know boswellia serrata is rich with phytochemicals (plant-based antioxidants) and other natural ingredients that can decrease knee pain, boost mobility, and help with swelling and inflammation.
**Ashwagandha**:– Today, we know ashwagandha works because it’s rich with a substance called Withaferin A (WFA). This substance appears to help with chronic joint pain by suppressing inflammatory cytokines. In fact, WFA could help suppress inflammation throughout the body, leading to positive effects on cognition, physical energy, mobility, and more. Many people with joint pain have chronic inflammation, and ashwagandha could help.
**Hyaluronic Acid**:– Hyaluronic acid is known for carrying many times its weight in water, increasing hydration throughout the body – including in the area between your joints. Hyaluronic acid supports the synovial fluid and lubrication between your joints, but it also stimulates new cell formation to help repair cartilage, helping it support joint pain relief in multiple ways.
**MSM**:– MSM helps your body form new cartilage and it decreases joint inflammation. Dr. Kahn cites one study where 50 patients with knee osteoarthritis took MSM or a placebo pill. After 12 weeks, MSM significantly decreased pain and physical function impairment compared to the placebo. Today, many people with joint pain take MSM daily to help with the condition.
**Collagen**:– Collagen is the most abundant connective protein in the body, and many people take collagen daily for anti-aging, wrinkle defense, joint pain, and muscle recovery. It helps to repair cartilage and other joint cells. The reason is simple: most cartilage is made of collagen, and collagen plays a crucial role in holding your body together.
**Chondroitin Sulfate**:– Like glucosamine sulfate, chondroitin sulfate is well-known for its effects on joint pain relief and bone repair. As proof, Dr. Kahn cites one study involving 162 patients with osteoarthritis in their hand. Patients took chondroitin sulfate or a placebo pill, and those in the chondroitin sulfate group had significantly less hand pain than those taking a placebo.
15+ Other Herbs and Plants: ingredients include bromelain, calendula, burdock, cetyl myristoleate, yucca, feverfew, shark cartilage, horsetail, white willow bark, gentian root, cinnamon, shatavari, N-acetyl D-glucosamine, grape root extract, and Rehmannia root.
### _**[\[click Here To Order\] Unlock The Benefits Of Marineflex Ultra Natural Ingredients.](https://www.healthsupplement24x7.com/get-marineflex-ultra)**_
**What Is The dosage for MarineFlex Ultra?**
--------------------------------------------
MarineFlex Ultra is a supplement for those who are looking for natural ways to relieve pain and discomfort. In every bottle, you get a month’s supply, that is, 90 capsules per container.
The serving size or recommended dosage for this supplement is 3 capsules daily. It is important to consult a doctor before using the supplement. The formula cannot be used along with blood thinning supplements.
**Marine Flex Ultra Side Effects**
----------------------------------
According to several users, taking Marine Flex Ultra is safe, but an overdose may be harmful. Taking the supplement on an empty stomach may cause indigestion such as gas. But just as with any other supplement, you must give your body time to adjust to it. Apart from these, Marine Flex Ultra has an excellent track record and clean safety history.
The only way that consumers can be sure to purchase [MarineFlex Ultra](https://events.humanitix.com/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster) is to go through the official website. Consumers have their choice of various quantities, and they can even opt-in to a subscription.
**Pricing Of MarineFlex Ultra**
-------------------------------
The Marine Flex Ultra supplement is very cheap because the manufacturers wanted to make it a viable pain-relieving option. There are currently three packages being offered on the Marine Flex Ultra website that we have listed here.
The packages available are:
_**1 Month Supply - $69.00/each + free shipping**_
_**3 Month Supply - $59.00/each + free shipping**_
_**6 Month supply - $49.00/each + free shipping**_
### _**[\[SPECIAL DISCOUNT\] Click Here To Visit MarineFlex Ultra Official Website](https://www.healthsupplement24x7.com/get-marineflex-ultra)**_
**MarineFlex Ultra™ 180-Day Money Back Guarantee!**
---------------------------------------------------
The [MarineFlex Ultra](https://marineflexultra.clubeo.com/page/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster.html)™ is backed by a 100% money back guarantee for 180 full days from your original purchase.
If you're not totally and completely satisfied with our product or your results within the first 180 days from your purchase, simply let us know at [MarineFlex Ultra](https://marineflexultra.clubeo.com/calendar/2023/09/01/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster)™ and we’ll give you a refund within 48 hours of the product being returned. That’s right, simply return the [MarineFlex Ultra](https://marineflexultrareviews.hashnode.dev/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster), even empty bottles, anytime within 180 days of your purchase and you’ll receive a refund, no questions asked!
**Where To Buy?**
-----------------
When it comes to purchasing [Marine Flex Ultra](https://www.scoop.it/topic/marineflex-ultra-reviews), you have a few options. You can buy the supplement directly from their website, or you can purchase it from various retailers.
It is important to remember that Marine Flex Ultra is not FDA approved, and as such you should always take the recommended dose. If you are taking other medications, you should talk to your doctor before taking Marine Flex Ultra.
**Final Verdict**
-----------------
[MarineFlex Ultra](https://marineflexultra.clubeo.com) gives users the ability to reduce inflammation. The formula is easy to keep up with daily, though users only need three capsules to create the effect. Users can choose between a one-time purchase and a subscription, but both are covered by a 180-day money-back guarantee. With so many natural ingredients, users can simply enjoy the benefits without worrying about side effects.
### [_**Click Here To Visit The Official MarineFlex Ultra Website And Learn More About!**_](https://www.healthsupplement24x7.com/get-marineflex-ultra)
[https://healthsupplements24x7.blogspot.com/2023/08/marineflex-ultra.html](https://healthsupplements24x7.blogspot.com/2023/08/marineflex-ultra.html)
[https://pdfhost.io/v/oUE.LN6TI\_MarineFlex\_Ultra\_New\_Update\_2023\_Reduce\_Joint\_Pain\_Boosting\_Flexibility\_Mobility\_Faster](https://pdfhost.io/v/oUE.LN6TI_MarineFlex_Ultra_New_Update_2023_Reduce_Joint_Pain_Boosting_Flexibility_Mobility_Faster)
[https://www.ivoox.com/marineflex-ultra-new-update-2023-reduce-joint-pain-audios-mp3\_rf\_115267571\_1.html](https://www.ivoox.com/marineflex-ultra-new-update-2023-reduce-joint-pain-audios-mp3_rf_115267571_1.html)
[https://soundcloud.com/marine-flex-ultra/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster](https://soundcloud.com/marine-flex-ultra/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster)
[https://marineflexultra.clubeo.com](https://marineflexultra.clubeo.com)
[https://marineflexultra.clubeo.com/calendar/2023/09/01/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster](https://marineflexultra.clubeo.com/calendar/2023/09/01/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster)
[https://marineflexultra.clubeo.com/page/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster.html](https://marineflexultra.clubeo.com/page/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster.html)
[https://marineflexultra.clubeo.com/page/marineflex-ultra-reviews-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster.html](https://marineflexultra.clubeo.com/page/marineflex-ultra-reviews-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster.html)
[https://www.scoop.it/topic/marineflex-ultra-reviews](https://www.scoop.it/topic/marineflex-ultra-reviews)
[https://marineflex-ultra-reviews.jimdosite.com/](https://marineflex-ultra-reviews.jimdosite.com/)
[https://www.fuzia.com/article\_detail/801622/marineflex-ultra-new-update-2023-reduce-joint-pain](https://www.fuzia.com/article_detail/801622/marineflex-ultra-new-update-2023-reduce-joint-pain)
[https://marineflexultrareviews.hashnode.dev/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster](https://marineflexultrareviews.hashnode.dev/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster)
[https://colab.research.google.com/drive/1MiUi9VaIIvX62XGu3sEL7F08TDTKeQ7J](https://colab.research.google.com/drive/1MiUi9VaIIvX62XGu3sEL7F08TDTKeQ7J)
[https://colab.research.google.com/drive/17T10l\_IC3pTeLwB09lnj2w0d\_pvoPe19](https://colab.research.google.com/drive/17T10l_IC3pTeLwB09lnj2w0d_pvoPe19)
[https://colab.research.google.com/drive/1xl0ppWKwuT8xnidmS60EufalpzVgg0bw](https://colab.research.google.com/drive/1xl0ppWKwuT8xnidmS60EufalpzVgg0bw)
[https://colab.research.google.com/drive/1dmfjNL8pclJmjjAxqzQvgkcdtultlygt](https://colab.research.google.com/drive/1dmfjNL8pclJmjjAxqzQvgkcdtultlygt)
[https://colab.research.google.com/drive/1OGn57UdiAlYF7G2MA2ew2pdLgFl5hKFx](https://colab.research.google.com/drive/1OGn57UdiAlYF7G2MA2ew2pdLgFl5hKFx)
[https://www.townscript.com/e/marineflex-ultra-323231](https://www.townscript.com/e/marineflex-ultra-323231)
[https://events.humanitix.com/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster](https://events.humanitix.com/marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility-mobility-faster)
[https://devfolio.co/@marineflex\_](https://devfolio.co/@marineflex_)
[https://form.jotform.com/marineflexultra/marineflex-ultra](https://form.jotform.com/marineflexultra/marineflex-ultra)
[https://devfolio.co/projects/marineflex-ultra-dd7e](https://devfolio.co/projects/marineflex-ultra-dd7e)
[https://forum.molihua.org/d/42391-marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility](https://forum.molihua.org/d/42391-marineflex-ultra-new-update-2023-reduce-joint-pain-boosting-flexibility)
[https://electronoobs.io/project/11427#](https://electronoobs.io/project/11427#)
[https://groups.google.com/g/get-marineflex-ultra/c/sYdqvo08hMI](https://groups.google.com/g/get-marineflex-ultra/c/sYdqvo08hMI)
[https://groups.google.com/g/read-marineflex-ultra-reviews/c/fC7gwflzX4E](https://groups.google.com/g/read-marineflex-ultra-reviews/c/fC7gwflzX4E)
[https://groups.google.com/g/get-marine-flex-ultra/c/vLpb\_3REicU](https://groups.google.com/g/get-marine-flex-ultra/c/vLpb_3REicU)
[https://groups.google.com/g/marine-flex-ultra-reviews/c/bR37-Eay-As](https://groups.google.com/g/marine-flex-ultra-reviews/c/bR37-Eay-As)
[https://groups.google.com/g/marineflex-ultra-pills/c/u9UnGYu1Zr8](https://groups.google.com/g/marineflex-ultra-pills/c/u9UnGYu1Zr8)
|
marineflexultra/marineflex-ultra
|
[
"region:us"
] |
2023-09-01T08:59:04+00:00
|
{}
|
2023-09-01T09:01:25+00:00
|
[] |
[] |
TAGS
#region-us
|
Product name - MarineFlex Ultra
Category - Dietary supplement, Flexibility, Mobility.
Benefits - Treats Joint Pain
Dosage - Take 2 pills everyday
Availability - Online
Official Website - URL
With the help of Marine Flex Ultra, people can restore their young mobility and flexibility and resume participating in their favorite activities.The joint support formula has nutrients that relieve pain and soothe inflammation and swelling. It improves physical function and reduces joint discomfort. MarineFlex Ultra supports healthy inflammatory response and enhances the production of synovial fluid.The fluid nourishes and lubricates the cartilage and joints.
](URL
### _Order Your Supply Of Marine Flex Ultra Now And Start Enjoying The Benefits!_
Ingredients of MarineFlex Ultra!
------------------------------------
Here are all of the active ingredients in MarineFlex Ultra and how they work, according to Dr. Kahn and the official MarineFlex Ultra website:
Green Lipped Mussel:– Green lipped mussel, also known as Perna canaliculus, is a rare marine organism that only grows in the clean and pristine waters off the coast of New Zealand. This mussel gives Marine Flex Ultra its name, because the primary active ingredient comes from the ocean. Green lipped mussels are rich with omega-3 fatty acids, including proven inflammation fighters like DHA and EPA.
Boswellia Serrata:–Boswellia serrata extract comes from a tree native to India. It has been used in traditional Indian medicine (Ayurveda) for centuries as a general health and wellness aid. Today, we know boswellia serrata is rich with phytochemicals (plant-based antioxidants) and other natural ingredients that can decrease knee pain, boost mobility, and help with swelling and inflammation.
Ashwagandha:– Today, we know ashwagandha works because it’s rich with a substance called Withaferin A (WFA). This substance appears to help with chronic joint pain by suppressing inflammatory cytokines. In fact, WFA could help supress inflammation throughout the body, leading to positive effects on cognition, physical energy, mobility, and more. Many people with joint pain have chronic inflammation, and ashwagandha could help.
Hyaluronic Acid:- Hyaluronic acid is known for carrying many times its weight in water, increasing hydration throughout the body – including in the area between your joints. Hyaluronic acid support the synovial fluid and lubrication between your joints, but it also stimulates new cell formation to help repair cartilage, helping it support joint pain relief in multiple ways.
MSM:– MSM helps your body form new cartilage and it decreases joint inflammation. Dr. Kahn cites one study where 50 patients with knee osteoarthritis took MSM or a placebo pill. After 12 weeks, MSM significantly decreased pain and physical function impairment compared to the placebo. Today, many people with joint pain take MSM daily to help with the condition.
Collagen:– Collagen is the most abundant connective protein in the body, and many people take collagen daily for anti-aging, wrinkle defense, joint pain, and muscle recovery. It helps to repair cartilage and other joint cells. The reason is simple: most cartilage is made of collagen, and collagen plays a crucial role in holding your body together.
Chondroitin Sulfate:– Like glucosamine sulfate, chondroitin sulfate is well-known for its effects on joint pain relief and bone repair. As proof, Dr. Kahn cites one study involving 162 patients with osteoarthritis in their hand. Patients took chondroitin sulfate or a placebo pill, and those in the chondroitin sulfate group had significantly less hand pain than those taking a placebo.
15+ Other Herbs, Plants Ingredients include bromelain, calendula, burdock, cetyl myristoleate, yucca, feverfew, shark cartilage, horsetail, white willow bark, gentian root, cinnamon, shatavari, N-acetyl D-glucosamine, grape root extract, and Rehmannia root.
](URL
### _[\[SPECIAL DISCOUNT\] Click Here To Visit MarineFlex Ultra Official Website](URL
MarineFlex Ultra™ 180-Day Money Back Guarantee!
---------------------------------------------------
The MarineFlex Ultra™ is backed by a 100% money back guarantee for 180 full days from your original purchase.
If you're not totally and completely satisfied with our product or your results within the first 180 days from your purchase simply let us know at MarineFlex Ultra™ and we’ll give you a refund within 48 hours of the product being returned. That’s right, simply return the MarineFlex Ultra, even empty bottles, anytime within 180 days of your purchase and you’ll receive a refund, no questions asked!
Where To Buy?
-----------------
When it comes to purchasing Marine Flex Ultra, you have a few options. You can buy the supplement directly from their website, or you can purchase it from various retailers.
It is important to remember that Marine Flex Ultra is not FDA approved, and as such you should always take the recommended doseIf you are taking other medications, you should talk to your doctor before taking Marine Flex Ultra.
Final Verdict
-----------------
MarineFlex Ultra gives users the ability to reduce inflammation. The formula is easy to keep up with daily, though users only need three capsules to create the effect. Users can choose between a one-time purchase and a subscription, but both are covered by a 180-day money-back guarantee. With so many natural ingredients, users can simply enjoy the benefits without worrying about side effects.
](URL",
"### _Order Your Supply Of Marine Flex Ultra Now And Start Enjoying The Benefits!_\n\nIngredients of MarineFlex Ultra!\n------------------------------------\n\nHere are all of the active ingredients in MarineFlex Ultra and how they work, according to Dr. Kahn and the official MarineFlex Ultra website:\n\nGreen Lipped Mussel:– Green lipped mussel, also known as Perna canaliculus, is a rare marine organism that only grows in the clean and pristine waters off the coast of New Zealand. This mussel gives Marine Flex Ultra its name, because the primary active ingredient comes from the ocean. Green lipped mussels are rich with omega-3 fatty acids, including proven inflammation fighters like DHA and EPA.\n\nBoswellia Serrata:–Boswellia serrata extract comes from a tree native to India. It has been used in traditional Indian medicine (Ayurveda) for centuries as a general health and wellness aid. Today, we know boswellia serrata is rich with phytochemicals (plant-based antioxidants) and other natural ingredients that can decrease knee pain, boost mobility, and help with swelling and inflammation.\n\nAshwagandha:– Today, we know ashwagandha works because it’s rich with a substance called Withaferin A (WFA). This substance appears to help with chronic joint pain by suppressing inflammatory cytokines. In fact, WFA could help supress inflammation throughout the body, leading to positive effects on cognition, physical energy, mobility, and more. Many people with joint pain have chronic inflammation, and ashwagandha could help.\n\nHyaluronic Acid:- Hyaluronic acid is known for carrying many times its weight in water, increasing hydration throughout the body – including in the area between your joints. Hyaluronic acid support the synovial fluid and lubrication between your joints, but it also stimulates new cell formation to help repair cartilage, helping it support joint pain relief in multiple ways.\n\nMSM:– MSM helps your body form new cartilage and it decreases joint inflammation. Dr. Kahn cites one study where 50 patients with knee osteoarthritis took MSM or a placebo pill. After 12 weeks, MSM significantly decreased pain and physical function impairment compared to the placebo. Today, many people with joint pain take MSM daily to help with the condition.\n\nCollagen:– Collagen is the most abundant connective protein in the body, and many people take collagen daily for anti-aging, wrinkle defense, joint pain, and muscle recovery. It helps to repair cartilage and other joint cells. The reason is simple: most cartilage is made of collagen, and collagen plays a crucial role in holding your body together.\n\nChondroitin Sulfate:– Like glucosamine sulfate, chondroitin sulfate is well-known for its effects on joint pain relief and bone repair. As proof, Dr. Kahn cites one study involving 162 patients with osteoarthritis in their hand. Patients took chondroitin sulfate or a placebo pill, and those in the chondroitin sulfate group had significantly less hand pain than those taking a placebo.\n\n15+ Other Herbs, Plants Ingredients include bromelain, calendula, burdock, cetyl myristoleate, yucca, feverfew, shark cartilage, horsetail, white willow bark, gentian root, cinnamon, shatavari, N-acetyl D-glucosamine, grape root extract, and Rehmannia root.\n\n](URL",
"### _[\\[SPECIAL DISCOUNT\\] Click Here To Visit MarineFlex Ultra Official Website](URL\n\nMarineFlex Ultra™ 180-Day Money Back Guarantee!\n---------------------------------------------------\n\nThe MarineFlex Ultra™ is backed by a 100% money back guarantee for 180 full days from your original purchase.\n\nIf you're not totally and completely satisfied with our product or your results within the first 180 days from your purchase simply let us know at MarineFlex Ultra™ and we’ll give you a refund within 48 hours of the product being returned. That’s right, simply return the MarineFlex Ultra, even empty bottles, anytime within 180 days of your purchase and you’ll receive a refund, no questions asked!\n\nWhere To Buy?\n-----------------\n\nWhen it comes to purchasing Marine Flex Ultra, you have a few options. You can buy the supplement directly from their website, or you can purchase it from various retailers. \nIt is important to remember that Marine Flex Ultra is not FDA approved, and as such you should always take the recommended doseIf you are taking other medications, you should talk to your doctor before taking Marine Flex Ultra.\n\nFinal Verdict\n-----------------\n\nMarineFlex Ultra gives users the ability to reduce inflammation. The formula is easy to keep up with daily, though users only need three capsules to create the effect. Users can choose between a one-time purchase and a subscription, but both are covered by a 180-day money-back guarantee. With so many natural ingredients, users can simply enjoy the benefits without worrying about side effects.\n\n](URL",
"### _Order Your Supply Of Marine Flex Ultra Now And Start Enjoying The Benefits!_\n\nIngredients of MarineFlex Ultra!\n------------------------------------\n\nHere are all of the active ingredients in MarineFlex Ultra and how they work, according to Dr. Kahn and the official MarineFlex Ultra website:\n\nGreen Lipped Mussel:– Green lipped mussel, also known as Perna canaliculus, is a rare marine organism that only grows in the clean and pristine waters off the coast of New Zealand. This mussel gives Marine Flex Ultra its name, because the primary active ingredient comes from the ocean. Green lipped mussels are rich with omega-3 fatty acids, including proven inflammation fighters like DHA and EPA.\n\nBoswellia Serrata:–Boswellia serrata extract comes from a tree native to India. It has been used in traditional Indian medicine (Ayurveda) for centuries as a general health and wellness aid. Today, we know boswellia serrata is rich with phytochemicals (plant-based antioxidants) and other natural ingredients that can decrease knee pain, boost mobility, and help with swelling and inflammation.\n\nAshwagandha:– Today, we know ashwagandha works because it’s rich with a substance called Withaferin A (WFA). This substance appears to help with chronic joint pain by suppressing inflammatory cytokines. In fact, WFA could help supress inflammation throughout the body, leading to positive effects on cognition, physical energy, mobility, and more. Many people with joint pain have chronic inflammation, and ashwagandha could help.\n\nHyaluronic Acid:- Hyaluronic acid is known for carrying many times its weight in water, increasing hydration throughout the body – including in the area between your joints. Hyaluronic acid support the synovial fluid and lubrication between your joints, but it also stimulates new cell formation to help repair cartilage, helping it support joint pain relief in multiple ways.\n\nMSM:– MSM helps your body form new cartilage and it decreases joint inflammation. Dr. Kahn cites one study where 50 patients with knee osteoarthritis took MSM or a placebo pill. After 12 weeks, MSM significantly decreased pain and physical function impairment compared to the placebo. Today, many people with joint pain take MSM daily to help with the condition.\n\nCollagen:– Collagen is the most abundant connective protein in the body, and many people take collagen daily for anti-aging, wrinkle defense, joint pain, and muscle recovery. It helps to repair cartilage and other joint cells. The reason is simple: most cartilage is made of collagen, and collagen plays a crucial role in holding your body together.\n\nChondroitin Sulfate:– Like glucosamine sulfate, chondroitin sulfate is well-known for its effects on joint pain relief and bone repair. As proof, Dr. Kahn cites one study involving 162 patients with osteoarthritis in their hand. Patients took chondroitin sulfate or a placebo pill, and those in the chondroitin sulfate group had significantly less hand pain than those taking a placebo.\n\n15+ Other Herbs, Plants Ingredients include bromelain, calendula, burdock, cetyl myristoleate, yucca, feverfew, shark cartilage, horsetail, white willow bark, gentian root, cinnamon, shatavari, N-acetyl D-glucosamine, grape root extract, and Rehmannia root.\n\n](URL",
"### _[\\[SPECIAL DISCOUNT\\] Click Here To Visit MarineFlex Ultra Official Website](URL\n\nMarineFlex Ultra™ 180-Day Money Back Guarantee!\n---------------------------------------------------\n\nThe MarineFlex Ultra™ is backed by a 100% money back guarantee for 180 full days from your original purchase.\n\nIf you're not totally and completely satisfied with our product or your results within the first 180 days from your purchase simply let us know at MarineFlex Ultra™ and we’ll give you a refund within 48 hours of the product being returned. That’s right, simply return the MarineFlex Ultra, even empty bottles, anytime within 180 days of your purchase and you’ll receive a refund, no questions asked!\n\nWhere To Buy?\n-----------------\n\nWhen it comes to purchasing Marine Flex Ultra, you have a few options. You can buy the supplement directly from their website, or you can purchase it from various retailers. \nIt is important to remember that Marine Flex Ultra is not FDA approved, and as such you should always take the recommended doseIf you are taking other medications, you should talk to your doctor before taking Marine Flex Ultra.\n\nFinal Verdict\n-----------------\n\nMarineFlex Ultra gives users the ability to reduce inflammation. The formula is easy to keep up with daily, though users only need three capsules to create the effect. Users can choose between a one-time purchase and a subscription, but both are covered by a 180-day money-back guarantee. With so many natural ingredients, users can simply enjoy the benefits without worrying about side effects.\n\n](URL",
"passage: ### _Order Your Supply Of Marine Flex Ultra Now And Start Enjoying The Benefits!_\n\nIngredients of MarineFlex Ultra!\n------------------------------------\n\nHere are all of the active ingredients in MarineFlex Ultra and how they work, according to Dr. Kahn and the official MarineFlex Ultra website:\n\nGreen Lipped Mussel:– Green lipped mussel, also known as Perna canaliculus, is a rare marine organism that only grows in the clean and pristine waters off the coast of New Zealand. This mussel gives Marine Flex Ultra its name, because the primary active ingredient comes from the ocean. Green lipped mussels are rich with omega-3 fatty acids, including proven inflammation fighters like DHA and EPA.\n\nBoswellia Serrata:–Boswellia serrata extract comes from a tree native to India. It has been used in traditional Indian medicine (Ayurveda) for centuries as a general health and wellness aid. Today, we know boswellia serrata is rich with phytochemicals (plant-based antioxidants) and other natural ingredients that can decrease knee pain, boost mobility, and help with swelling and inflammation.\n\nAshwagandha:– Today, we know ashwagandha works because it’s rich with a substance called Withaferin A (WFA). This substance appears to help with chronic joint pain by suppressing inflammatory cytokines. In fact, WFA could help supress inflammation throughout the body, leading to positive effects on cognition, physical energy, mobility, and more. Many people with joint pain have chronic inflammation, and ashwagandha could help.\n\nHyaluronic Acid:- Hyaluronic acid is known for carrying many times its weight in water, increasing hydration throughout the body – including in the area between your joints. Hyaluronic acid support the synovial fluid and lubrication between your joints, but it also stimulates new cell formation to help repair cartilage, helping it support joint pain relief in multiple ways.\n\nMSM:– MSM helps your body form new cartilage and it decreases joint inflammation. Dr. Kahn cites one study where 50 patients with knee osteoarthritis took MSM or a placebo pill. After 12 weeks, MSM significantly decreased pain and physical function impairment compared to the placebo. Today, many people with joint pain take MSM daily to help with the condition.\n\nCollagen:– Collagen is the most abundant connective protein in the body, and many people take collagen daily for anti-aging, wrinkle defense, joint pain, and muscle recovery. It helps to repair cartilage and other joint cells. The reason is simple: most cartilage is made of collagen, and collagen plays a crucial role in holding your body together.\n\nChondroitin Sulfate:– Like glucosamine sulfate, chondroitin sulfate is well-known for its effects on joint pain relief and bone repair. As proof, Dr. Kahn cites one study involving 162 patients with osteoarthritis in their hand. Patients took chondroitin sulfate or a placebo pill, and those in the chondroitin sulfate group had significantly less hand pain than those taking a placebo.\n\n15+ Other Herbs, Plants Ingredients include bromelain, calendula, burdock, cetyl myristoleate, yucca, feverfew, shark cartilage, horsetail, white willow bark, gentian root, cinnamon, shatavari, N-acetyl D-glucosamine, grape root extract, and Rehmannia root.\n\n](URL"
] |
7b34d75c10b59bd5cfc314b62504e437df9a5c66
|
# Dataset Card for "rick_and_morty_image_and_text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
qwertyaditya/rick_and_morty_image_and_text
|
[
"region:us"
] |
2023-09-01T10:03:31+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3793367.0, "num_examples": 40}], "download_size": 3457617, "dataset_size": 3793367.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T10:03:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "rick_and_morty_image_and_text"
More Information needed
|
[
"# Dataset Card for \"rick_and_morty_image_and_text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"rick_and_morty_image_and_text\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"rick_and_morty_image_and_text\"\n\nMore Information needed"
] |
ecfa844b25d8cd5ee1f2f7a1da5e85a8c2eab7f3
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l256_d3_sd0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_house_16H_gosdt_l256_d3_sd0
|
[
"region:us"
] |
2023-09-01T10:39:30+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 4616800000, "num_examples": 100000}, {"name": "validation", "num_bytes": 461680000, "num_examples": 10000}], "download_size": 1644572003, "dataset_size": 5078480000}}
|
2023-09-01T10:40:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l256_d3_sd0"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
[
6,
33
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_house_16H_gosdt_l256_d3_sd0\"\n\nMore Information needed"
] |
c549ca0525f02483c69eba6170872daf08a83ba8
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
VedCodes/Easy_Share_Instruction
|
[
"task_categories:text-generation",
"size_categories:n<1K",
"language:en",
"finance",
"medical",
"region:us"
] |
2023-09-01T10:48:29+00:00
|
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["text-generation"], "tags": ["finance", "medical"]}
|
2023-09-01T11:21:11+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-n<1K #language-English #finance #medical #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #finance #medical #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
37,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-n<1K #language-English #finance #medical #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
57185d1cee22bf5617425860308d31c50ec4028c
|
# Dataset Card for "mtet"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hieunguyen1053/mtet-filtered
|
[
"region:us"
] |
2023-09-01T11:02:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "eval", "path": "data/eval-*"}]}], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "vi", "dtype": "string"}, {"name": "loss", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1343163221, "num_examples": 4163853}, {"name": "test", "num_bytes": 336407, "num_examples": 1268}, {"name": "eval", "num_bytes": 353494, "num_examples": 1553}], "download_size": 208876651, "dataset_size": 1343853122}}
|
2023-09-01T11:10:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mtet"
More Information needed
|
[
"# Dataset Card for \"mtet\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mtet\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mtet\"\n\nMore Information needed"
] |
fcea94fe0ef289ac84bda0b755febd6eca5f226f
|
# Dataset Card for "huggingartists_raw"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/huggingartists_raw
|
[
"region:us"
] |
2023-09-01T11:47:33+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "dataset", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121693362, "num_examples": 69312}], "download_size": 56195290, "dataset_size": 121693362}}
|
2023-09-01T12:00:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "huggingartists_raw"
More Information needed
|
[
"# Dataset Card for \"huggingartists_raw\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"huggingartists_raw\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"huggingartists_raw\"\n\nMore Information needed"
] |
0433246f72fb6e8eed063f717dc9623ab3753732
|
# Dataset Card for "augmented-shrutilipi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
makaveli10/augmented-shrutilipi
|
[
"region:us"
] |
2023-09-01T11:52:22+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 28188508592.0, "num_examples": 40000}], "download_size": 28080609408, "dataset_size": 28188508592.0}}
|
2023-09-04T07:57:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "augmented-shrutilipi"
More Information needed
|
[
"# Dataset Card for \"augmented-shrutilipi\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"augmented-shrutilipi\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"augmented-shrutilipi\"\n\nMore Information needed"
] |
b6534d56d241832d38720cd1ded002b46765796c
|
[**Click Here -- Official Website -- Order Now**](https://sale365day.com/get-bazopril)
**✔For Order Official Website -** [https://sale365day.com/get-bazopril](https://sale365day.com/get-bazopril)
**✔Product Name -** [Bazopril](https://lookerstudio.google.com/reporting/3006eb02-d08d-4210-b766-00b92b3047e7/page/SPSbD)
**✔Side Effect -** No Side Effects
**✔Availability -** [Online](https://sale365day.com/get-bazopril)
**✔Rating -** ⭐⭐⭐⭐⭐
[**Hurry Up - Limited Time Offer - Purchase Now**](https://sale365day.com/get-bazopril)
**[Bazopril](https://bazopril-reviews-official.jimdosite.com/) is a blood pressure supplement featuring a blend of natural ingredients to support heart health.**
Is Bazopril legit? Can [Bazopril](https://yourpillsboss.blogspot.com/2023/09/bazopril-reviews-1-blood-pressure.html) lower your blood pressure naturally? Keep reading to discover everything you need to know about Bazopril and how it works today in our review.
**What is Bazopril?**
---------------------
[Bazopril is a nutritional supplement available exclusively online](https://sale365day.com/get-bazopril). Developed by a man who recently suffered a heart attack, then reversed his high blood pressure using natural ingredients, Bazopril contains a blend of nutrients to keep your blood pressure in a normal range.
Each capsule of Bazopril contains ingredients like mallow flower, elaion tree leaf extract, and conifer berry to maintain a healthy circulation throughout the body.
The makers of [Bazopril](https://bazopril-reviews-2023.webflow.io/) market the supplement to anyone concerned with heart health – including those with high blood pressure or anyone who dislikes the side effects of their blood pressure medication.
Bazopril is priced at $69 per bottle. Qualifying purchases come with free shipping and free digital bonuses.
**Bazopril Benefits**
---------------------
According to the official Bazopril website, the supplement can provide benefits like:
- Maintain healthy blood pressure
<li>Target the root cause of high blood pressure – your kidneys</li>
<li>Natural ingredients with no side effects</li>
<li>Made in the United States in FDA-registered, GMP-certified facilities</li>
<li>Backed by cutting-edge science and centuries of use in traditional medicine</li>
</ul>
<p style="text-align: justify;">Overall, Bazopril aims to be the ultimate blood pressure support supplement available today.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Click here to order Bazopril and experience the benefits!</strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>How Does Bazopril Work?</strong></h2>
<p style="text-align: justify;">Bazopril is primarily designed for people with high blood pressure who want to lower their blood pressure using natural ingredients. Here's how John Winters and the Bazopril formulation team describe the formula:</p>
<p style="text-align: justify;"><em>"If you struggle to keep your blood pressure in the normal range, then Bazopril is for you. If you worry about complications, then Bazopril is definitely for you."</em></p>
<p style="text-align: justify;">Doctors may prescribe blood pressure medication to lower your blood pressure. However, popular blood pressure medication often comes with unwanted side effects.</p>
<p style="text-align: justify;">Bazopril works differently, using a blend of natural ingredients to maintain healthy blood pressure levels. And according to the official website, the supplement has already "helped thousands of people of all ages" maintain healthy blood pressure levels.</p>
<h2 style="text-align: justify;"><strong>Who Created Bazopril?</strong></h2>
<p style="text-align: justify;"><a href="https://devfolio.co/@bazoprilreview">Bazopril</a> was developed by John Winters, a research scientist from the United States. John also refers to himself as John Miller.</p>
<p style="text-align: justify;">John had been dealing with high blood pressure for a long time. Nine years ago, his doctor started prescribing beta blockers, calcium channel blockers, and other blood pressure medications.</p>
<p style="text-align: justify;">Those medications didn't work, leaving John with uncomfortable side effects. His doctor doubled the dose, then prescribed more drugs to manage his blood pressure.</p>
<p style="text-align: justify;">Nothing worked, and John's blood pressure continued to rise. Even when taking blood pressure medication, John's blood pressure was 179/85 at a checkup, sending him into a panic.</p>
<p style="text-align: justify;">One day, John's blood pressure issues peaked when he suffered a heart attack at his daughter's Christmas concert. John was rushed to the hospital, but he was determined his daughter would not grow up fatherless.</p>
<p style="text-align: justify;">John started to research natural cures for high blood pressure. He stumbled upon a series of ingredients popular in ancient Egypt for lowering blood pressure naturally. After testing those ingredients in different combinations and dosages, he created Bazopril.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-bazopril"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgodCJWlAucB-1OV1Ff5yW2xheY7riGFhNQ7P6STT0_Uq99OPuKEZfF4-r1iFlBPUdlZIsXZ2f7mu2vApY54xTl_hB32WOxqBGm7KtM9t6FsBn38A072zpFjE5psvSLqfGLxWIWUQCfghXhpbIV3QfmuzJ4XhjB-arnZGpUyqBqgGj3fOqiI_WkiwNZ/w640-h436/egegeffgg.JPG" alt="" width="640" height="436" border="0" data-original-height="652" data-original-width="959" /></a></div>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Learn more on the official website >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Ingredients</strong></h2>
<p style="text-align: justify;">Plenty of cardiovascular health supplements claim to improve heart health – yet they don't work because they contain unproven ingredients. Bazopril aims to take a different approach by using natural ingredients backed by cutting-edge research.</p>
<p style="text-align: justify;">In fact, some of the ingredients in <a href="https://www.facebook.com/profile.php?id=61550666740528">Bazopril</a> have been "used by our ancestors for centuries," according to John Winters and his formulation team. Modern science has validated the use of these ingredients for health, wellness, and cardiovascular support.</p>
<p style="text-align: justify;">Here are all of the active ingredients in Bazopril and how they work, according to the official Bazopril website:</p>
<p style="text-align: justify;"><strong>Albaspine:</strong> Bazopril contains albaspine, better known as hawthorn or Crataegus. Many supplements – particularly heart health supplements – use hawthorn for its ability to promote healthy blood pressure. According to the creators, the albaspine in Bazopril "has so many proven health benefits," which is why it's been used for centuries in traditional medicine. The plant is also known as "The Crown of Jesus" because Jesus was crowned with hawthorn. While John was researching natural cures for high blood pressure, he discovered research proving hawthorn "keeps your heart relaxed so it's not pushing so hard," allowing you to lower blood pressure naturally. Albaspine also releases nitric oxide, which helps to relax your blood vessels and enhance blood flow.</p>
<p style="text-align: justify;"><strong>Conifer Berry:</strong> Conifer berry is a cone that keeps arteries relaxed and maintains healthy circulation, flooding your cardiovascular system with antioxidants to keep it healthy. It's better known as the juniper berry, and many supplements use it to support heart health and overall healthy inflammation throughout the body. Today, we know juniper berries work because they're rich in vitamin C, one of nature's best antioxidants. This vitamin works throughout the body – including in and around your heart – to support healthy circulation.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Bazopril Is On Sale Now For A Limited Time!</strong></span></a></u></p>
<p style="text-align: justify;"><strong>Elaion Tree Leaf Extract:</strong> Better known as olive leaf extract, elaion tree leaf extract contains natural chemicals linked to heart health, healthy inflammation, and overall benefits. The main constituent in olive leaf extract is oleuropein. This natural chemical appears to have antioxidant effects, which can help fight free radicals throughout the body. Free radicals cause inflammation that increases your risk of disease and illness. According to John Winters' research, when developing Bazopril, elaion tree leaf extract can support a healthy inflammatory response.</p>
<p style="text-align: justify;"><strong>Mallow Flower:</strong> Mallow flower has been used as a heart remedy for thousands of years because of its effects on the kidneys. After John Winters discovered the root cause of high blood pressure was in the kidneys, he identified several herbs specifically targeting the kidneys. Mallow flower, better known as hibiscus, signals your body to produce a hormone called renin, and this hormone "maintains normal blood pressure in your body," according to John's research.</p>
<p style="text-align: justify;"><strong>Lasuna Bulb:</strong> Lasuna bulb has a long history of use in traditional medicine worldwide, and it's one of nature's best-known heart health remedies overall. We know it better as a type of garlic. John Winters describes garlic as the "Nectar of Gods because it maintains optimum arterial flexibility and stable blood pressure." People with high blood pressure tend to have poor arterial flexibility and unstable blood pressure, leading to high blood pressure readers. Popular for 5,000+ years, garlic continues to be a critical component of natural remedies like Bazopril today.</p>
<p style="text-align: justify;"><strong>Camellia Sinensis:</strong> Better known as green tea, camellia sinensis is packed with plant-based antioxidants called polyphenols that support healthy blood pressure. Many people drink green tea daily for its anti-inflammatory effects. These effects can support a healthy heart, brain, blood sugar levels, weight loss, and overall longevity, among other effects. Considered one of the world's most nutritional beverages, green tea is condensed into a powdered form and added to each capsule of Bazopril to unlock powerful effects.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>This sale won't last long, so act now!</strong></span></a></u></p>
<p style="text-align: justify;">Overall, Bazopril contains ingredients to target blood pressure, heart health, kidney health, and inflammation differently. Many components are sourced from traditional medicine, while others date back to ancient Egypt. By collecting these nutrients in one place, John Winters aims to have created the ultimate cardiovascular health supplement.</p>
<h2 style="text-align: justify;"><strong>Bazopril Targets the Root Cause of High Blood Pressure: Your Kidneys</strong></h2>
<p style="text-align: justify;">John Winters, chief formulator of Bazopril, started researching the reason for his high blood pressure.</p>
<p style="text-align: justify;">After suffering a heart attack at his daughter's concert, he wanted to make a change. His research led him to discover the root cause of high blood pressure issues: your kidneys.</p>
<p style="text-align: justify;">One day at the grocery store, John met a man named James, who had seen him collapse from a heart attack at his daughter's concert. James was a researcher at a local hospital, and he told James the root cause of high blood pressure is in the kidneys.</p>
<p style="text-align: justify;">Here's how John describes the connection between kidneys and blood pressure, citing a study by the University of Virginia:</p>
<p style="text-align: justify;"><em>"…it took them 60 years to discover that the blood pressure control mechanism is hidden inside the tiny kidney cells. Kidneys are the key to normal blood pressure…because they use a special hormone called renin to set the blood pressure."</em></p>
<p style="text-align: justify;">When your kidneys are healthy, you produce a normal amount of the renin hormone, and your blood pressure remains stable.</p>
<p style="text-align: justify;">When your kidneys are imbalanced, your body isn't producing the right amount of renin, causing blood pressure to rise.</p>
<p style="text-align: justify;">Some of the ingredients in Bazopril, including mallow flower (hibiscus), are designed to specifically target your kidneys and support the production of renin, <u>helping your blood pressure remain stable</u>.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-bazopril"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjt_ynx8TLfF3uuOPD5xks9bCccMQfc5ZJ05ROrnOd2lBKdYy6d07vL2wwLKHBhYPt8QE81Xnilp5efKocwxsVVaYH9Nz80uGPQirwVdnboiRNzwesfZRHyDvUTWHBtj3p7TUJObRKaXuX3o-PikME9jMEq4uaOJvQr7WYOPWjqv33Aq6pL6bsjedBZ/w640-h256/egeggg.JPG" alt="" width="640" height="256" border="0" data-original-height="419" data-original-width="1044" /></a></div>
<h2 style="text-align: justify;"><strong>Bazopril Versus Blood Pressure Medication</strong></h2>
<p style="text-align: justify;"><a href="https://www.fuzia.com/article_detail/801663/bazopril-reviews-2023-do-not-buy-till-you-read-this">Bazopril</a> is marketed as an alternative to blood pressure medication. The chief formulator of Bazopril, John Winters (also known as John Miller), claims he was taking substantial doses of five separate blood pressure medications before he experienced lasting relief with Bazopril.</p>
<p style="text-align: justify;">So what's the difference between Bazopril and blood pressure medication? Here are some of the things to consider:</p>
<p style="text-align: justify;">One woman cited on the official Bazopril website claims she was able to stop taking her prescription blood pressure medication after taking Bazopril for six months.</p>
<p style="text-align: justify;">John Winters (John Miller) was taking very strong doses of five separate blood pressure medications, including beta blockers and calcium channel blockers, before he developed Bazopril to resolve his blood pressure issues. Even when taking these five medications, his blood pressure was 179/85 at a checkup.</p>
<p style="text-align: justify;">John spoke with colleagues in the medical industry who told him blood pressure drugs don't work for 53% of people. In other words, most people who take blood pressure medication won't experience relief.</p>
<p style="text-align: justify;">Long-term use of blood pressure medication is associated with kidney failure. As your kidneys fail, your blood pressure rises. Medication may temporarily lower blood pressure, only to raise it long-term because of the effects on your kidneys.</p>
<p style="text-align: justify;">Over 80% of doctors who promote blood pressure medication take money from big pharmaceutical companies, according to research cited by John on the <u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener">official Bazopril website</a></u>.</p>
<h2 style="text-align: justify;"><strong>Scientific Evidence for Bazopril</strong></h2>
<p style="text-align: justify;">As proof that <a href="https://www.fuzia.com/article_detail/801663/bazopril-reviews-2023-do-not-buy-till-you-read-this">Bazopril</a> works, John Winters and his team cite dozens of studies from the University of Virginia, Harvard University, and other major educational institutions. We'll review some of that research below to determine how Bazopril works and the science behind the supplement.</p>
<p style="text-align: justify;"><strong>Hawthorn</strong> has been studied for its long-term effects on heart health. A <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3249900/" target="_blank" rel="nofollow noopener">2010 study</a></u> found that hawthorn could help treat cardiovascular and ischemic heart diseases. And, even at doses as high as 1,800mg, hawthorn appears to be effective for supporting cardiovascular health. Researchers specifically praised hawthorn for its "lack of herb-drug interactions" from clinical trials, suggesting it could be an effective alternative treatment for cardiovascular disease.</p>
<p style="text-align: justify;"><strong>Juniper berry</strong> is another popular nutraceutical used in anti-aging medicine, heart health, and inflammation supplements. In a <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6726717/" target="_blank" rel="nofollow noopener">2019 study</a></u>, researchers found juniper was rich in aromatic oils, sugars, resins, catechin, terpenic acids, alkaloids, flavonoids, and other natural ingredients that appeared to have positive effects throughout the body. These natural ingredients have antioxidant, antibacterial, antifungal, anti-inflammatory, and cytotoxic effects, all of which could support heart health and your cardiovascular system.</p>
<p style="text-align: justify;">It's no secret <strong>olive oil</strong> is good for heart health. One of the most heart-health-friendly diets on the planet, the Mediterranean diet, emphasizes olive oil. Bazopril contains olive leaf extract specifically for its effects on cardiovascular health. In a <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8137474/" target="_blank" rel="nofollow noopener">2021 trial</a></u>, researchers tested the effects of olive leaf extract on a group of 77 healthy adults with mildly high cholesterol levels. Adults took olive leaf extract or a placebo for eight weeks. At the end of the eight weeks, researchers found no difference in blood pressure, cholesterol, or blood sugar levels between the two groups.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Buy Bazopril Before it's SOLD OUT</strong></span></a></u></p>
<p style="text-align: justify;">A <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3730992/" target="_blank" rel="nofollow noopener">2010 study</a></u> was more beneficial, finding oleuropein (a natural chemical within olive leaf extract) was linked to cardioprotective and neuroprotective effects.</p>
<p style="text-align: justify;">A <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9086798/" target="_blank" rel="nofollow noopener">2022 meta-analysis</a></u> on hibiscus (mallow flower) found it could support cardiovascular health and blood pressure. Researchers examined 17 hibiscus and blood pressure trials and found a significant connection between consumption and lower blood pressure levels. Researchers found "hibiscus-induced reductions to BP similar to that resulting from medication," with drops of 2.13mmHg in systolic BP and 1.10mmHg in diastolic BP. Hibiscus also significantly lowered unhealthy cholesterol levels, further supporting heart health.</p>
<p style="text-align: justify;">Bazopril also contains garlic, one of nature's best-known cardiovascular supplement ingredients. Many studies have linked garlic and its natural constituent chemicals to significant heart health and blood pressure effects. A <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC139960/" target="_blank" rel="nofollow noopener">2002 review</a></u>, for example, found an inverse correlation between garlic consumption and a reduced risk of cardiovascular disease progression, supporting the idea garlic can reduce the risk of heart health problems. Researchers believe garlic works because it contains natural chemicals like allicin linked to antioxidant effects and blood flow.</p>
<p style="text-align: justify;">Green tea, the final active ingredient in Bazopril, is also popular for its effects on heart health. Green tea contains catechins that appear to help with heart health. Catechins like epigallocatechin gallate (EGCG), for example, appear to have antioxidant effects throughout the body – including on the heart and your overall cardiovascular system. A <u><a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2748751/" target="_blank" rel="nofollow noopener">2009 study</a></u> specifically connected the catechins in green tea to positive effects on heart health, finding they regulated vascular tone, promoted nitric oxide production, and supported overall cardiovascular system healing, among other effects.</p>
<p style="text-align: justify;"><a href="https://bazoprilreviews-doesitworksors.godaddysites.com/">Bazopril</a> blends natural ingredients linked to heart health and cardiovascular function. From garlic to olive leaf extract to hibiscus, the supplement contains some of nature's best-known remedies for heart health and cardiovascular function.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-bazopril"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgDtBK7XbISW0FRKelgf-WlqlArstqFhEsSm6Hn1AInj6OErnjGvhDKP0fgomtByuCysNxMh2iS4mxTcQp8zEZPqK9B4LYLdvbfRDhoFIO7dLwQCTg2Un5svdgwIZyGIIo-MsVtKJGeO0rM9Zj99S4ny9OYUvhSfMgiuM5Y0HGetlvRrO3Yp8VckqDv/w640-h440/efeghth.JPG" alt="" width="640" height="440" border="0" data-original-height="520" data-original-width="757" /></a></div>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>See what others are saying about Bazopril >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Reviews: What Do Customers Say?</strong></h2>
<p style="text-align: justify;"><a href="https://www.townscript.com/e/bazopril-reviews-1-usa-blood-pressure-support-read-reality-before-buyng-102403">Bazopril</a> has strong reviews online from customers who have experienced significant results with the supplement.</p>
<p style="text-align: justify;">Men and women have left reviews stating Bazopril works, with many agreeing their blood pressure is significantly lower after taking Bazopril. Some have even stopped taking their doctor-prescribed blood pressure medication after using Bazopril.</p>
<p style="text-align: justify;">Here are some of the reviews shared by customers on the official website:</p>
<p style="text-align: justify;">One customer claims her "blood pressure has been consistently lower" after taking Bazopril for a few months. She also has extra energy during the day.</p>
<p style="text-align: justify;">Another man claims his "blood pressure has never been lower" after taking Bazopril for just two weeks. Now, he's "feeling better than ever."</p>
<p style="text-align: justify;">Some customers have dealt with high blood pressure for years before targeting the issue with Bazopril. One customer claims she has "been living with high blood pressure for years" and had tried all types of remedies – yet nothing worked until she started taking Bazopril. Now, she feels "so much better" thanks to the supplement.</p>
<p style="text-align: justify;">One woman claims her "numbers have dropped significantly" after taking Bazopril for just two weeks.</p>
<p style="text-align: justify;">One man has been impressed with his results after ordering a six-month supply of <a href="https://sketchfab.com/3d-models/bazopril-reviews-scam-alert-2023-does-it-works-b0a9e42a4c17498abac6793d797a5333">Bazopril</a>. He claims, "Every time I checked my blood pressure, it was lower than before," thanks to <a href="https://infogram.com/bazopril-reviews-2023-shocking-truth-must-read-this-before-buying-1h7g6k0w5mww02o">Bazopril</a>.</p>
<p style="text-align: justify;">Some customers claim to have stopped taking their doctor-prescribed blood pressure medication after taking Bazopril. For example, one woman claims she "even got off one prescription drug" after taking Bazopril for six months.</p>
<p style="text-align: justify;">Overall, many customers have lowered blood pressure significantly with <a href="https://bazoprilreviewsscam.bandcamp.com/track/bazopril-reviews-scam-alert-nobody-tells-you-the-100-truth-behind-bazopril-blood-pressure-formula">Bazopril</a> – with many customers noticing results in just two weeks.</p>
<h2 style="text-align: justify;"><strong>Bazopril Pricing</strong></h2>
<p style="text-align: justify;"><a href="https://soundcloud.com/bazopril-reviews/bazopril-reviews-disclosed-beware-alarming-side-effects-or-real-benefits">Bazopril</a> typically costs $230 per bottle, according to the official website. However, as part of a 2023 promotion, you can pay $69 or less by <u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener">buying directly from the manufacturer online</a></u>. Qualifying purchases come with bulk savings, free shipping, and bonuses.</p>
<p style="text-align: justify;">Here's how pricing works when ordering <a href="https://groups.google.com/g/bazopril-reviews-offer/c/2q6RqQdiqmg">Bazopril</a> online today:</p>
<ul style="text-align: justify;">
<li>Order one bottle for $69 + $9.95 Shipping</li>
<li>Order three bottles for $177 ($59 per bottle) + Free Shipping</li>
<li>Order six bottles for $294 ($49 per bottle) + Free Shipping + 2 Free Bonuses</li>
</ul>
<p style="text-align: justify;">Each bottle contains a 30-day supply of Bazopril, or 30 servings (60 capsules). You take two capsules daily to support healthy blood pressure.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Act quickly to secure the limited-time discounted price today!</strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bonuses Included with Bazopril</strong></h2>
<p style="text-align: justify;">As part of a 2023 promotion, all six bottle purchases of <a href="https://medium.com/@bazopril_658/bazopril-reviews-scam-or-legit-does-this-1-blood-pressure-support-formula-works-c52fa06e0521">Bazopril</a> come with two free bonus eBooks. These eBooks can complement the effects of <a href="https://bazoprilreviews.contently.com/">Bazopril</a>, giving you additional tips for lowering blood pressure by making diet and lifestyle changes.</p>
<p style="text-align: justify;">Bonuses included with <a href="https://groups.google.com/g/bazopril-reviews-offer/c/6bvAbb36Nqc">Bazopril</a> include:</p>
<p style="text-align: justify;"><strong>Free Bonus eBook 1: The Heart's Kitchen: Desserts And Superfoods That Strengthen Your Heart:</strong> This eBook describes some of the best natural ingredients, desserts, superfoods, and more to boost cardiovascular health. For example, you'll discover a "sour root" that can promote heart health and a "miraculous water cure" to revitalize your heart. One nutrient highlighted in the book was so valuable it used to be used as currency.</p>
<p style="text-align: justify;"><strong>Free Bonus eBook 2: Heart Smart:</strong> This eBook highlights some of the best tips, tricks, and strategies you can use to support cardiovascular health. Simple lifestyle changes could lower blood pressure, improve heart health, and extend your lifespan. You don't need to transform your diet or lifestyle; you can make small, incremental changes to improve your heart starting today.</p>
<p style="text-align: justify;"><u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Order now & get bonuses >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Refund Policy</strong></h2>
<p style="text-align: justify;"><a href="https://bazopril-blood-pressure-support.clubeo.com/page/bazopril-reviews-do-bazopril-blood-pressure-formula-work-my-30-days-experience-new-report.html">Bazopril</a> has a 365-day money-back guarantee. You have 365 days to try <a href="https://www.forexagone.com/forum/journal-de-trading/bazopril-reviews-formulated-with-100-pure-ingredients-that-maintain-blood-pressure-support-67644#164854">Bazopril</a>, determine if it works, and request a refund if you're unsatisfied. You can contact customer service if you have any questions about the return policy or anything else.</p>
<ul style="text-align: justify;">
<li>Email: [email protected]</li>
</ul>
<h2 style="text-align: justify;"><strong>About Bazopril</strong></h2>
<p style="text-align: justify;"><a href="https://www.facebook.com/profile.php?id=61550666740528">Bazopril</a> is made in the United States in an FDA-registered, GMP-certified facility. The supplement was formulated by Chief Research Scientist John Winters, who dealt with severe cardiovascular health issues before taking <a href="https://bazopril-reviews-usa.hashnode.dev/bazopril-reviews-1-usa-scam-or-legit-dont-buy-until-you-see-report-must-check-this-before-buying">Bazopril</a>. John Winters also periodically refers to himself as John Miller.</p>
<p style="text-align: justify;">Today, the manufacturer claims to work with a third-party lab to analyze each batch of <a href="https://groups.google.com/g/bazopril-reviews-offer">Bazopril</a> using high-performance liquid chromatography, refractive index detection, and rapid microbiology to verify purity and potency.</p>
<p style="text-align: justify;">You can contact the makers of <a href="https://community.weddingwire.in/forum/bazopril-reviews-do-not-buy-bazopril-blood-pressure-support-until-customer-truth-exposed--t145408">Bazopril</a> and the company's customer service team via the following:</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://sale365day.com/get-bazopril"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgESJuQSb5pxyzjU2_DuX3driHksbrVozLBVdKjyXFpz7hct2AckMcwK8_6O9X7tKQ_Tb7jDgn--pin19waLlAiyY4GFa35LuP3mVTY7jAzhH9GQ2JLKRePgRZCI5PCSL698G6xdK8zwo8HqXj4BlxVCaBARSuKj2jy1Q9_KXjDvNozHIdtUQMm7JRQ/w640-h464/e3ferfgg.JPG" alt="" width="640" height="464" border="0" data-original-height="579" data-original-width="798" /></a></div>
<h2 style="text-align: justify;"><strong>Final Word</strong></h2>
<p style="text-align: justify;"><a href="https://bazopril-blood-pressure-support.clubeo.com/">Bazopril</a> is a heart health supplement developed by a man who recently suffered a heart attack after dealing with high blood pressure for nine years.Featuring a blend of natural ingredients, Bazopril can support healthy blood pressure using garlic, olive leaf extract, hibiscus, and other natural ingredients. To learn more about Bazopril and how it works or to buy the heart health supplement online today, <u><a href="https://sale365day.com/get-bazopril" target="_blank" rel="nofollow noopener">visit the official website</a></u>.</p>
<p><strong>Read More:</strong></p>
<p><a href="https://yourpillsboss.blogspot.com/2023/09/bazopril-reviews-1-blood-pressure.html">https://yourpillsboss.blogspot.com/2023/09/bazopril-reviews-1-blood-pressure.html</a><br /><a href="https://bazopril-reviews-official.jimdosite.com/">https://bazopril-reviews-official.jimdosite.com/</a><br /><a href="https://lookerstudio.google.com/reporting/3006eb02-d08d-4210-b766-00b92b3047e7/page/SPSbD">https://lookerstudio.google.com/reporting/3006eb02-d08d-4210-b766-00b92b3047e7/page/SPSbD</a><br /><a href="https://bazopril-reviews-2023.webflow.io/">https://bazopril-reviews-2023.webflow.io/</a><br /><a href="https://devfolio.co/@bazoprilreview">https://devfolio.co/@bazoprilreview</a><br /><a href="https://www.facebook.com/profile.php?id=61550666740528">https://www.facebook.com/profile.php?id=61550666740528</a><br /><a href="https://bazopril-blood-pressure-support.clubeo.com/page/bazopril-reviews-do-bazopril-blood-pressure-formula-work-my-30-days-experience-new-report.html">https://bazopril-blood-pressure-support.clubeo.com/page/bazopril-reviews-do-bazopril-blood-pressure-formula-work-my-30-days-experience-new-report.html</a><br /><a href="https://bazopril-blood-pressure-support.clubeo.com/">https://bazopril-blood-pressure-support.clubeo.com/</a><br /><a href="https://groups.google.com/g/bazopril-reviews-offer">https://groups.google.com/g/bazopril-reviews-offer</a><br /><a href="https://groups.google.com/g/bazopril-reviews-offer/c/6bvAbb36Nqc">https://groups.google.com/g/bazopril-reviews-offer/c/6bvAbb36Nqc</a><br /><a href="https://groups.google.com/g/bazopril-reviews-offer/c/2q6RqQdiqmg">https://groups.google.com/g/bazopril-reviews-offer/c/2q6RqQdiqmg</a><br /><a href="https://soundcloud.com/bazopril-reviews/bazopril-reviews-disclosed-beware-alarming-side-effects-or-real-benefits">https://soundcloud.com/bazopril-reviews/bazopril-reviews-disclosed-beware-alarming-side-effects-or-real-benefits</a><br /><a href="https://bazoprilreviewsscam.bandcamp.com/track/bazopril-reviews-scam-alert-nobody-tells-you-the-100-truth-behind-bazopril-blood-pressure-formula">https://bazoprilreviewsscam.bandcamp.com/track/bazopril-reviews-scam-alert-nobody-tells-you-the-100-truth-behind-bazopril-blood-pressure-formula</a><br /><a href="https://www.townscript.com/e/bazopril-reviews-1-usa-blood-pressure-support-read-reality-before-buyng-102403">https://www.townscript.com/e/bazopril-reviews-1-usa-blood-pressure-support-read-reality-before-buyng-102403</a><br /><a href="https://www.fuzia.com/fz/bazopril-reviews002">https://www.fuzia.com/fz/bazopril-reviews002</a><br /><a href="https://www.fuzia.com/article_detail/801663/bazopril-reviews-2023-do-not-buy-till-you-read-this">https://www.fuzia.com/article_detail/801663/bazopril-reviews-2023-do-not-buy-till-you-read-this</a><br /><a href="https://bazoprilreviews-doesitworksors.godaddysites.com/">https://bazoprilreviews-doesitworksors.godaddysites.com/</a><br /><a href="https://infogram.com/bazopril-reviews-2023-shocking-truth-must-read-this-before-buying-1h7g6k0w5mww02o">https://infogram.com/bazopril-reviews-2023-shocking-truth-must-read-this-before-buying-1h7g6k0w5mww02o</a><br /><a href="https://sketchfab.com/3d-models/bazopril-reviews-scam-alert-2023-does-it-works-b0a9e42a4c17498abac6793d797a5333">https://sketchfab.com/3d-models/bazopril-reviews-scam-alert-2023-does-it-works-b0a9e42a4c17498abac6793d797a5333</a><br /><a href="https://bazoprilreviews.contently.com/">https://bazoprilreviews.contently.com/</a><br /><a 
href="https://medium.com/@bazopril_658/bazopril-reviews-scam-or-legit-does-this-1-blood-pressure-support-formula-works-c52fa06e0521?postPublishedType=initial">https://medium.com/@bazopril_658/bazopril-reviews-scam-or-legit-does-this-1-blood-pressure-support-formula-works-c52fa06e0521</a><br /><a href="https://medium.com/@bazopril_658">https://medium.com/@bazopril_658</a><br /><a href="https://www.forexagone.com/forum/journal-de-trading/bazopril-reviews-formulated-with-100-pure-ingredients-that-maintain-blood-pressure-support-67644#164854">https://www.forexagone.com/forum/journal-de-trading/bazopril-reviews-formulated-with-100-pure-ingredients-that-maintain-blood-pressure-support-67644#164854</a><br /><a href="https://bazopril-reviews-usa.hashnode.dev/bazopril-reviews-1-usa-scam-or-legit-dont-buy-until-you-see-report-must-check-this-before-buying?showSharer=true">https://bazopril-reviews-usa.hashnode.dev/bazopril-reviews-1-usa-scam-or-legit-dont-buy-until-you-see-report-must-check-this-before-buying</a><br /><a href="https://hashnode.com/@bazoprilreviewusa">https://hashnode.com/@bazoprilreviewusa</a><br /><a href="https://community.weddingwire.in/forum/bazopril-reviews-do-not-buy-bazopril-blood-pressure-support-until-customer-truth-exposed--t145408">https://community.weddingwire.in/forum/bazopril-reviews-do-not-buy-bazopril-blood-pressure-support-until-customer-truth-exposed--t145408</a><br /><a href="https://www.provenexpert.com/bazopril/">https://www.provenexpert.com/bazopril/</a><br /><a href="https://www.provenexpert.com/bazopril-blood-pressure-support-formula/">https://www.provenexpert.com/bazopril-blood-pressure-support-formula/</a><br /><a href="https://bazopril-blood-pressure-suppor-eb6f54.webflow.io/">https://bazopril-blood-pressure-suppor-eb6f54.webflow.io/</a><br /><a href="https://devfolio.co/@bazoprilreport">https://devfolio.co/@bazoprilreport</a><br /><a href="https://bazopril-updates.clubeo.com/page/bazopril-blood-pressure-support-1-formula-is-it-worth-the-buying-or-fake-supplement.html">https://bazopril-updates.clubeo.com/page/bazopril-blood-pressure-support-1-formula-is-it-worth-the-buying-or-fake-supplement.html</a><br /><a href="https://bazoprilreviewsreport.bandcamp.com/track/bazopril-blood-pressure-support-formula-fda-approved-2023-unexpected-details-revealed">https://bazoprilreviewsreport.bandcamp.com/track/bazopril-blood-pressure-support-formula-fda-approved-2023-unexpected-details-revealed</a><br /><a href="https://www.fuzia.com/article_detail/801700/bazopril-blood-pressure-support-formula-fake-or-legit">https://www.fuzia.com/article_detail/801700/bazopril-blood-pressure-support-formula-fake-or-legit</a><br /><a href="https://bazoprilreviewsofficial.contently.com/">https://bazoprilreviewsofficial.contently.com/</a><br /><a href="https://sketchfab.com/3d-models/bazopril-blood-pressure-support-alert-2023-use-e66d6c5ac3214bc6b43baac32b0f3c22">https://sketchfab.com/3d-models/bazopril-blood-pressure-support-alert-2023-use-e66d6c5ac3214bc6b43baac32b0f3c22</a><br /><a href="https://www.forexagone.com/forum/journal-de-trading/bazopril-reviews-is-bazopril-blood-pressure-support-formula-really-works-or-fake-truth-exposed-67703#164914">https://www.forexagone.com/forum/journal-de-trading/bazopril-reviews-is-bazopril-blood-pressure-support-formula-really-works-or-fake-truth-exposed-67703#164914</a><br /><a 
href="https://www.townscript.com/e/bazopril-blood-pressure-support-scam-formula-or-real-benefit-030312">https://www.townscript.com/e/bazopril-blood-pressure-support-scam-formula-or-real-benefit-030312</a></p>
|
bazoprilreview/Bazopril-Reviews
|
[
"region:us"
] |
2023-09-01T12:15:00+00:00
|
{}
|
2023-09-01T12:17:35+00:00
|
[] |
[] |
TAGS
#region-us
|
<article id="23298596" class="col-md-11 pb-5 pb-md-0 px-0 article-section inviewscroll mb-3 active">
<h2 style="text-align: center;"><a href="URL style="color: #003300;">Click Here -- Official Website -- Order Now</span></a></h2>
<p style="color: #848c9b; font-family: 'Open Sans', sans-serif; font-size: 16px; text-align: left;"><strong><span style="color: #ff00fe;">For Order Official Website -</span> <a href="URL/URL
<p style="color: #848c9b; font-family: 'Open Sans', sans-serif; font-size: 16px; text-align: left;"><strong><span style="color: #800180;">Product Name -</span> <a href="URL
<p style="color: #848c9b; font-family: 'Open Sans', sans-serif; font-size: 16px; text-align: left;"><strong><span style="color: #2b00fe;">Side Effect -</span> <span style="color: #800180;">No Side Effects<br /></span></strong></p>
<p style="color: #848c9b; font-family: 'Open Sans', sans-serif; font-size: 16px; text-align: left;"><strong><span style="color: #274e13;">Availability - </span><a href="URL
<p><strong><span style="color: #274e13;"></span></strong><strong><span style="color: #274e13;">Rating -</span>⭐⭐⭐⭐⭐</strong></p>
<p><a href="URL style="font-size: large;"><strong><span style="color: #274e13;">Hurry </span><span style="color: #274e13;">U</span><span style="color: #274e13;">p - </span><span style="color: #274e13;">Limi</span><span style="color: #274e13;">ted Time Offer - Purchase Now</span></strong></span></a></p>
<p><a href="URL style="font-size: large;"><strong><span style="color: #274e13;">Hurry Up</span><span style="color: #274e13;"> - L</span><span style="color: #274e13;">imited Time Offer - Purchase Now</span></strong></span></a></p>
<a href="URL style="font-size: large;"><strong><span style="color: #274e13;">Hu</span><span style="color: #274e13;">rry Up - </span><span style="color: #274e13;">Limited Time Offer - Purchase Now</span></strong></span></a></article>
<article id="23298596" class="col-md-11 pb-5 pb-md-0 px-0 article-section inviewscroll mb-3 active">
<p class="articleHeading mb-0"><strong><a href="URL is a blood pressure supplement featuring a blend of natural ingredients to support heart health.</strong></p>
</article>
<p style="text-align: justify;">Is Bazopril legit? Can <a href="URL lower your blood pressure naturally? Keep reading to discover everything you need to know about Bazopril and how it works today in our review.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="300" border="0" data-original-height="565" data-original-width="1208" /></a></div>
<h2 style="text-align: justify;"><strong>What is Bazopril?</strong></h2>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener">Bazopril is a nutritional supplement available exclusively online</a></u>. Developed by a man who recently suffered a heart attack, then reversed his high blood pressure using natural ingredients, Bazopril contains a blend of nutrients to keep your blood pressure in a normal range.</p>
<p style="text-align: justify;">Each capsule of Bazopril contains ingredients like mallow flower, elaion tree leaf extract, and conifer berry to maintain a healthy circulation throughout the body.</p>
<p style="text-align: justify;">The makers of <a href="URL market the supplement to anyone concerned with heart health – including those with high blood pressure or anyone who dislikes the side effects of their blood pressure medication.</p>
<p style="text-align: justify;">Bazopril is priced at $69 per bottle. Qualifying purchases come with free shipping and free digital bonuses.</p>
<h2 style="text-align: justify;"><strong>Bazopril Benefits</strong></h2>
<p style="text-align: justify;">According to the official Bazopril website, the supplement can provide benefits like:</p>
<ul style="text-align: justify;">
<li>Maintain healthy blood pressure</li>
<li>Target the root cause of high blood pressure – your kidneys</li>
<li>Natural ingredients with no side effects</li>
<li>Made in the United States in FDA-registered, GMP-certified facilities</li>
<li>Backed by cutting-edge science and centuries of use in traditional medicine</li>
</ul>
<p style="text-align: justify;">Overall, Bazopril aims to be the ultimate blood pressure support supplement available today.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Click here to order Bazopril and experience the benefits!</strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>How Does Bazopril Work?</strong></h2>
<p style="text-align: justify;">Bazopril is primarily designed for people with high blood pressure who want to lower their blood pressure using natural ingredients. Here's how John Winters and the Bazopril formulation team describe the formula:</p>
<p style="text-align: justify;"><em>"If you struggle to keep your blood pressure in the normal range, then Bazopril is for you. If you worry about complications, then Bazopril is definitely for you."</em></p>
<p style="text-align: justify;">Doctors may prescribe blood pressure medication to lower your blood pressure. However, popular blood pressure medication often comes with unwanted side effects.</p>
<p style="text-align: justify;">Bazopril works differently, using a blend of natural ingredients to maintain healthy blood pressure levels. And according to the official website, the supplement has already "helped thousands of people of all ages" maintain healthy blood pressure levels.</p>
<h2 style="text-align: justify;"><strong>Who Created Bazopril?</strong></h2>
<p style="text-align: justify;"><a href="URL was developed by John Winters, a research scientist from the United States. John also refers to himself as John Miller.</p>
<p style="text-align: justify;">John had been dealing with high blood pressure for a long time. Nine years ago, his doctor started prescribing beta blockers, calcium channel blockers, and other blood pressure medications.</p>
<p style="text-align: justify;">Those medications didn't work, leaving John with uncomfortable side effects. His doctor doubled the dose, then prescribed more drugs to manage his blood pressure.</p>
<p style="text-align: justify;">Nothing worked, and John's blood pressure continued to rise. Even when taking blood pressure medication, John's blood pressure was 179/85 at a checkup, sending him into a panic.</p>
<p style="text-align: justify;">One day, John's blood pressure issues peaked when he suffered a heart attack at his daughter's Christmas concert. John was rushed to the hospital, but he was determined his daughter would not grow up fatherless.</p>
<p style="text-align: justify;">John started to research natural cures for high blood pressure. He stumbled upon a series of ingredients popular in ancient Egypt for lowering blood pressure naturally. After testing those ingredients in different combinations and dosages, he created Bazopril.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="436" border="0" data-original-height="652" data-original-width="959" /></a></div>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Learn more on the official website >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Ingredients</strong></h2>
<p style="text-align: justify;">Plenty of cardiovascular health supplements claim to improve heart health – yet they don't work because they contain unproven ingredients. Bazopril aims to take a different approach by using natural ingredients backed by cutting-edge research.</p>
<p style="text-align: justify;">In fact, some of the ingredients in <a href="URL have been "used by our ancestors for centuries," according to John Winters and his formulation team. Modern science has validated the use of these ingredients for health, wellness, and cardiovascular support.</p>
<p style="text-align: justify;">Here are all of the active ingredients in Bazopril and how they work, according to the official Bazopril website:</p>
<p style="text-align: justify;"><strong>Albaspine:</strong> Bazopril contains albaspine, better known as hawthorn or Crataegus. Many supplements – particularly heart health supplements – use hawthorn for its ability to promote healthy blood pressure. According to the creators, the albaspine in Bazopril "has so many proven health benefits," which is why it's been used for centuries in traditional medicine. The plant is also known as "The Crown of Jesus" because Jesus was crowned with hawthorn. While John was researching natural cures for high blood pressure, he discovered research proving hawthorn "keeps your heart relaxed so it's not pushing so hard," allowing you to lower blood pressure naturally. Albaspine also releases nitric oxide, which helps to relax your blood vessels and enhance blood flow.</p>
<p style="text-align: justify;"><strong>Conifer Berry:</strong> Conifer berry is a cone that keeps arteries relaxed and maintains healthy circulation, flooding your cardiovascular system with antioxidants to keep it healthy. It's better known as the juniper berry, and many supplements use it to support heart health and overall healthy inflammation throughout the body. Today, we know juniper berries work because they're rich in vitamin C, one of nature's best antioxidants. This vitamin works throughout the body – including in and around your heart – to support healthy circulation.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Bazopril Is On Sale Now For A Limited Time!</strong></span></a></u></p>
<p style="text-align: justify;"><strong>Elaion Tree Leaf Extract:</strong> Better known as olive leaf extract, elaion tree leaf extract contains natural chemicals linked to heart health, healthy inflammation, and overall benefits. The main constituent in olive leaf extract is oleuropein. This natural chemical appears to have antioxidant effects, which can help fight free radicals throughout the body. Free radicals cause inflammation that increases your risk of disease and illness. According to John Winters' research, when developing Bazopril, elaion tree leaf extract can support a healthy inflammatory response.</p>
<p style="text-align: justify;"><strong>Mallow Flower:</strong> Mallow flower has been used as a heart remedy for thousands of years because of its effects on the kidneys. After John Winters discovered the root cause of high blood pressure was in the kidneys, he identified several herbs specifically targeting the kidneys. Mallow flower, better known as hibiscus, signals your body to produce a hormone called renin, and this hormone "maintains normal blood pressure in your body," according to John's research.</p>
<p style="text-align: justify;"><strong>Lasuna Bulb:</strong> Lasuna bulb has a long history of use in traditional medicine worldwide, and it's one of nature's best-known heart health remedies overall. We know it better as a type of garlic. John Winters describes garlic as the "Nectar of Gods because it maintains optimum arterial flexibility and stable blood pressure." People with high blood pressure tend to have poor arterial flexibility and unstable blood pressure, leading to high blood pressure readers. Popular for 5,000+ years, garlic continues to be a critical component of natural remedies like Bazopril today.</p>
<p style="text-align: justify;"><strong>Camellia Sinensis:</strong> Better known as green tea, camellia sinensis is packed with plant-based antioxidants called polyphenols that support healthy blood pressure. Many people drink green tea daily for its anti-inflammatory effects. These effects can support a healthy heart, brain, blood sugar levels, weight loss, and overall longevity, among other effects. Considered one of the world's most nutritional beverages, green tea is condensed into a powdered form and added to each capsule of Bazopril to unlock powerful effects.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>This sale won't last long, so act now!</strong></span></a></u></p>
<p style="text-align: justify;">Overall, Bazopril contains ingredients to target blood pressure, heart health, kidney health, and inflammation differently. Many components are sourced from traditional medicine, while others date back to ancient Egypt. By collecting these nutrients in one place, John Winters aims to have created the ultimate cardiovascular health supplement.</p>
<h2 style="text-align: justify;"><strong>Bazopril Targets the Root Cause of High Blood Pressure: Your Kidneys</strong></h2>
<p style="text-align: justify;">John Winters, chief formulator of Bazopril, started researching the reason for his high blood pressure.</p>
<p style="text-align: justify;">After suffering a heart attack at his daughter's concert, he wanted to make a change. His research led him to discover the root cause of high blood pressure issues: your kidneys.</p>
<p style="text-align: justify;">One day at the grocery store, John met a man named James, who had seen him collapse from a heart attack at his daughter's concert. James was a researcher at a local hospital, and he told James the root cause of high blood pressure is in the kidneys.</p>
<p style="text-align: justify;">Here's how John describes the connection between kidneys and blood pressure, citing a study by the University of Virginia:</p>
<p style="text-align: justify;"><em>"…it took them 60 years to discover that the blood pressure control mechanism is hidden inside the tiny kidney cells. Kidneys are the key to normal blood pressure…because they use a special hormone called renin to set the blood pressure."</em></p>
<p style="text-align: justify;">When your kidneys are healthy, you produce a normal amount of the renin hormone, and your blood pressure remains stable.</p>
<p style="text-align: justify;">When your kidneys are imbalanced, your body isn't producing the right amount of renin, causing blood pressure to rise.</p>
<p style="text-align: justify;">Some of the ingredients in Bazopril, including mallow flower (hibiscus), are designed to specifically target your kidneys and support the production of renin, <u>helping your blood pressure remain stable</u>.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="256" border="0" data-original-height="419" data-original-width="1044" /></a></div>
<h2 style="text-align: justify;"><strong>Bazopril Versus Blood Pressure Medication</strong></h2>
<p style="text-align: justify;"><a href="URL is marketed as an alternative to blood pressure medication. The chief formulator of Bazopril, John Winters (also known as John Miller), claims he was taking substantial doses of five separate blood pressure medications before he experienced lasting relief with Bazopril.</p>
<p style="text-align: justify;">So what's the difference between Bazopril and blood pressure medication? Here are some of the things to consider:</p>
<p style="text-align: justify;">One woman cited on the official Bazopril website claims she was able to stop taking her prescription blood pressure medication after taking Bazopril for six months.</p>
<p style="text-align: justify;">John Winters (John Miller) was taking very strong doses of five separate blood pressure medications, including beta blockers and calcium channel blockers, before he developed Bazopril to resolve his blood pressure issues. Even when taking these five medications, his blood pressure was 179/85 at a checkup.</p>
<p style="text-align: justify;">John spoke with colleagues in the medical industry who told him blood pressure drugs don't work for 53% of people. In other words, most people who take blood pressure medication won't experience relief.</p>
<p style="text-align: justify;">Long-term use of blood pressure medication is associated with kidney failure. As your kidneys fail, your blood pressure rises. Medication may temporarily lower blood pressure, only to raise it long-term because of the effects on your kidneys.</p>
<p style="text-align: justify;">Over 80% of doctors who promote blood pressure medication take money from big pharmaceutical companies, according to research cited by John on the <u><a href="URL target="_blank" rel="nofollow noopener">official Bazopril website</a></u>.</p>
<h2 style="text-align: justify;"><strong>Scientific Evidence for Bazopril</strong></h2>
<p style="text-align: justify;">As proof that <a href="URL works, John Winters and his team cite dozens of studies from the University of Virginia, Harvard University, and other major educational institutions. We'll review some of that research below to determine how Bazopril works and the science behind the supplement.</p>
<p style="text-align: justify;"><strong>Hawthorn</strong> has been studied for its long-term effects on heart health. A <u><a href="URL target="_blank" rel="nofollow noopener">2010 study</a></u> found that hawthorn could help treat cardiovascular and ischemic heart diseases. And, even at doses as high as 1,800mg, hawthorn appears to be effective for supporting cardiovascular health. Researchers specifically praised hawthorn for its "lack of herb-drug interactions" from clinical trials, suggesting it could be an effective alternative treatment for cardiovascular disease.</p>
<p style="text-align: justify;"><strong>Juniper berry</strong> is another popular nutraceutical used in anti-aging medicine, heart health, and inflammation supplements. In a <u><a href="URL target="_blank" rel="nofollow noopener">2019 study</a></u>, researchers found juniper was rich in aromatic oils, sugars, resins, catechin, terpenic acids, alkaloids, flavonoids, and other natural ingredients that appeared to have positive effects throughout the body. These natural ingredients have antioxidant, antibacterial, antifungal, anti-inflammatory, and cytotoxic effects, all of which could support heart health and your cardiovascular system.</p>
<p style="text-align: justify;">It's no secret <strong>olive oil</strong> is good for heart health. One of the most heart-health-friendly diets on the planet, the Mediterranean diet, emphasizes olive oil. Bazopril contains olive leaf extract specifically for its effects on cardiovascular health. In a <u><a href="URL target="_blank" rel="nofollow noopener">2021 trial</a></u>, researchers tested the effects of olive leaf extract on a group of 77 healthy adults with mildly high cholesterol levels. Adults took olive leaf extract or a placebo for eight weeks. At the end of the eight weeks, researchers found no difference in blood pressure, cholesterol, or blood sugar levels between the two groups.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Buy Bazopril Before it's SOLD OUT</strong></span></a></u></p>
<p style="text-align: justify;">A <u><a href="URL target="_blank" rel="nofollow noopener">2010 study</a></u> was more beneficial, finding oleuropein (a natural chemical within olive leaf extract) was linked to cardioprotective and neuroprotective effects.</p>
<p style="text-align: justify;">A <u><a href="URL target="_blank" rel="nofollow noopener">2022 meta-analysis</a></u> on hibiscus (mallow flower) found it could support cardiovascular health and blood pressure. Researchers examined 17 hibiscus and blood pressure trials and found a significant connection between consumption and lower blood pressure levels. Researchers found "hibiscus-induced reductions to BP similar to that resulting from medication," with drops of 2.13mmHg in systolic BP and 1.10mmHg in diastolic BP. Hibiscus also significantly lowered unhealthy cholesterol levels, further supporting heart health.</p>
<p style="text-align: justify;">Bazopril also contains garlic, one of nature's best-known cardiovascular supplement ingredients. Many studies have linked garlic and its natural constituent chemicals to significant heart health and blood pressure effects. A <u><a href="URL target="_blank" rel="nofollow noopener">2002 review</a></u>, for example, found an inverse correlation between garlic consumption and a reduced risk of cardiovascular disease progression, supporting the idea garlic can reduce the risk of heart health problems. Researchers believe garlic works because it contains natural chemicals like allicin linked to antioxidant effects and blood flow.</p>
<p style="text-align: justify;">Green tea, the final active ingredient in Bazopril, is also popular for its effects on heart health. Green tea contains catechins that appear to help with heart health. Catechins like epigallocatechin gallate (EGCG), for example, appear to have antioxidant effects throughout the body – including on the heart and your overall cardiovascular system. A <u><a href="URL target="_blank" rel="nofollow noopener">2009 study</a></u> specifically connected the catechins in green tea to positive effects on heart health, finding they regulated vascular tone, promoted nitric oxide production, and supported overall cardiovascular system healing, among other effects.</p>
<p style="text-align: justify;"><a href="URL blends natural ingredients linked to heart health and cardiovascular function. From garlic to olive leaf extract to hibiscus, the supplement contains some of nature's best-known remedies for heart health and cardiovascular function.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="440" border="0" data-original-height="520" data-original-width="757" /></a></div>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>See what others are saying about Bazopril >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Reviews: What Do Customers Say?</strong></h2>
<p style="text-align: justify;"><a href="URL has strong reviews online from customers who have experienced significant results with the supplement.</p>
<p style="text-align: justify;">Men and women have left reviews stating Bazopril works, with many agreeing their blood pressure is significantly lower after taking Bazopril. Some have even stopped taking their doctor-prescribed blood pressure medication after using Bazopril.</p>
<p style="text-align: justify;">Here are some of the reviews shared by customers on the official website:</p>
<p style="text-align: justify;">One customer claims her "blood pressure has been consistently lower" after taking Bazopril for a few months. She also has extra energy during the day.</p>
<p style="text-align: justify;">Another man claims his "blood pressure has never been lower" after taking Bazopril for just two weeks. Now, he's "feeling better than ever."</p>
<p style="text-align: justify;">Some customers have dealt with high blood pressure for years before targeting the issue with Bazopril. One customer claims she has "been living with high blood pressure for years" and had tried all types of remedies – yet nothing worked until she started taking Bazopril. Now, she feels "so much better" thanks to the supplement.</p>
<p style="text-align: justify;">One woman claims her "numbers have dropped significantly" after taking Bazopril for just two weeks.</p>
<p style="text-align: justify;">One man has been impressed with his results after ordering a six-month supply of <a href="URL He claims, "Every time I checked my blood pressure, it was lower than before," thanks to <a href="URL
<p style="text-align: justify;">Some customers claim to have stopped taking their doctor-prescribed blood pressure medication after taking Bazopril. For example, one woman claims she "even got off one prescription drug" after taking Bazopril for six months.</p>
<p style="text-align: justify;">Overall, many customers have lowered blood pressure significantly with <a href="URL – with many customers noticing results in just two weeks.</p>
<h2 style="text-align: justify;"><strong>Bazopril Pricing</strong></h2>
<p style="text-align: justify;"><a href="URL typically costs $230 per bottle, according to the official website. However, as part of a 2023 promotion, you can pay $69 or less by <u><a href="URL target="_blank" rel="nofollow noopener">buying directly from the manufacturer online</a></u>. Qualifying purchases come with bulk savings, free shipping, and bonuses.</p>
<p style="text-align: justify;">Here's how pricing works when ordering <a href="URL online today:</p>
<ul style="text-align: justify;">
<li>Order one bottle for $69 + $9.95 Shipping</li>
<li>Order three bottles for $177 ($59 Per bottle) + Free Shipping</li>
<li>Order six bottles for $294 ($49 Per bottle) + Free Shipping + 2 Free Bonuses</li>
</ul>
<p style="text-align: justify;">Each bottle contains a 30-day supply of Bazopril, or 30 servings (60 capsules). You take two capsules daily to support healthy blood pressure.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Act quickly to secure the limited-time discounted price today!</strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bonuses Included with Bazopril</strong></h2>
<p style="text-align: justify;">As part of a 2023 promotion, all six bottle purchases of <a href="URL come with two free bonus eBooks. These eBooks can complement the effects of <a href="URL giving you additional tips for lowering blood pressure by making diet and lifestyle changes.</p>
<p style="text-align: justify;">Bonuses included with <a href="URL include:</p>
<p style="text-align: justify;"><strong>Free Bonus eBook 1: The Heart's Kitchen: Desserts And Superfoods That Strengthen Your Heart:</strong> This eBook describes some of the best natural ingredients, desserts, superfoods, and more to boost cardiovascular health. For example, you'll discover a "sour root" that can promote heart health and a "miraculous water cure" to revitalize your heart. One nutrient highlighted in the book was so valuable it used to be used as currency.</p>
<p style="text-align: justify;"><strong>Free Bonus eBook 2: Heart Smart:</strong> This eBook highlights some of the best tips, tricks, and strategies you can use to support cardiovascular health. Simple lifestyle changes could lower blood pressure, improve heart health, and extend your lifespan. You don't need to transform your diet or lifestyle; you can make small, incremental changes to improve your heart starting today.</p>
<p style="text-align: justify;"><u><a href="URL target="_blank" rel="nofollow noopener"><span style="font-size: large;"><strong>Order now & get bonuses >>></strong></span></a></u></p>
<h2 style="text-align: justify;"><strong>Bazopril Refund Policy</strong></h2>
<p style="text-align: justify;"><a href="URL has a 365-day money-back guarantee. You have 365 days to try <a href="URL determine if it works, and request a refund if you're unsatisfied. You can contact customer service if you have any questions about the return policy or anything else.</p>
<ul style="text-align: justify;">
<li>Email: contact@URL</li>
</ul>
<h2 style="text-align: justify;"><strong>About Bazopril</strong></h2>
<p style="text-align: justify;"><a href="URL is made in the United States in an FDA-registered, GMP-certified facility. The supplement was formulated by Chief Research Scientist John Winters, who dealt with severe cardiovascular health issues before taking <a href="URL John Winters also periodically refers to himself as John Miller.</p>
<p style="text-align: justify;">Today, the manufacturer claims to work with a third-party lab to analyze each batch of <a href="URL using high-performance liquid chromatography, refractive index detection, and rapid microbiology to verify purity and potency.</p>
<p style="text-align: justify;">You can contact the makers of <a href="URL and the company's customer service team via the following:</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="464" border="0" data-original-height="579" data-original-width="798" /></a></div>
<h2 style="text-align: justify;"><strong>Final Word</strong></h2>
<p style="text-align: justify;"><a href="URL is a heart health supplement developed by a man who recently suffered a heart attack after dealing with high blood pressure for nine years.Featuring a blend of natural ingredients, Bazopril can support healthy blood pressure using garlic, olive leaf extract, hibiscus, and other natural ingredients. To learn more about Bazopril and how it works or to buy the heart health supplement online today, <u><a href="URL target="_blank" rel="nofollow noopener">visit the official website</a></u>.</p>
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
a435b503b81ea77c866671a2e784f9339ab0d4e2
|
# Dataset Card for Evaluation run of Mikivis/xuanxuan
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Mikivis/xuanxuan
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Mikivis/xuanxuan](https://huggingface.co/Mikivis/xuanxuan) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Mikivis__xuanxuan",
"harness_winogrande_5",
split="train")
```
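
Each of the 64 task configurations can be loaded the same way by substituting its name. As a small follow-up sketch (assuming only the standard config-discovery helper shipped with the `datasets` library), you can enumerate the available configuration names and then pull an individual MMLU subtask from the latest run:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate every configuration stored in this details repository
configs = get_dataset_config_names("open-llm-leaderboard/details_Mikivis__xuanxuan")
print(len(configs), "configurations available")

# Load a single MMLU subtask from the most recent evaluation run
anatomy = load_dataset(
    "open-llm-leaderboard/details_Mikivis__xuanxuan",
    "harness_hendrycksTest_anatomy_5",
    split="latest",
)
```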
## Latest results
These are the [latest results from run 2023-09-16T21:42:00.993318](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__xuanxuan/blob/main/results_2023-09-16T21-42-00.993318.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.008389261744966443,
"em_stderr": 0.000934054321686696,
"f1": 0.05742869127516786,
"f1_stderr": 0.0015884226243297857,
"acc": 0.2521704814522494,
"acc_stderr": 0.00702597803203845
},
"harness|drop|3": {
"em": 0.008389261744966443,
"em_stderr": 0.000934054321686696,
"f1": 0.05742869127516786,
"f1_stderr": 0.0015884226243297857
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.5043409629044988,
"acc_stderr": 0.0140519560640769
}
}
```
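
The aggregated numbers above live in the "results" configuration mentioned earlier. A minimal sketch for retrieving them programmatically (assuming the "latest" split name used throughout this repository's configuration list):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of each run;
# the "latest" split always points at the newest results file.
results = load_dataset(
    "open-llm-leaderboard/details_Mikivis__xuanxuan",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics
```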
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Mikivis__xuanxuan
|
[
"region:us"
] |
2023-09-01T12:15:05+00:00
|
{"pretty_name": "Evaluation run of Mikivis/xuanxuan", "dataset_summary": "Dataset automatically created during the evaluation run of model [Mikivis/xuanxuan](https://huggingface.co/Mikivis/xuanxuan) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Mikivis__xuanxuan\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-16T21:42:00.993318](https://huggingface.co/datasets/open-llm-leaderboard/details_Mikivis__xuanxuan/blob/main/results_2023-09-16T21-42-00.993318.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.008389261744966443,\n \"em_stderr\": 0.000934054321686696,\n \"f1\": 0.05742869127516786,\n \"f1_stderr\": 0.0015884226243297857,\n \"acc\": 0.2521704814522494,\n \"acc_stderr\": 0.00702597803203845\n },\n \"harness|drop|3\": {\n \"em\": 0.008389261744966443,\n \"em_stderr\": 0.000934054321686696,\n \"f1\": 0.05742869127516786,\n \"f1_stderr\": 0.0015884226243297857\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5043409629044988,\n \"acc_stderr\": 0.0140519560640769\n }\n}\n```", "repo_url": "https://huggingface.co/Mikivis/xuanxuan", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|arc:challenge|25_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_16T21_42_00.993318", "path": ["**/details_harness|drop|3_2023-09-16T21-42-00.993318.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-16T21-42-00.993318.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_16T21_42_00.993318", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-42-00.993318.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-16T21-42-00.993318.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hellaswag|10_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:14:51.241896.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:14:51.241896.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T13:14:51.241896.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:14:51.241896.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": 
"2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T13:14:51.241896.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T13:14:51.241896.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_16T21_42_00.993318", "path": ["**/details_harness|winogrande|5_2023-09-16T21-42-00.993318.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-16T21-42-00.993318.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T13_14_51.241896", "path": ["results_2023-09-01T13:14:51.241896.parquet"]}, {"split": "2023_09_16T21_42_00.993318", "path": ["results_2023-09-16T21-42-00.993318.parquet"]}, {"split": "latest", "path": ["results_2023-09-16T21-42-00.993318.parquet"]}]}]}
|
2023-09-16T20:42:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Mikivis/xuanxuan
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Mikivis/xuanxuan on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-16T21:42:00.993318 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Mikivis/xuanxuan",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/xuanxuan on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:42:00.993318(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Mikivis/xuanxuan",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/xuanxuan on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T21:42:00.993318(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
17,
31,
165,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Mikivis/xuanxuan## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Mikivis/xuanxuan on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-16T21:42:00.993318(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
10b333130b2b3a82d4113d6174a099ed744220f3
|
# Dataset Card for "voxelgym_5c_critic_42x42_300000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
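In the meantime, a minimal loading sketch may be useful. The feature names (`image`, `astar_path`, `pred_path`) and splits (`train`, `validation`) below are taken from the dataset's declared schema; the interpretation of each field in the comments is an assumption pending a fuller card.

```python
from datasets import load_dataset

# Feature and split names are taken from the dataset's declared schema;
# the role of each field is an assumption pending a fuller card.
dataset = load_dataset("Cubpaw/voxelgym_5c_critic_42x42_300000")

sample = dataset["train"][0]
print(sample["image"])       # grid observation (image feature)
print(sample["astar_path"])  # A* reference path (image feature)
print(sample["pred_path"])   # predicted path as nested float32 sequences
```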
|
Cubpaw/voxelgym_5c_critic_42x42_300000
|
[
"region:us"
] |
2023-09-01T12:42:11+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "astar_path", "dtype": "image"}, {"name": "pred_path", "sequence": {"sequence": "float32"}}], "splits": [{"name": "train", "num_bytes": 1814909280.0, "num_examples": 240000}, {"name": "validation", "num_bytes": 453592740.0, "num_examples": 60000}], "download_size": 261367246, "dataset_size": 2268502020.0}}
|
2023-09-01T12:46:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "voxelgym_5c_critic_42x42_300000"
More Information needed
|
[
"# Dataset Card for \"voxelgym_5c_critic_42x42_300000\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"voxelgym_5c_critic_42x42_300000\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"voxelgym_5c_critic_42x42_300000\"\n\nMore Information needed"
] |
a16c7e200b4618cd93be3143cdb973e3f21498fa
|
2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark).
Nie, Y., Li, X., Scott, A., Sun, Y., Venugopal, V., and Brandt, A. (2022). 2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark). Stanford Digital Repository. https://purl.stanford.edu/dj417rh1007
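TorchGeo exposes this benchmark as a dataset class; below is a minimal sketch of loading it. The constructor arguments (`root`, `split`, `task`, `download`) and the sample keys are assumptions based on TorchGeo's dataset conventions and should be checked against the installed version's documentation.

```python
from torchgeo.datasets import SKIPPD

# Sketch only: argument names and sample keys are assumptions based on
# TorchGeo's dataset conventions; verify against your installed version.
dataset = SKIPPD(root="data/skippd", split="trainval", task="nowcast", download=True)

sample = dataset[0]
print(sample["image"].shape)  # sky image tensor
print(sample["label"])        # photovoltaic power output
```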
|
torchgeo/skippd
|
[
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"region:us"
] |
2023-09-01T12:42:24+00:00
|
{"license": "cc-by-4.0", "size_categories": ["100K<n<1M"]}
|
2023-10-17T11:25:36+00:00
|
[] |
[] |
TAGS
#size_categories-100K<n<1M #license-cc-by-4.0 #region-us
|
2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark).
Nie, Y., Li, X., Scott, A., Sun, Y., Venugopal, V., and Brandt, A. (2022). 2017-2019 Sky Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting (Stanford Benchmark). Stanford Digital Repository. URL
|
[] |
[
"TAGS\n#size_categories-100K<n<1M #license-cc-by-4.0 #region-us \n"
] |
[
27
] |
[
"passage: TAGS\n#size_categories-100K<n<1M #license-cc-by-4.0 #region-us \n"
] |
8f7e9a1c670a71007cb95cc536c865da911dc00d
|
# Dataset Card for "llama_2-optimized-titles-esci-sft-test-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
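As a stopgap, here is a minimal sketch of loading and filtering the data. The column names (`product_title`, `clean_preds`, `average_score`) come from the dataset's declared features; the 0.8 score threshold is purely illustrative.

```python
from datasets import load_dataset

# Column names come from the dataset's declared features;
# the score threshold is illustrative, not part of the dataset.
dataset = load_dataset("qazisaad/llama_2-optimized-titles-esci-sft-test-2", split="train")

high_scoring = dataset.filter(lambda row: row["average_score"] > 0.8)
for row in high_scoring.select(range(min(3, len(high_scoring)))):
    print(row["product_title"], "->", row["clean_preds"])
```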
|
qazisaad/llama_2-optimized-titles-esci-sft-test-2
|
[
"region:us"
] |
2023-09-01T12:52:00+00:00
|
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "product_title", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "preds", "dtype": "string"}, {"name": "clean_preds", "dtype": "string"}, {"name": "average_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 1806374.0, "num_examples": 2385}], "download_size": 1037788, "dataset_size": 1806374.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T12:52:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama_2-optimized-titles-esci-sft-test-2"
More Information needed
|
[
"# Dataset Card for \"llama_2-optimized-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama_2-optimized-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama_2-optimized-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
186e7998facfeac9d0309cb3549ecdbc15b934bb
|
# Distil Whisper: People's Speech Other
This is a variant of the [People's Speech Other](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-dirty", "dirty")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-dirty", "dirty", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
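Each sample carries the original transcript alongside the Whisper pseudo-label. A minimal sketch of comparing the two is below; the column names `text` and `whisper_transcript` are assumptions, so inspect `dataset["validation"].features` to confirm the exact schema.

```python
from datasets import load_dataset

dataset = load_dataset("distil-whisper/peoples_speech-dirty", "dirty", streaming=True)
sample = next(iter(dataset["validation"]))

# Column names are assumptions; check the dataset features for the exact schema.
print(sample["text"])                # original People's Speech transcript
print(sample["whisper_transcript"])  # pseudo-label from Whisper large-v2
```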
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/peoples_speech-dirty
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-01T12:53:26+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "People's Speech Other"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
|
# Distil Whisper: People's Speech Other
This is a variant of the People's Speech Other dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-4.0.
|
[
"# Distil Whisper: People's Speech Other \n\nThis is a variant of the People's Speech Other dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n",
"# Distil Whisper: People's Speech Other \n\nThis is a variant of the People's Speech Other dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
35,
108,
92,
40,
16
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: People's Speech Other \n\nThis is a variant of the People's Speech Other dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0."
] |
50cb1dd0cf442dabd3e2ef81db233cf461b32011
|
# Dataset Card for Evaluation run of KnutJaegersberg/black_goo_recipe_c
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/KnutJaegersberg/black_goo_recipe_c
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [KnutJaegersberg/black_goo_recipe_c](https://huggingface.co/KnutJaegersberg/black_goo_recipe_c) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c",
"harness_winogrande_5",
split="train")
```
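The aggregated numbers shown under "Latest results" below can also be pulled directly from the `results` configuration, whose `latest` split is listed in this card's metadata; a minimal sketch:

```python
from datasets import load_dataset

# The "results" config and its "latest" split are listed in this card's
# metadata; each row holds the aggregated metrics for one evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c",
    "results",
    split="latest",
)
print(results[0])
```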
## Latest results
These are the [latest results from run 2023-10-17T13:45:50.659300](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c/blob/main/results_2023-10-17T13-45-50.659300.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335715,
"f1": 0.05227348993288602,
"f1_stderr": 0.0013368463355251383,
"acc": 0.32701072839257217,
"acc_stderr": 0.00784863260202173
},
"harness|drop|3": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335715,
"f1": 0.05227348993288602,
"f1_stderr": 0.0013368463355251383
},
"harness|gsm8k|5": {
"acc": 0.006823351023502654,
"acc_stderr": 0.002267537102254499
},
"harness|winogrande|5": {
"acc": 0.6471981057616417,
"acc_stderr": 0.013429728101788961
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c
|
[
"region:us"
] |
2023-09-01T12:59:15+00:00
|
{"pretty_name": "Evaluation run of KnutJaegersberg/black_goo_recipe_c", "dataset_summary": "Dataset automatically created during the evaluation run of model [KnutJaegersberg/black_goo_recipe_c](https://huggingface.co/KnutJaegersberg/black_goo_recipe_c) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-17T13:45:50.659300](https://huggingface.co/datasets/open-llm-leaderboard/details_KnutJaegersberg__black_goo_recipe_c/blob/main/results_2023-10-17T13-45-50.659300.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335715,\n \"f1\": 0.05227348993288602,\n \"f1_stderr\": 0.0013368463355251383,\n \"acc\": 0.32701072839257217,\n \"acc_stderr\": 0.00784863260202173\n },\n \"harness|drop|3\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335715,\n \"f1\": 0.05227348993288602,\n \"f1_stderr\": 0.0013368463355251383\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.006823351023502654,\n \"acc_stderr\": 0.002267537102254499\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6471981057616417,\n \"acc_stderr\": 0.013429728101788961\n }\n}\n```", "repo_url": "https://huggingface.co/KnutJaegersberg/black_goo_recipe_c", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|arc:challenge|25_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_17T13_45_50.659300", "path": ["**/details_harness|drop|3_2023-10-17T13-45-50.659300.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-17T13-45-50.659300.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_17T13_45_50.659300", "path": ["**/details_harness|gsm8k|5_2023-10-17T13-45-50.659300.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-17T13-45-50.659300.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hellaswag|10_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T13:58:52.647382.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:58:52.647382.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:58:52.647382.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T13:58:52.647382.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T13:58:52.647382.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T13:58:52.647382.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_17T13_45_50.659300", "path": ["**/details_harness|winogrande|5_2023-10-17T13-45-50.659300.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-17T13-45-50.659300.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T13_58_52.647382", "path": ["results_2023-09-01T13:58:52.647382.parquet"]}, {"split": "2023_10_17T13_45_50.659300", "path": ["results_2023-10-17T13-45-50.659300.parquet"]}, {"split": "latest", "path": ["results_2023-10-17T13-45-50.659300.parquet"]}]}]}
|
2023-10-17T12:46:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of KnutJaegersberg/black_goo_recipe_c
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model KnutJaegersberg/black_goo_recipe_c on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-17T13:45:50.659300 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of KnutJaegersberg/black_goo_recipe_c",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/black_goo_recipe_c on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T13:45:50.659300(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of KnutJaegersberg/black_goo_recipe_c",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/black_goo_recipe_c on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-17T13:45:50.659300(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of KnutJaegersberg/black_goo_recipe_c## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model KnutJaegersberg/black_goo_recipe_c on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-17T13:45:50.659300(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9ac0a28580b36dc70ed6ac731dccc5442fbdeb2b
|
# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v3
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/llama-2-70B-ensemble-v3
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/llama-2-70B-ensemble-v3](https://huggingface.co/yeontaek/llama-2-70B-ensemble-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3",
"harness_truthfulqa_mc_0",
split="train")
```
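The aggregated "results" configuration described above can be loaded the same way. A minimal sketch (the `results` config name and its `latest` split alias are taken from this card's configuration list; the exact columns of the returned rows are not guaranteed here):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of each run; the "latest" split
# always points at the most recent one (here 2023-09-01T14:01:58.848407).
results = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3",
	"results",
	split="latest")
print(results[0])  # one row of aggregated metrics
```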
## Latest results
These are the [latest results from run 2023-09-01T14:01:58.848407](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3/blob/main/results_2023-09-01T14%3A01%3A58.848407.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6813782482106774,
"acc_stderr": 0.03171011741691581,
"acc_norm": 0.6847848607826429,
"acc_norm_stderr": 0.031684498624315015,
"mc1": 0.45532435740514077,
"mc1_stderr": 0.01743349010253877,
"mc2": 0.6421820394674438,
"mc2_stderr": 0.015085186356964665
},
"harness|arc:challenge|25": {
"acc": 0.6621160409556314,
"acc_stderr": 0.013822047922283504,
"acc_norm": 0.6851535836177475,
"acc_norm_stderr": 0.013572657703084948
},
"harness|hellaswag|10": {
"acc": 0.6936865166301533,
"acc_stderr": 0.004600194559865542,
"acc_norm": 0.8716391157140012,
"acc_norm_stderr": 0.003338076015617253
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695236,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695236
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.042446332383532286,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.042446332383532286
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7828947368421053,
"acc_stderr": 0.03355045304882924,
"acc_norm": 0.7828947368421053,
"acc_norm_stderr": 0.03355045304882924
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7622641509433963,
"acc_stderr": 0.02619980880756192,
"acc_norm": 0.7622641509433963,
"acc_norm_stderr": 0.02619980880756192
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.031164899666948617,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.031164899666948617
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.55,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.04913595201274498,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.04913595201274498
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6340425531914894,
"acc_stderr": 0.0314895582974553,
"acc_norm": 0.6340425531914894,
"acc_norm_stderr": 0.0314895582974553
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.42105263157894735,
"acc_stderr": 0.04644602091222318,
"acc_norm": 0.42105263157894735,
"acc_norm_stderr": 0.04644602091222318
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.48412698412698413,
"acc_stderr": 0.025738330639412152,
"acc_norm": 0.48412698412698413,
"acc_norm_stderr": 0.025738330639412152
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677173,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677173
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.021732540689329286,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.021732540689329286
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5270935960591133,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.5270935960591133,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8484848484848485,
"acc_stderr": 0.027998073798781678,
"acc_norm": 0.8484848484848485,
"acc_norm_stderr": 0.027998073798781678
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8535353535353535,
"acc_stderr": 0.025190921114603918,
"acc_norm": 0.8535353535353535,
"acc_norm_stderr": 0.025190921114603918
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9430051813471503,
"acc_stderr": 0.01673108529360755,
"acc_norm": 0.9430051813471503,
"acc_norm_stderr": 0.01673108529360755
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6923076923076923,
"acc_stderr": 0.02340092891831049,
"acc_norm": 0.6923076923076923,
"acc_norm_stderr": 0.02340092891831049
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652459,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652459
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.773109243697479,
"acc_stderr": 0.027205371538279476,
"acc_norm": 0.773109243697479,
"acc_norm_stderr": 0.027205371538279476
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.37748344370860926,
"acc_stderr": 0.0395802723112157,
"acc_norm": 0.37748344370860926,
"acc_norm_stderr": 0.0395802723112157
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8807339449541285,
"acc_stderr": 0.013895729292588949,
"acc_norm": 0.8807339449541285,
"acc_norm_stderr": 0.013895729292588949
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5370370370370371,
"acc_stderr": 0.03400603625538272,
"acc_norm": 0.5370370370370371,
"acc_norm_stderr": 0.03400603625538272
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.9068627450980392,
"acc_stderr": 0.020397853969426998,
"acc_norm": 0.9068627450980392,
"acc_norm_stderr": 0.020397853969426998
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.890295358649789,
"acc_stderr": 0.02034340073486884,
"acc_norm": 0.890295358649789,
"acc_norm_stderr": 0.02034340073486884
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.7802690582959642,
"acc_stderr": 0.027790177064383602,
"acc_norm": 0.7802690582959642,
"acc_norm_stderr": 0.027790177064383602
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.816793893129771,
"acc_stderr": 0.03392770926494733,
"acc_norm": 0.816793893129771,
"acc_norm_stderr": 0.03392770926494733
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8347107438016529,
"acc_stderr": 0.03390780612972776,
"acc_norm": 0.8347107438016529,
"acc_norm_stderr": 0.03390780612972776
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8343558282208589,
"acc_stderr": 0.029208296231259104,
"acc_norm": 0.8343558282208589,
"acc_norm_stderr": 0.029208296231259104
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5625,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.5625,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.020237149008990915,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.020237149008990915
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8607918263090677,
"acc_stderr": 0.012378786101885145,
"acc_norm": 0.8607918263090677,
"acc_norm_stderr": 0.012378786101885145
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7196531791907514,
"acc_stderr": 0.024182427496577605,
"acc_norm": 0.7196531791907514,
"acc_norm_stderr": 0.024182427496577605
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.5787709497206703,
"acc_stderr": 0.016513676031179595,
"acc_norm": 0.5787709497206703,
"acc_norm_stderr": 0.016513676031179595
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.752411575562701,
"acc_stderr": 0.024513879973621967,
"acc_norm": 0.752411575562701,
"acc_norm_stderr": 0.024513879973621967
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7993827160493827,
"acc_stderr": 0.02228231394977488,
"acc_norm": 0.7993827160493827,
"acc_norm_stderr": 0.02228231394977488
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5709219858156028,
"acc_stderr": 0.02952591430255856,
"acc_norm": 0.5709219858156028,
"acc_norm_stderr": 0.02952591430255856
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5645371577574967,
"acc_stderr": 0.012663412101248349,
"acc_norm": 0.5645371577574967,
"acc_norm_stderr": 0.012663412101248349
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7336601307189542,
"acc_stderr": 0.017883188134667206,
"acc_norm": 0.7336601307189542,
"acc_norm_stderr": 0.017883188134667206
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8656716417910447,
"acc_stderr": 0.024112678240900794,
"acc_norm": 0.8656716417910447,
"acc_norm_stderr": 0.024112678240900794
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.03775251680686371,
"acc_norm": 0.83,
"acc_norm_stderr": 0.03775251680686371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.45532435740514077,
"mc1_stderr": 0.01743349010253877,
"mc2": 0.6421820394674438,
"mc2_stderr": 0.015085186356964665
}
}
```
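To recompute a headline number from the dictionary above, here is a minimal sketch. It assumes you saved the linked JSON file locally under the name shown (the local path is an assumption) and that the file has the flat key layout printed above:

```python
import json

# Local copy of the results file linked in this section (path is an assumption).
with open("results_2023-09-01T14:01:58.848407.json") as f:
    results = json.load(f)

# Average the 5-shot accuracies over the hendrycksTest (MMLU) subtasks.
mmlu = {k: v for k, v in results.items() if k.startswith("harness|hendrycksTest-")}
mean_acc = sum(v["acc"] for v in mmlu.values()) / len(mmlu)
print(f"{len(mmlu)} MMLU subtasks, mean 5-shot accuracy = {mean_acc:.4f}")
```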
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3
|
[
"region:us"
] |
2023-09-01T13:02:23+00:00
|
{"pretty_name": "Evaluation run of yeontaek/llama-2-70B-ensemble-v3", "dataset_summary": "Dataset automatically created during the evaluation run of model [yeontaek/llama-2-70B-ensemble-v3](https://huggingface.co/yeontaek/llama-2-70B-ensemble-v3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-01T14:01:58.848407](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3/blob/main/results_2023-09-01T14%3A01%3A58.848407.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6813782482106774,\n \"acc_stderr\": 0.03171011741691581,\n \"acc_norm\": 0.6847848607826429,\n \"acc_norm_stderr\": 0.031684498624315015,\n \"mc1\": 0.45532435740514077,\n \"mc1_stderr\": 0.01743349010253877,\n \"mc2\": 0.6421820394674438,\n \"mc2_stderr\": 0.015085186356964665\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6621160409556314,\n \"acc_stderr\": 0.013822047922283504,\n \"acc_norm\": 0.6851535836177475,\n \"acc_norm_stderr\": 0.013572657703084948\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6936865166301533,\n \"acc_stderr\": 0.004600194559865542,\n \"acc_norm\": 0.8716391157140012,\n \"acc_norm_stderr\": 0.003338076015617253\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.042446332383532286,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.042446332383532286\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7828947368421053,\n \"acc_stderr\": 0.03355045304882924,\n \"acc_norm\": 0.7828947368421053,\n \"acc_norm_stderr\": 0.03355045304882924\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7622641509433963,\n \"acc_stderr\": 0.02619980880756192,\n \"acc_norm\": 0.7622641509433963,\n \"acc_norm_stderr\": 0.02619980880756192\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.031164899666948617,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.031164899666948617\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n 
\"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6340425531914894,\n \"acc_stderr\": 0.0314895582974553,\n \"acc_norm\": 0.6340425531914894,\n \"acc_norm_stderr\": 0.0314895582974553\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.42105263157894735,\n \"acc_stderr\": 0.04644602091222318,\n \"acc_norm\": 0.42105263157894735,\n \"acc_norm_stderr\": 0.04644602091222318\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.48412698412698413,\n \"acc_stderr\": 0.025738330639412152,\n \"acc_norm\": 0.48412698412698413,\n \"acc_norm_stderr\": 0.025738330639412152\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677173,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677173\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.021732540689329286,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.021732540689329286\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8484848484848485,\n \"acc_stderr\": 0.027998073798781678,\n \"acc_norm\": 0.8484848484848485,\n \"acc_norm_stderr\": 0.027998073798781678\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8535353535353535,\n \"acc_stderr\": 0.025190921114603918,\n \"acc_norm\": 0.8535353535353535,\n \"acc_norm_stderr\": 0.025190921114603918\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9430051813471503,\n \"acc_stderr\": 0.01673108529360755,\n \"acc_norm\": 0.9430051813471503,\n \"acc_norm_stderr\": 0.01673108529360755\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6923076923076923,\n \"acc_stderr\": 0.02340092891831049,\n \"acc_norm\": 0.6923076923076923,\n \"acc_norm_stderr\": 0.02340092891831049\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652459,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652459\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.773109243697479,\n \"acc_stderr\": 0.027205371538279476,\n \"acc_norm\": 0.773109243697479,\n \"acc_norm_stderr\": 0.027205371538279476\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.0395802723112157,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.0395802723112157\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8807339449541285,\n \"acc_stderr\": 0.013895729292588949,\n \"acc_norm\": 0.8807339449541285,\n \"acc_norm_stderr\": 0.013895729292588949\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5370370370370371,\n \"acc_stderr\": 0.03400603625538272,\n \"acc_norm\": 0.5370370370370371,\n \"acc_norm_stderr\": 0.03400603625538272\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.9068627450980392,\n \"acc_stderr\": 0.020397853969426998,\n \"acc_norm\": 0.9068627450980392,\n \"acc_norm_stderr\": 0.020397853969426998\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.890295358649789,\n \"acc_stderr\": 0.02034340073486884,\n \"acc_norm\": 0.890295358649789,\n \"acc_norm_stderr\": 0.02034340073486884\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7802690582959642,\n \"acc_stderr\": 0.027790177064383602,\n \"acc_norm\": 0.7802690582959642,\n \"acc_norm_stderr\": 0.027790177064383602\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.816793893129771,\n \"acc_stderr\": 0.03392770926494733,\n \"acc_norm\": 0.816793893129771,\n \"acc_norm_stderr\": 0.03392770926494733\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8347107438016529,\n \"acc_stderr\": 0.03390780612972776,\n \"acc_norm\": 0.8347107438016529,\n \"acc_norm_stderr\": 0.03390780612972776\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8343558282208589,\n \"acc_stderr\": 0.029208296231259104,\n \"acc_norm\": 0.8343558282208589,\n \"acc_norm_stderr\": 0.029208296231259104\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5625,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.5625,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n \"acc_stderr\": 0.020237149008990915,\n \"acc_norm\": 0.8931623931623932,\n \"acc_norm_stderr\": 0.020237149008990915\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8607918263090677,\n \"acc_stderr\": 0.012378786101885145,\n \"acc_norm\": 0.8607918263090677,\n 
\"acc_norm_stderr\": 0.012378786101885145\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.024182427496577605,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.024182427496577605\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.5787709497206703,\n \"acc_stderr\": 0.016513676031179595,\n \"acc_norm\": 0.5787709497206703,\n \"acc_norm_stderr\": 0.016513676031179595\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.752411575562701,\n \"acc_stderr\": 0.024513879973621967,\n \"acc_norm\": 0.752411575562701,\n \"acc_norm_stderr\": 0.024513879973621967\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7993827160493827,\n \"acc_stderr\": 0.02228231394977488,\n \"acc_norm\": 0.7993827160493827,\n \"acc_norm_stderr\": 0.02228231394977488\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5709219858156028,\n \"acc_stderr\": 0.02952591430255856,\n \"acc_norm\": 0.5709219858156028,\n \"acc_norm_stderr\": 0.02952591430255856\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5645371577574967,\n \"acc_stderr\": 0.012663412101248349,\n \"acc_norm\": 0.5645371577574967,\n \"acc_norm_stderr\": 0.012663412101248349\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7336601307189542,\n \"acc_stderr\": 0.017883188134667206,\n \"acc_norm\": 0.7336601307189542,\n \"acc_norm_stderr\": 0.017883188134667206\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8656716417910447,\n \"acc_stderr\": 0.024112678240900794,\n \"acc_norm\": 0.8656716417910447,\n \"acc_norm_stderr\": 0.024112678240900794\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.03775251680686371,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.03775251680686371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.45532435740514077,\n \"mc1_stderr\": 0.01743349010253877,\n \"mc2\": 0.6421820394674438,\n \"mc2_stderr\": 0.015085186356964665\n }\n}\n```", "repo_url": "https://huggingface.co/yeontaek/llama-2-70B-ensemble-v3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": 
["**/details_harness|arc:challenge|25_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hellaswag|10_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T14:01:58.848407.parquet", 
"**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T14:01:58.848407.parquet", 
"**/details_harness|hendrycksTest-econometrics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T14:01:58.848407.parquet", 
"**/details_harness|hendrycksTest-public_relations|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T14:01:58.848407.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T14:01:58.848407.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T14_01_58.848407", "path": ["results_2023-09-01T14:01:58.848407.parquet"]}, {"split": "latest", "path": ["results_2023-09-01T14:01:58.848407.parquet"]}]}]}
|
2023-09-01T13:03:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v3 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
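The original snippet is missing from this rendering; a minimal sketch of it follows, assuming the repository id `open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3` and the config name `harness_truthfulqa_mc_0`, both inferred from this record's metadata rather than stated in the rendered text:
```python
from datasets import load_dataset

# Config name taken from this record's metadata; any other listed config
# can be loaded the same way.
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v3",
	"harness_truthfulqa_mc_0",
	split="train")
```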
## Latest results
These are the latest results from run 2023-09-01T14:01:58.848407 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-01T14:01:58.848407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-01T14:01:58.848407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-01T14:01:58.848407(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
51af55bbc58dd8e3365733199b44380e35cef4bc
|
# Dataset Card for "llama_2-product-titles-esci-sft-test-2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
qazisaad/llama_2-product-titles-esci-sft-test-2
|
[
"region:us"
] |
2023-09-01T13:02:42+00:00
|
{"dataset_info": {"features": [{"name": "index", "dtype": "int64"}, {"name": "query", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "preds", "dtype": "string"}, {"name": "average_score", "dtype": "float64"}, {"name": "total_score", "dtype": "float64"}, {"name": "max_score", "dtype": "float64"}, {"name": "min_score", "dtype": "float64"}, {"name": "best_title", "dtype": "string"}, {"name": "clean_preds", "dtype": "string"}, {"name": "new_score", "dtype": "float64"}, {"name": "good_pred", "dtype": "bool"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1617480.0, "num_examples": 1677}], "download_size": 828108, "dataset_size": 1617480.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T13:07:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama_2-product-titles-esci-sft-test-2"
More Information needed
|
[
"# Dataset Card for \"llama_2-product-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama_2-product-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama_2-product-titles-esci-sft-test-2\"\n\nMore Information needed"
] |
b200b568083b34c534fd2f53f842590f429151cc
|
# ICD10 Diagnosis Description Mapping Dataset
## Overview
This dataset is designed to assist in mapping ICD10 Diagnosis descriptions documented in clinical documents to the standard ICD10 Diagnosis descriptions by CMS (Centers for Medicare & Medicaid Services). The primary objective is to train a model that can map free-form disease text to ICD Codes.
## Dataset Details
The dataset consists of the following columns:
- **AnnotationString**: This column contains the disease text as described in patient clinical documents.
- **DXCode**: This column contains the standard DX code corresponding to the description in the AnnotationString column.
- **ShortDesc**: This column contains the standard ICD Description in a short form corresponding to the DXCode.
- **LongDesc**: This column contains the standard ICD Description in a long form corresponding to the DXCode.
## Model Training Objective
The goal of this dataset is to train the LLAMA2 7B Model to generate one of the following outputs based on the provided AnnotationString as input:
### Case 1: Expecting DX Code as Output
- **Instruction**: As an ICD Code helper, the model needs to generate an ICD 10 DX Code for the corresponding disease description.
- **Input**: `<Annotation String>`
- **Output**: `<DXCode>`
### Case 2: Expecting ShortDesc as Output
- **Instruction**: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.
- **Input**: `<Annotation String>`
- **Output**: `<ShortDesc>`
### Case 3: Expecting LongDesc as Output
- **Instruction**: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.
- **Input**: `<Annotation String>`
- **Output**: `<LongDesc>`
## Usage
To utilize this dataset effectively for training the LLAMA2 7B Model, you can follow these steps:
1. Preprocess your input data to match the format specified in the "Instruction" for the desired case (1, 2, or 3).
2. Use the preprocessed data as input to your model.
3. The model will generate the corresponding output (DXCode, ShortDesc, or LongDesc) based on the case you specified (a minimal sketch of steps 1 and 2 is shown below).
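The following is a minimal sketch of steps 1 and 2, assuming each dataset row is available as a plain Python dict with the columns described above; the helper name and the dict layout are illustrative, not part of the dataset:

```python
# Map each case to the instruction wording from the cases above and the
# target column that serves as the expected output.
CASE_TO_TASK = {
    1: ("generate an ICD 10 DX Code for the corresponding disease description.", "DXCode"),
    2: ("generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.", "ShortDesc"),
    3: ("generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.", "LongDesc"),
}

def build_example(row: dict, case: int) -> dict:
    """Turn one dataset row into an (instruction, input, output) triple."""
    task, field = CASE_TO_TASK[case]
    return {
        "instruction": f"As an ICD Code helper, you need to {task}",
        "input": row["AnnotationString"],
        "output": row[field],
    }

# Example usage with a hypothetical row:
row = {
    "AnnotationString": "persistent cough and fever",
    "DXCode": "J44.9",
    "ShortDesc": "Chronic obstructive pulmonary disease, unspecified",
    "LongDesc": "Chronic obstructive pulmonary disease, unspecified",
}
print(build_example(row, case=1))
```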
## Example
Here's an example of how to use this dataset and the model:
### Input
```python
# Instruction: As an ICD Code helper, you need to generate the ICD 10 DX Code for the following disease description
# Input: "Patient presents with persistent cough and fever."
```
### Output (Case 1)
```python
# Output: "J44.9"
```
### Output (Case 2)
```python
# Output: "Chronic obstructive pulmonary disease, unspecified"
```
### Output (Case 3)
```python
# Output: "Chronic obstructive pulmonary disease, unspecified"
```
**Warning: Potential Mapping Errors**
This dataset has been harnessed from another rule-based NLP application, and it's important to be aware of the possibility of errors in the mapping of clinical text to standard codes or standard descriptions. While every effort has been made to ensure accuracy, there are inherent challenges in automated mapping processes, especially when dealing with complex medical terminology and variations in clinical documentation.
Users should exercise caution when relying on the generated results and consider the following:
1. **Clinical Variation:** Clinical documentation can vary significantly in style and terminology, which may lead to inaccuracies in mapping.
2. **Complex Cases:** Some medical conditions or descriptions may not have a one-to-one mapping to standard codes or descriptions, resulting in potential discrepancies.
3. **Data Quality:** The accuracy of the mapping is also dependent on the quality and consistency of the input clinical text.
4. **Continuous Improvement:** Efforts are ongoing to improve the accuracy of the mapping, and users are encouraged to provide feedback and report any discrepancies.
It is advisable to cross-verify the results generated by this dataset, especially in critical healthcare applications where accuracy is paramount.
Please use this dataset responsibly and in conjunction with clinical expertise to make informed decisions.
If you encounter any issues or have suggestions for improvement, please reach out to the dataset maintainers.
Thank you for your understanding and cooperation.
## Citation
If you use this dataset or the pre-trained model in your research or applications, please consider citing the source or authors for proper attribution.
---
Feel free to modify and expand this readme file to include any additional information or instructions specific to your use case.
|
krishnareddy/icddxdescmap
|
[
"language:en",
"license:apache-2.0",
"CAC",
"ICD10",
"region:us"
] |
2023-09-01T13:06:55+00:00
|
{"language": ["en"], "license": "apache-2.0", "pretty_name": "ICD10 DX Code Description", "tags": ["CAC", "ICD10"]}
|
2023-09-04T09:56:05+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-apache-2.0 #CAC #ICD10 #region-us
|
# ICD10 Diagnosis Description Mapping Dataset
## Overview
This dataset is designed to assist in mapping ICD10 Diagnosis descriptions documented in clinical documents to the standard ICD10 Diagnosis descriptions by CMS (Centers for Medicare & Medicaid Services). The primary objective is to train a model that can map free-form disease text to ICD Codes.
## Dataset Details
The dataset consists of the following columns:
- AnnotationString: This column contains the disease text as described in patient clinical documents.
- DXCode: This column contains the standard DX code corresponding to the description in the AnnotationString column.
- ShortDesc: This column contains the standard ICD Description in a short form corresponding to the DXCode.
- LongDesc: This column contains the standard ICD Description in a long form corresponding to the DXCode.
## Model Training Objective
The goal of this dataset is to train the LLAMA2 7B Model to generate one of the following outputs based on the provided AnnotationString as input:
### Case 1: Expecting DX Code as Output
- Instruction: As an ICD Code helper, the model needs to generate an ICD 10 DX Code for the corresponding disease description.
- Input: '<Annotation String>'
- Output: '<DXCode>'
### Case 2: Expecting ShortDesc as Output
- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.
- Input: '<Annotation String>'
- Output: '<ShortDesc>'
### Case 3: Expecting LongDesc as Output
- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.
- Input: '<Annotation String>'
- Output: '<LongDesc>'
## Usage
To utilize this dataset effectively for training the LLAMA2 7B Model, you can follow these steps:
1. Preprocess your input data to match the format specified in the "Instruction" for the desired case (1, 2, or 3).
2. Use the preprocessed data as input to your model.
3. The model will generate the corresponding output (DXCode, ShortDesc, or LongDesc) based on the case you specified.
## Example
Here's an example of how to use this dataset and the model:
### Input
### Output (Case 1)
### Output (Case 2)
### Output (Case 3)
Warning: Potential Mapping Errors
This dataset has been harnessed from another rule-based NLP application, and it's important to be aware of the possibility of errors in the mapping of clinical text to standard codes or standard descriptions. While every effort has been made to ensure accuracy, there are inherent challenges in automated mapping processes, especially when dealing with complex medical terminology and variations in clinical documentation.
Users should exercise caution when relying on the generated results and consider the following:
1. Clinical Variation: Clinical documentation can vary significantly in style and terminology, which may lead to inaccuracies in mapping.
2. Complex Cases: Some medical conditions or descriptions may not have a one-to-one mapping to standard codes or descriptions, resulting in potential discrepancies.
3. Data Quality: The accuracy of the mapping is also dependent on the quality and consistency of the input clinical text.
4. Continuous Improvement: Efforts are ongoing to improve the accuracy of the mapping, and users are encouraged to provide feedback and report any discrepancies.
It is advisable to cross-verify the results generated by this dataset, especially in critical healthcare applications where accuracy is paramount.
Please use this dataset responsibly and in conjunction with clinical expertise to make informed decisions.
If you encounter any issues or have suggestions for improvement, please reach out to the dataset maintainers.
Thank you for your understanding and cooperation.
If you use this dataset or the pre-trained model in your research or applications, please consider citing the source or authors for proper attribution.
---
Feel free to modify and expand this readme file to include any additional information or instructions specific to your use case.
|
[
"# ICD10 Diagnosis Description Mapping Dataset",
"## Overview\n\nThis dataset is designed to assist in mapping ICD10 Diagnosis descriptions documented in clinical documents to the standard ICD10 Diagnosis descriptions by CMS (Centers for Medicare & Medicaid Services). The primary objective is to train a model that can map free-form disease text to ICD Codes.",
"## Dataset Details\n\nThe dataset consists of the following columns:\n\n- AnnotationString: This column contains the disease text as described in patient clinical documents.\n- DXCode: This column contains the standard DX code corresponding to the description in the AnnotationString column.\n- ShortDesc: This column contains the standard ICD Description in a short form corresponding to the DXCode.\n- LongDesc: This column contains the standard ICD Description in a long form corresponding to the DXCode.",
"## Model Training Objective\n\nThe goal of this dataset is to train the LLAMA2 7B Model to generate one of the following outputs based on the provided AnnotationString as input:",
"### Case 1: Expecting DX Code as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate an ICD 10 DX Code for the corresponding disease description.\n- Input: '<Annotation String>'\n- Output: '<DXCode>'",
"### Case 2: Expecting ShortDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<ShortDesc>'",
"### Case 3: Expecting LongDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<LongDesc>'",
"## Usage\n\nTo utilize this dataset effectively for training the LLAMA2 7B Model, you can follow these steps:\n\n1. Preprocess your input data to match the format specified in the \"Instruction\" for the desired case (1, 2, or 3).\n2. Use the preprocessed data as input to your model.\n3. The model will generate the corresponding output (DXCode, ShortDesc, or LongDesc) based on the case you specified.",
"## Example\n\nHere's an example of how to use this dataset and the model:",
"### Input",
"### Output (Case 1)",
"### Output (Case 2)",
"### Output (Case 3)\n\n\n\nWarning: Potential Mapping Errors\n\nThis dataset has been harnessed from another rule-based NLP application, and it's important to be aware of the possibility of errors in the mapping of clinical text to standard codes or standard descriptions. While every effort has been made to ensure accuracy, there are inherent challenges in automated mapping processes, especially when dealing with complex medical terminology and variations in clinical documentation.\n\nUsers should exercise caution when relying on the generated results and consider the following:\n\n1. Clinical Variation: Clinical documentation can vary significantly in style and terminology, which may lead to inaccuracies in mapping.\n\n2. Complex Cases: Some medical conditions or descriptions may not have a one-to-one mapping to standard codes or descriptions, resulting in potential discrepancies.\n\n3. Data Quality: The accuracy of the mapping is also dependent on the quality and consistency of the input clinical text.\n\n4. Continuous Improvement: Efforts are ongoing to improve the accuracy of the mapping, and users are encouraged to provide feedback and report any discrepancies.\n\nIt is advisable to cross-verify the results generated by this dataset, especially in critical healthcare applications where accuracy is paramount.\n\nPlease use this dataset responsibly and in conjunction with clinical expertise to make informed decisions.\n\nIf you encounter any issues or have suggestions for improvement, please reach out to the dataset maintainers.\n\nThank you for your understanding and cooperation.\n\n\nIf you use this dataset or the pre-trained model in your research or applications, please consider citing the source or authors for proper attribution.\n\n---\n\nFeel free to modify and expand this readme file to include any additional information or instructions specific to your use case."
] |
[
"TAGS\n#language-English #license-apache-2.0 #CAC #ICD10 #region-us \n",
"# ICD10 Diagnosis Description Mapping Dataset",
"## Overview\n\nThis dataset is designed to assist in mapping ICD10 Diagnosis descriptions documented in clinical documents to the standard ICD10 Diagnosis descriptions by CMS (Centers for Medicare & Medicaid Services). The primary objective is to train a model that can map free-form disease text to ICD Codes.",
"## Dataset Details\n\nThe dataset consists of the following columns:\n\n- AnnotationString: This column contains the disease text as described in patient clinical documents.\n- DXCode: This column contains the standard DX code corresponding to the description in the AnnotationString column.\n- ShortDesc: This column contains the standard ICD Description in a short form corresponding to the DXCode.\n- LongDesc: This column contains the standard ICD Description in a long form corresponding to the DXCode.",
"## Model Training Objective\n\nThe goal of this dataset is to train the LLAMA2 7B Model to generate one of the following outputs based on the provided AnnotationString as input:",
"### Case 1: Expecting DX Code as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate an ICD 10 DX Code for the corresponding disease description.\n- Input: '<Annotation String>'\n- Output: '<DXCode>'",
"### Case 2: Expecting ShortDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<ShortDesc>'",
"### Case 3: Expecting LongDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<LongDesc>'",
"## Usage\n\nTo utilize this dataset effectively for training the LLAMA2 7B Model, you can follow these steps:\n\n1. Preprocess your input data to match the format specified in the \"Instruction\" for the desired case (1, 2, or 3).\n2. Use the preprocessed data as input to your model.\n3. The model will generate the corresponding output (DXCode, ShortDesc, or LongDesc) based on the case you specified.",
"## Example\n\nHere's an example of how to use this dataset and the model:",
"### Input",
"### Output (Case 1)",
"### Output (Case 2)",
"### Output (Case 3)\n\n\n\nWarning: Potential Mapping Errors\n\nThis dataset has been harnessed from another rule-based NLP application, and it's important to be aware of the possibility of errors in the mapping of clinical text to standard codes or standard descriptions. While every effort has been made to ensure accuracy, there are inherent challenges in automated mapping processes, especially when dealing with complex medical terminology and variations in clinical documentation.\n\nUsers should exercise caution when relying on the generated results and consider the following:\n\n1. Clinical Variation: Clinical documentation can vary significantly in style and terminology, which may lead to inaccuracies in mapping.\n\n2. Complex Cases: Some medical conditions or descriptions may not have a one-to-one mapping to standard codes or descriptions, resulting in potential discrepancies.\n\n3. Data Quality: The accuracy of the mapping is also dependent on the quality and consistency of the input clinical text.\n\n4. Continuous Improvement: Efforts are ongoing to improve the accuracy of the mapping, and users are encouraged to provide feedback and report any discrepancies.\n\nIt is advisable to cross-verify the results generated by this dataset, especially in critical healthcare applications where accuracy is paramount.\n\nPlease use this dataset responsibly and in conjunction with clinical expertise to make informed decisions.\n\nIf you encounter any issues or have suggestions for improvement, please reach out to the dataset maintainers.\n\nThank you for your understanding and cooperation.\n\n\nIf you use this dataset or the pre-trained model in your research or applications, please consider citing the source or authors for proper attribution.\n\n---\n\nFeel free to modify and expand this readme file to include any additional information or instructions specific to your use case."
] |
[
25,
12,
73,
126,
40,
68,
73,
73,
100,
19,
4,
8,
8,
406
] |
[
"passage: TAGS\n#language-English #license-apache-2.0 #CAC #ICD10 #region-us \n# ICD10 Diagnosis Description Mapping Dataset## Overview\n\nThis dataset is designed to assist in mapping ICD10 Diagnosis descriptions documented in clinical documents to the standard ICD10 Diagnosis descriptions by CMS (Centers for Medicare & Medicaid Services). The primary objective is to train a model that can map free-form disease text to ICD Codes.## Dataset Details\n\nThe dataset consists of the following columns:\n\n- AnnotationString: This column contains the disease text as described in patient clinical documents.\n- DXCode: This column contains the standard DX code corresponding to the description in the AnnotationString column.\n- ShortDesc: This column contains the standard ICD Description in a short form corresponding to the DXCode.\n- LongDesc: This column contains the standard ICD Description in a long form corresponding to the DXCode.## Model Training Objective\n\nThe goal of this dataset is to train the LLAMA2 7B Model to generate one of the following outputs based on the provided AnnotationString as input:### Case 1: Expecting DX Code as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate an ICD 10 DX Code for the corresponding disease description.\n- Input: '<Annotation String>'\n- Output: '<DXCode>'### Case 2: Expecting ShortDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Short Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<ShortDesc>'### Case 3: Expecting LongDesc as Output\n\n- Instruction: As an ICD Code helper, the model needs to generate the standard ICD 10 DX Code Long Description for the corresponding clinical text.\n- Input: '<Annotation String>'\n- Output: '<LongDesc>'"
] |
973bfc07dbf5fe9afe4085b463563797f1f5e51d
|
# Dataset Card for "SpeakerVerification_VCTK"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DynamicSuperb/SpeakerVerification_VCTK
|
[
"region:us"
] |
2023-09-01T13:08:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "file2", "dtype": "string"}, {"name": "audio2", "dtype": "audio"}, {"name": "instruction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 2075489820.0, "num_examples": 5000}], "download_size": 1703856779, "dataset_size": 2075489820.0}}
|
2023-09-03T01:29:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SpeakerVerification_VCTK"
More Information needed
|
[
"# Dataset Card for \"SpeakerVerification_VCTK\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SpeakerVerification_VCTK\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SpeakerVerification_VCTK\"\n\nMore Information needed"
] |
47866f84c4085010dcf99a7ee4e14bd01967400f
|
# Dataset Card for "konstantinaki_paper_test_evidence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/paper_test_evidence
|
[
"region:us"
] |
2023-09-01T13:32:36+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32383653, "num_examples": 55634}], "download_size": 20834174, "dataset_size": 32383653}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-07T09:15:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "konstantinaki_paper_test_evidence"
More Information needed
|
[
"# Dataset Card for \"konstantinaki_paper_test_evidence\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"konstantinaki_paper_test_evidence\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"konstantinaki_paper_test_evidence\"\n\nMore Information needed"
] |
244004e096d670542855c46b818a9a0a8d796a92
|
# Dataset Card for "balanced_structs_reduced_labelled_large_enc_key_name_addr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
johannes-garstenauer/balanced_structs_reduced_labelled_large_enc_key_name_addr
|
[
"region:us"
] |
2023-09-01T14:14:24+00:00
|
{"dataset_info": {"features": [{"name": "struct", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 45906540.0, "num_examples": 279780}], "download_size": 9156256, "dataset_size": 45906540.0}}
|
2023-09-01T14:14:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "balanced_structs_reduced_labelled_large_enc_key_name_addr"
More Information needed
|
[
"# Dataset Card for \"balanced_structs_reduced_labelled_large_enc_key_name_addr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"balanced_structs_reduced_labelled_large_enc_key_name_addr\"\n\nMore Information needed"
] |
[
6,
34
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"balanced_structs_reduced_labelled_large_enc_key_name_addr\"\n\nMore Information needed"
] |
558e78c88871bfb42f5181ce257a5e83c4187ee8
|
# Dataset Card for "balanced_structs_reduced_labelled_large_new_key_addr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
johannes-garstenauer/balanced_structs_reduced_labelled_large_new_key_addr
|
[
"region:us"
] |
2023-09-01T14:16:12+00:00
|
{"dataset_info": {"features": [{"name": "struct", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 78719500.0, "num_examples": 279780}], "download_size": 21110038, "dataset_size": 78719500.0}}
|
2023-09-01T14:16:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "balanced_structs_reduced_labelled_large_new_key_addr"
More Information needed
|
[
"# Dataset Card for \"balanced_structs_reduced_labelled_large_new_key_addr\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"balanced_structs_reduced_labelled_large_new_key_addr\"\n\nMore Information needed"
] |
[
6,
32
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"balanced_structs_reduced_labelled_large_new_key_addr\"\n\nMore Information needed"
] |
ead4a017367d6cd3569932ca2c31a272f0215549
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_house_16H_gosdt_l512_d3
|
[
"region:us"
] |
2023-09-01T14:31:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9224800000, "num_examples": 100000}, {"name": "validation", "num_bytes": 922480000, "num_examples": 10000}], "download_size": 3199366306, "dataset_size": 10147280000}}
|
2023-09-01T14:33:02+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3\"\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3\"\n\nMore Information needed"
] |
2a1db84440b1fd7908516325a60cb63aa8e0b6a4
|
# Dataset Card for Evaluation run of marcchew/test1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/marcchew/test1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [marcchew/test1](https://huggingface.co/marcchew/test1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_marcchew__test1",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-03T19:35:56.043440](https://huggingface.co/datasets/open-llm-leaderboard/details_marcchew__test1/blob/main/results_2023-12-03T19-35-56.043440.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_marcchew__test1
|
[
"region:us"
] |
2023-09-01T14:41:30+00:00
|
{"pretty_name": "Evaluation run of marcchew/test1", "dataset_summary": "Dataset automatically created during the evaluation run of model [marcchew/test1](https://huggingface.co/marcchew/test1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_marcchew__test1\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-03T19:35:56.043440](https://huggingface.co/datasets/open-llm-leaderboard/details_marcchew__test1/blob/main/results_2023-12-03T19-35-56.043440.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/marcchew/test1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|arc:challenge|25_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_13T02_23_20.616276", "path": ["**/details_harness|drop|3_2023-10-13T02-23-20.616276.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-13T02-23-20.616276.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_13T02_23_20.616276", "path": ["**/details_harness|gsm8k|5_2023-10-13T02-23-20.616276.parquet"]}, {"split": "2023_12_03T19_35_56.043440", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-35-56.043440.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-03T19-35-56.043440.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hellaswag|10_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:41:12.486637.parquet", 
"**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:41:12.486637.parquet", 
"**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:41:12.486637.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T15:41:12.486637.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": 
["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T15:41:12.486637.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": 
["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T15:41:12.486637.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T15:41:12.486637.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_13T02_23_20.616276", "path": ["**/details_harness|winogrande|5_2023-10-13T02-23-20.616276.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-13T02-23-20.616276.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T15_41_12.486637", "path": ["results_2023-09-01T15:41:12.486637.parquet"]}, {"split": "2023_10_13T02_23_20.616276", "path": ["results_2023-10-13T02-23-20.616276.parquet"]}, {"split": "2023_12_03T19_35_56.043440", "path": ["results_2023-12-03T19-35-56.043440.parquet"]}, {"split": "latest", "path": ["results_2023-12-03T19-35-56.043440.parquet"]}]}]}
|
2023-12-03T19:36:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of marcchew/test1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model marcchew/test1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
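A minimal sketch of such a load, assuming the details for this model are published under the usual `open-llm-leaderboard/details_<org>__<model>` naming used by the other evaluation-run datasets in this collection (the exact repository id is not shown in this sanitized copy):

```python
from datasets import load_dataset

# The repository id below is an assumption, inferred from the
# `open-llm-leaderboard/details_<org>__<model>` pattern used by the other
# evaluation-run datasets in this collection; verify it before relying on it.
data = load_dataset(
    "open-llm-leaderboard/details_marcchew__test1",
    "harness_winogrande_5",  # one of the 64 task configurations
    split="train",           # "train" always points at the latest results
)
```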
## Latest results
These are the latest results from run 2023-12-03T19:35:56.043440 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of marcchew/test1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:35:56.043440(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of marcchew/test1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-03T19:35:56.043440(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
15,
31,
164,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of marcchew/test1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model marcchew/test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-03T19:35:56.043440(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d0e7f24f7f682a26f83db31ce6098c778efc093f
|
# Dataset Card for Letter Vibration Interference Video Data
## Dataset Description
- **Homepage:** https://huggingface.co/datasets/TWTom/Letter_Vibration_Interference_Video_Dataset
- **Paper:**
- **Point of Contact:** Lee, Po Han [email protected]
### Dataset Summary
This dataset is collected using a 1920x1080 camera running at 60 fps. It records the interference pattern generated by a Michelson interferometer. The interferometer is very sensitive to vibration, so different vibration modes produce distinct interference patterns. We introduce vibration to the system by handwriting letters on the table on which the interferometer is set up.
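As an illustration of how these recordings might be consumed, here is a minimal sketch that reads one clip frame by frame with OpenCV; the clip file name is hypothetical, and only the `clips` directory name comes from the file list below:

```python
import cv2  # pip install opencv-python

# Hypothetical clip name; actual recordings live under the `clips` directory.
cap = cv2.VideoCapture("clips/letter_A_trial_01.mp4")
fps = cap.get(cv2.CAP_PROP_FPS)  # expected to be ~60 for this dataset

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Interference fringes are an intensity pattern, so grayscale suffices.
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()

print(f"{len(frames)} frames at {fps:.0f} fps, resolution {frames[0].shape}")
```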
## Additional Information
### File Description
- `logs`:
- `ckpt`:
- `models`:
- `clips`:
- `lzma_compressed`:
- `tflite_model`: TFLite model of the LSTM-attention network (a loading sketch follows this list)
- `__pycache__`: just ignore it
- `Untitled.ipynb`:
- `transformer_model.ipynb`:
- `Y.p`:
- `data_lz4_RT_TF.p`:
- `laser_transformer_TF.ipynb`:
- `laser_lstm.ipynb`:
- `module.py`:
- `.gitattributes`:
- `.ipynb_checkpoints`:
- `requirements.txt`:
- `model.h5`:
- `convert_to_tflite.ipynb`:
- `main.py`:
- `model.png`: Architecture of the model
- `vivit.py`: unused
- `distilled_model_88.h5`:
- `saved_model.pth`:
- `distilled_model.h5`:
- `data_lz4_Y_NP.p`:
- `data_x_1D.p`:
- `data_y_1D.p`:
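
Since the list above includes a TFLite export of the LSTM-attention model, a minimal loading sketch may help; the `tflite_model` path and the dummy input are assumptions, so check the shapes the interpreter reports first:

```python
import numpy as np
import tensorflow as tf

# `tflite_model` is assumed to be the TFLite file listed above; adjust if needed.
interpreter = tf.lite.Interpreter(model_path="tflite_model")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
print("input shape:", inp["shape"], "output shape:", out["shape"])

# Run one inference pass on a dummy tensor of the reported shape.
dummy = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()
prediction = interpreter.get_tensor(out["index"])
```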
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
|
TWTom/Letter_Vibration_Interference_Video_Dataset
|
[
"region:us"
] |
2023-09-01T14:57:54+00:00
|
{}
|
2023-10-04T08:09:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Letter Vibration Interference Video Data
## Dataset Description
- Homepage: URL
- Paper:
- Point of Contact: Lee, Po Han leepohan@URL
### Dataset Summary
This dataset is collected using a 1920x1080 camera running at 60fps. It records the interference pattern generated by a Michelson Interferometer. The Interferometer is very sensitive to vibration, so while different vibration modes are performed, unique inference patterns are shown. We introduce vibration to the system by hand writing letters on the table which the interferometer is set up on.
## Additional Information
### File Description
- 'logs':
- 'ckpt':
- 'models':
- 'clips':
- 'lzma_compressed':
- 'tflite_model': tflite model of lstm-attention
- '__pycache__': just ignore it
- 'URL':
- 'transformer_model.ipynb':
- 'Y.p':
- 'data_lz4_RT_TF.p':
- 'laser_transformer_TF.ipynb':
- 'laser_lstm.ipynb':
- 'URL':
- '.gitattributes':
- '.ipynb_checkpoints':
- 'URL':
- 'model.h5':
- 'convert_to_tflite.ipynb':
- 'URL':
- 'URL': Architecture of the model
- 'URL': No usage
- 'distilled_model_88.h5':
- 'saved_model.pth':
- 'distilled_model.h5':
- 'data_lz4_Y_NP.p':
- 'data_x_1D.p':
- 'data_y_1D.p':
### Licensing Information
|
[
"# Dataset Card for Letter Vibration Interference Video Data",
"## Dataset Description\n\n- Homepage: URL \n- Paper: \n- Point of Contact: Lee, Po Han leepohan@URL",
"### Dataset Summary\n\nThis dataset is collected using a 1920x1080 camera running at 60fps. It records the interference pattern generated by a Michelson Interferometer. The Interferometer is very sensitive to vibration, so while different vibration modes are performed, unique inference patterns are shown. We introduce vibration to the system by hand writing letters on the table which the interferometer is set up on.",
"## Additional Information",
"### File Description\n- 'logs':\n- 'ckpt':\n- 'models':\n- 'clips':\n- 'lzma_compressed': \n- 'tflite_model': tflite model of lstm-attention\n- '__pycache__': just ignore it\n\n- 'URL': \n- 'transformer_model.ipynb':\n- 'Y.p':\n- 'data_lz4_RT_TF.p':\n- 'laser_transformer_TF.ipynb':\n- 'laser_lstm.ipynb':\n- 'URL':\n- '.gitattributes':\n- '.ipynb_checkpoints':\n- 'URL':\n- 'model.h5':\n- 'convert_to_tflite.ipynb':\n- 'URL':\n- 'URL': Architecture of the model\n- 'URL': No usage\n- 'distilled_model_88.h5':\n- 'saved_model.pth':\n- 'distilled_model.h5':\n- 'data_lz4_Y_NP.p':\n- 'data_x_1D.p':\n- 'data_y_1D.p':",
"### Licensing Information"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Letter Vibration Interference Video Data",
"## Dataset Description\n\n- Homepage: URL \n- Paper: \n- Point of Contact: Lee, Po Han leepohan@URL",
"### Dataset Summary\n\nThis dataset is collected using a 1920x1080 camera running at 60fps. It records the interference pattern generated by a Michelson Interferometer. The Interferometer is very sensitive to vibration, so while different vibration modes are performed, unique inference patterns are shown. We introduce vibration to the system by hand writing letters on the table which the interferometer is set up on.",
"## Additional Information",
"### File Description\n- 'logs':\n- 'ckpt':\n- 'models':\n- 'clips':\n- 'lzma_compressed': \n- 'tflite_model': tflite model of lstm-attention\n- '__pycache__': just ignore it\n\n- 'URL': \n- 'transformer_model.ipynb':\n- 'Y.p':\n- 'data_lz4_RT_TF.p':\n- 'laser_transformer_TF.ipynb':\n- 'laser_lstm.ipynb':\n- 'URL':\n- '.gitattributes':\n- '.ipynb_checkpoints':\n- 'URL':\n- 'model.h5':\n- 'convert_to_tflite.ipynb':\n- 'URL':\n- 'URL': Architecture of the model\n- 'URL': No usage\n- 'distilled_model_88.h5':\n- 'saved_model.pth':\n- 'distilled_model.h5':\n- 'data_lz4_Y_NP.p':\n- 'data_x_1D.p':\n- 'data_y_1D.p':",
"### Licensing Information"
] |
[
6,
14,
26,
93,
5,
290,
6
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Letter Vibration Interference Video Data## Dataset Description\n\n- Homepage: URL \n- Paper: \n- Point of Contact: Lee, Po Han leepohan@URL### Dataset Summary\n\nThis dataset is collected using a 1920x1080 camera running at 60fps. It records the interference pattern generated by a Michelson Interferometer. The Interferometer is very sensitive to vibration, so while different vibration modes are performed, unique inference patterns are shown. We introduce vibration to the system by hand writing letters on the table which the interferometer is set up on.## Additional Information### File Description\n- 'logs':\n- 'ckpt':\n- 'models':\n- 'clips':\n- 'lzma_compressed': \n- 'tflite_model': tflite model of lstm-attention\n- '__pycache__': just ignore it\n\n- 'URL': \n- 'transformer_model.ipynb':\n- 'Y.p':\n- 'data_lz4_RT_TF.p':\n- 'laser_transformer_TF.ipynb':\n- 'laser_lstm.ipynb':\n- 'URL':\n- '.gitattributes':\n- '.ipynb_checkpoints':\n- 'URL':\n- 'model.h5':\n- 'convert_to_tflite.ipynb':\n- 'URL':\n- 'URL': Architecture of the model\n- 'URL': No usage\n- 'distilled_model_88.h5':\n- 'saved_model.pth':\n- 'distilled_model.h5':\n- 'data_lz4_Y_NP.p':\n- 'data_x_1D.p':\n- 'data_y_1D.p':### Licensing Information"
] |
8eb81292a5bf26b348171d2bb685e71e7ffb7624
|
# Dataset Card for "common_voice_13_0-hu-whisper"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
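No description is provided, but the schema in the repository metadata (float32 `input_features`, integer `labels`, and a float `input_length`) suggests Common Voice 13.0 Hungarian audio preprocessed into Whisper input features. A hedged inspection sketch under that assumption:

```python
import numpy as np
from datasets import load_dataset

# Streaming avoids downloading the full ~30 GB of precomputed features.
ds = load_dataset("whizystems/common_voice_13_0-hu-whisper",
                  split="test", streaming=True)
ex = next(iter(ds))
feats = np.array(ex["input_features"])  # nested float lists -> array
print(feats.shape, len(ex["labels"]), ex["input_length"])
```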
|
whizystems/common_voice_13_0-hu-whisper
|
[
"region:us"
] |
2023-09-01T15:18:39+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_features", "sequence": {"sequence": {"sequence": "float32"}}}, {"name": "labels", "sequence": "int64"}, {"name": "input_length", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 22288987288.0, "num_examples": 23204}, {"name": "test", "num_bytes": 7564531940, "num_examples": 7875}], "download_size": 0, "dataset_size": 29853519228.0}}
|
2023-09-01T17:24:45+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "common_voice_13_0-hu-whisper"
More Information needed
|
[
"# Dataset Card for \"common_voice_13_0-hu-whisper\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"common_voice_13_0-hu-whisper\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"common_voice_13_0-hu-whisper\"\n\nMore Information needed"
] |
447e34dd3ca9e9c09fe5dc774f886254c99aa897
|
# Dataset Card for "codem"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vikp/codem
|
[
"region:us"
] |
2023-09-01T16:24:21+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "kind", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 77826565, "num_examples": 48000}], "download_size": 33387111, "dataset_size": 77826565}}
|
2023-09-01T16:24:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "codem"
More Information needed
|
[
"# Dataset Card for \"codem\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"codem\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"codem\"\n\nMore Information needed"
] |
5f5b932eab50a353f277b87213d45554181304fa
|
# Dataset Card for "codem_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vikp/codem_filtered
|
[
"region:us"
] |
2023-09-01T16:35:30+00:00
|
{"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "kind", "dtype": "string"}, {"name": "quality_prob", "dtype": "float64"}, {"name": "learning_prob", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 49267861.09607679, "num_examples": 31046}], "download_size": 21584553, "dataset_size": 49267861.09607679}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T16:35:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "codem_filtered"
More Information needed
|
[
"# Dataset Card for \"codem_filtered\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"codem_filtered\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"codem_filtered\"\n\nMore Information needed"
] |
61ccfcde6d250eaf78831b2feb844297a7224bb8
|
# Data Card for Congressional Bills Corpus
The raw data come from [GovInfo](govinfo.gov) bulk downloads. Files were preprocessed minimally to create pickled dataframes of the bill text.
Data from GovInfo on the bills' committees, sponsors, and cosponsors are also available.
## Topic labels
Bill text is joined to two sets of topic labels.
The Congressional Research Service tags bills with non-exclusive topic labels.
Because these tags are numerous and specific, the top 250 most common tags were aggregated into roughly two dozen broader policy areas, which are the labels that appear in this dataset.
The Congressional Bills Project labeled landmark legislation from 1973 to 2017 with a single major topic label and a single minor topic label.
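As an illustration, a sketch of how the pickled dataframes and topic labels might be combined; every file and column name here is hypothetical, since the card does not specify them:

```python
import pandas as pd

# Hypothetical file names; the card only says bill text is stored as
# pickled dataframes alongside CRS-derived policy-area labels.
bills = pd.read_pickle("bills_117.pkl")          # one row per bill, with text
labels = pd.read_pickle("crs_policy_areas.pkl")  # bill id -> policy area(s)

# Join the non-exclusive policy-area labels onto the bill text.
data = bills.merge(labels, on="bill_id", how="left")
print(data[["bill_id", "policy_area"]].head())
```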
|
coletl/bills-congress
|
[
"task_categories:text-classification",
"language:en",
"license:afl-3.0",
"region:us"
] |
2023-09-01T16:57:13+00:00
|
{"language": ["en"], "license": "afl-3.0", "task_categories": ["text-classification"], "pretty_name": "U.S. Congressional Bills, 113\u2013117"}
|
2023-09-01T22:19:22+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #language-English #license-afl-3.0 #region-us
|
# Data Card for Congressional Bills Corpus
The raw data come from GovInfo bulk downloads. Files were preprocessed minimally to create pickled dataframes of the bill text.
Data from GovInfo on the bills' committees, sponsors, and cosponsors are also available.
## Topic labels
Bill text is joined to two sets of topic labels.
The Congresssional Research Service tags bills with non-exclusive topic labels.
Because these tags are numerous and specific, the top 250 most common tags were aggregated into roughly two dozen broader policy areas, which are the labels a that appear in this dataset.
The Congressional Bills Project labeled landmark legislation from 1973 to 2017 with a single major topic label and a single minor topic label.
|
[
"# Data Card for Congressional Bills Corpus\n\nThe raw data come from GovInfo bulk downloads. Files were preprocessed minimally to create pickled dataframes of the bill text.\nData from GovInfo on the bills' committees, sponsors, and cosponsors are also available.",
"## Topic labels\nBill text is joined to two sets of topic labels. \nThe Congresssional Research Service tags bills with non-exclusive topic labels. \nBecause these tags are numerous and specific, the top 250 most common tags were aggregated into roughly two dozen broader policy areas, which are the labels a that appear in this dataset.\nThe Congressional Bills Project labeled landmark legislation from 1973 to 2017 with a single major topic label and a single minor topic label."
] |
[
"TAGS\n#task_categories-text-classification #language-English #license-afl-3.0 #region-us \n",
"# Data Card for Congressional Bills Corpus\n\nThe raw data come from GovInfo bulk downloads. Files were preprocessed minimally to create pickled dataframes of the bill text.\nData from GovInfo on the bills' committees, sponsors, and cosponsors are also available.",
"## Topic labels\nBill text is joined to two sets of topic labels. \nThe Congresssional Research Service tags bills with non-exclusive topic labels. \nBecause these tags are numerous and specific, the top 250 most common tags were aggregated into roughly two dozen broader policy areas, which are the labels a that appear in this dataset.\nThe Congressional Bills Project labeled landmark legislation from 1973 to 2017 with a single major topic label and a single minor topic label."
] |
[
29,
66,
107
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #license-afl-3.0 #region-us \n# Data Card for Congressional Bills Corpus\n\nThe raw data come from GovInfo bulk downloads. Files were preprocessed minimally to create pickled dataframes of the bill text.\nData from GovInfo on the bills' committees, sponsors, and cosponsors are also available.## Topic labels\nBill text is joined to two sets of topic labels. \nThe Congresssional Research Service tags bills with non-exclusive topic labels. \nBecause these tags are numerous and specific, the top 250 most common tags were aggregated into roughly two dozen broader policy areas, which are the labels a that appear in this dataset.\nThe Congressional Bills Project labeled landmark legislation from 1973 to 2017 with a single major topic label and a single minor topic label."
] |
297447727ff8ba45a861675f70daa6d54f194dfb
|
# Dataset Card for Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple](https://huggingface.co/luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T01:13:17.966803](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple/blob/main/results_2023-10-15T01-13-17.966803.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.18760486577181207,
"em_stderr": 0.003998023634854269,
"f1": 0.2689041526845642,
"f1_stderr": 0.00405255679434132,
"acc": 0.4260142426906131,
"acc_stderr": 0.010340665159137691
},
"harness|drop|3": {
"em": 0.18760486577181207,
"em_stderr": 0.003998023634854269,
"f1": 0.2689041526845642,
"f1_stderr": 0.00405255679434132
},
"harness|gsm8k|5": {
"acc": 0.10538286580742987,
"acc_stderr": 0.008457575884041755
},
"harness|winogrande|5": {
"acc": 0.7466456195737964,
"acc_stderr": 0.012223754434233626
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple
|
[
"region:us"
] |
2023-09-01T17:20:54+00:00
|
{"pretty_name": "Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple", "dataset_summary": "Dataset automatically created during the evaluation run of model [luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple](https://huggingface.co/luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T01:13:17.966803](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple/blob/main/results_2023-10-15T01-13-17.966803.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.18760486577181207,\n \"em_stderr\": 0.003998023634854269,\n \"f1\": 0.2689041526845642,\n \"f1_stderr\": 0.00405255679434132,\n \"acc\": 0.4260142426906131,\n \"acc_stderr\": 0.010340665159137691\n },\n \"harness|drop|3\": {\n \"em\": 0.18760486577181207,\n \"em_stderr\": 0.003998023634854269,\n \"f1\": 0.2689041526845642,\n \"f1_stderr\": 0.00405255679434132\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10538286580742987,\n \"acc_stderr\": 0.008457575884041755\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7466456195737964,\n \"acc_stderr\": 0.012223754434233626\n }\n}\n```", "repo_url": "https://huggingface.co/luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|arc:challenge|25_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T01_13_17.966803", "path": ["**/details_harness|drop|3_2023-10-15T01-13-17.966803.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T01-13-17.966803.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T01_13_17.966803", "path": ["**/details_harness|gsm8k|5_2023-10-15T01-13-17.966803.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T01-13-17.966803.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hellaswag|10_2023-09-01T18:20:29.445308.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T18:20:29.445308.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T18:20:29.445308.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T18:20:29.445308.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T18:20:29.445308.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T18:20:29.445308.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T18:20:29.445308.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T01_13_17.966803", "path": ["**/details_harness|winogrande|5_2023-10-15T01-13-17.966803.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T01-13-17.966803.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T18_20_29.445308", "path": ["results_2023-09-01T18:20:29.445308.parquet"]}, {"split": "2023_10_15T01_13_17.966803", "path": ["results_2023-10-15T01-13-17.966803.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T01-13-17.966803.parquet"]}]}]}
|
2023-10-15T00:13:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
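For example (a minimal sketch; the repository id below is inferred from this card's naming convention and may need adjusting):

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_luffycodes__nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple",
    "harness_winogrande_5",  # one of the 64 configurations listed in this repo
    split="latest",
)
```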
## Latest results
These are the latest results from run 2023-10-15T01:13:17.966803 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-15T01:13:17.966803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-15T01:13:17.966803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
35,
31,
183,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T01:13:17.966803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
75b399394a9803252cfec289d103de462763db7c
|
# The Belebele Benchmark for Massively Multilingual NLU Evaluation
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the [FLORES-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to our paper for more details, [The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants](https://arxiv.org/abs/2308.16884).
Or get more details at https://github.com/facebookresearch/belebele
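For example, each language variant is exposed as its own split named by its FLORES-200 code (see the table at the bottom of this card), so a minimal loading sketch with the Hugging Face `datasets` library looks like:

```python
from datasets import load_dataset

# One split per language variant, keyed by FLORES-200 code.
belebele_en = load_dataset("facebook/belebele", split="eng_Latn")

print(len(belebele_en))  # 900 questions per language variant
row = belebele_en[0]     # a passage, a question, and four candidate answers
```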
## Citation
If you use this data in your work, please cite:
```bibtex
@article{bandarkar2023belebele,
title={The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants},
author={Lucas Bandarkar and Davis Liang and Benjamin Muller and Mikel Artetxe and Satya Narayan Shukla and Donald Husa and Naman Goyal and Abhinandan Krishnan and Luke Zettlemoyer and Madian Khabsa},
year={2023},
journal={arXiv preprint arXiv:2308.16884}
}
```
## Composition
- 900 questions per language variant
- 488 distinct passages, each with 1-2 associated questions.
- For each question, there are 4 multiple-choice answers, exactly 1 of which is correct.
- 122 language/language variants (including English).
- 900 x 122 = 109,800 total questions.
## Further Stats
- 122 language variants, but 115 distinct languages (ignoring scripts)
- 27 language families
- 29 scripts
- Avg. words per passage = 79.1 (std = 26.2)
- Avg. sentences per passage = 4.1 (std = 1.4)
- Avg. words per question = 12.9 (std = 4.0)
- Avg. words per answer = 4.2 (std = 2.9)
## Plausible Evaluation Settings
Thanks to the parallel nature of the dataset and the simplicity of the task, there are many possible settings in which we can evaluate language models. In all evaluation settings, the metric of interest is simple accuracy (# correct / total).
Evaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the non-exhaustive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with `^`.
#### No finetuning
- **Zero-shot with natural language instructions (English instructions)**
- For chat-finetuned models, we give it English instructions for the task and the sample in the target language in the same input.
  - For our experiments, we instruct the model to provide the letter `A`, `B`, `C`, or `D`. We perform post-processing steps and accept answers predicted as e.g. `(A)` instead of `A`. We sometimes additionally remove the prefix `The correct answer is` for predictions that do not start with one of the four accepted answers (a normalization sketch appears after this list).
- Sample instructions can be found at the [dataset github repo](https://github.com/facebookresearch/belebele).
- **Zero-shot with natural language instructions (translated instructions)** ^
- Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.
- **Few-shot in-context learning (English examples)**
- A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.
  - For our experiments, we use the template: ```P: <passage> \n Q: <question> \n A: <mc answer 1> \n B: <mc answer 2> \n C: <mc answer 3> \n D: <mc answer 4> \n Answer: <Correct answer letter>```. We perform prediction by picking the answer within `[A, B, C, D]` that has the highest probability relative to the others (see the prediction sketch after this list).
- **Few-shot in-context learning (translated examples)** ^
- Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.
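As referenced above, here is a minimal sketch of the two no-finetuning prediction procedures. Everything in it (function names, the exact regular expression, the `lm_logprob` callable) is illustrative rather than taken from the paper, and it assumes a language model from which you can score a one-letter continuation:

```python
import re

CHOICES = ["A", "B", "C", "D"]

def normalize_reply(reply):
    """Zero-shot post-processing: map a free-form chat reply to a letter.

    Accepts e.g. '(A)' for 'A' and strips a leading 'The correct answer is',
    mirroring the steps described above (illustrative regex, not the paper's).
    """
    reply = re.sub(r"^\s*The correct answer is\s*", "", reply, flags=re.I)
    m = re.match(r"\s*\(?([ABCD])\b\)?", reply)
    return m.group(1) if m else None

def build_prompt(shots, passage, question, answers):
    """Few-shot prompting with the P/Q/A/B/C/D template shown above."""
    def block(p, q, ans, letter=""):
        a, b, c, d = ans
        return f"P: {p}\nQ: {q}\nA: {a}\nB: {b}\nC: {c}\nD: {d}\nAnswer: {letter}"
    parts = [block(*shot) for shot in shots]         # shot = (passage, question, answers, letter)
    parts.append(block(passage, question, answers))  # target sample, answer left blank
    return "\n\n".join(parts)

def predict(lm_logprob, prompt):
    """Pick the letter scoring highest as the continuation of the prompt."""
    return max(CHOICES, key=lambda c: lm_logprob(prompt, c))
```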
#### With finetuning
- **English finetune & multilingual evaluation**
  - The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used [the HuggingFace library](https://huggingface.co/docs/transformers/en/model_doc/xlm-roberta#transformers.XLMRobertaForMultipleChoice); a minimal sketch follows this list.
- **English finetune & cross-lingual evaluation**
- Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, passage could be in language `x`, question in language `y`, and answers in language `z`.
- **Translate-train** ^
- For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.
- **Translate-train-all**
- Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.
- **Translate-train-all & cross-lingual evaluation**
- Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.
- **Translate-test**
  - The model is finetuned using the English training data, and then the evaluation dataset is machine-translated to English and evaluated in English.
- This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.
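A minimal sketch of the English-finetune path described above, using the `XLMRobertaForMultipleChoice` head the card links to (the checkpoint, toy inputs, and preprocessing here are illustrative; the paper's exact hyper-parameters are not reproduced):

```python
import torch
from transformers import AutoTokenizer, XLMRobertaForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForMultipleChoice.from_pretrained("xlm-roberta-base")

def encode(passage, question, answers):
    # One (passage+question, answer) pair per candidate; the model scores
    # the four candidates jointly.
    first = [f"{passage} {question}"] * len(answers)
    enc = tokenizer(first, answers, padding=True, truncation=True,
                    return_tensors="pt")
    return {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, 4, seq_len)

batch = encode("Some passage ...", "Some question?",
               ["answer A", "answer B", "answer C", "answer D"])
with torch.no_grad():
    logits = model(**batch).logits  # shape (1, 4), one score per candidate
pred = logits.argmax(-1).item()     # index of the predicted answer
```

After finetuning on the English training set with the gold answer index as the label, the same forward pass is run on each target language's split.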
In addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.
## Training Set
As discussed in the paper, we also provide an assembled training set consisting of samples at the [github repo](https://github.com/facebookresearch/belebele).
The Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determined the most compatible to be [RACE](https://www.cs.cmu.edu/~glai1/data/race/), [SciQ](https://allenai.org/data/sciq), [MultiRC](https://cogcomp.seas.upenn.edu/multirc/), [MCTest](https://mattr1.github.io/mctest/), [MCScript2.0](https://aclanthology.org/S19-1012/), and [ReClor](https://whyu.me/reclor/).
For each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script (`assemble_training_set.py`) to reconstruct this dataset for anyone to perform task finetuning.
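As an illustration only, a toy filter in the spirit of that description (this is our own sketch, not the logic of the released `assemble_training_set.py`):

```python
def keep_sample(question, answers, correct_indices):
    """Hypothetical filter: keep samples compatible with Belebele's format."""
    return (
        len(correct_indices) == 1            # drop questions with multiple gold answers
        and len(answers) == 4                # four-way multiple choice, as in Belebele
        and all(a.strip() for a in answers)  # no empty answer options
        and question.strip() != ""
    )
```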
Since the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of https://github.com/facebookresearch/belebele.
## Languages in Belebele
FLORES-200 Code | English Name | Script | Family
---|---|---|---
acm_Arab | Mesopotamian Arabic | Arab | Afro-Asiatic
afr_Latn | Afrikaans | Latn | Germanic
als_Latn | Tosk Albanian | Latn | Paleo-Balkanic
amh_Ethi | Amharic | Ethi | Afro-Asiatic
apc_Arab | North Levantine Arabic | Arab | Afro-Asiatic
arb_Arab | Modern Standard Arabic | Arab | Afro-Asiatic
arb_Latn | Modern Standard Arabic (Romanized) | Latn | Afro-Asiatic
ars_Arab | Najdi Arabic | Arab | Afro-Asiatic
ary_Arab | Moroccan Arabic | Arab | Afro-Asiatic
arz_Arab | Egyptian Arabic | Arab | Afro-Asiatic
asm_Beng | Assamese | Beng | Indo-Aryan
azj_Latn | North Azerbaijani | Latn | Turkic
bam_Latn | Bambara | Latn | Mande
ben_Beng | Bengali | Beng | Indo-Aryan
ben_Latn | Bengali (Romanized) | Latn | Indo-Aryan
bod_Tibt | Standard Tibetan | Tibt | Sino-Tibetan
bul_Cyrl | Bulgarian | Cyrl | Balto-Slavic
cat_Latn | Catalan | Latn | Romance
ceb_Latn | Cebuano | Latn | Austronesian
ces_Latn | Czech | Latn | Balto-Slavic
ckb_Arab | Central Kurdish | Arab | Iranian
dan_Latn | Danish | Latn | Germanic
deu_Latn | German | Latn | Germanic
ell_Grek | Greek | Grek | Hellenic
eng_Latn | English | Latn | Germanic
est_Latn | Estonian | Latn | Uralic
eus_Latn | Basque | Latn | Basque
fin_Latn | Finnish | Latn | Uralic
fra_Latn | French | Latn | Romance
fuv_Latn | Nigerian Fulfulde | Latn | Atlantic-Congo
gaz_Latn | West Central Oromo | Latn | Afro-Asiatic
grn_Latn | Guarani | Latn | Tupian
guj_Gujr | Gujarati | Gujr | Indo-Aryan
hat_Latn | Haitian Creole | Latn | Atlantic-Congo
hau_Latn | Hausa | Latn | Afro-Asiatic
heb_Hebr | Hebrew | Hebr | Afro-Asiatic
hin_Deva | Hindi | Deva | Indo-Aryan
hin_Latn | Hindi (Romanized) | Latn | Indo-Aryan
hrv_Latn | Croatian | Latn | Balto-Slavic
hun_Latn | Hungarian | Latn | Uralic
hye_Armn | Armenian | Armn | Armenian
ibo_Latn | Igbo | Latn | Atlantic-Congo
ilo_Latn | Ilocano | Latn | Austronesian
ind_Latn | Indonesian | Latn | Austronesian
isl_Latn | Icelandic | Latn | Germanic
ita_Latn | Italian | Latn | Romance
jav_Latn | Javanese | Latn | Austronesian
jpn_Jpan | Japanese | Jpan | Japonic
kac_Latn | Jingpho | Latn | Sino-Tibetan
kan_Knda | Kannada | Knda | Dravidian
kat_Geor | Georgian | Geor | Kartvelian
kaz_Cyrl | Kazakh | Cyrl | Turkic
kea_Latn | Kabuverdianu | Latn | Portuguese Creole
khk_Cyrl | Halh Mongolian | Cyrl | Mongolic
khm_Khmr | Khmer | Khmr | Austroasiatic
kin_Latn | Kinyarwanda | Latn | Atlantic-Congo
kir_Cyrl | Kyrgyz | Cyrl | Turkic
kor_Hang | Korean | Hang | Koreanic
lao_Laoo | Lao | Laoo | Kra-Dai
lin_Latn | Lingala | Latn | Atlantic-Congo
lit_Latn | Lithuanian | Latn | Balto-Slavic
lug_Latn | Ganda | Latn | Atlantic-Congo
luo_Latn | Luo | Latn | Nilo-Saharan
lvs_Latn | Standard Latvian | Latn | Balto-Slavic
mal_Mlym | Malayalam | Mlym | Dravidian
mar_Deva | Marathi | Deva | Indo-Aryan
mkd_Cyrl | Macedonian | Cyrl | Balto-Slavic
mlt_Latn | Maltese | Latn | Afro-Asiatic
mri_Latn | Maori | Latn | Austronesian
mya_Mymr | Burmese | Mymr | Sino-Tibetan
nld_Latn | Dutch | Latn | Germanic
nob_Latn | Norwegian Bokmål | Latn | Germanic
npi_Deva | Nepali | Deva | Indo-Aryan
npi_Latn | Nepali (Romanized) | Latn | Indo-Aryan
nso_Latn | Northern Sotho | Latn | Atlantic-Congo
nya_Latn | Nyanja | Latn | Afro-Asiatic
ory_Orya | Odia | Orya | Indo-Aryan
pan_Guru | Eastern Panjabi | Guru | Indo-Aryan
pbt_Arab | Southern Pashto | Arab | Indo-Aryan
pes_Arab | Western Persian | Arab | Iranian
plt_Latn | Plateau Malagasy | Latn | Austronesian
pol_Latn | Polish | Latn | Balto-Slavic
por_Latn | Portuguese | Latn | Romance
ron_Latn | Romanian | Latn | Romance
rus_Cyrl | Russian | Cyrl | Balto-Slavic
shn_Mymr | Shan | Mymr | Kra-Dai
sin_Latn | Sinhala (Romanized) | Latn | Indo-Aryan
sin_Sinh | Sinhala | Sinh | Indo-Aryan
slk_Latn | Slovak | Latn | Balto-Slavic
slv_Latn | Slovenian | Latn | Balto-Slavic
sna_Latn | Shona | Latn | Atlantic-Congo
snd_Arab | Sindhi | Arab | Indo-Aryan
som_Latn | Somali | Latn | Afro-Asiatic
sot_Latn | Southern Sotho | Latn | Atlantic-Congo
spa_Latn | Spanish | Latn | Romance
srp_Cyrl | Serbian | Cyrl | Balto-Slavic
ssw_Latn | Swati | Latn | Atlantic-Congo
sun_Latn | Sundanese | Latn | Austronesian
swe_Latn | Swedish | Latn | Germanic
swh_Latn | Swahili | Latn | Atlantic-Congo
tam_Taml | Tamil | Taml | Dravidian
tel_Telu | Telugu | Telu | Dravidian
tgk_Cyrl | Tajik | Cyrl | Iranian
tgl_Latn | Tagalog | Latn | Austronesian
tha_Thai | Thai | Thai | Kra-Dai
tir_Ethi | Tigrinya | Ethi | Afro-Asiatic
tsn_Latn | Tswana | Latn | Atlantic-Congo
tso_Latn | Tsonga | Latn | Afro-Asiatic
tur_Latn | Turkish | Latn | Turkic
ukr_Cyrl | Ukrainian | Cyrl | Balto-Slavic
urd_Arab | Urdu | Arab | Indo-Aryan
urd_Latn | Urdu (Romanized) | Latn | Indo-Aryan
uzn_Latn | Northern Uzbek | Latn | Turkic
vie_Latn | Vietnamese | Latn | Austroasiatic
war_Latn | Waray | Latn | Austronesian
wol_Latn | Wolof | Latn | Atlantic-Congo
xho_Latn | Xhosa | Latn | Atlantic-Congo
yor_Latn | Yoruba | Latn | Atlantic-Congo
zho_Hans | Chinese (Simplified) | Hans | Sino-Tibetan
zho_Hant | Chinese (Traditional) | Hant | Sino-Tibetan
zsm_Latn | Standard Malay | Latn | Austronesian
zul_Latn | Zulu | Latn | Atlantic-Congo
|
facebook/belebele
|
[
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:text-classification",
"task_categories:multiple-choice",
"size_categories:100K<n<1M",
"language:af",
"language:am",
"language:ar",
"language:az",
"language:as",
"language:bm",
"language:bn",
"language:bo",
"language:bg",
"language:ca",
"language:cs",
"language:ku",
"language:da",
"language:de",
"language:el",
"language:en",
"language:es",
"language:et",
"language:eu",
"language:fi",
"language:fr",
"language:ff",
"language:om",
"language:gu",
"language:gn",
"language:ht",
"language:ha",
"language:he",
"language:hi",
"language:hr",
"language:hu",
"language:hy",
"language:ig",
"language:id",
"language:it",
"language:is",
"language:jv",
"language:ja",
"language:ka",
"language:kn",
"language:kk",
"language:mn",
"language:km",
"language:rw",
"language:ky",
"language:ko",
"language:lo",
"language:ln",
"language:lt",
"language:lg",
"language:lv",
"language:ml",
"language:mr",
"language:mk",
"language:mt",
"language:mi",
"language:my",
"language:nl",
"language:no",
"language:ne",
"language:ny",
"language:or",
"language:pa",
"language:ps",
"language:fa",
"language:mg",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sn",
"language:si",
"language:sl",
"language:sv",
"language:sk",
"language:sd",
"language:sw",
"language:ta",
"language:te",
"language:tg",
"language:tl",
"language:th",
"language:ti",
"language:tn",
"language:ts",
"language:tr",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:wo",
"language:xh",
"language:yo",
"language:zh",
"language:ms",
"language:zu",
"license:cc-by-sa-4.0",
"arxiv:2308.16884",
"region:us"
] |
2023-09-01T17:27:13+00:00
|
{"language": ["af", "am", "ar", "az", "as", "bm", "bn", "bo", "bg", "ca", "cs", "ku", "da", "de", "el", "en", "es", "et", "eu", "fi", "fr", "ff", "om", "gu", "gn", "ht", "ha", "he", "hi", "hr", "hu", "hy", "ig", "id", "it", "is", "jv", "ja", "ka", "kn", "kk", "mn", "km", "rw", "ky", "ko", "lo", "ln", "lt", "lg", "lv", "ml", "mr", "mk", "mt", "mi", "my", "nl", "no", "ne", "ny", "or", "pa", "ps", "fa", "mg", "pl", "pt", "ro", "ru", "sn", "si", "sl", "sv", "sk", "sd", "sw", "ta", "te", "tg", "tl", "th", "ti", "tn", "ts", "tr", "uk", "ur", "uz", "vi", "wo", "xh", "yo", "zh", "ms", "zu"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "zero-shot-classification", "text-classification", "multiple-choice"], "pretty_name": "Belebele", "configs": [{"config_name": "default", "data_files": [{"split": "acm_Arab", "path": "data/acm_Arab.jsonl"}, {"split": "arz_Arab", "path": "data/arz_Arab.jsonl"}, {"split": "ceb_Latn", "path": "data/ceb_Latn.jsonl"}, {"split": "fin_Latn", "path": "data/fin_Latn.jsonl"}, {"split": "hin_Deva", "path": "data/hin_Deva.jsonl"}, {"split": "ita_Latn", "path": "data/ita_Latn.jsonl"}, {"split": "khm_Khmr", "path": "data/khm_Khmr.jsonl"}, {"split": "lvs_Latn", "path": "data/lvs_Latn.jsonl"}, {"split": "npi_Deva", "path": "data/npi_Deva.jsonl"}, {"split": "pol_Latn", "path": "data/pol_Latn.jsonl"}, {"split": "slv_Latn", "path": "data/slv_Latn.jsonl"}, {"split": "swe_Latn", "path": "data/swe_Latn.jsonl"}, {"split": "tso_Latn", "path": "data/tso_Latn.jsonl"}, {"split": "xho_Latn", "path": "data/xho_Latn.jsonl"}, {"split": "afr_Latn", "path": "data/afr_Latn.jsonl"}, {"split": "asm_Beng", "path": "data/asm_Beng.jsonl"}, {"split": "ces_Latn", "path": "data/ces_Latn.jsonl"}, {"split": "fra_Latn", "path": "data/fra_Latn.jsonl"}, {"split": "hin_Latn", "path": "data/hin_Latn.jsonl"}, {"split": "jav_Latn", "path": "data/jav_Latn.jsonl"}, {"split": "kin_Latn", "path": "data/kin_Latn.jsonl"}, {"split": "mal_Mlym", "path": "data/mal_Mlym.jsonl"}, {"split": "npi_Latn", "path": "data/npi_Latn.jsonl"}, {"split": "por_Latn", "path": "data/por_Latn.jsonl"}, {"split": "sna_Latn", "path": "data/sna_Latn.jsonl"}, {"split": "swh_Latn", "path": "data/swh_Latn.jsonl"}, {"split": "tur_Latn", "path": "data/tur_Latn.jsonl"}, {"split": "yor_Latn", "path": "data/yor_Latn.jsonl"}, {"split": "als_Latn", "path": "data/als_Latn.jsonl"}, {"split": "azj_Latn", "path": "data/azj_Latn.jsonl"}, {"split": "ckb_Arab", "path": "data/ckb_Arab.jsonl"}, {"split": "fuv_Latn", "path": "data/fuv_Latn.jsonl"}, {"split": "hrv_Latn", "path": "data/hrv_Latn.jsonl"}, {"split": "jpn_Jpan", "path": "data/jpn_Jpan.jsonl"}, {"split": "kir_Cyrl", "path": "data/kir_Cyrl.jsonl"}, {"split": "mar_Deva", "path": "data/mar_Deva.jsonl"}, {"split": "nso_Latn", "path": "data/nso_Latn.jsonl"}, {"split": "snd_Arab", "path": "data/snd_Arab.jsonl"}, {"split": "tam_Taml", "path": "data/tam_Taml.jsonl"}, {"split": "ukr_Cyrl", "path": "data/ukr_Cyrl.jsonl"}, {"split": "zho_Hans", "path": "data/zho_Hans.jsonl"}, {"split": "amh_Ethi", "path": "data/amh_Ethi.jsonl"}, {"split": "bam_Latn", "path": "data/bam_Latn.jsonl"}, {"split": "dan_Latn", "path": "data/dan_Latn.jsonl"}, {"split": "gaz_Latn", "path": "data/gaz_Latn.jsonl"}, {"split": "hun_Latn", "path": "data/hun_Latn.jsonl"}, {"split": "kac_Latn", "path": "data/kac_Latn.jsonl"}, {"split": "kor_Hang", "path": "data/kor_Hang.jsonl"}, {"split": "mkd_Cyrl", "path": "data/mkd_Cyrl.jsonl"}, {"split": "nya_Latn", "path": "data/nya_Latn.jsonl"}, 
{"split": "ron_Latn", "path": "data/ron_Latn.jsonl"}, {"split": "som_Latn", "path": "data/som_Latn.jsonl"}, {"split": "tel_Telu", "path": "data/tel_Telu.jsonl"}, {"split": "urd_Arab", "path": "data/urd_Arab.jsonl"}, {"split": "zho_Hant", "path": "data/zho_Hant.jsonl"}, {"split": "apc_Arab", "path": "data/apc_Arab.jsonl"}, {"split": "ben_Beng", "path": "data/ben_Beng.jsonl"}, {"split": "deu_Latn", "path": "data/deu_Latn.jsonl"}, {"split": "grn_Latn", "path": "data/grn_Latn.jsonl"}, {"split": "hye_Armn", "path": "data/hye_Armn.jsonl"}, {"split": "kan_Knda", "path": "data/kan_Knda.jsonl"}, {"split": "lao_Laoo", "path": "data/lao_Laoo.jsonl"}, {"split": "mlt_Latn", "path": "data/mlt_Latn.jsonl"}, {"split": "ory_Orya", "path": "data/ory_Orya.jsonl"}, {"split": "rus_Cyrl", "path": "data/rus_Cyrl.jsonl"}, {"split": "sot_Latn", "path": "data/sot_Latn.jsonl"}, {"split": "tgk_Cyrl", "path": "data/tgk_Cyrl.jsonl"}, {"split": "urd_Latn", "path": "data/urd_Latn.jsonl"}, {"split": "zsm_Latn", "path": "data/zsm_Latn.jsonl"}, {"split": "arb_Arab", "path": "data/arb_Arab.jsonl"}, {"split": "ben_Latn", "path": "data/ben_Latn.jsonl"}, {"split": "ell_Grek", "path": "data/ell_Grek.jsonl"}, {"split": "guj_Gujr", "path": "data/guj_Gujr.jsonl"}, {"split": "ibo_Latn", "path": "data/ibo_Latn.jsonl"}, {"split": "kat_Geor", "path": "data/kat_Geor.jsonl"}, {"split": "lin_Latn", "path": "data/lin_Latn.jsonl"}, {"split": "mri_Latn", "path": "data/mri_Latn.jsonl"}, {"split": "pan_Guru", "path": "data/pan_Guru.jsonl"}, {"split": "shn_Mymr", "path": "data/shn_Mymr.jsonl"}, {"split": "spa_Latn", "path": "data/spa_Latn.jsonl"}, {"split": "tgl_Latn", "path": "data/tgl_Latn.jsonl"}, {"split": "uzn_Latn", "path": "data/uzn_Latn.jsonl"}, {"split": "zul_Latn", "path": "data/zul_Latn.jsonl"}, {"split": "arb_Latn", "path": "data/arb_Latn.jsonl"}, {"split": "bod_Tibt", "path": "data/bod_Tibt.jsonl"}, {"split": "eng_Latn", "path": "data/eng_Latn.jsonl"}, {"split": "hat_Latn", "path": "data/hat_Latn.jsonl"}, {"split": "ilo_Latn", "path": "data/ilo_Latn.jsonl"}, {"split": "kaz_Cyrl", "path": "data/kaz_Cyrl.jsonl"}, {"split": "lit_Latn", "path": "data/lit_Latn.jsonl"}, {"split": "mya_Mymr", "path": "data/mya_Mymr.jsonl"}, {"split": "pbt_Arab", "path": "data/pbt_Arab.jsonl"}, {"split": "sin_Latn", "path": "data/sin_Latn.jsonl"}, {"split": "srp_Cyrl", "path": "data/srp_Cyrl.jsonl"}, {"split": "tha_Thai", "path": "data/tha_Thai.jsonl"}, {"split": "vie_Latn", "path": "data/vie_Latn.jsonl"}, {"split": "ars_Arab", "path": "data/ars_Arab.jsonl"}, {"split": "bul_Cyrl", "path": "data/bul_Cyrl.jsonl"}, {"split": "est_Latn", "path": "data/est_Latn.jsonl"}, {"split": "hau_Latn", "path": "data/hau_Latn.jsonl"}, {"split": "ind_Latn", "path": "data/ind_Latn.jsonl"}, {"split": "kea_Latn", "path": "data/kea_Latn.jsonl"}, {"split": "lug_Latn", "path": "data/lug_Latn.jsonl"}, {"split": "nld_Latn", "path": "data/nld_Latn.jsonl"}, {"split": "pes_Arab", "path": "data/pes_Arab.jsonl"}, {"split": "sin_Sinh", "path": "data/sin_Sinh.jsonl"}, {"split": "ssw_Latn", "path": "data/ssw_Latn.jsonl"}, {"split": "tir_Ethi", "path": "data/tir_Ethi.jsonl"}, {"split": "war_Latn", "path": "data/war_Latn.jsonl"}, {"split": "ary_Arab", "path": "data/ary_Arab.jsonl"}, {"split": "cat_Latn", "path": "data/cat_Latn.jsonl"}, {"split": "eus_Latn", "path": "data/eus_Latn.jsonl"}, {"split": "heb_Hebr", "path": "data/heb_Hebr.jsonl"}, {"split": "isl_Latn", "path": "data/isl_Latn.jsonl"}, {"split": "khk_Cyrl", "path": "data/khk_Cyrl.jsonl"}, {"split": "luo_Latn", "path": 
"data/luo_Latn.jsonl"}, {"split": "nob_Latn", "path": "data/nob_Latn.jsonl"}, {"split": "plt_Latn", "path": "data/plt_Latn.jsonl"}, {"split": "slk_Latn", "path": "data/slk_Latn.jsonl"}, {"split": "sun_Latn", "path": "data/sun_Latn.jsonl"}, {"split": "tsn_Latn", "path": "data/tsn_Latn.jsonl"}, {"split": "wol_Latn", "path": "data/wol_Latn.jsonl"}]}]}
|
2023-11-15T02:31:17+00:00
|
[
"2308.16884"
] |
[
"af",
"am",
"ar",
"az",
"as",
"bm",
"bn",
"bo",
"bg",
"ca",
"cs",
"ku",
"da",
"de",
"el",
"en",
"es",
"et",
"eu",
"fi",
"fr",
"ff",
"om",
"gu",
"gn",
"ht",
"ha",
"he",
"hi",
"hr",
"hu",
"hy",
"ig",
"id",
"it",
"is",
"jv",
"ja",
"ka",
"kn",
"kk",
"mn",
"km",
"rw",
"ky",
"ko",
"lo",
"ln",
"lt",
"lg",
"lv",
"ml",
"mr",
"mk",
"mt",
"mi",
"my",
"nl",
"no",
"ne",
"ny",
"or",
"pa",
"ps",
"fa",
"mg",
"pl",
"pt",
"ro",
"ru",
"sn",
"si",
"sl",
"sv",
"sk",
"sd",
"sw",
"ta",
"te",
"tg",
"tl",
"th",
"ti",
"tn",
"ts",
"tr",
"uk",
"ur",
"uz",
"vi",
"wo",
"xh",
"yo",
"zh",
"ms",
"zu"
] |
TAGS
#task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text-classification #task_categories-multiple-choice #size_categories-100K<n<1M #language-Afrikaans #language-Amharic #language-Arabic #language-Azerbaijani #language-Assamese #language-Bambara #language-Bengali #language-Tibetan #language-Bulgarian #language-Catalan #language-Czech #language-Kurdish #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Basque #language-Finnish #language-French #language-Fulah #language-Oromo #language-Gujarati #language-Guarani #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Indonesian #language-Italian #language-Icelandic #language-Javanese #language-Japanese #language-Georgian #language-Kannada #language-Kazakh #language-Mongolian #language-Khmer #language-Kinyarwanda #language-Kirghiz #language-Korean #language-Lao #language-Lingala #language-Lithuanian #language-Ganda #language-Latvian #language-Malayalam #language-Marathi #language-Macedonian #language-Maltese #language-Maori #language-Burmese #language-Dutch #language-Norwegian #language-Nepali (macrolanguage) #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Pushto #language-Persian #language-Malagasy #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Shona #language-Sinhala #language-Slovenian #language-Swedish #language-Slovak #language-Sindhi #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tswana #language-Tsonga #language-Turkish #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Malay (macrolanguage) #language-Zulu #license-cc-by-sa-4.0 #arxiv-2308.16884 #region-us
|
The Belebele Benchmark for Massively Multilingual NLU Evaluation
================================================================
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and low-resource languages. Each question has four multiple-choice answers and is linked to a short passage from the FLORES-200 dataset. The human annotation procedure was carefully curated to create questions that discriminate between different levels of generalizable language comprehension and is reinforced by extensive quality checks. While all questions directly relate to the passage, the English dataset on its own proves difficult enough to challenge state-of-the-art language models. Being fully parallel, this dataset enables direct comparison of model performance across all languages. Belebele opens up new avenues for evaluating and analyzing the multilingual abilities of language models and NLP systems.
Please refer to our paper for more details, The Belebele Benchmark: a Parallel Reading Comprehension Dataset in 122 Language Variants.
Or get more details at URL
If you use this data in your work, please cite:
Composition
-----------
* 900 questions per language variant
* 488 distinct passages, each with 1-2 associated questions.
* For each question, there are 4 multiple-choice answers, exactly 1 of which is correct.
* 122 language/language variants (including English).
* 900 x 122 = 109,800 total questions.
Further Stats
-------------
* 122 language variants, but 115 distinct languages (ignoring scripts)
* 27 language families
* 29 scripts
* Avg. words per passage = 79.1 (std = 26.2)
* Avg. sentences per passage = 4.1 (std = 1.4)
* Avg. words per question = 12.9 (std = 4.0)
* Avg. words per answer = 4.2 (std = 2.9)
Plausible Evaluation Settings
-----------------------------
Thanks to the parallel nature of the dataset and the simplicity of the task, there are many possible settings in which we can evaluate language models. In all evaluation settings, the metric of interest is simple accuracy (# correct / total).
Evaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the non-exhaustive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with '^'.
#### No finetuning
* Zero-shot with natural language instructions (English instructions)
+ For chat-finetuned models, we give it English instructions for the task and the sample in the target language in the same input.
+ For our experiments, we instruct the model to provide the letter 'A', 'B', 'C', or 'D'. We perform post-processing steps and accept answers predicted as e.g. '(A)' instead of 'A'. We sometimes additionally remove the prefix 'The correct answer is' for predictions that do not start with one of the four accepted answers.
+ Sample instructions can be found at the dataset github repo.
* Zero-shot with natural language instructions (translated instructions) ^
+ Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.
* Few-shot in-context learning (English examples)
+ A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.
+ For our experiments, we use the P/Q/A/B/C/D template shown earlier in this card. We perform prediction by picking the answer within '[A, B, C, D]' that has the highest probability relative to the others.
* Few-shot in-context learning (translated examples) ^
+ Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.
#### With finetuning
* English finetune & multilingual evaluation
+ The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used the HuggingFace library.
* English finetune & cross-lingual evaluation
+ Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, passage could be in language 'x', question in language 'y', and answers in language 'z'.
* Translate-train ^
+ For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.
* Translate-train-all
+ Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.
* Translate-train-all & cross-lingual evaluation
+ Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.
* Translate-test
+ The model is finetuned using the English training data, and then the evaluation dataset is machine-translated to English and evaluated in English.
+ This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.
In addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.
Training Set
------------
As discussed in the paper, we also provide an assembled training set consisting of samples at the github repo.
The Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determined the most compatible to be RACE, SciQ, MultiRC, MCTest, MCScript2.0, and ReClor.
For each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script ('assemble\_training\_set.py') to reconstruct this dataset for anyone to perform task finetuning.
Since the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of URL.
Languages in Belebele
---------------------
|
[
"# correct / total).\n\n\nEvaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the incomprehensive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with '^'.",
"#### No finetuning\n\n\n* Zero-shot with natural language instructions (English instructions)\n\t+ For chat-finetuned models, we give it English instructions for the task and the sample in the target language in the same input.\n\t+ For our experiments, we instruct the model to provide the letter 'A', 'B', 'C', or 'D'. We perform post-processing steps and accept answers predicted as e.g. '(A)' instead of 'A'. We sometimes additionally remove the prefix 'The correct answer is' for predictions that do not start with one of the four accepted answers.\n\t+ Sample instructions can be found at the dataset github repo.\n* Zero-shot with natural language instructions (translated instructions) ^\n\t+ Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.\n* Few-shot in-context learning (English examples)\n\t+ A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.\n\t+ For our experiments, we use the template: . We perform prediction by picking the answer within '[A, B, C, D]' that has the highest probability relatively to the others.\n* Few-shot in-context learning (translated examples) ^\n\t+ Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.",
"#### With finetuning\n\n\n* English finetune & multilingual evaluation\n\t+ The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used the HuggingFace library.\n* English finetune & cross-lingual evaluation\n\t+ Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, passage could be in language 'x', question in language 'y', and answers in language 'z'.\n* Translate-train ^\n\t+ For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.\n* Translate-train-all\n\t+ Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.\n* Translate-train-all & cross-lingual evaluation\n\t+ Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.\n* Translate-test\n\t+ The model is finetuned using the English training data and then the evaluation dataset is machine-translated to English and evaluated on the English.\n\t+ This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.\n\n\nIn addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.\n\n\nTraining Set\n------------\n\n\nAs discussed in the paper, we also provide an assembled training set consisting of samples at the github repo.\n\n\nThe Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determine the most compatible to be RACE, SciQ, MultiRC, MCTest, MCScript2.0, and ReClor.\n\n\nFor each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script ('assemble\\_training\\_set.py') to reconstruct this dataset for anyone to perform task finetuning.\n\n\nSince the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of URL .\n\n\nLanguages in Belebele\n---------------------"
] |
[
"TAGS\n#task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text-classification #task_categories-multiple-choice #size_categories-100K<n<1M #language-Afrikaans #language-Amharic #language-Arabic #language-Azerbaijani #language-Assamese #language-Bambara #language-Bengali #language-Tibetan #language-Bulgarian #language-Catalan #language-Czech #language-Kurdish #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Basque #language-Finnish #language-French #language-Fulah #language-Oromo #language-Gujarati #language-Guarani #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Indonesian #language-Italian #language-Icelandic #language-Javanese #language-Japanese #language-Georgian #language-Kannada #language-Kazakh #language-Mongolian #language-Khmer #language-Kinyarwanda #language-Kirghiz #language-Korean #language-Lao #language-Lingala #language-Lithuanian #language-Ganda #language-Latvian #language-Malayalam #language-Marathi #language-Macedonian #language-Maltese #language-Maori #language-Burmese #language-Dutch #language-Norwegian #language-Nepali (macrolanguage) #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Pushto #language-Persian #language-Malagasy #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Shona #language-Sinhala #language-Slovenian #language-Swedish #language-Slovak #language-Sindhi #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tswana #language-Tsonga #language-Turkish #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Malay (macrolanguage) #language-Zulu #license-cc-by-sa-4.0 #arxiv-2308.16884 #region-us \n",
"# correct / total).\n\n\nEvaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the incomprehensive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with '^'.",
"#### No finetuning\n\n\n* Zero-shot with natural language instructions (English instructions)\n\t+ For chat-finetuned models, we give it English instructions for the task and the sample in the target language in the same input.\n\t+ For our experiments, we instruct the model to provide the letter 'A', 'B', 'C', or 'D'. We perform post-processing steps and accept answers predicted as e.g. '(A)' instead of 'A'. We sometimes additionally remove the prefix 'The correct answer is' for predictions that do not start with one of the four accepted answers.\n\t+ Sample instructions can be found at the dataset github repo.\n* Zero-shot with natural language instructions (translated instructions) ^\n\t+ Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.\n* Few-shot in-context learning (English examples)\n\t+ A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.\n\t+ For our experiments, we use the template: . We perform prediction by picking the answer within '[A, B, C, D]' that has the highest probability relatively to the others.\n* Few-shot in-context learning (translated examples) ^\n\t+ Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated.",
"#### With finetuning\n\n\n* English finetune & multilingual evaluation\n\t+ The model is finetuned to the task using the English training set, probably with a sequence classification head. Then the model is evaluated in all the target languages individually. For results presented in the paper we used the HuggingFace library.\n* English finetune & cross-lingual evaluation\n\t+ Same as above, except the model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language. For example, passage could be in language 'x', question in language 'y', and answers in language 'z'.\n* Translate-train ^\n\t+ For each target language, the model is individually finetuned on training samples that have been machine-translated from English to that language. Each model is then evaluated in the respective target language.\n* Translate-train-all\n\t+ Similar to above, except here the model is trained on translated samples from all target languages at once. The single finetuned model is then evaluated on all target languages.\n* Translate-train-all & cross-lingual evaluation\n\t+ Same as above, except the single finetuned model is evaluated in a cross-lingual setting, where for each question, the passage & answers could be provided in a different language.\n* Translate-test\n\t+ The model is finetuned using the English training data and then the evaluation dataset is machine-translated to English and evaluated on the English.\n\t+ This setting is primarily a reflection of the quality of the machine translation system, but is useful for comparison to multilingual models.\n\n\nIn addition, there are 83 additional languages in FLORES-200 for which questions were not translated for Belebele. Since the passages exist in those target languages, machine-translating the questions & answers may enable decent evaluation of machine reading comprehension in those languages.\n\n\nTraining Set\n------------\n\n\nAs discussed in the paper, we also provide an assembled training set consisting of samples at the github repo.\n\n\nThe Belebele dataset is intended to be used only as a test set, and not for training or validation. Therefore, for models that require additional task-specific training, we instead propose using an assembled training set consisting of samples from pre-existing multiple-choice QA datasets in English. We considered diverse datasets, and determine the most compatible to be RACE, SciQ, MultiRC, MCTest, MCScript2.0, and ReClor.\n\n\nFor each of the six datasets, we unpack and restructure the passages and questions from their respective formats. We then filter out less suitable samples (e.g. questions with multiple correct answers). In the end, the dataset comprises 67.5k training samples and 3.7k development samples, more than half of which are from RACE. We provide a script ('assemble\\_training\\_set.py') to reconstruct this dataset for anyone to perform task finetuning.\n\n\nSince the training set is a joint sample of other datasets, it is governed by a different license. We do not claim any of that work or datasets to be our own. See the Licenses section in the README of URL .\n\n\nLanguages in Belebele\n---------------------"
] |
[
639,
81,
393,
740
] |
[
"passage: ",
"passage: TAGS\n#task_categories-question-answering #task_categories-zero-shot-classification #task_categories-text-classification #task_categories-multiple-choice #size_categories-100K<n<1M #language-Afrikaans #language-Amharic #language-Arabic #language-Azerbaijani #language-Assamese #language-Bambara #language-Bengali #language-Tibetan #language-Bulgarian #language-Catalan #language-Czech #language-Kurdish #language-Danish #language-German #language-Modern Greek (1453-) #language-English #language-Spanish #language-Estonian #language-Basque #language-Finnish #language-French #language-Fulah #language-Oromo #language-Gujarati #language-Guarani #language-Haitian #language-Hausa #language-Hebrew #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Igbo #language-Indonesian #language-Italian #language-Icelandic #language-Javanese #language-Japanese #language-Georgian #language-Kannada #language-Kazakh #language-Mongolian #language-Khmer #language-Kinyarwanda #language-Kirghiz #language-Korean #language-Lao #language-Lingala #language-Lithuanian #language-Ganda #language-Latvian #language-Malayalam #language-Marathi #language-Macedonian #language-Maltese #language-Maori #language-Burmese #language-Dutch #language-Norwegian #language-Nepali (macrolanguage) #language-Nyanja #language-Oriya (macrolanguage) #language-Panjabi #language-Pushto #language-Persian #language-Malagasy #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Shona #language-Sinhala #language-Slovenian #language-Swedish #language-Slovak #language-Sindhi #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Tajik #language-Tagalog #language-Thai #language-Tigrinya #language-Tswana #language-Tsonga #language-Turkish #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Wolof #language-Xhosa #language-Yoruba #language-Chinese #language-Malay (macrolanguage) #language-Zulu #license-cc-by-sa-4.0 #arxiv-2308.16884 #region-us \n# correct / total).\n\n\nEvaluating models on Belebele in English can be done via finetuning, few-shot, or zero-shot. For other target languages, we propose the incomprehensive list of evaluation settings below. Settings that are compatible with evaluating non-English models (monolingual or cross-lingual) are denoted with '^'.#### No finetuning\n\n\n* Zero-shot with natural language instructions (English instructions)\n\t+ For chat-finetuned models, we give it English instructions for the task and the sample in the target language in the same input.\n\t+ For our experiments, we instruct the model to provide the letter 'A', 'B', 'C', or 'D'. We perform post-processing steps and accept answers predicted as e.g. '(A)' instead of 'A'. We sometimes additionally remove the prefix 'The correct answer is' for predictions that do not start with one of the four accepted answers.\n\t+ Sample instructions can be found at the dataset github repo.\n* Zero-shot with natural language instructions (translated instructions) ^\n\t+ Same as above, except the instructions are translated to the target language so that the instructions and samples are in the same language. The instructions can be human or machine-translated.\n* Few-shot in-context learning (English examples)\n\t+ A few samples (e.g. 5) are taken from the English training set (see below) and prompted to the model. 
Then, the model is evaluated with the same template but with the passages, questions, and answers in the target language.\n\t+ For our experiments, we use the template: . We perform prediction by picking the answer within '[A, B, C, D]' that has the highest probability relatively to the others.\n* Few-shot in-context learning (translated examples) ^\n\t+ Same as above, except the samples from the training set are translated to the target language so that the examples and evaluation data are in the same language. The training samples can be human or machine-translated."
] |
882c48b82ae720e1f4ffa5a9da6616c08a366914
|
# Goodtriever
This repository contains datasets and model generations from the `Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models` paper, published as a conference paper at EMNLP 2023.
[[Paper]]()[[Code]]()[[Data]](https://huggingface.co/datasets/luizapzbn/goodtriever-data)
- `data`: prompts and datasets used for datastore creation.
- `continual_mitigation`: clustered WILDS data and prompts
- `datastore_quality`: for the experiments on how automatic labeling impacts mitigation results
- `jigsaw`: main dataset, jigsaw unintended bias
- `outputs`: model generations and results for all experiments from the paper.
- `alpha_temperature`
- `datastore_quality`
- `datastore_size`
- `k_neighbors`
- `model_families` (and main table results)
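For convenience, the repository's files can be pulled locally with the `huggingface_hub` library. The snippet below is a minimal sketch of that, not part of the paper's released code:
```python
from huggingface_hub import snapshot_download

# Download the full dataset repo (prompts, datastore data, and model
# generations) into the local cache and return the path to it.
local_dir = snapshot_download(
    repo_id="luizapzbn/goodtriever-data",
    repo_type="dataset",
)
print(local_dir)
```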
# Citation
|
luizapzbn/goodtriever-data
|
[
"license:apache-2.0",
"region:us"
] |
2023-09-01T17:27:28+00:00
|
{"license": "apache-2.0"}
|
2023-09-27T09:20:50+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
# Goodtriever
This repository contains datasets and model generations from the 'Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models' paper, published as a conference paper at EMNLP 2023.
[[Paper]]()[[Code]]()[[Data]](URL
- 'data': prompts and datasets used for datastore creation.
- 'continual_mitigation': clustered WILDS data and prompts
- 'datastore_quality': for the experiments on how automatic labeling impacts mitigation results
- 'jigsaw': main dataset, jigsaw unintended bias
- 'outputs': model generations and results for all experiments from the paper.
- 'alpha_temperature'
- 'datastore_quality'
- 'datastore_size'
- 'k_neighbors'
- 'model_families' (and main table results)
|
[
"# Goodtriever\n\nThis repository contains datasets and model generations from the 'Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models' paper, published as a conference paper on EMNLP 2023.\n\n[[Paper]]()[[Code]]()[[Data]](URL\n\n- 'data': prompts and datasets used for datastore creation.\n - 'continual_mitigation': clustered WILDS data and prompts\n - 'datastore_quality': for the experiments on how automatic labeling impacts mitigation results\n - 'jigsaw': main dataset, jigsaw unintended bias\n- 'outputs': model generations and results for all experiments from the paper.\n - 'alpha_temperature'\n - 'datastore_quality'\n - 'datastore_size'\n - 'k_neighbors'\n - 'model_families' (and main table results)"
] |
[
"TAGS\n#license-apache-2.0 #region-us \n",
"# Goodtriever\n\nThis repository contains datasets and model generations from the 'Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models' paper, published as a conference paper on EMNLP 2023.\n\n[[Paper]]()[[Code]]()[[Data]](URL\n\n- 'data': prompts and datasets used for datastore creation.\n - 'continual_mitigation': clustered WILDS data and prompts\n - 'datastore_quality': for the experiments on how automatic labeling impacts mitigation results\n - 'jigsaw': main dataset, jigsaw unintended bias\n- 'outputs': model generations and results for all experiments from the paper.\n - 'alpha_temperature'\n - 'datastore_quality'\n - 'datastore_size'\n - 'k_neighbors'\n - 'model_families' (and main table results)"
] |
[
14,
214
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n# Goodtriever\n\nThis repository contains datasets and model generations from the 'Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models' paper, published as a conference paper on EMNLP 2023.\n\n[[Paper]]()[[Code]]()[[Data]](URL\n\n- 'data': prompts and datasets used for datastore creation.\n - 'continual_mitigation': clustered WILDS data and prompts\n - 'datastore_quality': for the experiments on how automatic labeling impacts mitigation results\n - 'jigsaw': main dataset, jigsaw unintended bias\n- 'outputs': model generations and results for all experiments from the paper.\n - 'alpha_temperature'\n - 'datastore_quality'\n - 'datastore_size'\n - 'k_neighbors'\n - 'model_families' (and main table results)"
] |
5f3eff71f0ca03180d3a38b3d99397b8d1dc51d0
|
# Summary
`aya-telugu-poems` is an open-source dataset of instruct-style records generated by webscraping a Telugu poems website. It was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Apache 2.0](https://opensource.org/license/apache-2-0) License.
Supported Tasks:
- Training LLMs
- Synthetic Data Generation
- Data Augmentation
Languages: Telugu
Version: 1.0
# Dataset Overview
`aya-telugu-poems` is a corpus of more than 5k records generated by webscraping the Telugu poetry website. This dataset can be used for the following three tasks:
- Given the poem and type of poetry, explain the meaning of the poem.
- Given the meaning and the type of poetry, generate the corresponding poem.
- Given the partial poem and type of poetry, generate the rest of the poem.
# Intended Uses
While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completions could be submitted as few-shot examples to a large open language model to generate additional poems and their explanations.
# Dataset
## Load with Datasets
To load this dataset with Datasets, install the library with `pip install datasets --upgrade` and then use the following code:
```python
from datasets import load_dataset
ds = load_dataset('SuryaKrishna02/aya-telugu-poems')
```
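Once loaded, individual records can be inspected directly. The following sketch assumes the default `train` split and uses the field names documented in the Data Fields section below:
```python
# Print one prompt-completion pair from the (assumed) train split.
sample = ds['train'][0]
print(sample['inputs'])        # prompt given to the language model
print(sample['targets'])       # expected completion
print(sample['template_id'], sample['template_lang'])
```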
## Purpose of Collection
Telugu is a low-resource language for which, to the best of my knowledge, no poetry instruct-style dataset exists. This dataset was created as part of the [Aya Open Science Initiative](https://sites.google.com/cohere.com/aya-en/home) from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
## Sources
- **Telugu Padyaluu Website**: Performed webscraping from the [Telugu Padyaluu Website](https://telugu-padyaalu1.blogspot.com/), which consists of the following 11 types of poetry:
1. వేమన శతకం
2. శ్రీ కాళహస్తీశ్వర శతకం
3. భాస్కర శతకం
4. దాశరథి శతకం
5. కృష్ణ శతకం
6. సుమతీ శతకం
7. భర్తృహరి సుభాషితాలు
8. కుమార శతకం
9. నరసింహ శతకం
10. కుమారీ శతకం
11. పోతన పద్యాలు
- Next, performed pre-processing steps on the data, such as removing unwanted characters and filtering out near-duplicate poems by computing similarity scores over the scraped data.
- Finally, converted the scraped data into Instruct-style prompts and completions.
## Data Fields
- `inputs` : Prompt or input to the language model.
- `targets` : Completion or output of the language model.
- `template_id` : Id of the template used in `inputs` and `targets`.
- `template_lang`: ISO code of the language used in the `inputs` and `targets` where *tel* refers to Telugu.
## Templates
For the creation of instruct-style prompts and completions from the scraped data, the following three template categories, comprising a total of 18 different templates, were used:
1. Given the poem and type of poetry, explain the meaning of the poem.
| template_id | inputs | targets |
|-------------|--------|---------|
| 1 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి తాత్పర్యం ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి తాత్పర్యం:\n{{Meaning}}``` |
| 2 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి భావం ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి భావం:\n{{Meaning}}``` |
| 3 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి భావము ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి భావము:\n{{Meaning}}``` |
| 4 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి తాత్పర్యము ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి తాత్పర్యము:\n{{Meaning}}``` |
| 5 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి అర్ధం ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి అర్ధం:\n{{Meaning}}``` |
| 6 | ```క్రింద ఇచ్చిన {{poetry_type}}లోని పద్యానికి అర్ధము ఇవ్వండి:\n{{Poem}}``` | ```ఇచ్చిన {{poetry_type}}లోని పద్యానికి అర్ధము:\n{{Meaning}}``` |
2. Given the meaning and the type of poetry, generate the corresponding poem.
| template_id | inputs | targets |
|-------------|--------|---------|
| 7 | ```క్రింద ఇచ్చిన తాత్పర్యం వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన తాత్పర్యం వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
| 8 | ```క్రింద ఇచ్చిన భావం వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన భావం వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
| 9 | ```క్రింద ఇచ్చిన భావము వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన భావము వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
| 10 | ```క్రింద ఇచ్చిన తాత్పర్యము వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన తాత్పర్యము వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
| 11 | ```క్రింద ఇచ్చిన అర్ధం వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన అర్ధం వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
| 12 | ```క్రింద ఇచ్చిన అర్ధము వచ్చే లాగా {{poetry_type}} శైలిలో పద్యం రాయండి:\n{{Meaning}}``` | ```ఇచ్చిన అర్ధము వచ్చే {{poetry_type}} శైలి పద్యం:\n{{Poem}}``` |
3. Given the partial poem and type of poetry, generate the rest of the poem.
| template_id | inputs | targets |
|-------------|--------|---------|
| 13 | ```క్రింద ఇచ్చిన తాత్పర్యం అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nతాత్పర్యం:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
| 14 | ```క్రింద ఇచ్చిన భావం అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nభావం:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
| 15 | ```క్రింద ఇచ్చిన భావము అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nభావము:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
| 16 | ```క్రింద ఇచ్చిన తాత్పర్యము అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nతాత్పర్యము:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
| 17 | ```క్రింద ఇచ్చిన అర్ధం అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nఅర్ధం:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
| 18 | ```క్రింద ఇచ్చిన అర్ధము అనుసరించి అసంపూర్ణమైయిన పద్యాన్ని {{poetry_type}} శైలిలో పూర్తిచేసి రాయండి:\nఅర్ధము:\n{{Meaning}}\n\nఅసంపూర్ణమైయిన పద్యం:\n{{Partial Poem}}``` | ```పూర్తిచేయబడ్డ పద్యం క్రింద ఇవ్వబడింది:\nపద్యం:\n{{Poem}}``` |
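As an illustration of how such records are assembled, the sketch below instantiates template 1 in Python; the helper function is a hypothetical reconstruction and is not part of the dataset's released scripts:
```python
# Hypothetical reconstruction of assembling one category-1 record.
# The {{...}} placeholders from the tables above become Python format fields.
TEMPLATE_INPUTS = "క్రింద ఇచ్చిన {poetry_type}లోని పద్యానికి తాత్పర్యం ఇవ్వండి:\n{poem}"
TEMPLATE_TARGETS = "ఇచ్చిన {poetry_type}లోని పద్యానికి తాత్పర్యం:\n{meaning}"

def make_record(poetry_type: str, poem: str, meaning: str) -> dict:
    """Build one instruct-style record in the dataset's schema."""
    return {
        "inputs": TEMPLATE_INPUTS.format(poetry_type=poetry_type, poem=poem),
        "targets": TEMPLATE_TARGETS.format(poetry_type=poetry_type, meaning=meaning),
        "template_id": 1,
        "template_lang": "tel",
    }
```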
## Personal or Sensitive Data
This dataset contains public information. To our knowledge, it includes no private individuals' personal identifiers or sensitive information.
## Language
Telugu
# Known Limitations
- The dataset is scraped from a poetry website, and its contents may reflect bias, factual errors, and sensitive matters.
- Although utmost care was taken to keep the dataset monolingual, some records may contain English alongside Telugu.
# Contributors
[SuryaKrishna02](https://github.com/SuryaKrishna02) and [Desik98](https://github.com/desik1998)
|
SuryaKrishna02/aya-telugu-poems
|
[
"task_categories:text-generation",
"task_ids:language-modeling",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:te",
"license:apache-2.0",
"literature",
"poems",
"region:us"
] |
2023-09-01T17:34:09+00:00
|
{"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["te"], "license": ["apache-2.0"], "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "Telugu Poems", "tags": ["literature", "poems"]}
|
2024-01-24T06:13:06+00:00
|
[] |
[
"te"
] |
TAGS
#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Telugu #license-apache-2.0 #literature #poems #region-us
|
Summary
=======
'aya-telugu-poems' is an open-source dataset of instruct-style records generated by webscraping a Telugu poems website. It was created as part of the Aya Open Science Initiative from Cohere For AI.
This dataset can be used for any purpose, whether academic or commercial, under the terms of the Apache 2.0 License.
Supported Tasks:
* Training LLMs
* Synthetic Data Generation
* Data Augmentation
Languages: Telugu
Version: 1.0
Dataset Overview
================
'aya-telugu-poems' is a corpus of more than 5k records generated by webscraping the Telugu poetry website. This dataset can be used for the following three tasks:
* Given the poem and type of poetry, explain the meaning of the poem.
* Given the meaning and the type of poetry, generate the corresponding poem.
* Given the partial poem and type of poetry, generate the rest of the poem.
Intended Uses
=============
While immediately valuable for instruction fine-tuning large language models as a corpus of instruction prompts, this dataset also presents a valuable opportunity for synthetic data generation. For example, prompt-completions could be submitted as few-shot examples to a large open language model to generate additional poems and their explanations.
Dataset
=======
Load with Datasets
------------------
To load this dataset with Datasets, install the library with 'pip install datasets --upgrade' and then use the following code:
Purpose of Collection
---------------------
Telugu is a low-resource language for which, to the best of my knowledge, no poetry instruct-style dataset exists. This dataset was created as part of the Aya Open Science Initiative from Cohere For AI to make sure Telugu is well represented in the space of AI/ML. Unlike other datasets that are limited to non-commercial use, this dataset can be used, modified, and extended for any purpose, including academic or commercial applications.
Sources
-------
* Telugu Padyaluu Website: Performed webscraping from the Telugu Padyaluu Website, which consists of the following 11 types of poetry:
1. వేమన శతకం
2. శ్రీ కాళహస్తీశ్వర శతకం
3. భాస్కర శతకం
4. దాశరథి శతకం
5. కృష్ణ శతకం
6. సుమతీ శతకం
7. భర్తృహరి సుభాషితాలు
8. కుమార శతకం
9. నరసింహ శతకం
10. కుమారీ శతకం
11. పోతన పద్యాలు
* Next, performed pre-processing steps on the data, such as removing unwanted characters and filtering out near-duplicate poems by computing similarity scores over the scraped data.
* Finally, converted the scraped data into Instruct-style prompts and completions.
Data Fields
-----------
* 'inputs' : Prompt or input to the language model.
* 'targets' : Completion or output of the language model.
* 'template\_id' : Id of the template used in 'inputs' and 'targets'.
* 'template\_lang': ISO code of the language used in the 'inputs' and 'targets' where *tel* refers to Telugu.
Templates
---------
For the creation of instruct-style prompts and completions from the scraped data, the following three template categories, comprising a total of 18 different templates, were used:
1. Given the poem and type of poetry, explain the meaning of the poem.
template\_id: 1, inputs: , targets:
template\_id: 2, inputs: , targets:
template\_id: 3, inputs: , targets:
template\_id: 4, inputs: , targets:
template\_id: 5, inputs: , targets:
template\_id: 6, inputs: , targets:
2. Given the meaning and the type of poetry, generate the corresponding poem.
template\_id: 7, inputs: , targets:
template\_id: 8, inputs: , targets:
template\_id: 9, inputs: , targets:
template\_id: 10, inputs: , targets:
template\_id: 11, inputs: , targets:
template\_id: 12, inputs: , targets:
3. Given the partial poem and type of poetry, generate the rest of the poem.
template\_id: 13, inputs: , targets:
template\_id: 14, inputs: , targets:
template\_id: 15, inputs: , targets:
template\_id: 16, inputs: , targets:
template\_id: 17, inputs: , targets:
template\_id: 18, inputs: , targets:
Personal or Sensitive Data
--------------------------
This dataset contains public information. To our knowledge, it includes no private individuals' personal identifiers or sensitive information.
Language
--------
Telugu
Known Limitations
=================
* The dataset is scraped from a poetry website, and its contents may reflect bias, factual errors, and sensitive matters.
* Although utmost care was taken to keep the dataset monolingual, some records may contain English alongside Telugu.
Contributors
============
SuryaKrishna02 and Desik98
|
[] |
[
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Telugu #license-apache-2.0 #literature #poems #region-us \n"
] |
[
99
] |
[
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #annotations_creators-expert-generated #language_creators-expert-generated #multilinguality-monolingual #size_categories-1K<n<10K #source_datasets-original #language-Telugu #license-apache-2.0 #literature #poems #region-us \n"
] |
b7dcca4b4a864d51cbe3c21e01f852268b2b9d9d
|
# Dataset Card: PIPPA-ShareGPT
This is a conversion of [PygmalionAI's PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) deduped dataset to ShareGPT format for finetuning with Axolotl.
The reformat was completed via the following [TypeScript project](https://github.com/bdashore3/ShareGPT-Reformat) called ShareGPT-Reformat.
# Files and explanations
- pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles are left to your finetuning software's defaults.
- pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for finetuning with axolotl
- pippa_sharegpt_trimmed.jsonl: A shareGPT dataset that trims newlines, randomizes system prompts, removes empty messages, and removes examples without a character description. Roles are USER and CHARACTER.
The best file to use is `pippa_sharegpt_trimmed.jsonl` if you want a finetune without bugs or inconsistencies. The best dataset to modify is either the original PIPPA deduped dataset with the ShareGPT reformat project or `pippa_sharegpt.jsonl`.
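For reference, each line in these files is a JSON object holding a list of conversation turns. The record below is a hand-written illustration of the general ShareGPT shape (with the roles described above), not an actual sample from the dataset:
```python
import json

# Illustrative ShareGPT-style record: a system turn followed by
# alternating USER / CHARACTER turns.
record = {
    "conversations": [
        {"from": "SYSTEM", "value": "Enter roleplay mode. You are ..."},
        {"from": "USER", "value": "Hello there!"},
        {"from": "CHARACTER", "value": "Well met, traveler."},
    ]
}
print(json.dumps(record, ensure_ascii=False))
```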
# Required Axolotl patches
To make this dataset usable in its entirety, some axolotl patches are needed:
- [This patch](https://github.com/bdashore3/axolotl/commit/995557bdf3c6c8b3e839b224ef9513fc2b097f30) allows the ability to use custom system prompts with ShareGPT format.
- [This patch](https://github.com/bdashore3/axolotl/commit/8970280de2ea01e41c044406051922715f4086cb) allows for custom roles for the USER and ASSISTANT and allows for GPT prompts to come before human ones without cutoff.
You WILL experience suboptimal results with base axolotl at the time of publishing this README.
# Citations
Paper for the original dataset:
```bibtex
@misc{gosling2023pippa,
title={PIPPA: A Partially Synthetic Conversational Dataset},
author={Tear Gosling and Alpin Dale and Yinhe Zheng},
year={2023},
eprint={2308.05884},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
kingbri/PIPPA-shareGPT
|
[
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:agpl-3.0",
"conversational",
"roleplay",
"custom-format",
"a.",
"arxiv:2308.05884",
"region:us"
] |
2023-09-01T17:40:19+00:00
|
{"language": ["en"], "license": "agpl-3.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "tags": ["conversational", "roleplay", "custom-format", "a."], "viewer": false}
|
2023-09-03T02:12:50+00:00
|
[
"2308.05884"
] |
[
"en"
] |
TAGS
#task_categories-conversational #size_categories-10K<n<100K #language-English #license-agpl-3.0 #conversational #roleplay #custom-format #a. #arxiv-2308.05884 #region-us
|
# Dataset Card: PIPPA-ShareGPT
This is a conversion of PygmalionAI's PIPPA deduped dataset to ShareGPT format for finetuning with Axolotl.
The reformat was completed via the following TypeScript project called ShareGPT-Reformat.
# Files and explanations
- pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles are left to your finetuning software's defaults.
- pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for finetuning with axolotl
- pippa_sharegpt_trimmed.jsonl: A shareGPT dataset that trims newlines, randomizes system prompts, removes empty messages, and removes examples without a character description. Roles are USER and CHARACTER.
The best file to use is 'pippa_sharegpt_trimmed.jsonl' if you want a finetune without bugs or inconsistencies. The best dataset to modify is either the original PIPPA deduped dataset with the ShareGPT reformat project or 'pippa_sharegpt.jsonl'.
# Required Axolotl patches
To make this dataset usable in its entirety, some axolotl patches are needed:
- This patch allows the ability to use custom system prompts with ShareGPT format.
- This patch allows for custom roles for the USER and ASSISTANT and allows for GPT prompts to come before human ones without cutoff.
You WILL experience suboptimal results with base axolotl at the time of publishing this README.
# Citations
Paper for the original dataset:
|
[
"# Dataset Card: PIPPA-ShareGPT\n\nThis is a conversion of PygmalionAI's PIPPA deduped dataset to ShareGPT format for finetuning with Axolotl.\n\nThe reformat was completed via the following TypeScript project called ShareGPT-Reformat.",
"# Files and explanations\n\n- pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles will be defaulted to your finetuning software.\n- pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for finetuning with axolotl\n- pippa_sharegpt_trimmed.jsonl: A shareGPT dataset that has trimmed newlines, randomized system prompts, removes empty messages, and removes examples without a character description. Roles are USER and CHARACTER.\n\nThe best file to use is 'pippa_sharegpt_trimmed.jsonl' if you want a finetune without bugs or inconsistencies. The best dataset to modify is either the original PIPPA deduped dataset with the ShareGPT reformat project or 'pippa_sharegpt.jsonl'.",
"# Required Axolotl patches\n\nTo make this dataset usable in its entirety, some axolotl patches are needed:\n- This patch allows the ability to use custom system prompts with ShareGPT format.\n- This patch allows for custom roles for the USER and ASSISTANT and allows for GPT prompts to come before human ones without cutoff.\n\nYou WILL experience unideal results with base axolotl at the time of publishing this README.\n\ns\n\nPaper for the original dataset:"
] |
[
"TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #license-agpl-3.0 #conversational #roleplay #custom-format #a. #arxiv-2308.05884 #region-us \n",
"# Dataset Card: PIPPA-ShareGPT\n\nThis is a conversion of PygmalionAI's PIPPA deduped dataset to ShareGPT format for finetuning with Axolotl.\n\nThe reformat was completed via the following TypeScript project called ShareGPT-Reformat.",
"# Files and explanations\n\n- pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles will be defaulted to your finetuning software.\n- pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for finetuning with axolotl\n- pippa_sharegpt_trimmed.jsonl: A shareGPT dataset that has trimmed newlines, randomized system prompts, removes empty messages, and removes examples without a character description. Roles are USER and CHARACTER.\n\nThe best file to use is 'pippa_sharegpt_trimmed.jsonl' if you want a finetune without bugs or inconsistencies. The best dataset to modify is either the original PIPPA deduped dataset with the ShareGPT reformat project or 'pippa_sharegpt.jsonl'.",
"# Required Axolotl patches\n\nTo make this dataset usable in its entirety, some axolotl patches are needed:\n- This patch allows the ability to use custom system prompts with ShareGPT format.\n- This patch allows for custom roles for the USER and ASSISTANT and allows for GPT prompts to come before human ones without cutoff.\n\nYou WILL experience unideal results with base axolotl at the time of publishing this README.\n\ns\n\nPaper for the original dataset:"
] |
[
64,
64,
225,
114
] |
[
"passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #license-agpl-3.0 #conversational #roleplay #custom-format #a. #arxiv-2308.05884 #region-us \n# Dataset Card: PIPPA-ShareGPT\n\nThis is a conversion of PygmalionAI's PIPPA deduped dataset to ShareGPT format for finetuning with Axolotl.\n\nThe reformat was completed via the following TypeScript project called ShareGPT-Reformat.# Files and explanations\n\n- pippa_sharegpt_raw.jsonl: The raw deduped dataset file converted to shareGPT. Roles will be defaulted to your finetuning software.\n- pippa_sharegpt.jsonl: A shareGPT dataset with the roles as USER: and CHARACTER: for finetuning with axolotl\n- pippa_sharegpt_trimmed.jsonl: A shareGPT dataset that has trimmed newlines, randomized system prompts, removes empty messages, and removes examples without a character description. Roles are USER and CHARACTER.\n\nThe best file to use is 'pippa_sharegpt_trimmed.jsonl' if you want a finetune without bugs or inconsistencies. The best dataset to modify is either the original PIPPA deduped dataset with the ShareGPT reformat project or 'pippa_sharegpt.jsonl'.# Required Axolotl patches\n\nTo make this dataset usable in its entirety, some axolotl patches are needed:\n- This patch allows the ability to use custom system prompts with ShareGPT format.\n- This patch allows for custom roles for the USER and ASSISTANT and allows for GPT prompts to come before human ones without cutoff.\n\nYou WILL experience unideal results with base axolotl at the time of publishing this README.\n\ns\n\nPaper for the original dataset:"
] |
95a7c1e97144c2b430804cf8cadd9b4d364200ea
|
# Dataset Card for "demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Tverous/demo
|
[
"region:us"
] |
2023-09-01T17:45:30+00:00
|
{"dataset_info": {"features": [{"name": "uid", "dtype": "string"}, {"name": "article", "sequence": "string"}, {"name": "premise", "dtype": "string"}, {"name": "image", "sequence": "string"}, {"name": "claim", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "claim_cleaned_amr", "dtype": "string"}, {"name": "amr_penman", "dtype": "string"}, {"name": "amr_tokens", "sequence": "string"}, {"name": "amr_nodes", "dtype": "string"}, {"name": "amr_alignments", "dtype": "string"}, {"name": "amr_edges", "sequence": {"sequence": "string"}}], "splits": [{"name": "train", "num_bytes": 10508, "num_examples": 1}], "download_size": 29322, "dataset_size": 10508}}
|
2023-09-01T19:06:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "demo"
More Information needed
|
[
"# Dataset Card for \"demo\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"demo\"\n\nMore Information needed"
] |
[
6,
11
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"demo\"\n\nMore Information needed"
] |
ded2d6940529d78d2184d614a433156dda45de62
|
# Dataset Card for "saleswiz_is_positive"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
factored/saleswiz_is_positive
|
[
"region:us"
] |
2023-09-01T18:00:30+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 187533.28961748633, "num_examples": 640}, {"name": "validation", "num_bytes": 80580.71038251366, "num_examples": 275}], "download_size": 178227, "dataset_size": 268114.0}}
|
2023-09-14T19:01:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "saleswiz_is_positive"
More Information needed
|
[
"# Dataset Card for \"saleswiz_is_positive\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"saleswiz_is_positive\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"saleswiz_is_positive\"\n\nMore Information needed"
] |
5f9f59e218ab9f3d0d316ac963689bb5914c6677
|
# Dataset Card for "saleswiz_is_relevant"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
factored/saleswiz_is_relevant
|
[
"region:us"
] |
2023-09-01T18:00:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 187533.28961748633, "num_examples": 640}, {"name": "validation", "num_bytes": 80580.71038251366, "num_examples": 275}], "download_size": 178216, "dataset_size": 268114.0}}
|
2023-09-14T19:01:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "saleswiz_is_relevant"
More Information needed
|
[
"# Dataset Card for \"saleswiz_is_relevant\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"saleswiz_is_relevant\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"saleswiz_is_relevant\"\n\nMore Information needed"
] |
711444151b78ddf6ab5286b8a916e01c6556b5ed
|
# Dataset Card for "saleswiz_is_about_company"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
factored/saleswiz_is_about_company
|
[
"region:us"
] |
2023-09-01T18:00:40+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "0", "1": "1"}}}}], "splits": [{"name": "train", "num_bytes": 187533.28961748633, "num_examples": 640}, {"name": "validation", "num_bytes": 80580.71038251366, "num_examples": 275}], "download_size": 177218, "dataset_size": 268114.0}}
|
2023-09-14T19:01:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "saleswiz_is_about_company"
More Information needed
|
[
"# Dataset Card for \"saleswiz_is_about_company\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"saleswiz_is_about_company\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"saleswiz_is_about_company\"\n\nMore Information needed"
] |
934d2e285a4e6807b5a75d5b943cda363a0eb1cf
|
# i2b2 query data 1.0
This is a dataset of [i2b2](https://www.i2b2.org/) query builder [examples](https://community.i2b2.org/wiki/display/webclient/3.+Query+Tool) taken from an i2b2 test environment and then pre-processed with AI-generated descriptions.
|
nmitchko/i2b2-query-data-1.0
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-09-01T18:09:57+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"]}
|
2023-09-01T18:19:43+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us
|
# i2b2 query data 1.0
This is a dataset of i2b2 query builder examples taken from an i2b2 test environment and then pre-processed with AI-generated descriptions.
|
[
"# i2b2 query data 1.0\n\nThis is a dataset of i2b2 query builder examples that are taken from a test environment of i2b2 and then pre-processed with AI descriptions."
] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n",
"# i2b2 query data 1.0\n\nThis is a dataset of i2b2 query builder examples that are taken from a test environment of i2b2 and then pre-processed with AI descriptions."
] |
[
38,
48
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n# i2b2 query data 1.0\n\nThis is a dataset of i2b2 query builder examples that are taken from a test environment of i2b2 and then pre-processed with AI descriptions."
] |
84a4de91e414273047b5f20e3fc705042daef7f6
|
# Dataset Card for "20k_claims_train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/claim_verification_training_set
|
[
"region:us"
] |
2023-09-01T18:15:03+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int32"}, {"name": "label", "dtype": "string"}, {"name": "claim", "dtype": "string"}, {"name": "evidence_annotation_id", "dtype": "int32"}, {"name": "evidence_id", "dtype": "int32"}, {"name": "evidence_wiki_url", "dtype": "string"}, {"name": "evidence_sentence_id", "dtype": "int32"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2086271, "num_examples": 19998}], "download_size": 1366415, "dataset_size": 2086271}}
|
2023-09-01T18:15:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "20k_claims_train"
More Information needed
|
[
"# Dataset Card for \"20k_claims_train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"20k_claims_train\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"20k_claims_train\"\n\nMore Information needed"
] |
48498a3036223be5f95aef0b084f0ccec4d3d47d
|
# Dataset Card for "20k_claims_evidence"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/claim_verification_training_set_evidence
|
[
"region:us"
] |
2023-09-01T18:19:22+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "lines", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 89974716, "num_examples": 58850}], "download_size": 52193086, "dataset_size": 89974716}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-11T17:47:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "20k_claims_evidence"
More Information needed
|
[
"# Dataset Card for \"20k_claims_evidence\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"20k_claims_evidence\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"20k_claims_evidence\"\n\nMore Information needed"
] |
20ebea91dc67a374b0f827f7f6251a795949b8be
|
# Dataset Card for "20k_claims_train_final"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
nikchar/20k_claims_train_final
|
[
"region:us"
] |
2023-09-01T18:52:28+00:00
|
{"dataset_info": {"features": [{"name": "claim", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 30738751.0, "num_examples": 19998}], "download_size": 17098290, "dataset_size": 30738751.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T18:52:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "20k_claims_train_final"
More Information needed
|
[
"# Dataset Card for \"20k_claims_train_final\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"20k_claims_train_final\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"20k_claims_train_final\"\n\nMore Information needed"
] |
2a8e729874c636886fb4f2add8875557c1f34b8c
|
# Dataset Card for Evaluation run of budecosystem/genz-13b-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/budecosystem/genz-13b-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [budecosystem/genz-13b-v2](https://huggingface.co/budecosystem/genz-13b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_budecosystem__genz-13b-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T15:10:42.007664](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__genz-13b-v2/blob/main/results_2023-09-22T15-10-42.007664.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.1649538590604027,
"em_stderr": 0.0038008097202810163,
"f1": 0.2284354026845635,
"f1_stderr": 0.003875004173850451,
"acc": 0.434338336007104,
"acc_stderr": 0.010638707911291463
},
"harness|drop|3": {
"em": 0.1649538590604027,
"em_stderr": 0.0038008097202810163,
"f1": 0.2284354026845635,
"f1_stderr": 0.003875004173850451
},
"harness|gsm8k|5": {
"acc": 0.12282031842304776,
"acc_stderr": 0.009041108602874659
},
"harness|winogrande|5": {
"acc": 0.7458563535911602,
"acc_stderr": 0.012236307219708266
}
}
```
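The aggregated numbers above are also exposed through the "results" configuration mentioned earlier; loading it follows the same pattern (a sketch, assuming the config and split names described above):
```python
from datasets import load_dataset

# "train" is documented above as an alias for the latest run.
results = load_dataset(
    "open-llm-leaderboard/details_budecosystem__genz-13b-v2",
    "results",
    split="train",
)
```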
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_budecosystem__genz-13b-v2
|
[
"region:us"
] |
2023-09-01T19:11:22+00:00
|
{"pretty_name": "Evaluation run of budecosystem/genz-13b-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [budecosystem/genz-13b-v2](https://huggingface.co/budecosystem/genz-13b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_budecosystem__genz-13b-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T15:10:42.007664](https://huggingface.co/datasets/open-llm-leaderboard/details_budecosystem__genz-13b-v2/blob/main/results_2023-09-22T15-10-42.007664.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.1649538590604027,\n \"em_stderr\": 0.0038008097202810163,\n \"f1\": 0.2284354026845635,\n \"f1_stderr\": 0.003875004173850451,\n \"acc\": 0.434338336007104,\n \"acc_stderr\": 0.010638707911291463\n },\n \"harness|drop|3\": {\n \"em\": 0.1649538590604027,\n \"em_stderr\": 0.0038008097202810163,\n \"f1\": 0.2284354026845635,\n \"f1_stderr\": 0.003875004173850451\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12282031842304776,\n \"acc_stderr\": 0.009041108602874659\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7458563535911602,\n \"acc_stderr\": 0.012236307219708266\n }\n}\n```", "repo_url": "https://huggingface.co/budecosystem/genz-13b-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|arc:challenge|25_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T15_10_42.007664", "path": ["**/details_harness|drop|3_2023-09-22T15-10-42.007664.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T15-10-42.007664.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T15_10_42.007664", "path": ["**/details_harness|gsm8k|5_2023-09-22T15-10-42.007664.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T15-10-42.007664.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hellaswag|10_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:10:58.208495.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:10:58.208495.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T20:10:58.208495.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T20:10:58.208495.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T20:10:58.208495.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T15_10_42.007664", "path": ["**/details_harness|winogrande|5_2023-09-22T15-10-42.007664.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T15-10-42.007664.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T20_10_58.208495", "path": ["results_2023-09-01T20:10:58.208495.parquet"]}, {"split": "2023_09_22T15_10_42.007664", "path": ["results_2023-09-22T15-10-42.007664.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T15-10-42.007664.parquet"]}]}]}
|
2023-09-22T14:10:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of budecosystem/genz-13b-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model budecosystem/genz-13b-v2 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
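```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_budecosystem__genz-13b-v2",
    "harness_winogrande_5",
    split="train")
```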
## Latest results
These are the latest results from run 2023-09-22T15:10:42.007664 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
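```python
{
    "all": {
        "em": 0.1649538590604027,
        "em_stderr": 0.0038008097202810163,
        "f1": 0.2284354026845635,
        "f1_stderr": 0.003875004173850451,
        "acc": 0.434338336007104,
        "acc_stderr": 0.010638707911291463
    },
    "harness|drop|3": {
        "em": 0.1649538590604027,
        "em_stderr": 0.0038008097202810163,
        "f1": 0.2284354026845635,
        "f1_stderr": 0.003875004173850451
    },
    "harness|gsm8k|5": {
        "acc": 0.12282031842304776,
        "acc_stderr": 0.009041108602874659
    },
    "harness|winogrande|5": {
        "acc": 0.7458563535911602,
        "acc_stderr": 0.012236307219708266
    }
}
```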
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of budecosystem/genz-13b-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model budecosystem/genz-13b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T15:10:42.007664(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of budecosystem/genz-13b-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model budecosystem/genz-13b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T15:10:42.007664(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of budecosystem/genz-13b-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model budecosystem/genz-13b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T15:10:42.007664(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2321fedf40361866a2e4c5ff8767f42148a1b34c
|
# Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
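As a minimal usage sketch (the repository id and split below are taken from this dataset's metadata), the issues can be loaded with the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Single "train" split (500 examples per the dataset metadata)
issues = load_dataset("dmlea/github-issues", split="train")

# Each record is a GitHub issue or pull request; the "is_pull_request"
# field distinguishes the two
print(issues.num_rows, issues.column_names[:5])
```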
|
dmlea/github-issues
|
[
"region:us"
] |
2023-09-01T19:15:07+00:00
|
{"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "dtype": "null"}, {"name": "comments", "dtype": "int64"}, {"name": "created_at", "dtype": "timestamp[s]"}, {"name": "updated_at", "dtype": "timestamp[s]"}, {"name": "closed_at", "dtype": "timestamp[s]"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "body", 
"dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}, {"name": "draft", "dtype": "bool"}, {"name": "pull_request", "struct": [{"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "diff_url", "dtype": "string"}, {"name": "patch_url", "dtype": "string"}, {"name": "merged_at", "dtype": "timestamp[s]"}]}, {"name": "is_pull_request", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1660862, "num_examples": 500}], "download_size": 437911, "dataset_size": 1660862}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-01T19:15:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "github-issues"
More Information needed
|
[
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"github-issues\"\n\nMore Information needed"
] |
b6644dac3a35c3b8c76f66faff63e63ba1740e36
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-hermes-orca-platypus-13b](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b",
"harness_winogrande_5",
split="train")
```
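To see which of the 64 configurations are available before loading one, you can enumerate them; a minimal sketch using the standard `datasets` helper:

```python
from datasets import get_dataset_config_names

# List the per-task configurations (e.g. "harness_winogrande_5") plus
# the aggregated "results" configuration
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b"
)
print(len(configs), configs[:5])
```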
## Latest results
These are the [latest results from run 2023-10-18T05:34:26.777818](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b/blob/main/results_2023-10-18T05-34-26.777818.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.020763422818791948,
"em_stderr": 0.0014602692459797255,
"f1": 0.1284448406040266,
"f1_stderr": 0.002326999769158526,
"acc": 0.42460684800436577,
"acc_stderr": 0.010144077147210034
},
"harness|drop|3": {
"em": 0.020763422818791948,
"em_stderr": 0.0014602692459797255,
"f1": 0.1284448406040266,
"f1_stderr": 0.002326999769158526
},
"harness|gsm8k|5": {
"acc": 0.09704321455648218,
"acc_stderr": 0.00815376827455472
},
"harness|winogrande|5": {
"acc": 0.7521704814522494,
"acc_stderr": 0.012134386019865348
}
}
```
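The aggregated metrics above are also stored in the "results" configuration described earlier; as a minimal sketch (config and split names as defined in this dataset's metadata), the latest aggregated run can be read directly:

```python
from datasets import load_dataset

# The "latest" split of the "results" config always points to the
# most recent evaluation run
results = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b",
    "results",
    split="latest",
)
print(results[0])
```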
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b
|
[
"region:us"
] |
2023-09-01T19:32:36+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-hermes-orca-platypus-13b](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-18T05:34:26.777818](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b/blob/main/results_2023-10-18T05-34-26.777818.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.020763422818791948,\n \"em_stderr\": 0.0014602692459797255,\n \"f1\": 0.1284448406040266,\n \"f1_stderr\": 0.002326999769158526,\n \"acc\": 0.42460684800436577,\n \"acc_stderr\": 0.010144077147210034\n },\n \"harness|drop|3\": {\n \"em\": 0.020763422818791948,\n \"em_stderr\": 0.0014602692459797255,\n \"f1\": 0.1284448406040266,\n \"f1_stderr\": 0.002326999769158526\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09704321455648218,\n \"acc_stderr\": 0.00815376827455472\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7521704814522494,\n \"acc_stderr\": 0.012134386019865348\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|arc:challenge|25_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_18T05_34_26.777818", "path": ["**/details_harness|drop|3_2023-10-18T05-34-26.777818.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-18T05-34-26.777818.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_18T05_34_26.777818", "path": ["**/details_harness|gsm8k|5_2023-10-18T05-34-26.777818.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-18T05-34-26.777818.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hellaswag|10_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hellaswag|10_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:32:11.554116.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:32:11.554116.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:32:11.554116.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-01T20:32:11.554116.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T20:32:11.554116.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-01T20:32:11.554116.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_18T05_34_26.777818", "path": ["**/details_harness|winogrande|5_2023-10-18T05-34-26.777818.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-18T05-34-26.777818.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_01T20_32_11.554116", "path": ["results_2023-09-01T20:32:11.554116.parquet"]}, {"split": "2023_10_18T05_34_26.777818", "path": ["results_2023-10-18T05-34-26.777818.parquet"]}, {"split": "latest", "path": ["results_2023-10-18T05-34-26.777818.parquet"]}]}]}
|
2023-10-18T04:34:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-13b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
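A minimal loading sketch; the repository id follows the leaderboard's usual `details_<org>__<model>` naming and is an assumption here, while `harness_winogrande_5` is one of the configurations listed in this card's metadata:
```python
from datasets import load_dataset

# Load one configuration of the evaluation details; any of the 64
# configurations named in the metadata can be substituted.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-13b",
    "harness_winogrande_5",
    split="train",
)
```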
## Latest results
These are the latest results from run 2023-10-18T05:34:26.777818 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T05:34:26.777818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T05:34:26.777818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
32,
31,
180,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-18T05:34:26.777818(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9ad25fbd86a3fd352c7a0fc1f99132fbb5802378
|
# Open Vocab Mobile Manipulation episodes
We describe the fields in our OVMM episodes.
## Initial Configuration
- `rigid_objs`: Initial poses of all pickable objects inserted into the scene specified via a 4x4 array
- `start_position`: Position of the agent at the start of the episode.
- `start_rotation`: Orientation of the agent at the start of the episode.
## Goal specification
`object_category`, `start_recep_category`, and `end_recep_category`: These fields specify the OVMM task goal of moving `object_category` from `start_recep_category` to `goal_recep_category`
## Candidate targets
Each target is specified via an `object_id` along with information on `position` and a set of `view_points` (agent poses from where the target is visible).
- `candidate_objects`: object instances belonging to `object_category` on top of a receptacle belonging to `start_recep_category`
- `candidate_objects_hard`: all object instances belonging to `object_category`
- `candidate_start_receps`: all receptacle instances belonging to `start_recep_category`
- `candidate_goal_receps`: all receptacle instances belonging to `goal_recep_category`
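A minimal inspection sketch; the episodes file path and the top-level layout (a gzipped JSON file with an `episodes` list carrying the fields above, as in habitat-lab) are assumptions here:
```python
import gzip
import json

# Hypothetical path; point this at an OVMM episodes file for your split.
with gzip.open("episodes/train/episodes.json.gz", "rt") as f:
    data = json.load(f)

ep = data["episodes"][0]
# Goal specification: move object_category from start_recep_category
# to the goal receptacle.
print(ep["object_category"], ep["start_recep_category"], ep["end_recep_category"])
# Initial configuration of the agent.
print("start:", ep["start_position"], ep["start_rotation"])
# Candidate targets, each specified with an object_id, position, and view_points.
print("candidate objects:", len(ep["candidate_objects"]))
```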
|
ai-habitat/OVMM_episodes
|
[
"region:us"
] |
2023-09-01T19:38:14+00:00
|
{}
|
2023-09-12T23:16:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Open Vocab Mobile Manipulation episodes
We describe the fields in our OVMM episodes.
## Initial Configuration
- 'rigid_objs': Initial poses of all pickable objects inserted into the scene specified via a 4x4 array
- 'start_position': Position of the agent at the start of the episode.
- 'start_rotation': Orientation of the agent at the start of the episode.
## Goal specification
'object_category', 'start_recep_category', and 'end_recep_category': These fields specify the OVMM task goal - Move 'object_category' from 'start_recep_category' to 'goal_recep_category'
## Candidate targets
Each target is specified via an 'object_id' along with information on 'position' and a set of 'view_points' (agent poses from where the target is visible).
- 'candidate_objects': object instances belonging to 'object_category' on top of a receptacle belonging to 'start_recep_category'
- 'candidate_objects_hard': all object instances belonging to 'object_category'
- 'candidate_start_receps': all receptacle instances belonging to 'start_recep_category'
- 'candidate_goal_receps': all receptacle instances belonging to 'goal_recep_category'
|
[
"# Open Vocab Mobile Manipulation episodes\n\nWe describe the fields in our OVMM episodes.",
"## Initial Configuration\n\n- 'rigid_objs': Initial poses of all pickable objects inserted into the scene specified via a 4x4 array\n- 'start_position': Position of the agent at the start of the episode.\n- 'start_rotation': Orientation of the agent at the start of the episode.",
"## Goal specification\n\n'object_category', 'start_recep_category', and 'end_recep_category': These fields specify the OVMM task goal - Move 'object_category' from 'start_recep_category' to 'goal_recep_category'",
"## Candidate targets\nEach target is specified via an 'object_id' along with information on 'position' and a set of 'view_points' (agent poses from where the target is visible).\n\n- 'candidate_objects': object instances belonging to 'object_category' on top of a receptacle belonging to 'start_recep_category'\n- 'candidate_objects_hard': all object instances belonging to 'object_category'\n- 'candidate_start_receps': all receptacle instances belonging to 'start_recep_category'\n- 'candidate_goal_receps': all receptacle instances belonging to 'goal_recep_category'"
] |
[
"TAGS\n#region-us \n",
"# Open Vocab Mobile Manipulation episodes\n\nWe describe the fields in our OVMM episodes.",
"## Initial Configuration\n\n- 'rigid_objs': Initial poses of all pickable objects inserted into the scene specified via a 4x4 array\n- 'start_position': Position of the agent at the start of the episode.\n- 'start_rotation': Orientation of the agent at the start of the episode.",
"## Goal specification\n\n'object_category', 'start_recep_category', and 'end_recep_category': These fields specify the OVMM task goal - Move 'object_category' from 'start_recep_category' to 'goal_recep_category'",
"## Candidate targets\nEach target is specified via an 'object_id' along with information on 'position' and a set of 'view_points' (agent poses from where the target is visible).\n\n- 'candidate_objects': object instances belonging to 'object_category' on top of a receptacle belonging to 'start_recep_category'\n- 'candidate_objects_hard': all object instances belonging to 'object_category'\n- 'candidate_start_receps': all receptacle instances belonging to 'start_recep_category'\n- 'candidate_goal_receps': all receptacle instances belonging to 'goal_recep_category'"
] |
[
6,
22,
78,
79,
174
] |
[
"passage: TAGS\n#region-us \n# Open Vocab Mobile Manipulation episodes\n\nWe describe the fields in our OVMM episodes.## Initial Configuration\n\n- 'rigid_objs': Initial poses of all pickable objects inserted into the scene specified via a 4x4 array\n- 'start_position': Position of the agent at the start of the episode.\n- 'start_rotation': Orientation of the agent at the start of the episode.## Goal specification\n\n'object_category', 'start_recep_category', and 'end_recep_category': These fields specify the OVMM task goal - Move 'object_category' from 'start_recep_category' to 'goal_recep_category'## Candidate targets\nEach target is specified via an 'object_id' along with information on 'position' and a set of 'view_points' (agent poses from where the target is visible).\n\n- 'candidate_objects': object instances belonging to 'object_category' on top of a receptacle belonging to 'start_recep_category'\n- 'candidate_objects_hard': all object instances belonging to 'object_category'\n- 'candidate_start_receps': all receptacle instances belonging to 'start_recep_category'\n- 'candidate_goal_receps': all receptacle instances belonging to 'goal_recep_category'"
] |
05c6af715b7225ddeb143cada195e055808a43f2
|
# Dataset Card for "huggingartists_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/huggingartists_prompts
|
[
"region:us"
] |
2023-09-01T19:45:28+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "song", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121653811, "num_examples": 64006}], "download_size": 57680864, "dataset_size": 121653811}}
|
2023-09-01T19:46:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "huggingartists_prompts"
More Information needed
|
[
"# Dataset Card for \"huggingartists_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"huggingartists_prompts\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"huggingartists_prompts\"\n\nMore Information needed"
] |
7d8c16825ad9fa9d17fe3a37dc2024400a794724
|
# Dataset Card for "lurk_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/lurk_prompts
|
[
"region:us"
] |
2023-09-01T19:51:35+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "word", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 92132815, "num_examples": 5671}], "download_size": 49004405, "dataset_size": 92132815}}
|
2023-09-01T19:56:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lurk_prompts"
More Information needed
|
[
"# Dataset Card for \"lurk_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lurk_prompts\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lurk_prompts\"\n\nMore Information needed"
] |
bc3d33de20c680502b6e5e1b48e3b279dc3eca77
|
# Dataset Card for "af66c1c4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/af66c1c4
|
[
"region:us"
] |
2023-09-01T19:52:17+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 186, "num_examples": 10}], "download_size": 1330, "dataset_size": 186}}
|
2023-09-01T19:52:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "af66c1c4"
More Information needed
|
[
"# Dataset Card for \"af66c1c4\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"af66c1c4\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"af66c1c4\"\n\nMore Information needed"
] |
ed9e116c2a2574db6408f49db75b15b36c15dae0
|
# Dataset Card for "yandex_q_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/yandex_q_10k
|
[
"region:us"
] |
2023-09-01T20:11:36+00:00
|
{"dataset_info": {"features": [{"name": "description", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 14596364.404151473, "num_examples": 10000}], "download_size": 7769074, "dataset_size": 14596364.404151473}}
|
2023-09-01T20:11:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "yandex_q_10k"
More Information needed
|
[
"# Dataset Card for \"yandex_q_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"yandex_q_10k\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"yandex_q_10k\"\n\nMore Information needed"
] |
66946989797c46275887d04777bf1869827e0054
|
# Dataset Card for "resh_edu_short_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/resh_edu_short_prompts
|
[
"region:us"
] |
2023-09-01T20:36:36+00:00
|
{"dataset_info": {"features": [{"name": "solution", "dtype": "string"}, {"name": "prompt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12371576, "num_examples": 2106}], "download_size": 5361614, "dataset_size": 12371576}}
|
2023-09-01T21:24:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "resh_edu_short_prompts"
More Information needed
|
[
"# Dataset Card for \"resh_edu_short_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"resh_edu_short_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"resh_edu_short_prompts\"\n\nMore Information needed"
] |
3cbaec668718f9b1ee46bb314a635b619fd40a0e
|
# Dataset Card for "autotree_snnxor_n0_l2_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_snnxor_n0_l2_2
|
[
"region:us"
] |
2023-09-01T20:40:16+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 371440000, "num_examples": 10000}, {"name": "validation", "num_bytes": 371440000, "num_examples": 10000}, {"name": "test", "num_bytes": 371440000, "num_examples": 10000}], "download_size": 349774676, "dataset_size": 1114320000}}
|
2023-09-01T20:40:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_snnxor_n0_l2_2"
More Information needed
|
[
"# Dataset Card for \"autotree_snnxor_n0_l2_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_snnxor_n0_l2_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snnxor_n0_l2_2\"\n\nMore Information needed"
] |
17df315978b53bc0eefa7c65a52ea1bc49beda28
|
configs:
- config_name: default
data_files:
- split: train
path: "train.csv"
- split: test
path: "test.csv"
WORK IN PROGRESS
This is a tiny dataset that I created to classify text for heresy. It's probably best that you carefully study it before using it for anything.
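A minimal loading sketch using the splits declared in the configs above:
```python
from datasets import load_dataset

# "train" and "test" resolve to train.csv and test.csv per the config above.
ds = load_dataset("JWBickel/FalseGospels")
print(ds["train"][0])
```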
|
JWBickel/FalseGospels
|
[
"task_categories:text-classification",
"language:en",
"region:us"
] |
2023-09-01T20:48:51+00:00
|
{"language": ["en"], "task_categories": ["text-classification"]}
|
2023-10-26T15:30:28+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #language-English #region-us
|
configs:
- config_name: default
data_files:
- split: train
path: "URL"
- split: test
path: "URL"
WORK IN PROGRESS
This is a tiny dataset that I created to classify text for heresy. It's probably best that you carefully study it before using it for anything.
|
[] |
[
"TAGS\n#task_categories-text-classification #language-English #region-us \n"
] |
[
21
] |
[
"passage: TAGS\n#task_categories-text-classification #language-English #region-us \n"
] |
30d7d4e4ceff7b725ce1beda1c84af032b79d40a
|
# Dataset Card for "autotree_snnxor_n0_l1_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_snnxor_n0_l1_2
|
[
"region:us"
] |
2023-09-01T20:58:09+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 123760000, "num_examples": 10000}, {"name": "validation", "num_bytes": 123760000, "num_examples": 10000}, {"name": "test", "num_bytes": 123760000, "num_examples": 10000}], "download_size": 183536639, "dataset_size": 371280000}}
|
2023-09-01T20:58:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_snnxor_n0_l1_2"
More Information needed
|
[
"# Dataset Card for \"autotree_snnxor_n0_l1_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_snnxor_n0_l1_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snnxor_n0_l1_2\"\n\nMore Information needed"
] |
56560c6963e6b1df0ee24c7783c039acd20acb42
|
# Dataset Card for "autotree_snnxor_n30_l1_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_snnxor_n30_l1_2
|
[
"region:us"
] |
2023-09-01T21:02:37+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 154520000, "num_examples": 10000}, {"name": "validation", "num_bytes": 154520000, "num_examples": 10000}, {"name": "test", "num_bytes": 154520000, "num_examples": 10000}], "download_size": 185693440, "dataset_size": 463560000}}
|
2023-09-05T17:39:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_snnxor_n30_l1_2"
More Information needed
|
[
"# Dataset Card for \"autotree_snnxor_n30_l1_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_snnxor_n30_l1_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snnxor_n30_l1_2\"\n\nMore Information needed"
] |
606cdc0aeb410289fadecf375521b3a1dd246b7a
|
# Dataset Card for "autotree_snnxor_n15_l2_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_snnxor_n15_l2_2
|
[
"region:us"
] |
2023-09-01T21:08:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 402200000, "num_examples": 10000}, {"name": "validation", "num_bytes": 402200000, "num_examples": 10000}, {"name": "test", "num_bytes": 402200000, "num_examples": 10000}], "download_size": 351932552, "dataset_size": 1206600000}}
|
2023-09-05T18:59:23+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_snnxor_n15_l2_2"
More Information needed
|
[
"# Dataset Card for \"autotree_snnxor_n15_l2_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_snnxor_n15_l2_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snnxor_n15_l2_2\"\n\nMore Information needed"
] |
e5934d23bf2f2bf71df489ec252642ed2e8e335b
|
# Dataset Card for "autotree_snnxor_n30_l2_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_snnxor_n30_l2_2
|
[
"region:us"
] |
2023-09-01T21:18:57+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float32"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "input_y_clean", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float32"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 402200000, "num_examples": 10000}, {"name": "validation", "num_bytes": 402200000, "num_examples": 10000}, {"name": "test", "num_bytes": 402200000, "num_examples": 10000}], "download_size": 351933707, "dataset_size": 1206600000}}
|
2023-09-05T19:04:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_snnxor_n30_l2_2"
More Information needed
|
[
"# Dataset Card for \"autotree_snnxor_n30_l2_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_snnxor_n30_l2_2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_snnxor_n30_l2_2\"\n\nMore Information needed"
] |
3579b2a0d6e8b405c2d4477121ae5879b6f21105
|
StackExchange Paired 500K is a subset of lvwerra/stack-exchange-paired, which is a processed version of HuggingFaceH4/stack-exchange-preferences. The following steps were applied:
- Parse HTML to Markdown with markdownify
- Create pairs (response_j, response_k) where j was rated better than k
- Sample at most 10 pairs per question
- Shuffle the dataset globally
This dataset is designed to be used for preference learning.
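A sketch of the pair-creation step described above; the per-question field names (`answers`, `text`, `pm_score`) follow the HuggingFaceH4/stack-exchange-preferences schema and are assumptions here:
```python
import random
from itertools import combinations
from markdownify import markdownify as md

def make_pairs(question, max_pairs=10):
    # Convert each answer's HTML body to Markdown and keep its rating.
    answers = [(md(a["text"]), a["pm_score"]) for a in question["answers"]]
    pairs = []
    for (text_a, score_a), (text_b, score_b) in combinations(answers, 2):
        if score_a == score_b:
            continue  # equal scores carry no preference signal
        # response_j is the answer rated better than response_k.
        j, k = (text_a, text_b) if score_a > score_b else (text_b, text_a)
        pairs.append({"response_j": j, "response_k": k})
    random.shuffle(pairs)
    return pairs[:max_pairs]  # sample at most 10 pairs per question
```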
---
license: mit
---
|
mychen76/stack-exchange-paired-500k
|
[
"region:us"
] |
2023-09-01T22:18:07+00:00
|
{}
|
2023-09-01T22:55:09+00:00
|
[] |
[] |
TAGS
#region-us
|
StackExchange Paired 500K is a subset of lvwerra/stack-exchange-paired
which is a processed version of the HuggingFaceH4/stack-exchange-preferences. The following steps were applied:
Parse HTML to Markdown with markdownify
Create pairs (response_j, response_k) where j was rated better than k
Sample at most 10 pairs per question
Shuffle the dataset globally
This dataset is designed to be used for preference learning.
---
license: mit
---
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
8113be4354b97f8770b1b18f7f5860b3611dfd64
|
# Dataset Card for "bugurt_completion_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/bugurt_completion_prompts
|
[
"region:us"
] |
2023-09-01T22:28:23+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "bugurt", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5451066, "num_examples": 5000}], "download_size": 2806557, "dataset_size": 5451066}}
|
2023-09-01T22:28:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "bugurt_completion_prompts"
More Information needed
|
[
"# Dataset Card for \"bugurt_completion_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"bugurt_completion_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"bugurt_completion_prompts\"\n\nMore Information needed"
] |
07cb2d614904b3d3dc45ba049a88715a12c59099
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-15T13:11:43.680043](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b/blob/main/results_2023-10-15T13-11-43.680043.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.057466442953020135,
"em_stderr": 0.0023833905882384896,
"f1": 0.17808829697986514,
"f1_stderr": 0.002972308703760267,
"acc": 0.44245449154575855,
"acc_stderr": 0.010703432271512695
},
"harness|drop|3": {
"em": 0.057466442953020135,
"em_stderr": 0.0023833905882384896,
"f1": 0.17808829697986514,
"f1_stderr": 0.002972308703760267
},
"harness|gsm8k|5": {
"acc": 0.13115996967399546,
"acc_stderr": 0.009298499235587858
},
"harness|winogrande|5": {
"acc": 0.7537490134175217,
"acc_stderr": 0.012108365307437531
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b
|
[
"region:us"
] |
2023-09-01T23:07:36+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b](https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-15T13:11:43.680043](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b/blob/main/results_2023-10-15T13-11-43.680043.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.057466442953020135,\n \"em_stderr\": 0.0023833905882384896,\n \"f1\": 0.17808829697986514,\n \"f1_stderr\": 0.002972308703760267,\n \"acc\": 0.44245449154575855,\n \"acc_stderr\": 0.010703432271512695\n },\n \"harness|drop|3\": {\n \"em\": 0.057466442953020135,\n \"em_stderr\": 0.0023833905882384896,\n \"f1\": 0.17808829697986514,\n \"f1_stderr\": 0.002972308703760267\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.13115996967399546,\n \"acc_stderr\": 0.009298499235587858\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7537490134175217,\n \"acc_stderr\": 0.012108365307437531\n }\n}\n```", "repo_url": "https://huggingface.co/uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|arc:challenge|25_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T13_11_43.680043", "path": ["**/details_harness|drop|3_2023-10-15T13-11-43.680043.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T13-11-43.680043.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T13_11_43.680043", "path": ["**/details_harness|gsm8k|5_2023-10-15T13-11-43.680043.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-15T13-11-43.680043.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": 
["**/details_harness|hellaswag|10_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T00:07:11.850382.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T00:07:11.850382.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T00:07:11.850382.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-02T00:07:11.850382.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T00:07:11.850382.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T00:07:11.850382.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T00:07:11.850382.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T13_11_43.680043", "path": ["**/details_harness|winogrande|5_2023-10-15T13-11-43.680043.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-15T13-11-43.680043.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T00_07_11.850382", "path": ["results_2023-09-02T00:07:11.850382.parquet"]}, {"split": "2023_09_12T15_48_02.156025", "path": ["results_2023-09-12T15-48-02.156025.parquet"]}, {"split": "2023_10_15T13_11_43.680043", "path": ["results_2023-10-15T13-11-43.680043.parquet"]}, {"split": "latest", "path": ["results_2023-10-15T13-11-43.680043.parquet"]}]}]}
|
2023-10-15T12:11:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
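```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-llama2-hermes-orca-platypus-wizardlm-13b",
    "harness_winogrande_5",
    split="train",
)
```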
## Latest results
These are the latest results from run 2023-10-15T13:11:43.680043 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
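```python
{
    "all": {
        "em": 0.057466442953020135,
        "em_stderr": 0.0023833905882384896,
        "f1": 0.17808829697986514,
        "f1_stderr": 0.002972308703760267,
        "acc": 0.44245449154575855,
        "acc_stderr": 0.010703432271512695
    },
    "harness|drop|3": {
        "em": 0.057466442953020135,
        "em_stderr": 0.0023833905882384896,
        "f1": 0.17808829697986514,
        "f1_stderr": 0.002972308703760267
    },
    "harness|gsm8k|5": {
        "acc": 0.13115996967399546,
        "acc_stderr": 0.009298499235587858
    },
    "harness|winogrande|5": {
        "acc": 0.7537490134175217,
        "acc_stderr": 0.012108365307437531
    }
}
```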
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-15T13:11:43.680043(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-15T13:11:43.680043(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
36,
31,
184,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-hermes-orca-platypus-wizardlm-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-15T13:11:43.680043(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e957ec6741f38a03cc71b7c59387f9d8c6a7fe7e
|
# Dataset Card for "7bd33ae8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
results-sd-v1-5-sd-v2-1-if-v1-0-karlo/7bd33ae8
|
[
"region:us"
] |
2023-09-01T23:21:11+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 184, "num_examples": 10}], "download_size": 1336, "dataset_size": 184}}
|
2023-09-01T23:21:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "7bd33ae8"
More Information needed
|
[
"# Dataset Card for \"7bd33ae8\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"7bd33ae8\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"7bd33ae8\"\n\nMore Information needed"
] |
a45072bc7ddc3ea6792ac73c42bb3752b4ef6f39
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3_sd3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_house_16H_gosdt_l512_d3_sd3
|
[
"region:us"
] |
2023-09-01T23:26:58+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9224800000, "num_examples": 100000}, {"name": "validation", "num_bytes": 922480000, "num_examples": 10000}], "download_size": 3198840988, "dataset_size": 10147280000}}
|
2023-09-01T23:30:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3_sd3"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd3\"\n\nMore Information needed"
] |
[
6,
34
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd3\"\n\nMore Information needed"
] |
8d7e04ec1567ad270df2b48bd41a392a002df1ad
|
The data is organized in the alpaca format.

This dataset is collected and cleaned from the following datasets: alpaca-gpt4-data-zh, alpaca-data-gpt4-chinese, and Open-Platypus.

It is cleaned with the following method: duplicates are removed by hashing each record's instruction, input, and output. A minimal sketch of that hash-based deduplication is shown below.
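A minimal sketch of this deduplication, assuming each record is a dict with `instruction`, `input`, and `output` keys (the standard alpaca field names; the exact keys used for this dataset are an assumption):

```python
import hashlib
import json

def deduplicate(records):
    """Drop alpaca-format records whose (instruction, input, output) triple was already seen."""
    seen = set()
    unique = []
    for rec in records:
        # Hash the three fields together; a change in any one of them yields a new key.
        key = hashlib.sha256(
            json.dumps(
                [rec.get("instruction", ""), rec.get("input", ""), rec.get("output", "")],
                ensure_ascii=False,
            ).encode("utf-8")
        ).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique
```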
|
Guanglong/mojing-llm
|
[
"license:apache-2.0",
"region:us"
] |
2023-09-01T23:33:36+00:00
|
{"license": "apache-2.0"}
|
2023-09-03T13:33:17+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
The data is organized in the alpaca format.

This dataset is collected and cleaned from the following datasets: alpaca-gpt4-data-zh, alpaca-data-gpt4-chinese, and Open-Platypus.

It is cleaned with the following method: duplicates are removed by hashing each record's instruction, input, and output.
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
fcc6bffb552847b28fe44c8d2733b4846d609d79
|
# Dataset Card for Evaluation run of acrastt/Bean-3B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/acrastt/Bean-3B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [acrastt/Bean-3B](https://huggingface.co/acrastt/Bean-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_acrastt__Bean-3B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T08:17:12.906813](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B/blob/main/results_2023-10-23T08-17-12.906813.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177354,
"f1": 0.05282403523489944,
"f1_stderr": 0.001300151908516004,
"acc": 0.3309881741746665,
"acc_stderr": 0.007673064453153027
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.00040584511324177354,
"f1": 0.05282403523489944,
"f1_stderr": 0.001300151908516004
},
"harness|gsm8k|5": {
"acc": 0.00530705079605762,
"acc_stderr": 0.002001305720948046
},
"harness|winogrande|5": {
"acc": 0.6566692975532754,
"acc_stderr": 0.013344823185358007
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_acrastt__Bean-3B
|
[
"region:us"
] |
2023-09-02T00:14:17+00:00
|
{"pretty_name": "Evaluation run of acrastt/Bean-3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [acrastt/Bean-3B](https://huggingface.co/acrastt/Bean-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_acrastt__Bean-3B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T08:17:12.906813](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Bean-3B/blob/main/results_2023-10-23T08-17-12.906813.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177354,\n \"f1\": 0.05282403523489944,\n \"f1_stderr\": 0.001300151908516004,\n \"acc\": 0.3309881741746665,\n \"acc_stderr\": 0.007673064453153027\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.00040584511324177354,\n \"f1\": 0.05282403523489944,\n \"f1_stderr\": 0.001300151908516004\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \"acc_stderr\": 0.002001305720948046\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6566692975532754,\n \"acc_stderr\": 0.013344823185358007\n }\n}\n```", "repo_url": "https://huggingface.co/acrastt/Bean-3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|arc:challenge|25_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T08_17_12.906813", "path": ["**/details_harness|drop|3_2023-10-23T08-17-12.906813.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T08-17-12.906813.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T08_17_12.906813", "path": ["**/details_harness|gsm8k|5_2023-10-23T08-17-12.906813.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T08-17-12.906813.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hellaswag|10_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:13:58.991012.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:13:58.991012.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T01:13:58.991012.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:13:58.991012.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": 
"2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T01:13:58.991012.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T01:13:58.991012.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T08_17_12.906813", "path": ["**/details_harness|winogrande|5_2023-10-23T08-17-12.906813.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T08-17-12.906813.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T01_13_58.991012", "path": ["results_2023-09-02T01:13:58.991012.parquet"]}, {"split": "2023_10_23T08_17_12.906813", "path": ["results_2023-10-23T08-17-12.906813.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T08-17-12.906813.parquet"]}]}]}
|
2023-10-23T07:17:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of acrastt/Bean-3B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model acrastt/Bean-3B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
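The original snippet was stripped from this card, so the following is a minimal sketch based on the loading pattern shown by the other evaluation-run cards in this document; the repository id `open-llm-leaderboard/details_acrastt__Bean-3B` and the `harness_winogrande_5` config name are inferred from the leaderboard's naming convention, not taken from this card:
```python
from datasets import load_dataset

# Repo id and config name follow the leaderboard's naming convention
# (assumed here, since the original code block was stripped from this card).
data = load_dataset("open-llm-leaderboard/details_acrastt__Bean-3B",
                    "harness_winogrande_5",
                    split="train")
```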
## Latest results
These are the latest results from run 2023-10-23T08:17:12.906813 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of acrastt/Bean-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model acrastt/Bean-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T08:17:12.906813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of acrastt/Bean-3B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model acrastt/Bean-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T08:17:12.906813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
17,
31,
165,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of acrastt/Bean-3B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model acrastt/Bean-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T08:17:12.906813(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6c98b68828bc4be87da51eebc0a8483e82dda578
|
# Dataset Card for "dermatology_anomaly_detection_vit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/dermatology_anomaly_detection_vit
|
[
"region:us"
] |
2023-09-02T00:21:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "benign", "1": "malignant"}}}}], "splits": [{"name": "train", "num_bytes": 51521841.0, "num_examples": 656}], "download_size": 51530132, "dataset_size": 51521841.0}}
|
2023-09-02T00:21:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dermatology_anomaly_detection_vit"
More Information needed
|
[
"# Dataset Card for \"dermatology_anomaly_detection_vit\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dermatology_anomaly_detection_vit\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dermatology_anomaly_detection_vit\"\n\nMore Information needed"
] |
01646fe7d7ed965062959bcdda23e5847e73b069
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3_sd2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yzhuang/autotree_automl_house_16H_gosdt_l512_d3_sd2
|
[
"region:us"
] |
2023-09-02T00:35:17+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "input_x", "sequence": {"sequence": "float64"}}, {"name": "input_y", "sequence": {"sequence": "float32"}}, {"name": "rtg", "sequence": "float64"}, {"name": "status", "sequence": {"sequence": "float32"}}, {"name": "split_threshold", "sequence": {"sequence": "float64"}}, {"name": "split_dimension", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 9224800000, "num_examples": 100000}, {"name": "validation", "num_bytes": 922480000, "num_examples": 10000}], "download_size": 3195628749, "dataset_size": 10147280000}}
|
2023-09-02T00:38:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "autotree_automl_house_16H_gosdt_l512_d3_sd2"
More Information needed
|
[
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
[
6,
34
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"autotree_automl_house_16H_gosdt_l512_d3_sd2\"\n\nMore Information needed"
] |
729d4575c84046bad0b6ef6f82008b1000fba25a
|
# Dataset Card for Evaluation run of Devio/test-22B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Devio/test-22B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Devio/test-22B](https://huggingface.co/Devio/test-22B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Devio__test-22B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-16T03:23:54.397499](https://huggingface.co/datasets/open-llm-leaderboard/details_Devio__test-22B/blob/main/results_2023-10-16T03-23-54.397499.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709917,
"f1": 0.03323510906040272,
"f1_stderr": 0.0011026689087019657,
"acc": 0.2903720919378185,
"acc_stderr": 0.0077888780496033275
},
"harness|drop|3": {
"em": 0.002936241610738255,
"em_stderr": 0.0005541113054709917,
"f1": 0.03323510906040272,
"f1_stderr": 0.0011026689087019657
},
"harness|gsm8k|5": {
"acc": 0.0037907505686125853,
"acc_stderr": 0.0016927007401501832
},
"harness|winogrande|5": {
"acc": 0.5769534333070244,
"acc_stderr": 0.013885055359056472
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Devio__test-22B
|
[
"region:us"
] |
2023-09-02T00:39:12+00:00
|
{"pretty_name": "Evaluation run of Devio/test-22B", "dataset_summary": "Dataset automatically created during the evaluation run of model [Devio/test-22B](https://huggingface.co/Devio/test-22B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Devio__test-22B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-16T03:23:54.397499](https://huggingface.co/datasets/open-llm-leaderboard/details_Devio__test-22B/blob/main/results_2023-10-16T03-23-54.397499.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709917,\n \"f1\": 0.03323510906040272,\n \"f1_stderr\": 0.0011026689087019657,\n \"acc\": 0.2903720919378185,\n \"acc_stderr\": 0.0077888780496033275\n },\n \"harness|drop|3\": {\n \"em\": 0.002936241610738255,\n \"em_stderr\": 0.0005541113054709917,\n \"f1\": 0.03323510906040272,\n \"f1_stderr\": 0.0011026689087019657\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \"acc_stderr\": 0.0016927007401501832\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5769534333070244,\n \"acc_stderr\": 0.013885055359056472\n }\n}\n```", "repo_url": "https://huggingface.co/Devio/test-22B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|arc:challenge|25_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_16T03_23_54.397499", "path": ["**/details_harness|drop|3_2023-10-16T03-23-54.397499.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-16T03-23-54.397499.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_16T03_23_54.397499", "path": ["**/details_harness|gsm8k|5_2023-10-16T03-23-54.397499.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-16T03-23-54.397499.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hellaswag|10_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:38:52.675251.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:38:52.675251.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T01:38:52.675251.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T01:38:52.675251.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": 
"2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T01:38:52.675251.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T01:38:52.675251.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_16T03_23_54.397499", "path": ["**/details_harness|winogrande|5_2023-10-16T03-23-54.397499.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-16T03-23-54.397499.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T01_38_52.675251", "path": ["results_2023-09-02T01:38:52.675251.parquet"]}, {"split": "2023_10_16T03_23_54.397499", "path": ["results_2023-10-16T03-23-54.397499.parquet"]}, {"split": "latest", "path": ["results_2023-10-16T03-23-54.397499.parquet"]}]}]}
|
2023-10-16T02:24:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Devio/test-22B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Devio/test-22B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
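A minimal sketch, assuming the details repository follows the leaderboard's usual `details_<org>__<model>` naming convention:

```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the configurations declared in this
# record's metadata; the "train" split always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_Devio__test-22B",
	"harness_winogrande_5",
	split="train")
```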
## Latest results
These are the latest results from run 2023-10-16T03:23:54.397499 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Devio/test-22B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Devio/test-22B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T03:23:54.397499(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Devio/test-22B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Devio/test-22B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-16T03:23:54.397499(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
15,
31,
163,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Devio/test-22B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Devio/test-22B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-16T03:23:54.397499(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
4fece0584a2b452de4ba93d2b8a606c805f393f1
|
# Dataset Card for Evaluation run of venkycs/llama-v2-7b-32kC-Security
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/venkycs/llama-v2-7b-32kC-Security
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [venkycs/llama-v2-7b-32kC-Security](https://huggingface.co/venkycs/llama-v2-7b-32kC-Security) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_venkycs__llama-v2-7b-32kC-Security",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-16T22:12:53.343595](https://huggingface.co/datasets/open-llm-leaderboard/details_venkycs__llama-v2-7b-32kC-Security/blob/main/results_2023-09-16T22-12-53.343595.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.00039210421902984526,
"f1": 0.05811556208053706,
"f1_stderr": 0.0013703157651110908,
"acc": 0.37805421700805364,
"acc_stderr": 0.00898232300653474
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.00039210421902984526,
"f1": 0.05811556208053706,
"f1_stderr": 0.0013703157651110908
},
"harness|gsm8k|5": {
"acc": 0.03866565579984837,
"acc_stderr": 0.005310583162098087
},
"harness|winogrande|5": {
"acc": 0.7174427782162589,
"acc_stderr": 0.012654062850971393
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_venkycs__llama-v2-7b-32kC-Security
|
[
"region:us"
] |
2023-09-02T01:09:30+00:00
|
{"pretty_name": "Evaluation run of venkycs/llama-v2-7b-32kC-Security", "dataset_summary": "Dataset automatically created during the evaluation run of model [venkycs/llama-v2-7b-32kC-Security](https://huggingface.co/venkycs/llama-v2-7b-32kC-Security) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_venkycs__llama-v2-7b-32kC-Security\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-16T22:12:53.343595](https://huggingface.co/datasets/open-llm-leaderboard/details_venkycs__llama-v2-7b-32kC-Security/blob/main/results_2023-09-16T22-12-53.343595.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.00039210421902984526,\n \"f1\": 0.05811556208053706,\n \"f1_stderr\": 0.0013703157651110908,\n \"acc\": 0.37805421700805364,\n \"acc_stderr\": 0.00898232300653474\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.00039210421902984526,\n \"f1\": 0.05811556208053706,\n \"f1_stderr\": 0.0013703157651110908\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03866565579984837,\n \"acc_stderr\": 0.005310583162098087\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7174427782162589,\n \"acc_stderr\": 0.012654062850971393\n }\n}\n```", "repo_url": "https://huggingface.co/venkycs/llama-v2-7b-32kC-Security", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|arc:challenge|25_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_16T22_12_53.343595", "path": ["**/details_harness|drop|3_2023-09-16T22-12-53.343595.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-16T22-12-53.343595.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_16T22_12_53.343595", "path": ["**/details_harness|gsm8k|5_2023-09-16T22-12-53.343595.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-16T22-12-53.343595.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hellaswag|10_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T02:09:05.697790.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T02:09:05.697790.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T02:09:05.697790.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T02:09:05.697790.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T02:09:05.697790.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T02:09:05.697790.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_16T22_12_53.343595", "path": ["**/details_harness|winogrande|5_2023-09-16T22-12-53.343595.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-16T22-12-53.343595.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T02_09_05.697790", "path": ["results_2023-09-02T02:09:05.697790.parquet"]}, {"split": "2023_09_16T22_12_53.343595", "path": ["results_2023-09-16T22-12-53.343595.parquet"]}, {"split": "latest", "path": ["results_2023-09-16T22-12-53.343595.parquet"]}]}]}
|
2023-09-16T21:13:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of venkycs/llama-v2-7b-32kC-Security
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model venkycs/llama-v2-7b-32kC-Security on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-16T22:12:53.343595 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of venkycs/llama-v2-7b-32kC-Security",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model venkycs/llama-v2-7b-32kC-Security on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T22:12:53.343595(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of venkycs/llama-v2-7b-32kC-Security",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model venkycs/llama-v2-7b-32kC-Security on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-16T22:12:53.343595(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of venkycs/llama-v2-7b-32kC-Security## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model venkycs/llama-v2-7b-32kC-Security on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-16T22:12:53.343595(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
46b04d8ed4dbbbe53b43ca313f6f70215e70bc1b
|
# Dataset Card for "mura_dataset_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/mura_dataset_processed
|
[
"region:us"
] |
2023-09-02T01:14:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abnormal", "1": "normal"}}}}], "splits": [{"name": "train", "num_bytes": 235536715.375, "num_examples": 40005}], "download_size": 238614754, "dataset_size": 235536715.375}}
|
2023-09-02T01:14:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mura_dataset_processed"
More Information needed
|
[
"# Dataset Card for \"mura_dataset_processed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mura_dataset_processed\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mura_dataset_processed\"\n\nMore Information needed"
] |
cc92fd792a35e521bd1045cda1b7dfd51957e958
|
# Dataset Card for "mura_dataset_processed_224px"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/mura_dataset_processed_224px
|
[
"region:us"
] |
2023-09-02T01:36:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abnormal", "1": "normal"}}}}], "splits": [{"name": "train", "num_bytes": 997379264.375, "num_examples": 40005}], "download_size": 997532653, "dataset_size": 997379264.375}}
|
2023-09-02T01:37:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mura_dataset_processed_224px"
More Information needed
|
[
"# Dataset Card for \"mura_dataset_processed_224px\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mura_dataset_processed_224px\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mura_dataset_processed_224px\"\n\nMore Information needed"
] |
58e975659a72c3a41c6dd0fbc685afe84f8c3afc
|
# Dataset Card for "mura_dataset_processed_224px_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/mura_dataset_processed_224px_split
|
[
"region:us"
] |
2023-09-02T01:38:43+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abnormal", "1": "normal"}}}}], "splits": [{"name": "train", "num_bytes": 897597368.7549056, "num_examples": 36004}, {"name": "test", "num_bytes": 99746891.24509436, "num_examples": 4001}], "download_size": 997622999, "dataset_size": 997344260.0}}
|
2023-09-02T01:39:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mura_dataset_processed_224px_split"
More Information needed
|
[
"# Dataset Card for \"mura_dataset_processed_224px_split\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mura_dataset_processed_224px_split\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mura_dataset_processed_224px_split\"\n\nMore Information needed"
] |
fb9bd9c6da99237fbd208a833c0015016815cf76
|
# Dataset Card for "mura_dataset_processed_224px_train_val"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
KhalfounMehdi/mura_dataset_processed_224px_train_val
|
[
"region:us"
] |
2023-09-02T02:29:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "abnormal", "1": "normal"}}}}], "splits": [{"name": "train", "num_bytes": 897597369.6548932, "num_examples": 36004}, {"name": "test", "num_bytes": 99746891.34510686, "num_examples": 4001}, {"name": "validation", "num_bytes": 99746891.34510686, "num_examples": 4001}], "download_size": 1097353191, "dataset_size": 1097091152.3451068}}
|
2023-09-02T02:30:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "mura_dataset_processed_224px_train_val"
More Information needed
|
[
"# Dataset Card for \"mura_dataset_processed_224px_train_val\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"mura_dataset_processed_224px_train_val\"\n\nMore Information needed"
] |
[
6,
26
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"mura_dataset_processed_224px_train_val\"\n\nMore Information needed"
] |
1e0db287941004e838aac0e5bf51ca8a7a2bb582
|
# Dataset Card for Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons](https://huggingface.co/NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons",
"harness_winogrande_5",
split="train")
```
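The aggregated numbers shown under "Latest results" below live in the "results" configuration. A minimal sketch for loading them directly (config and split names are taken from this repo's own file listing; "latest" always points at the most recent run):

```python
from datasets import load_dataset

# Load the aggregated "results" configuration; timestamped splits such as
# "2023_10_14T20_19_59.176152" select a specific evaluation run instead.
results = load_dataset(
    "open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons",
    "results",
    split="latest",
)
print(results[0])
```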
## Latest results
These are the [latest results from run 2023-10-14T20:19:59.176152](https://huggingface.co/datasets/open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons/blob/main/results_2023-10-14T20-19-59.176152.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"em": 0.02160234899328859,
"em_stderr": 0.0014888393578850604,
"f1": 0.07773175335570466,
"f1_stderr": 0.0019038640159988432,
"acc": 0.4092104767130632,
"acc_stderr": 0.009856677593330436
},
"harness|drop|3": {
"em": 0.02160234899328859,
"em_stderr": 0.0014888393578850604,
"f1": 0.07773175335570466,
"f1_stderr": 0.0019038640159988432
},
"harness|gsm8k|5": {
"acc": 0.07808946171341925,
"acc_stderr": 0.0073906544811082045
},
"harness|winogrande|5": {
"acc": 0.7403314917127072,
"acc_stderr": 0.012322700705552667
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons
|
[
"region:us"
] |
2023-09-02T02:30:29+00:00
|
{"pretty_name": "Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons", "dataset_summary": "Dataset automatically created during the evaluation run of model [NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons](https://huggingface.co/NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-14T20:19:59.176152](https://huggingface.co/datasets/open-llm-leaderboard/details_NobodyExistsOnTheInternet__GiftedConvo13bLoraNoEcons/blob/main/results_2023-10-14T20-19-59.176152.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.02160234899328859,\n \"em_stderr\": 0.0014888393578850604,\n \"f1\": 0.07773175335570466,\n \"f1_stderr\": 0.0019038640159988432,\n \"acc\": 0.4092104767130632,\n \"acc_stderr\": 0.009856677593330436\n },\n \"harness|drop|3\": {\n \"em\": 0.02160234899328859,\n \"em_stderr\": 0.0014888393578850604,\n \"f1\": 0.07773175335570466,\n \"f1_stderr\": 0.0019038640159988432\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07808946171341925,\n \"acc_stderr\": 0.0073906544811082045\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7403314917127072,\n \"acc_stderr\": 0.012322700705552667\n }\n}\n```", "repo_url": "https://huggingface.co/NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|arc:challenge|25_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_14T20_19_59.176152", "path": ["**/details_harness|drop|3_2023-10-14T20-19-59.176152.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-14T20-19-59.176152.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_14T20_19_59.176152", "path": ["**/details_harness|gsm8k|5_2023-10-14T20-19-59.176152.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-14T20-19-59.176152.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hellaswag|10_2023-09-02T03:30:04.510392.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:30:04.510392.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:30:04.510392.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:30:04.510392.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T03:30:04.510392.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T03:30:04.510392.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T03:30:04.510392.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_14T20_19_59.176152", "path": ["**/details_harness|winogrande|5_2023-10-14T20-19-59.176152.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-14T20-19-59.176152.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T03_30_04.510392", "path": ["results_2023-09-02T03:30:04.510392.parquet"]}, {"split": "2023_10_14T20_19_59.176152", "path": ["results_2023-10-14T20-19-59.176152.parquet"]}, {"split": "latest", "path": ["results_2023-10-14T20-19-59.176152.parquet"]}]}]}
|
2023-10-14T19:20:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-14T20:19:59.176152 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-14T20:19:59.176152(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-14T20:19:59.176152(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
29,
31,
177,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model NobodyExistsOnTheInternet/GiftedConvo13bLoraNoEcons on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-14T20:19:59.176152(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
965bf5e433c664528b11214dcedcd352ba2a4416
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-13b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-llama2-13b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-13b](https://huggingface.co/uukuguy/speechless-llama2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named after the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-llama2-13b",
"harness_winogrande_5",
split="train")
```
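Once loaded, the per-example details behave like any other `datasets.Dataset`. A small sketch for eyeballing them (assumes `pandas` is installed for the DataFrame view; `data` is the Dataset from the snippet above):

```python
# Inspect the detail rows loaded above.
print(data.column_names)   # field names recorded for each evaluated example
df = data.to_pandas()      # convert to a DataFrame for quick inspection
print(df.head())
```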
## Latest results
These are the [latest results from run 2023-12-09T16:49:00.911665](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-13b/blob/main/results_2023-12-09T16-49-00.911665.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5860683187201721,
"acc_stderr": 0.033316954491979946,
"acc_norm": 0.5913412721401082,
"acc_norm_stderr": 0.0340008049750402,
"mc1": 0.3880048959608323,
"mc1_stderr": 0.017058761501347972,
"mc2": 0.5565985023189125,
"mc2_stderr": 0.015435738665954496
},
"harness|arc:challenge|25": {
"acc": 0.5784982935153583,
"acc_stderr": 0.014430197069326023,
"acc_norm": 0.6203071672354948,
"acc_norm_stderr": 0.014182119866974872
},
"harness|hellaswag|10": {
"acc": 0.6212905795658236,
"acc_stderr": 0.004840742206718088,
"acc_norm": 0.8181637124078869,
"acc_norm_stderr": 0.0038492126228151643
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5481481481481482,
"acc_stderr": 0.042992689054808644,
"acc_norm": 0.5481481481481482,
"acc_norm_stderr": 0.042992689054808644
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5657894736842105,
"acc_stderr": 0.0403356566784832,
"acc_norm": 0.5657894736842105,
"acc_norm_stderr": 0.0403356566784832
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.04960449637488583,
"acc_norm": 0.58,
"acc_norm_stderr": 0.04960449637488583
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.630188679245283,
"acc_stderr": 0.02971142188010793,
"acc_norm": 0.630188679245283,
"acc_norm_stderr": 0.02971142188010793
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6458333333333334,
"acc_stderr": 0.039994111357535424,
"acc_norm": 0.6458333333333334,
"acc_norm_stderr": 0.039994111357535424
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.047258156262526045,
"acc_norm": 0.33,
"acc_norm_stderr": 0.047258156262526045
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5491329479768786,
"acc_stderr": 0.037940126746970296,
"acc_norm": 0.5491329479768786,
"acc_norm_stderr": 0.037940126746970296
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319616,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319616
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.49361702127659574,
"acc_stderr": 0.032683358999363366,
"acc_norm": 0.49361702127659574,
"acc_norm_stderr": 0.032683358999363366
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2807017543859649,
"acc_stderr": 0.042270544512322,
"acc_norm": 0.2807017543859649,
"acc_norm_stderr": 0.042270544512322
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192118,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192118
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.37566137566137564,
"acc_stderr": 0.024942368931159798,
"acc_norm": 0.37566137566137564,
"acc_norm_stderr": 0.024942368931159798
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.373015873015873,
"acc_stderr": 0.04325506042017087,
"acc_norm": 0.373015873015873,
"acc_norm_stderr": 0.04325506042017087
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6838709677419355,
"acc_stderr": 0.026450874489042764,
"acc_norm": 0.6838709677419355,
"acc_norm_stderr": 0.026450874489042764
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.703030303030303,
"acc_stderr": 0.03567969772268049,
"acc_norm": 0.703030303030303,
"acc_norm_stderr": 0.03567969772268049
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.030532892233932026,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.030532892233932026
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8393782383419689,
"acc_stderr": 0.02649905770139744,
"acc_norm": 0.8393782383419689,
"acc_norm_stderr": 0.02649905770139744
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6230769230769231,
"acc_stderr": 0.024570975364225995,
"acc_norm": 0.6230769230769231,
"acc_norm_stderr": 0.024570975364225995
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3148148148148148,
"acc_stderr": 0.02831753349606648,
"acc_norm": 0.3148148148148148,
"acc_norm_stderr": 0.02831753349606648
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5966386554621849,
"acc_stderr": 0.031866081214088314,
"acc_norm": 0.5966386554621849,
"acc_norm_stderr": 0.031866081214088314
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31125827814569534,
"acc_stderr": 0.03780445850526733,
"acc_norm": 0.31125827814569534,
"acc_norm_stderr": 0.03780445850526733
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.7908256880733945,
"acc_stderr": 0.01743793717334323,
"acc_norm": 0.7908256880733945,
"acc_norm_stderr": 0.01743793717334323
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.39814814814814814,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.39814814814814814,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7990196078431373,
"acc_stderr": 0.02812597226565438,
"acc_norm": 0.7990196078431373,
"acc_norm_stderr": 0.02812597226565438
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7763713080168776,
"acc_stderr": 0.027123298205229966,
"acc_norm": 0.7763713080168776,
"acc_norm_stderr": 0.027123298205229966
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.031024411740572206,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.031024411740572206
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.648854961832061,
"acc_stderr": 0.04186445163013751,
"acc_norm": 0.648854961832061,
"acc_norm_stderr": 0.04186445163013751
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7272727272727273,
"acc_stderr": 0.04065578140908706,
"acc_norm": 0.7272727272727273,
"acc_norm_stderr": 0.04065578140908706
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.040191074725573483,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.040191074725573483
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6809815950920245,
"acc_stderr": 0.03661997551073836,
"acc_norm": 0.6809815950920245,
"acc_norm_stderr": 0.03661997551073836
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.7475728155339806,
"acc_stderr": 0.04301250399690878,
"acc_norm": 0.7475728155339806,
"acc_norm_stderr": 0.04301250399690878
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.02441494730454368,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.02441494730454368
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.59,
"acc_stderr": 0.04943110704237102,
"acc_norm": 0.59,
"acc_norm_stderr": 0.04943110704237102
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7624521072796935,
"acc_stderr": 0.015218733046150191,
"acc_norm": 0.7624521072796935,
"acc_norm_stderr": 0.015218733046150191
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.025722802200895803,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.025722802200895803
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41899441340782123,
"acc_stderr": 0.01650157930686168,
"acc_norm": 0.41899441340782123,
"acc_norm_stderr": 0.01650157930686168
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6568627450980392,
"acc_stderr": 0.027184498909941616,
"acc_norm": 0.6568627450980392,
"acc_norm_stderr": 0.027184498909941616
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6591639871382636,
"acc_stderr": 0.026920841260776165,
"acc_norm": 0.6591639871382636,
"acc_norm_stderr": 0.026920841260776165
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.654320987654321,
"acc_stderr": 0.026462487777001872,
"acc_norm": 0.654320987654321,
"acc_norm_stderr": 0.026462487777001872
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46808510638297873,
"acc_stderr": 0.029766675075873866,
"acc_norm": 0.46808510638297873,
"acc_norm_stderr": 0.029766675075873866
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4367666232073012,
"acc_stderr": 0.012667701919603654,
"acc_norm": 0.4367666232073012,
"acc_norm_stderr": 0.012667701919603654
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5808823529411765,
"acc_stderr": 0.029972807170464622,
"acc_norm": 0.5808823529411765,
"acc_norm_stderr": 0.029972807170464622
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5669934640522876,
"acc_stderr": 0.02004544247332422,
"acc_norm": 0.5669934640522876,
"acc_norm_stderr": 0.02004544247332422
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6775510204081633,
"acc_stderr": 0.029923100563683906,
"acc_norm": 0.6775510204081633,
"acc_norm_stderr": 0.029923100563683906
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7562189054726368,
"acc_stderr": 0.030360490154014645,
"acc_norm": 0.7562189054726368,
"acc_norm_stderr": 0.030360490154014645
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.83,
"acc_stderr": 0.0377525168068637,
"acc_norm": 0.83,
"acc_norm_stderr": 0.0377525168068637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.4879518072289157,
"acc_stderr": 0.03891364495835821,
"acc_norm": 0.4879518072289157,
"acc_norm_stderr": 0.03891364495835821
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.031885780176863984,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.031885780176863984
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3880048959608323,
"mc1_stderr": 0.017058761501347972,
"mc2": 0.5565985023189125,
"mc2_stderr": 0.015435738665954496
},
"harness|winogrande|5": {
"acc": 0.7600631412786109,
"acc_stderr": 0.012002078629485739
},
"harness|gsm8k|5": {
"acc": 0.33965125094768767,
"acc_stderr": 0.013045045067665269
}
}
```
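
For instance, the per-task averages reported on the leaderboard can be recomputed directly from this dictionary. The following minimal sketch computes the mean `acc` over the MMLU (`hendrycksTest`) subtasks; only two entries from the block above are reproduced inline so the snippet stays self-contained, but the same loop works on the full dictionary.

```python
# A minimal sketch: recompute the mean MMLU (hendrycksTest) accuracy from the
# dictionary above. Only two of the subtask entries are inlined here so the
# snippet stays self-contained; with the full dictionary the loop is identical.
results = {
    "harness|hendrycksTest-abstract_algebra|5": {"acc": 0.34},
    "harness|hendrycksTest-anatomy|5": {"acc": 0.5481481481481482},
}

mmlu_scores = [
    metrics["acc"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"{len(mmlu_scores)} MMLU subtasks, mean acc = {sum(mmlu_scores) / len(mmlu_scores):.4f}")
```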
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-llama2-13b
|
[
"region:us"
] |
2023-09-02T02:45:47+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-llama2-13b", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-llama2-13b](https://huggingface.co/uukuguy/speechless-llama2-13b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-llama2-13b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-09T16:49:00.911665](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-llama2-13b/blob/main/results_2023-12-09T16-49-00.911665.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5860683187201721,\n \"acc_stderr\": 0.033316954491979946,\n \"acc_norm\": 0.5913412721401082,\n \"acc_norm_stderr\": 0.0340008049750402,\n \"mc1\": 0.3880048959608323,\n \"mc1_stderr\": 0.017058761501347972,\n \"mc2\": 0.5565985023189125,\n \"mc2_stderr\": 0.015435738665954496\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5784982935153583,\n \"acc_stderr\": 0.014430197069326023,\n \"acc_norm\": 0.6203071672354948,\n \"acc_norm_stderr\": 0.014182119866974872\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6212905795658236,\n \"acc_stderr\": 0.004840742206718088,\n \"acc_norm\": 0.8181637124078869,\n \"acc_norm_stderr\": 0.0038492126228151643\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5481481481481482,\n \"acc_stderr\": 0.042992689054808644,\n \"acc_norm\": 0.5481481481481482,\n \"acc_norm_stderr\": 0.042992689054808644\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5657894736842105,\n \"acc_stderr\": 0.0403356566784832,\n \"acc_norm\": 0.5657894736842105,\n \"acc_norm_stderr\": 0.0403356566784832\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.04960449637488583,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.04960449637488583\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.630188679245283,\n \"acc_stderr\": 0.02971142188010793,\n \"acc_norm\": 0.630188679245283,\n \"acc_norm_stderr\": 0.02971142188010793\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6458333333333334,\n \"acc_stderr\": 0.039994111357535424,\n \"acc_norm\": 0.6458333333333334,\n \"acc_norm_stderr\": 0.039994111357535424\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n 
\"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5491329479768786,\n \"acc_stderr\": 0.037940126746970296,\n \"acc_norm\": 0.5491329479768786,\n \"acc_norm_stderr\": 0.037940126746970296\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319616,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.49361702127659574,\n \"acc_stderr\": 0.032683358999363366,\n \"acc_norm\": 0.49361702127659574,\n \"acc_norm_stderr\": 0.032683358999363366\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2807017543859649,\n \"acc_stderr\": 0.042270544512322,\n \"acc_norm\": 0.2807017543859649,\n \"acc_norm_stderr\": 0.042270544512322\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192118,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192118\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37566137566137564,\n \"acc_stderr\": 0.024942368931159798,\n \"acc_norm\": 0.37566137566137564,\n \"acc_norm_stderr\": 0.024942368931159798\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.373015873015873,\n \"acc_stderr\": 0.04325506042017087,\n \"acc_norm\": 0.373015873015873,\n \"acc_norm_stderr\": 0.04325506042017087\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6838709677419355,\n \"acc_stderr\": 0.026450874489042764,\n \"acc_norm\": 0.6838709677419355,\n \"acc_norm_stderr\": 0.026450874489042764\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.703030303030303,\n \"acc_stderr\": 0.03567969772268049,\n \"acc_norm\": 0.703030303030303,\n \"acc_norm_stderr\": 0.03567969772268049\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.030532892233932026,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.030532892233932026\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8393782383419689,\n \"acc_stderr\": 0.02649905770139744,\n \"acc_norm\": 0.8393782383419689,\n \"acc_norm_stderr\": 0.02649905770139744\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6230769230769231,\n \"acc_stderr\": 0.024570975364225995,\n \"acc_norm\": 0.6230769230769231,\n \"acc_norm_stderr\": 0.024570975364225995\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.02831753349606648,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.02831753349606648\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5966386554621849,\n \"acc_stderr\": 0.031866081214088314,\n \"acc_norm\": 0.5966386554621849,\n \"acc_norm_stderr\": 0.031866081214088314\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526733,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526733\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7908256880733945,\n \"acc_stderr\": 0.01743793717334323,\n \"acc_norm\": 0.7908256880733945,\n \"acc_norm_stderr\": 0.01743793717334323\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.39814814814814814,\n \"acc_stderr\": 0.033384734032074016,\n \"acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.033384734032074016\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7990196078431373,\n \"acc_stderr\": 0.02812597226565438,\n \"acc_norm\": 0.7990196078431373,\n \"acc_norm_stderr\": 0.02812597226565438\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7763713080168776,\n \"acc_stderr\": 0.027123298205229966,\n \"acc_norm\": 0.7763713080168776,\n \"acc_norm_stderr\": 0.027123298205229966\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.031024411740572206,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.031024411740572206\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.648854961832061,\n \"acc_stderr\": 0.04186445163013751,\n \"acc_norm\": 0.648854961832061,\n \"acc_norm_stderr\": 0.04186445163013751\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908706,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908706\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.040191074725573483,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.040191074725573483\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6809815950920245,\n \"acc_stderr\": 0.03661997551073836,\n \"acc_norm\": 0.6809815950920245,\n \"acc_norm_stderr\": 0.03661997551073836\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7475728155339806,\n \"acc_stderr\": 0.04301250399690878,\n \"acc_norm\": 0.7475728155339806,\n \"acc_norm_stderr\": 0.04301250399690878\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.02441494730454368,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.02441494730454368\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.59,\n \"acc_stderr\": 0.04943110704237102,\n \"acc_norm\": 0.59,\n \"acc_norm_stderr\": 0.04943110704237102\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7624521072796935,\n \"acc_stderr\": 0.015218733046150191,\n \"acc_norm\": 
0.7624521072796935,\n \"acc_norm_stderr\": 0.015218733046150191\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.025722802200895803,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.025722802200895803\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41899441340782123,\n \"acc_stderr\": 0.01650157930686168,\n \"acc_norm\": 0.41899441340782123,\n \"acc_norm_stderr\": 0.01650157930686168\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6568627450980392,\n \"acc_stderr\": 0.027184498909941616,\n \"acc_norm\": 0.6568627450980392,\n \"acc_norm_stderr\": 0.027184498909941616\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6591639871382636,\n \"acc_stderr\": 0.026920841260776165,\n \"acc_norm\": 0.6591639871382636,\n \"acc_norm_stderr\": 0.026920841260776165\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.654320987654321,\n \"acc_stderr\": 0.026462487777001872,\n \"acc_norm\": 0.654320987654321,\n \"acc_norm_stderr\": 0.026462487777001872\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46808510638297873,\n \"acc_stderr\": 0.029766675075873866,\n \"acc_norm\": 0.46808510638297873,\n \"acc_norm_stderr\": 0.029766675075873866\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4367666232073012,\n \"acc_stderr\": 0.012667701919603654,\n \"acc_norm\": 0.4367666232073012,\n \"acc_norm_stderr\": 0.012667701919603654\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5808823529411765,\n \"acc_stderr\": 0.029972807170464622,\n \"acc_norm\": 0.5808823529411765,\n \"acc_norm_stderr\": 0.029972807170464622\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5669934640522876,\n \"acc_stderr\": 0.02004544247332422,\n \"acc_norm\": 0.5669934640522876,\n \"acc_norm_stderr\": 0.02004544247332422\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6775510204081633,\n \"acc_stderr\": 0.029923100563683906,\n \"acc_norm\": 0.6775510204081633,\n \"acc_norm_stderr\": 0.029923100563683906\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7562189054726368,\n \"acc_stderr\": 0.030360490154014645,\n \"acc_norm\": 0.7562189054726368,\n \"acc_norm_stderr\": 0.030360490154014645\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.83,\n \"acc_stderr\": 0.0377525168068637,\n \"acc_norm\": 0.83,\n \"acc_norm_stderr\": 0.0377525168068637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4879518072289157,\n \"acc_stderr\": 0.03891364495835821,\n \"acc_norm\": 0.4879518072289157,\n \"acc_norm_stderr\": 0.03891364495835821\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.031885780176863984,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.031885780176863984\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3880048959608323,\n \"mc1_stderr\": 0.017058761501347972,\n \"mc2\": 0.5565985023189125,\n \"mc2_stderr\": 0.015435738665954496\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7600631412786109,\n \"acc_stderr\": 0.012002078629485739\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.33965125094768767,\n \"acc_stderr\": 0.013045045067665269\n }\n}\n```", "repo_url": 
"https://huggingface.co/uukuguy/speechless-llama2-13b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|arc:challenge|25_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|arc:challenge|25_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|arc:challenge|25_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|arc:challenge|25_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_15T22_09_28.481990", "path": ["**/details_harness|drop|3_2023-10-15T22-09-28.481990.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-15T22-09-28.481990.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_15T22_09_28.481990", "path": ["**/details_harness|gsm8k|5_2023-10-15T22-09-28.481990.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|gsm8k|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hellaswag|10_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hellaswag|10_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hellaswag|10_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hellaswag|10_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:45:23.206143.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:45:23.206143.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T03:45:23.206143.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T15:58:18.299905.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T15:58:18.299905.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-12T13-57-07.476950.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-12T13-57-07.476950.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-12T13-57-07.476950.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-49-00.911665.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-49-00.911665.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-49-00.911665.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-09T16-49-00.911665.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": 
["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": 
["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-09T16-49-00.911665.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": 
["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": 
"2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": 
["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_15T22_09_28.481990", "path": ["**/details_harness|winogrande|5_2023-10-15T22-09-28.481990.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["**/details_harness|winogrande|5_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-09T16-49-00.911665.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T03_45_23.206143", "path": ["results_2023-09-02T03:45:23.206143.parquet"]}, {"split": "2023_09_02T15_58_18.299905", "path": ["results_2023-09-02T15:58:18.299905.parquet"]}, {"split": "2023_09_12T13_57_07.476950", "path": ["results_2023-09-12T13-57-07.476950.parquet"]}, {"split": "2023_10_15T22_09_28.481990", "path": 
["results_2023-10-15T22-09-28.481990.parquet"]}, {"split": "2023_12_09T16_49_00.911665", "path": ["results_2023-12-09T16-49-00.911665.parquet"]}, {"split": "latest", "path": ["results_2023-12-09T16-49-00.911665.parquet"]}]}]}
|
2023-12-09T16:52:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-llama2-13b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-llama2-13b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
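A minimal sketch of that loading call (the code block itself was stripped from this summary; the repository id `open-llm-leaderboard/details_uukuguy__speechless-llama2-13b` and the `harness_winogrande_5` configuration are inferred from the naming pattern of the other cards in this dump, not stated here):

```python
from datasets import load_dataset

# Assumed repository id, following the details_<org>__<model> pattern used by these cards.
data = load_dataset(
    "open-llm-leaderboard/details_uukuguy__speechless-llama2-13b",
    "harness_winogrande_5",  # any of the 64 configurations can be used here
    split="train",           # "train" always points at the latest run
)
```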
## Latest results
These are the latest results from run 2023-12-09T16:49:00.911665 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-09T16:49:00.911665(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-llama2-13b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-09T16:49:00.911665(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-llama2-13b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-llama2-13b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 5 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-09T16:49:00.911665(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c389881266fdc64079cae5ef21ee795c6a4ce5d9
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj](https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj",
"harness_winogrande_5",
split="train")
```
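As a complementary sketch, you can also enumerate the available configurations and pull the aggregated metrics of the most recent run; `get_dataset_config_names` is a standard `datasets` utility, and the `"results"`/`"latest"` layout is assumed to match the split pattern shown for the other configurations in this card's metadata:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj"

# List every evaluation configuration stored in this repository (64 in total).
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# The "results" configuration aggregates all metrics; the "latest" split tracks the newest run.
results = load_dataset(repo, "results", split="latest")
print(results[0])
```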
## Latest results
These are the [latest results from run 2023-10-19T01:12:50.738411](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj/blob/main/results_2023-10-19T01-12-50.738411.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08137583892617449,
"em_stderr": 0.0027999889835206245,
"f1": 0.13315016778523447,
"f1_stderr": 0.0029419319985989354,
"acc": 0.4454347335673805,
"acc_stderr": 0.010395126943573653
},
"harness|drop|3": {
"em": 0.08137583892617449,
"em_stderr": 0.0027999889835206245,
"f1": 0.13315016778523447,
"f1_stderr": 0.0029419319985989354
},
"harness|gsm8k|5": {
"acc": 0.12054586808188021,
"acc_stderr": 0.008968608285309067
},
"harness|winogrande|5": {
"acc": 0.7703235990528808,
"acc_stderr": 0.011821645601838238
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj
|
[
"region:us"
] |
2023-09-02T03:14:22+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj](https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-19T01:12:50.738411](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj/blob/main/results_2023-10-19T01-12-50.738411.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08137583892617449,\n \"em_stderr\": 0.0027999889835206245,\n \"f1\": 0.13315016778523447,\n \"f1_stderr\": 0.0029419319985989354,\n \"acc\": 0.4454347335673805,\n \"acc_stderr\": 0.010395126943573653\n },\n \"harness|drop|3\": {\n \"em\": 0.08137583892617449,\n \"em_stderr\": 0.0027999889835206245,\n \"f1\": 0.13315016778523447,\n \"f1_stderr\": 0.0029419319985989354\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12054586808188021,\n \"acc_stderr\": 0.008968608285309067\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7703235990528808,\n \"acc_stderr\": 0.011821645601838238\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|arc:challenge|25_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_19T01_12_50.738411", "path": ["**/details_harness|drop|3_2023-10-19T01-12-50.738411.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-19T01-12-50.738411.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_19T01_12_50.738411", "path": ["**/details_harness|gsm8k|5_2023-10-19T01-12-50.738411.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-19T01-12-50.738411.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": 
["**/details_harness|hellaswag|10_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T04:13:58.324897.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T04:13:58.324897.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T04:13:58.324897.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-02T04:13:58.324897.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T04:13:58.324897.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T04:13:58.324897.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T04:13:58.324897.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_19T01_12_50.738411", "path": ["**/details_harness|winogrande|5_2023-10-19T01-12-50.738411.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-19T01-12-50.738411.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T04_13_58.324897", "path": ["results_2023-09-02T04:13:58.324897.parquet"]}, {"split": "2023_10_19T01_12_50.738411", "path": ["results_2023-10-19T01-12-50.738411.parquet"]}, {"split": "latest", "path": ["results_2023-10-19T01-12-50.738411.parquet"]}]}]}
|
2023-10-19T00:13:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
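```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj",
	"harness_winogrande_5",
	split="train")
```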
## Latest results
These are the latest results from run 2023-10-19T01:12:50.738411 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
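```python
{
    "all": {
        "em": 0.08137583892617449,
        "em_stderr": 0.0027999889835206245,
        "f1": 0.13315016778523447,
        "f1_stderr": 0.0029419319985989354,
        "acc": 0.4454347335673805,
        "acc_stderr": 0.010395126943573653
    },
    "harness|drop|3": {
        "em": 0.08137583892617449,
        "em_stderr": 0.0027999889835206245,
        "f1": 0.13315016778523447,
        "f1_stderr": 0.0029419319985989354
    },
    "harness|gsm8k|5": {
        "acc": 0.12054586808188021,
        "acc_stderr": 0.008968608285309067
    },
    "harness|winogrande|5": {
        "acc": 0.7703235990528808,
        "acc_stderr": 0.011821645601838238
    }
}
```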
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-19T01:12:50.738411(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-19T01:12:50.738411(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
41,
31,
189,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-gate_up_down_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-19T01:12:50.738411(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
47aecdb6ee4f5bea1b51068d9c0dd050d7c52dfa
|
# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v4
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/yeontaek/llama-2-70B-ensemble-v4
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [yeontaek/llama-2-70B-ensemble-v4](https://huggingface.co/yeontaek/llama-2-70B-ensemble-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-02T05:22:36.145219](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4/blob/main/results_2023-09-02T05%3A22%3A36.145219.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6965991479686341,
"acc_stderr": 0.03128858284011723,
"acc_norm": 0.7002900066329676,
"acc_norm_stderr": 0.03126026366396146,
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6260206771095533,
"mc2_stderr": 0.014926739687315194
},
"harness|arc:challenge|25": {
"acc": 0.6808873720136519,
"acc_stderr": 0.013621696119173302,
"acc_norm": 0.7090443686006825,
"acc_norm_stderr": 0.01327307786590759
},
"harness|hellaswag|10": {
"acc": 0.6838279227245568,
"acc_stderr": 0.0046403067196280675,
"acc_norm": 0.8734315873332006,
"acc_norm_stderr": 0.00331809357970292
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.35,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.35,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5925925925925926,
"acc_stderr": 0.042446332383532286,
"acc_norm": 0.5925925925925926,
"acc_norm_stderr": 0.042446332383532286
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7763157894736842,
"acc_stderr": 0.033911609343436025,
"acc_norm": 0.7763157894736842,
"acc_norm_stderr": 0.033911609343436025
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7433962264150943,
"acc_stderr": 0.026880647889051982,
"acc_norm": 0.7433962264150943,
"acc_norm_stderr": 0.026880647889051982
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.8541666666666666,
"acc_stderr": 0.02951424596429177,
"acc_norm": 0.8541666666666666,
"acc_norm_stderr": 0.02951424596429177
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.45,
"acc_stderr": 0.049999999999999996,
"acc_norm": 0.45,
"acc_norm_stderr": 0.049999999999999996
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04690650298201943,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04690650298201943
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909281,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909281
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6680851063829787,
"acc_stderr": 0.03078373675774565,
"acc_norm": 0.6680851063829787,
"acc_norm_stderr": 0.03078373675774565
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.45614035087719296,
"acc_stderr": 0.04685473041907789,
"acc_norm": 0.45614035087719296,
"acc_norm_stderr": 0.04685473041907789
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555498,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555498
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4708994708994709,
"acc_stderr": 0.025707658614154954,
"acc_norm": 0.4708994708994709,
"acc_norm_stderr": 0.025707658614154954
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5238095238095238,
"acc_stderr": 0.04467062628403273,
"acc_norm": 0.5238095238095238,
"acc_norm_stderr": 0.04467062628403273
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8225806451612904,
"acc_stderr": 0.021732540689329276,
"acc_norm": 0.8225806451612904,
"acc_norm_stderr": 0.021732540689329276
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.78,
"acc_stderr": 0.04163331998932262,
"acc_norm": 0.78,
"acc_norm_stderr": 0.04163331998932262
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8606060606060606,
"acc_stderr": 0.027045948825865376,
"acc_norm": 0.8606060606060606,
"acc_norm_stderr": 0.027045948825865376
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8737373737373737,
"acc_stderr": 0.023664359402880236,
"acc_norm": 0.8737373737373737,
"acc_norm_stderr": 0.023664359402880236
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9481865284974094,
"acc_stderr": 0.01599622932024412,
"acc_norm": 0.9481865284974094,
"acc_norm_stderr": 0.01599622932024412
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.7205128205128205,
"acc_stderr": 0.022752388839776826,
"acc_norm": 0.7205128205128205,
"acc_norm_stderr": 0.022752388839776826
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.02866120111652459,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.02866120111652459
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7941176470588235,
"acc_stderr": 0.02626502460827588,
"acc_norm": 0.7941176470588235,
"acc_norm_stderr": 0.02626502460827588
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.46357615894039733,
"acc_stderr": 0.04071636065944215,
"acc_norm": 0.46357615894039733,
"acc_norm_stderr": 0.04071636065944215
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8880733944954129,
"acc_stderr": 0.013517352714958792,
"acc_norm": 0.8880733944954129,
"acc_norm_stderr": 0.013517352714958792
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.6064814814814815,
"acc_stderr": 0.03331747876370312,
"acc_norm": 0.6064814814814815,
"acc_norm_stderr": 0.03331747876370312
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8970588235294118,
"acc_stderr": 0.02132833757080437,
"acc_norm": 0.8970588235294118,
"acc_norm_stderr": 0.02132833757080437
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.890295358649789,
"acc_stderr": 0.020343400734868834,
"acc_norm": 0.890295358649789,
"acc_norm_stderr": 0.020343400734868834
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.8161434977578476,
"acc_stderr": 0.025998379092356513,
"acc_norm": 0.8161434977578476,
"acc_norm_stderr": 0.025998379092356513
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.8320610687022901,
"acc_stderr": 0.032785485373431386,
"acc_norm": 0.8320610687022901,
"acc_norm_stderr": 0.032785485373431386
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.859504132231405,
"acc_stderr": 0.031722334260021585,
"acc_norm": 0.859504132231405,
"acc_norm_stderr": 0.031722334260021585
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037182,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037182
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.8159509202453987,
"acc_stderr": 0.030446777687971726,
"acc_norm": 0.8159509202453987,
"acc_norm_stderr": 0.030446777687971726
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5892857142857143,
"acc_stderr": 0.04669510663875191,
"acc_norm": 0.5892857142857143,
"acc_norm_stderr": 0.04669510663875191
},
"harness|hendrycksTest-management|5": {
"acc": 0.8155339805825242,
"acc_stderr": 0.03840423627288276,
"acc_norm": 0.8155339805825242,
"acc_norm_stderr": 0.03840423627288276
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8931623931623932,
"acc_stderr": 0.02023714900899093,
"acc_norm": 0.8931623931623932,
"acc_norm_stderr": 0.02023714900899093
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8659003831417624,
"acc_stderr": 0.012185528166499978,
"acc_norm": 0.8659003831417624,
"acc_norm_stderr": 0.012185528166499978
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.6502793296089385,
"acc_stderr": 0.015949308790233645,
"acc_norm": 0.6502793296089385,
"acc_norm_stderr": 0.015949308790233645
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7418300653594772,
"acc_stderr": 0.025058503316958154,
"acc_norm": 0.7418300653594772,
"acc_norm_stderr": 0.025058503316958154
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7556270096463023,
"acc_stderr": 0.024406162094668893,
"acc_norm": 0.7556270096463023,
"acc_norm_stderr": 0.024406162094668893
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.8240740740740741,
"acc_stderr": 0.021185893615225163,
"acc_norm": 0.8240740740740741,
"acc_norm_stderr": 0.021185893615225163
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.5673758865248227,
"acc_stderr": 0.029555454236778845,
"acc_norm": 0.5673758865248227,
"acc_norm_stderr": 0.029555454236778845
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.5919165580182529,
"acc_stderr": 0.012552598958563671,
"acc_norm": 0.5919165580182529,
"acc_norm_stderr": 0.012552598958563671
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7095588235294118,
"acc_stderr": 0.02757646862274054,
"acc_norm": 0.7095588235294118,
"acc_norm_stderr": 0.02757646862274054
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.017479487001364764,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.017479487001364764
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7090909090909091,
"acc_stderr": 0.04350271442923243,
"acc_norm": 0.7090909090909091,
"acc_norm_stderr": 0.04350271442923243
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7795918367346939,
"acc_stderr": 0.02653704531214529,
"acc_norm": 0.7795918367346939,
"acc_norm_stderr": 0.02653704531214529
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8756218905472637,
"acc_stderr": 0.023335401790166327,
"acc_norm": 0.8756218905472637,
"acc_norm_stderr": 0.023335401790166327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.03487350880197769,
"acc_norm": 0.86,
"acc_norm_stderr": 0.03487350880197769
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070813,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070813
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6260206771095533,
"mc2_stderr": 0.014926739687315194
}
}
```
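For convenience, the aggregated numbers above can also be loaded programmatically. The sketch below assumes this repo follows the same layout as its sibling detail datasets, with a "results" config exposing a "latest" split:

```python
from datasets import load_dataset

# Sketch under the assumption that this repo, like sibling
# open-llm-leaderboard detail datasets, exposes a "results" config
# with a "latest" split containing the aggregated metrics.
results = load_dataset(
    "open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4",
    "results",
    split="latest",
)
print(results[0])  # the row holding the aggregated metrics shown above
```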
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4
|
[
"region:us"
] |
2023-09-02T04:22:59+00:00
|
{"pretty_name": "Evaluation run of yeontaek/llama-2-70B-ensemble-v4", "dataset_summary": "Dataset automatically created during the evaluation run of model [yeontaek/llama-2-70B-ensemble-v4](https://huggingface.co/yeontaek/llama-2-70B-ensemble-v4) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-02T05:22:36.145219](https://huggingface.co/datasets/open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4/blob/main/results_2023-09-02T05%3A22%3A36.145219.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6965991479686341,\n \"acc_stderr\": 0.03128858284011723,\n \"acc_norm\": 0.7002900066329676,\n \"acc_norm_stderr\": 0.03126026366396146,\n \"mc1\": 0.4418604651162791,\n \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6260206771095533,\n \"mc2_stderr\": 0.014926739687315194\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6808873720136519,\n \"acc_stderr\": 0.013621696119173302,\n \"acc_norm\": 0.7090443686006825,\n \"acc_norm_stderr\": 0.01327307786590759\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6838279227245568,\n \"acc_stderr\": 0.0046403067196280675,\n \"acc_norm\": 0.8734315873332006,\n \"acc_norm_stderr\": 0.00331809357970292\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.042446332383532286,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.042446332383532286\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7763157894736842,\n \"acc_stderr\": 0.033911609343436025,\n \"acc_norm\": 0.7763157894736842,\n \"acc_norm_stderr\": 0.033911609343436025\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7433962264150943,\n \"acc_stderr\": 0.026880647889051982,\n \"acc_norm\": 0.7433962264150943,\n \"acc_norm_stderr\": 0.026880647889051982\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.8541666666666666,\n \"acc_stderr\": 0.02951424596429177,\n \"acc_norm\": 0.8541666666666666,\n \"acc_norm_stderr\": 0.02951424596429177\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 
0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201943,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201943\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6680851063829787,\n \"acc_stderr\": 0.03078373675774565,\n \"acc_norm\": 0.6680851063829787,\n \"acc_norm_stderr\": 0.03078373675774565\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.45614035087719296,\n \"acc_stderr\": 0.04685473041907789,\n \"acc_norm\": 0.45614035087719296,\n \"acc_norm_stderr\": 0.04685473041907789\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555498,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555498\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4708994708994709,\n \"acc_stderr\": 0.025707658614154954,\n \"acc_norm\": 0.4708994708994709,\n \"acc_norm_stderr\": 0.025707658614154954\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5238095238095238,\n \"acc_stderr\": 0.04467062628403273,\n \"acc_norm\": 0.5238095238095238,\n \"acc_norm_stderr\": 0.04467062628403273\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8225806451612904,\n \"acc_stderr\": 0.021732540689329276,\n \"acc_norm\": 0.8225806451612904,\n \"acc_norm_stderr\": 0.021732540689329276\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932262,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932262\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8606060606060606,\n \"acc_stderr\": 0.027045948825865376,\n \"acc_norm\": 0.8606060606060606,\n \"acc_norm_stderr\": 0.027045948825865376\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8737373737373737,\n \"acc_stderr\": 0.023664359402880236,\n \"acc_norm\": 0.8737373737373737,\n \"acc_norm_stderr\": 0.023664359402880236\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9481865284974094,\n \"acc_stderr\": 0.01599622932024412,\n \"acc_norm\": 0.9481865284974094,\n \"acc_norm_stderr\": 0.01599622932024412\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.7205128205128205,\n \"acc_stderr\": 0.022752388839776826,\n \"acc_norm\": 0.7205128205128205,\n \"acc_norm_stderr\": 0.022752388839776826\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.02866120111652459,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.02866120111652459\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.02626502460827588,\n \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.02626502460827588\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.46357615894039733,\n \"acc_stderr\": 0.04071636065944215,\n \"acc_norm\": 0.46357615894039733,\n \"acc_norm_stderr\": 0.04071636065944215\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8880733944954129,\n \"acc_stderr\": 0.013517352714958792,\n \"acc_norm\": 0.8880733944954129,\n \"acc_norm_stderr\": 0.013517352714958792\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.6064814814814815,\n \"acc_stderr\": 0.03331747876370312,\n \"acc_norm\": 0.6064814814814815,\n \"acc_norm_stderr\": 0.03331747876370312\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8970588235294118,\n \"acc_stderr\": 0.02132833757080437,\n \"acc_norm\": 0.8970588235294118,\n \"acc_norm_stderr\": 0.02132833757080437\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.890295358649789,\n \"acc_stderr\": 0.020343400734868834,\n \"acc_norm\": 0.890295358649789,\n \"acc_norm_stderr\": 0.020343400734868834\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.8161434977578476,\n \"acc_stderr\": 0.025998379092356513,\n \"acc_norm\": 0.8161434977578476,\n \"acc_norm_stderr\": 0.025998379092356513\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8320610687022901,\n \"acc_stderr\": 0.032785485373431386,\n \"acc_norm\": 0.8320610687022901,\n \"acc_norm_stderr\": 0.032785485373431386\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.859504132231405,\n \"acc_stderr\": 0.031722334260021585,\n \"acc_norm\": 0.859504132231405,\n \"acc_norm_stderr\": 0.031722334260021585\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037182,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037182\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.8159509202453987,\n \"acc_stderr\": 0.030446777687971726,\n \"acc_norm\": 0.8159509202453987,\n \"acc_norm_stderr\": 0.030446777687971726\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5892857142857143,\n \"acc_stderr\": 0.04669510663875191,\n \"acc_norm\": 0.5892857142857143,\n \"acc_norm_stderr\": 0.04669510663875191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8155339805825242,\n \"acc_stderr\": 0.03840423627288276,\n \"acc_norm\": 0.8155339805825242,\n \"acc_norm_stderr\": 0.03840423627288276\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n \"acc_stderr\": 0.02023714900899093,\n \"acc_norm\": 0.8931623931623932,\n \"acc_norm_stderr\": 0.02023714900899093\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8659003831417624,\n \"acc_stderr\": 0.012185528166499978,\n \"acc_norm\": 0.8659003831417624,\n \"acc_norm_stderr\": 0.012185528166499978\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.6502793296089385,\n \"acc_stderr\": 0.015949308790233645,\n \"acc_norm\": 0.6502793296089385,\n \"acc_norm_stderr\": 0.015949308790233645\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7418300653594772,\n \"acc_stderr\": 0.025058503316958154,\n \"acc_norm\": 0.7418300653594772,\n \"acc_norm_stderr\": 0.025058503316958154\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7556270096463023,\n \"acc_stderr\": 0.024406162094668893,\n \"acc_norm\": 0.7556270096463023,\n \"acc_norm_stderr\": 0.024406162094668893\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.021185893615225163,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.021185893615225163\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5673758865248227,\n \"acc_stderr\": 0.029555454236778845,\n \"acc_norm\": 0.5673758865248227,\n \"acc_norm_stderr\": 0.029555454236778845\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.5919165580182529,\n \"acc_stderr\": 0.012552598958563671,\n \"acc_norm\": 0.5919165580182529,\n \"acc_norm_stderr\": 0.012552598958563671\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7095588235294118,\n \"acc_stderr\": 0.02757646862274054,\n \"acc_norm\": 0.7095588235294118,\n \"acc_norm_stderr\": 0.02757646862274054\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.017479487001364764,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.017479487001364764\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7795918367346939,\n \"acc_stderr\": 0.02653704531214529,\n \"acc_norm\": 0.7795918367346939,\n \"acc_norm_stderr\": 0.02653704531214529\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8756218905472637,\n \"acc_stderr\": 0.023335401790166327,\n \"acc_norm\": 0.8756218905472637,\n \"acc_norm_stderr\": 0.023335401790166327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070813,\n \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070813\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4418604651162791,\n \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6260206771095533,\n \"mc2_stderr\": 0.014926739687315194\n }\n}\n```", "repo_url": "https://huggingface.co/yeontaek/llama-2-70B-ensemble-v4", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email 
protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|arc:challenge|25_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hellaswag|10_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T05:22:36.145219.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T05:22:36.145219.parquet", 
"**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T05:22:36.145219.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T05:22:36.145219.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T05:22:36.145219.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T05_22_36.145219", "path": ["results_2023-09-02T05:22:36.145219.parquet"]}, {"split": "latest", "path": ["results_2023-09-02T05:22:36.145219.parquet"]}]}]}
|
2023-09-02T04:23:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v4
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v4 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
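A minimal sketch (the repository name below follows the standard Open LLM Leaderboard naming pattern for this model, and `harness_arc_challenge_25` is one of the configs listed in this card's metadata):

```python
from datasets import load_dataset

# Load one evaluation config; the "latest" split always points to the most recent run.
data = load_dataset("open-llm-leaderboard/details_yeontaek__llama-2-70B-ensemble-v4",
                    "harness_arc_challenge_25",
                    split="latest")
```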
## Latest results
These are the latest results from run 2023-09-02T05:22:36.145219 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-02T05:22:36.145219(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v4",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-02T05:22:36.145219(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of yeontaek/llama-2-70B-ensemble-v4## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model yeontaek/llama-2-70B-ensemble-v4 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-02T05:22:36.145219(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
98e0d5d14f06e8fade79f55650c3d8f98dfca9c3
|
Translate English to Indonesian
|
vhtran/id-en
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-09-02T05:32:44+00:00
|
{"license": "cc-by-4.0"}
|
2023-09-02T05:34:49+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #region-us
|
Translate English to Indonesian
|
[] |
[
"TAGS\n#license-cc-by-4.0 #region-us \n"
] |
[
15
] |
[
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
] |
d5e15179821c8ef3d89384a0729a3bb80bef9c22
|
# Dataset Card for "Llama-2-oasst1-es-test-format"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dariolopez/Llama-2-oasst1-es-test-format
|
[
"region:us"
] |
2023-09-02T05:58:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4680420, "num_examples": 3909}], "download_size": 2537064, "dataset_size": 4680420}}
|
2023-09-02T05:58:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Llama-2-oasst1-es-test-format"
More Information needed
|
[
"# Dataset Card for \"Llama-2-oasst1-es-test-format\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Llama-2-oasst1-es-test-format\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Llama-2-oasst1-es-test-format\"\n\nMore Information needed"
] |
76727a210d47d68ef7f00bc403c1f71754acf070
|
## SQUALL Dataset
To explore the utility of fine-grained, lexical-level supervision, the authors introduce SQUALL, a dataset that enriches 11,276 WikiTableQuestions English-language questions with manually created SQL equivalents plus alignments between SQL and question fragments. 5-fold splits are applied to the full dataset (one fold serving as the dev set each time). The subset (config) name selects which fold is used as the validation set.
Warning: labels of the test set are unknown.
## Source
Please refer to [github repo](https://github.com/tzshi/squall/) for source data.
## Use
```python
from datasets import load_dataset
dataset = load_dataset("siyue/squall","0")
```
Example:
```python
{
'nt': 'nt-10922',
'tbl': '204_879',
'columns':
{
'raw_header': ['year', 'host / location', 'division i overall', 'division i undergraduate', 'division ii overall', 'division ii community college'],
'tokenized_header': [['year'], ['host', '\\\\/', 'location'], ['division', 'i', 'overall'], ['division', 'i', 'undergraduate'], ['division', 'ii', 'overall'], ['division', 'ii', 'community', 'college']],
'column_suffixes': [['number'], ['address'], [], [], [], []],
'column_dtype': ['number', 'address', 'text', 'text', 'text', 'text'],
'example': ['1997', 'penn', 'chicago', 'swarthmore', 'harvard', 'valencia cc']
},
'nl': ['when', 'was', 'the', 'last', 'time', 'the', 'event', 'was', 'held', 'in', 'minnesota', '?'],
'nl_pos': ['WRB', 'VBD-AUX', 'DT', 'JJ', 'NN', 'DT', 'NN', 'VBD-AUX', 'VBN', 'IN', 'NNP', '.'],
'nl_ner': ['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'LOCATION', 'O'],
'nl_incolumns': [False, False, False, False, False, False, False, False, False, False, False, False],
'nl_incells': [False, False, False, False, False, False, False, False, False, False, True, False],
'columns_innl': [False, False, False, False, False, False],
'tgt': '2007',
'sql':
{
'sql_type': ['Keyword', 'Column', 'Keyword', 'Keyword', 'Keyword', 'Column', 'Keyword', 'Literal.String', 'Keyword', 'Keyword', 'Column', 'Keyword', 'Keyword', 'Keyword'],
'value': ['select', 'c1', 'from', 'w', 'where', 'c2', '=', "'minnesota'", 'order', 'by', 'c1_number', 'desc', 'limit', '1'],
'span_indices': [[], [], [], [], [], [], [], [10, 10], [], [], [], [], [], []]
},
'nl_ralign':
{
'aligned_sql_token_type': ['None', 'None', 'Column', 'Column', 'Column', 'None', 'None', 'None', 'Column', 'Column', 'Literal', 'None'],
'aligned_sql_token_info': [None, None, 'c1_number', 'c1_number', 'c1', None, None, None, 'c2', 'c2', None, None],
'align':
{
'nl_indices': [[10], [9, 8], [4], [3, 2]],
'sql_indices': [[7], [5], [1], [8, 9, 10, 11, 12, 13]]
}
},
'align':
{
'nl_indices': [[10], [9, 8], [4], [3, 2]],
'sql_indices': [[7], [5], [1], [8, 9, 10, 11, 12, 13]]
}
}
```
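As a small usage sketch (the split name `train` is an assumption; the field names are taken from the example record above), the tokenized question and the SQL token values can be rejoined into plain strings:

```python
from datasets import load_dataset

dataset = load_dataset("siyue/squall", "0")
sample = dataset["train"][0]  # assumes a "train" split; fields follow the schema shown above

# Rejoin the tokenized question and the SQL token values into plain strings.
# For the example record above this yields:
#   "when was the last time the event was held in minnesota ?"
#   "select c1 from w where c2 = 'minnesota' order by c1_number desc limit 1"
question = " ".join(sample["nl"])
sql_query = " ".join(sample["sql"]["value"])

print(question)
print(sql_query)
```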
## Contact
For any issues or questions, kindly email us at: Siyue Zhang ([email protected]).
## Citation
```
@inproceedings{Shi:Zhao:Boyd-Graber:Daume-III:Lee-2020,
Title = {On the Potential of Lexico-logical Alignments for Semantic Parsing to {SQL} Queries},
Author = {Tianze Shi and Chen Zhao and Jordan Boyd-Graber and Hal {Daum\'{e} III} and Lillian Lee},
Booktitle = {Findings of EMNLP},
Year = {2020},
}
```
|
siyue/squall
|
[
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-09-02T05:59:17+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["table-question-answering"], "pretty_name": "SQUALL"}
|
2023-09-08T05:08:06+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-table-question-answering #size_categories-10K<n<100K #language-English #license-mit #region-us
|
## SQUALL Dataset
To explore the utility of fine-grained, lexical-level supervision, the authors introduce SQUALL, a dataset that enriches 11,276 WikiTableQuestions English-language questions with manually created SQL equivalents plus alignments between SQL and question fragments. 5-fold splits are applied to the full dataset (one fold serving as the dev set each time). The subset (config) name selects which fold is used as the validation set.
Warning: labels of the test set are unknown.
## Source
Please refer to github repo for source data.
## Use
Example:
## Contact
For any issues or questions, kindly email us at: Siyue Zhang ([email protected]).
|
[
"## SQUALL Dataset\nTo explore the utility of fine-grained, lexical-level supervision, authors introduce SQUALL, a dataset that enriches 11,276 WikiTableQuestions English-language questions with manually created SQL equivalents plus alignments between SQL and question fragments. 5-fold splits are applied to the full dataset (1 fold as dev set at each time). The subset defines which fold is selected as the validation dataset.\n\nWARN: labels of test set are unknown.",
"## Source\nPlease refer to github repo for source data.",
"## Use\n\nExample:",
"## Contact\nFor any issues or questions, kindly email us at: Siyue Zhang ([email protected])."
] |
[
"TAGS\n#task_categories-table-question-answering #size_categories-10K<n<100K #language-English #license-mit #region-us \n",
"## SQUALL Dataset\nTo explore the utility of fine-grained, lexical-level supervision, authors introduce SQUALL, a dataset that enriches 11,276 WikiTableQuestions English-language questions with manually created SQL equivalents plus alignments between SQL and question fragments. 5-fold splits are applied to the full dataset (1 fold as dev set at each time). The subset defines which fold is selected as the validation dataset.\n\nWARN: labels of test set are unknown.",
"## Source\nPlease refer to github repo for source data.",
"## Use\n\nExample:",
"## Contact\nFor any issues or questions, kindly email us at: Siyue Zhang ([email protected])."
] |
[
41,
120,
12,
5,
29
] |
[
"passage: TAGS\n#task_categories-table-question-answering #size_categories-10K<n<100K #language-English #license-mit #region-us \n## SQUALL Dataset\nTo explore the utility of fine-grained, lexical-level supervision, authors introduce SQUALL, a dataset that enriches 11,276 WikiTableQuestions English-language questions with manually created SQL equivalents plus alignments between SQL and question fragments. 5-fold splits are applied to the full dataset (1 fold as dev set at each time). The subset defines which fold is selected as the validation dataset.\n\nWARN: labels of test set are unknown.## Source\nPlease refer to github repo for source data.## Use\n\nExample:## Contact\nFor any issues or questions, kindly email us at: Siyue Zhang ([email protected])."
] |
de7cab27b4eb3c4ce9b1511e92890bdbe2d41b74
|
# Dataset Card for "95-cloud-train-only-v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jaygala223/95-cloud-train-only-v1
|
[
"region:us"
] |
2023-09-02T06:23:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 1907784030.75, "num_examples": 26301}], "download_size": 1897837399, "dataset_size": 1907784030.75}}
|
2023-09-02T09:38:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "95-cloud-train-only-v1"
More Information needed
|
[
"# Dataset Card for \"95-cloud-train-only-v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"95-cloud-train-only-v1\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"95-cloud-train-only-v1\"\n\nMore Information needed"
] |
340719cb9d17eecc7917b0f1f22bf3fe9ab1a424
|
# Dataset Card for "result_with_w2v2_aug_on_fly_01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
quocanh34/result_with_w2v2_aug_on_fly_01
|
[
"region:us"
] |
2023-09-02T06:29:00+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}, {"name": "w2v2_baseline_transcription", "dtype": "string"}, {"name": "w2v2_baseline_norm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174371470.027, "num_examples": 1299}], "download_size": 164200334, "dataset_size": 174371470.027}}
|
2023-09-02T06:29:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "result_with_w2v2_aug_on_fly_01"
More Information needed
|
[
"# Dataset Card for \"result_with_w2v2_aug_on_fly_01\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"result_with_w2v2_aug_on_fly_01\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"result_with_w2v2_aug_on_fly_01\"\n\nMore Information needed"
] |
82c6597c20e5808db789b718071623f8808c2d05
|
# Dataset Card for "Llama-2-oasst1-es-test-format2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dariolopez/Llama-2-oasst1-es-test-format2
|
[
"region:us"
] |
2023-09-02T06:57:57+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4567059, "num_examples": 3909}], "download_size": 2525724, "dataset_size": 4567059}}
|
2023-09-02T06:57:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Llama-2-oasst1-es-test-format2"
More Information needed
|
[
"# Dataset Card for \"Llama-2-oasst1-es-test-format2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Llama-2-oasst1-es-test-format2\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Llama-2-oasst1-es-test-format2\"\n\nMore Information needed"
] |
980870279bd950dd6b62cabfb0cb0c6a9af9c42c
|
# Bangumi Image Base of BanG Dream! It's MyGO!!!!!
This is the image base of the bangumi BanG Dream! It's MyGO!!!!!. We detected 23 characters and 3511 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; some images may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 100 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 163 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 155 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 393 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 314 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 416 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 417 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 559 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 458 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 19 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 9 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 25 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 19 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 8 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 7 | [Download](14/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 15 | 26 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 11 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 8 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 41 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 10 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 8 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 8 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 337 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
|
BangumiBase/bangdreamitsmygo
|
[
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] |
2023-09-02T07:50:07+00:00
|
{"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]}
|
2023-09-29T04:15:44+00:00
|
[] |
[] |
TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
|
Bangumi Image Base of BanG Dream! It's MyGO!!!!!
================================================
This is the image base of the bangumi BanG Dream! It's MyGO!!!!!. We detected 23 characters and 3511 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; some images may actually be noisy. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
[
25
] |
[
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
c3a125e63523f1ea742059ac46f137aaa60c19d8
|
# Dataset Card for "SpeakerVerification_LibriSpeechTestClean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
DynamicSuperb/SpeakerVerification_LibriSpeech-TestClean
|
[
"region:us"
] |
2023-09-02T07:58:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "file", "dtype": "string"}, {"name": "audio", "dtype": "audio"}, {"name": "file2", "dtype": "string"}, {"name": "audio2", "dtype": "audio"}, {"name": "instruction", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1498410406.0, "num_examples": 5000}], "download_size": 691287710, "dataset_size": 1498410406.0}}
|
2023-11-01T08:24:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "SpeakerVerification_LibriSpeechTestClean"
More Information needed
|
[
"# Dataset Card for \"SpeakerVerification_LibriSpeechTestClean\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"SpeakerVerification_LibriSpeechTestClean\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"SpeakerVerification_LibriSpeechTestClean\"\n\nMore Information needed"
] |
3e1fae684100460a7174873722bbfed4608c202a
|
# Dataset Card for "guanaco-llama2-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Sagar0934/guanaco-llama2-1k
|
[
"region:us"
] |
2023-09-02T08:11:44+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1654448, "num_examples": 1000}], "download_size": 0, "dataset_size": 1654448}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-03T09:12:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "guanaco-llama2-1k"
More Information needed
|
[
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-1k\"\n\nMore Information needed"
] |
4fcbd95665ff7a7f900933d90420172fdb0973f2
|
# Dataset Card for "gsm8k_train_pairwise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
reciprocate/gsm8k_train_pairwise
|
[
"region:us"
] |
2023-09-02T08:17:37+00:00
|
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "selected", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5031130, "num_examples": 7043}], "download_size": 2891682, "dataset_size": 5031130}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-02T10:29:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "gsm8k_train_pairwise"
More Information needed
|
[
"# Dataset Card for \"gsm8k_train_pairwise\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"gsm8k_train_pairwise\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"gsm8k_train_pairwise\"\n\nMore Information needed"
] |
522cbbd13d6270f07b66d003e52b16851142c3c8
|
# Dataset Card for "c4_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/c4_subset
|
[
"region:us"
] |
2023-09-02T08:28:27+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[s]"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2250873538.7162814, "num_examples": 1000000}, {"name": "test", "num_bytes": 1828234, "num_examples": 1000}], "download_size": 1190601303, "dataset_size": 2252701772.7162814}}
|
2023-09-20T04:35:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4_subset"
More Information needed
|
[
"# Dataset Card for \"c4_subset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4_subset\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4_subset\"\n\nMore Information needed"
] |
6ed79f0c88c75a0941c77083d907e088155e794f
|
# Dataset Card for FudanSELab ClassEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/FudanSELab/ClassEval)
- **Paper:** [ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation](https://arxiv.org/abs/2308.01861)
### Dataset Summary
We manually build ClassEval, a benchmark of 100 class-level Python coding tasks, consisting of 100 classes and 412 methods, with an average of 33.1 test cases per class.
For the 100 class-level tasks, diversity is maintained by spreading them over a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handling, Database Operations and Natural Language Processing.
The 412 methods have been constructed with diverse dependencies, including (i) Library Dependency, where a method relies on specific external libraries; (ii) Field Dependency, where a method depends on class instance variables, or fields; (iii) Method Dependency, where a method depends on other methods within the same class; and (iv) Standalone, where a method operates independently without reliance on fields, other methods, or external libraries.
### Languages
The programming language is Python. The natural language used in the comments and docstrings is English.
## Dataset Structure
```python
from datasets import load_dataset
dataset = load_dataset("FudanSELab/ClassEval")
print(dataset)
# DatasetDict({
#     test: Dataset({
#         features: ['task_id', 'skeleton', 'test', 'solution_code', 'import_statement', 'class_description', 'methods_info',
#                    'class_name', 'test_classes', 'class_constructor', 'fields'],
#         num_rows: 100
#     })
# })
```
### Data Fields
The specific data fields for each task are delineated as follows:
* task_id: the unique identifier for each task.
* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks.
* test: all test cases for the whole class.
* solution_code: the ground-truth class-level code for each task.
More fine-grained class-level information from the class skeleton, including:
* import_statement: the import statements for each task.
* class_name: the name of the class.
* class_description: a concise description of the purpose and functionality of the class.
* class_constructor: the whole constructor of the class.
* fields: the fields defined in the class_constructor.
Detailed information for each method in the "methods_info" field, including:
* method_name: the method signature.
* method_input: the method contract design, including all input descriptions in the method.
* test_code: the test cases for the method.
* solution_code: the ground-truth method-level code.
* dependencies: the dependency information of the method.
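A minimal sketch of accessing these fields, using only the field names documented above:
```python
from datasets import load_dataset

dataset = load_dataset("FudanSELab/ClassEval", split="test")

# Inspect one task: its identifier, class name, and class-level description.
task = dataset[0]
print(task["task_id"], "-", task["class_name"])
print(task["class_description"])

# Walk the per-method metadata and report each method's dependency profile.
for method in task["methods_info"]:
    print(method["method_name"], "->", method["dependencies"])
```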
### Data Splits
The dataset only consists of a test split with 100 samples.
## Dataset Creation
### Source Data
Manually-crafted
## Additional Information
### Licensing Information
This repository is under the [MIT](https://github.com/FudanSELab/ClassEval/blob/master/LICENSE) license, but the data is distributed under the [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) license.
### Citation Information
```
@misc{du2023classeval,
title={ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation},
author={Xueying Du and Mingwei Liu and Kaixin Wang and Hanlin Wang and Junwei Liu and Yixuan Chen and Jiayi Feng and Chaofeng Sha and Xin Peng and Yiling Lou},
year={2023},
eprint={2308.01861},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Contributions
Xueying Du [email protected]
Mingwei Liu [email protected]
Kaixin Wang [email protected]
Hanlin Wang [email protected]
Junwei Liu [email protected]
Yixuan Chen [email protected]
Jiayi Feng [email protected]
Chaofeng Sha [email protected]
Xin Peng [email protected]
Yiling Lou [email protected]
|
FudanSELab/ClassEval
|
[
"task_categories:text2text-generation",
"size_categories:n<1K",
"language:en",
"license:mit",
"code-generation",
"arxiv:2308.01861",
"region:us"
] |
2023-09-02T08:28:37+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["text2text-generation"], "pretty_name": "ClassEval", "tags": ["code-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "task_id", "dtype": "string"}, {"name": "skeleton", "dtype": "string"}, {"name": "test", "dtype": "string"}, {"name": "solution_code", "dtype": "string"}, {"name": "import_statement", "sequence": "string"}, {"name": "class_description", "dtype": "string"}, {"name": "methods_info", "list": [{"name": "method_name", "dtype": "string"}, {"name": "method_description", "dtype": "string"}, {"name": "test_class", "dtype": "string"}, {"name": "test_code", "dtype": "string"}, {"name": "solution_code", "dtype": "string"}, {"name": "dependencies", "struct": [{"name": "Standalone", "dtype": "bool"}, {"name": "lib_dependencies", "sequence": "string"}, {"name": "field_dependencies", "sequence": "string"}, {"name": "method_dependencies", "sequence": "string"}]}]}, {"name": "class_name", "dtype": "string"}, {"name": "test_classes", "sequence": "string"}, {"name": "class_constructor", "dtype": "string"}, {"name": "fields", "sequence": "string"}], "splits": [{"name": "test", "num_bytes": 2045743, "num_examples": 100}], "download_size": 504216, "dataset_size": 2045743}}
|
2024-01-08T07:15:46+00:00
|
[
"2308.01861"
] |
[
"en"
] |
TAGS
#task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #code-generation #arxiv-2308.01861 #region-us
|
# Dataset Card for FudanSELab ClassEval
## Dataset Description
- Repository: GitHub Repository
- Paper: ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation
### Dataset Summary
We manually build ClassEval, a benchmark of 100 class-level Python coding tasks, consisting of 100 classes and 412 methods, with an average of 33.1 test cases per class.
For the 100 class-level tasks, diversity is maintained by spreading them over a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handling, Database Operations and Natural Language Processing.
The 412 methods have been constructed with diverse dependencies, including (i) Library Dependency, where a method relies on specific external libraries; (ii) Field Dependency, where a method depends on class instance variables, or fields; (iii) Method Dependency, where a method depends on other methods within the same class; and (iv) Standalone, where a method operates independently without reliance on fields, other methods, or external libraries.
### Languages
The programming language is Python. The natural language used in the comments and docstrings is English.
## Dataset Structure
### Data Fields
The specific data fields for each task are delineated as follows:
* task_id: the unique identifier for each task.
* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks.
* test: all test cases for the whole class.
* solution_code: the ground-truth class-level code for each task.
More fine-grained class-level information from the class skeleton, including:
* import_statement: the import statements for each task.
* class_name: the name of the class.
* class_description: a concise description of the purpose and functionality of the class.
* class_constructor: the whole constructor of the class.
* fields: the fields defined in the class_constructor.
Detailed information for each method in the "methods_info" field, including:
* method_name: the method signature.
* method_input: the method contract design, including all input descriptions in the method.
* test_code: the test cases for the method.
* solution_code: the ground-truth method-level code.
* dependencies: the dependency information of the method.
### Data Splits
The dataset only consists of a test split with 100 samples.
## Dataset Creation
### Source Data
Manually-crafted
## Additional Information
### Licensing Information
This repository is under the MIT license, but the data is distributed under the CC BY-NC 4.0 license.
### Contributions
Xueying Du [email protected]
Mingwei Liu liumingwei@URL
Kaixin Wang [email protected]
Hanlin Wang [email protected]
Junwei Liu [email protected]
Yixuan Chen [email protected]
Jiayi Feng [email protected]
Chaofeng Sha cfsha@URL
Xin Peng pengxin@URL
Yiling Lou yilinglou@URL
|
[
"# Dataset Card for FudanSELab ClassEval",
"## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation",
"### Dataset Summary\n\nWe manually build ClassEval of 100 class-level Python coding tasks, consists of 100 classes and 412 methods, and average 33.1 test cases per class.\n\nFor 100 class-level tasks, diversity is maintained by encompassing these tasks over a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handing, Database Operations and Natural Language Processing.\n\nFor 412 methods, they have been constructed with diverse dependencies, including (i) Library Dependency, where the methods rely on specific external libraries; (ii) Field Dependency, in which the methods are contingent on class instance variables, or fields; (iii) Method Dependency, where the methods are dependent on other methods within the same class; and (iv) Standalone, wherein the methods operate independently without reliance on fields, other methods, or external libraries.",
"### Languages\n\nThe programming language is Python. The natural language used in the comments and docstrings is English.",
"## Dataset Structure",
"### Data Fields\n\nThe specific data fields for each task are delineated as follows:\n\n* task_id: the unique identifier for each task.\n\n* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks. \n\n* test: all test cases for the whole class.\n\n* solution_code: the ground-truth class-level code for each task.\n\nMore fine-grained class-level information from the class skeleton, including:\n\n* import_statement: the import statements for each task.\n\n* class_name: the name of the class.\n\n* class_description: a concise description of the purpose and functionality of the class.\n\n* class_constructor: the whole constructor of the class.\n\n* fields: the fields defined in the class_constructor.\n\nDetailed information for each method in the \"methods_info\" field, including:\n\n* method_name: the method signature.\n\n* method_input: the method contract design, including all input descriptions in the method.\n\n* test_code: the test cases for the method.\n\n* solution_code: the ground-truth method-level code.\n\n* dependencies: the dependency information of the method.",
"### Data Splits\n\nThe dataset only consists of a test split with 100 samples.",
"## Dataset Creation",
"### Source Data\n\nManually-crafted",
"## Additional Information",
"### Licensing Information\n\nThis repository is under MIT license. But the data is distributes through CC BY-NC 4.0 license.",
"### Contributions\n\nXueying Du [email protected]\n\nMingwei Liu liumingwei@URL\n\nKaixin Wang [email protected]\n\nHanlin Wang [email protected]\n\nJunwei Liu [email protected]\n\nYixuan Chen [email protected]\n\nJiayi Feng [email protected]\n\nChaofeng Sha cfsha@URL\n\nXin Peng pengxin@URL\n\nYiling Lou yilinglou@URL"
] |
[
"TAGS\n#task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #code-generation #arxiv-2308.01861 #region-us \n",
"# Dataset Card for FudanSELab ClassEval",
"## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation",
"### Dataset Summary\n\nWe manually build ClassEval of 100 class-level Python coding tasks, consists of 100 classes and 412 methods, and average 33.1 test cases per class.\n\nFor 100 class-level tasks, diversity is maintained by encompassing these tasks over a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handing, Database Operations and Natural Language Processing.\n\nFor 412 methods, they have been constructed with diverse dependencies, including (i) Library Dependency, where the methods rely on specific external libraries; (ii) Field Dependency, in which the methods are contingent on class instance variables, or fields; (iii) Method Dependency, where the methods are dependent on other methods within the same class; and (iv) Standalone, wherein the methods operate independently without reliance on fields, other methods, or external libraries.",
"### Languages\n\nThe programming language is Python. The natural language used in the comments and docstrings is English.",
"## Dataset Structure",
"### Data Fields\n\nThe specific data fields for each task are delineated as follows:\n\n* task_id: the unique identifier for each task.\n\n* skeleton: the class skeleton, including all input descriptions in our class-level coding tasks. \n\n* test: all test cases for the whole class.\n\n* solution_code: the ground-truth class-level code for each task.\n\nMore fine-grained class-level information from the class skeleton, including:\n\n* import_statement: the import statements for each task.\n\n* class_name: the name of the class.\n\n* class_description: a concise description of the purpose and functionality of the class.\n\n* class_constructor: the whole constructor of the class.\n\n* fields: the fields defined in the class_constructor.\n\nDetailed information for each method in the \"methods_info\" field, including:\n\n* method_name: the method signature.\n\n* method_input: the method contract design, including all input descriptions in the method.\n\n* test_code: the test cases for the method.\n\n* solution_code: the ground-truth method-level code.\n\n* dependencies: the dependency information of the method.",
"### Data Splits\n\nThe dataset only consists of a test split with 100 samples.",
"## Dataset Creation",
"### Source Data\n\nManually-crafted",
"## Additional Information",
"### Licensing Information\n\nThis repository is under MIT license. But the data is distributes through CC BY-NC 4.0 license.",
"### Contributions\n\nXueying Du [email protected]\n\nMingwei Liu liumingwei@URL\n\nKaixin Wang [email protected]\n\nHanlin Wang [email protected]\n\nJunwei Liu [email protected]\n\nYixuan Chen [email protected]\n\nJiayi Feng [email protected]\n\nChaofeng Sha cfsha@URL\n\nXin Peng pengxin@URL\n\nYiling Lou yilinglou@URL"
] |
[
51,
12,
44,
214,
25,
6,
270,
20,
5,
9,
5,
29,
109
] |
[
"passage: TAGS\n#task_categories-text2text-generation #size_categories-n<1K #language-English #license-mit #code-generation #arxiv-2308.01861 #region-us \n# Dataset Card for FudanSELab ClassEval## Dataset Description\n\n- Repository: GitHub Repository\n- Paper: ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation### Dataset Summary\n\nWe manually build ClassEval of 100 class-level Python coding tasks, consists of 100 classes and 412 methods, and average 33.1 test cases per class.\n\nFor 100 class-level tasks, diversity is maintained by encompassing these tasks over a wide spectrum of topics, including Management Systems, Data Formatting, Mathematical Operations, Game Development, File Handing, Database Operations and Natural Language Processing.\n\nFor 412 methods, they have been constructed with diverse dependencies, including (i) Library Dependency, where the methods rely on specific external libraries; (ii) Field Dependency, in which the methods are contingent on class instance variables, or fields; (iii) Method Dependency, where the methods are dependent on other methods within the same class; and (iv) Standalone, wherein the methods operate independently without reliance on fields, other methods, or external libraries.### Languages\n\nThe programming language is Python. The natural language used in the comments and docstrings is English.## Dataset Structure"
] |
a21fe0e257f6f2cf08ca4db0a37983bd1de10030
|
# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v18_A-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PeanutJar/LLaMa-2-PeanutButter_v18_A-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v18_A-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B",
"harness_winogrande_5",
split="train")
```
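The aggregated results can be loaded in the same way; a hedged sketch, assuming the "results" configuration and "latest" split listed in this card's metadata:
```python
from datasets import load_dataset

# "results" aggregates all runs; the "latest" split points to the newest one.
results = load_dataset(
    "open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B",
    "results",
    split="latest",
)
print(results[0])
```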
## Latest results
These are the [latest results from run 2023-09-22T16:10:43.222323](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B/blob/main/results_2023-09-22T16-10-43.222323.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298372,
"f1": 0.05780201342281884,
"f1_stderr": 0.0013345722358967201,
"acc": 0.41051883916267196,
"acc_stderr": 0.009653462102026637
},
"harness|drop|3": {
"em": 0.0014681208053691276,
"em_stderr": 0.0003921042190298372,
"f1": 0.05780201342281884,
"f1_stderr": 0.0013345722358967201
},
"harness|gsm8k|5": {
"acc": 0.07202426080363912,
"acc_stderr": 0.007121147983537124
},
"harness|winogrande|5": {
"acc": 0.7490134175217048,
"acc_stderr": 0.01218577622051615
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B
|
[
"region:us"
] |
2023-09-02T08:37:38+00:00
|
{"pretty_name": "Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [PeanutJar/LLaMa-2-PeanutButter_v18_A-7B](https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v18_A-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T16:10:43.222323](https://huggingface.co/datasets/open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B/blob/main/results_2023-09-22T16-10-43.222323.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298372,\n \"f1\": 0.05780201342281884,\n \"f1_stderr\": 0.0013345722358967201,\n \"acc\": 0.41051883916267196,\n \"acc_stderr\": 0.009653462102026637\n },\n \"harness|drop|3\": {\n \"em\": 0.0014681208053691276,\n \"em_stderr\": 0.0003921042190298372,\n \"f1\": 0.05780201342281884,\n \"f1_stderr\": 0.0013345722358967201\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07202426080363912,\n \"acc_stderr\": 0.007121147983537124\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7490134175217048,\n \"acc_stderr\": 0.01218577622051615\n }\n}\n```", "repo_url": "https://huggingface.co/PeanutJar/LLaMa-2-PeanutButter_v18_A-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|arc:challenge|25_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T16_10_43.222323", "path": ["**/details_harness|drop|3_2023-09-22T16-10-43.222323.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T16-10-43.222323.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T16_10_43.222323", "path": ["**/details_harness|gsm8k|5_2023-09-22T16-10-43.222323.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T16-10-43.222323.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hellaswag|10_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T09:37:14.213070.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T09:37:14.213070.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T09:37:14.213070.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T09:37:14.213070.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T09:37:14.213070.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T09:37:14.213070.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T16_10_43.222323", "path": ["**/details_harness|winogrande|5_2023-09-22T16-10-43.222323.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T16-10-43.222323.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T09_37_14.213070", "path": ["results_2023-09-02T09:37:14.213070.parquet"]}, {"split": "2023_09_22T16_10_43.222323", "path": ["results_2023-09-22T16-10-43.222323.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T16-10-43.222323.parquet"]}]}]}
|
2023-09-22T15:10:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v18_A-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
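A minimal sketch, assuming this model's details repository follows the leaderboard's standard `details_<org>__<model>` naming scheme:

```python
from datasets import load_dataset

# Repository name inferred from the leaderboard's "details_<org>__<model>"
# convention for this model; treat it as an assumption.
data = load_dataset("open-llm-leaderboard/details_PeanutJar__LLaMa-2-PeanutButter_v18_A-7B",
                    "harness_winogrande_5",
                    split="train")
```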
## Latest results
These are the latest results from run 2023-09-22T16:10:43.222323 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v18_A-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T16:10:43.222323(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v18_A-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T16:10:43.222323(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
31,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PeanutJar/LLaMa-2-PeanutButter_v18_A-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PeanutJar/LLaMa-2-PeanutButter_v18_A-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T16:10:43.222323(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e67063d07baaf889a38767754a27c17dabf14c9f
|
# Dataset Card for "c4_biomedical"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/c4_biomedical
|
[
"region:us"
] |
2023-09-02T09:00:58+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "timestamp", "dtype": "timestamp[s]"}, {"name": "url", "dtype": "string"}, {"name": "original_text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 3949195.0, "num_examples": 1000}], "download_size": 2366762, "dataset_size": 3949195.0}}
|
2023-09-03T08:53:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c4_biomedical"
More Information needed
|
[
"# Dataset Card for \"c4_biomedical\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c4_biomedical\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c4_biomedical\"\n\nMore Information needed"
] |
0eaf714add1f750629f66fbac8db82e0b5876cff
|
# Function Recommendation Dataset Readme
## Description
This dataset is based on the "glaiveai/glaive-function-calling" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who only assists me with various tasks by recommending functions and providing relevant information.
Each snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:
1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.
2. The user asks a question or requests assistance related to a specific task.
3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.
4. The user may ask follow-up questions or request additional recommendations.
5. J.O.S.I.E. continues to provide function recommendations and responses as needed.
## Snippet Example
Here is an example snippet from the dataset:
```markdown
### SYSTEM: You are Gökdeniz Gülmez's private assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity) who has access to the following functions to exclusively help Gökdeniz Gülmez, you can use the functions if needed-
{
"name": "recommend_movies",
"description": "Recommend movies based on user preferences",
"parameters": {
"type": "object",
"properties": {
"genre": {
"type": "string",
"description": "The preferred movie genre"
},
"year_range": {
"type": "object",
"properties": {
"start_year": {
"type": "integer",
"description": "The start year of the movie release"
},
"end_year": {
"type": "integer",
"description": "The end year of the movie release"
}
}
}
}
}
}
### Gökdeniz Gülmez: Can you recommend some movies for me?
### J.O.S.I.E.: Sure! I can recommend movies based on your preferences. Please provide me with your preferred movie genre and the year range of the movie release.
### Gökdeniz Gülmez: I enjoy action movies from the 2000s.
### J.O.S.I.E.: <functioncall> {"name": "recommend_movies", "arguments": '{
"genre": "action",
"year_range": {
"start_year": 2000,
"end_year": 2009
}
}'}
### FUNCTION RESPONSE: {"movies": ["The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", "The Matrix"]}
### J.O.S.I.E.: Based on your preferences, I recommend the following movies: "The Dark Knight", "Gladiator", "The Bourne Identity", "Kill Bill: Volume 1", and "The Matrix".
```
In this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the "recommend_movies" function and provides a list of recommended movies as a response.
## Dataset Usage
This dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.
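A minimal loading sketch (the split name and the column holding the conversation text are assumptions, since the schema is not documented here):

```python
from datasets import load_dataset

# Repository name taken from the citation URL below; the split name is assumed.
ds = load_dataset("Isaak-Carter/Function_Calling_Private_GG", split="train")

# Peek at one conversation snippet; the field name "text" is a guess and may
# differ in the actual schema.
print(ds[0]["text"][:500])
```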
## Citation
If you use this dataset in your research or applications, please cite it as follows:
```
@dataset{your citation here,
title = {Private Function Calling},
author = {Gökdeniz Gülmez},
year = {2023},
publisher = {Gökdeniz Gülmez},
url = {https://huggingface.co/datasets/Isaak-Carter/Function_Calling_Private_GG/tree/main},
}
```
|
Isaak-Carter/Function_Calling_Private_GG
|
[
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] |
2023-09-02T09:35:38+00:00
|
{"license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "conversational"], "pretty_name": "Funcion Calling Like A Champ"}
|
2023-10-10T11:35:06+00:00
|
[] |
[] |
TAGS
#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us
|
# Function Recommendation Dataset Readme
## Description
This dataset is based on the "glaiveai/glaive-function-calling" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who only assists me with various tasks by recommending functions and providing relevant information.
Each snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:
1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.
2. The user asks a question or requests assistance related to a specific task.
3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.
4. The user may ask follow-up questions or request additional recommendations.
5. J.O.S.I.E. continues to provide function recommendations and responses as needed.
## Snippet Example
Here is an example snippet from the dataset:
In this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the "recommend_movies" function and provides a list of recommended movies as a response.
## Dataset Usage
This dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.
If you use this dataset in your research or applications, please cite it as follows:
|
[
"# Function Recommendation Dataset Readme",
"## Description\n\nThis dataset is based on the \"glaiveai/glaive-function-calling\" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who only assists me with various tasks by recommending functions and providing relevant information.\n\nEach snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:\n\n1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.\n\n2. The user asks a question or requests assistance related to a specific task.\n\n3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.\n\n4. The user may ask follow-up questions or request additional recommendations.\n\n5. J.O.S.I.E. continues to provide function recommendations and responses as needed.",
"## Snippet Example\n\nHere is an example snippet from the dataset:\n\n\n\nIn this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the \"recommend_movies\" function and provides a list of recommended movies as a response.",
"## Dataset Usage\n\nThis dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.\n\nIf you use this dataset in your research or applications, please cite it as follows:"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us \n",
"# Function Recommendation Dataset Readme",
"## Description\n\nThis dataset is based on the \"glaiveai/glaive-function-calling\" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who only assists me with various tasks by recommending functions and providing relevant information.\n\nEach snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:\n\n1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.\n\n2. The user asks a question or requests assistance related to a specific task.\n\n3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.\n\n4. The user may ask follow-up questions or request additional recommendations.\n\n5. J.O.S.I.E. continues to provide function recommendations and responses as needed.",
"## Snippet Example\n\nHere is an example snippet from the dataset:\n\n\n\nIn this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the \"recommend_movies\" function and provides a list of recommended movies as a response.",
"## Dataset Usage\n\nThis dataset can be used for training and fine-tuning Large Language Models (LLMs) such as GPT-3.5 on the task of generating function recommendations in a conversational context. Researchers and developers can use this data to build virtual assistants or chatbots capable of recommending functions and providing relevant information to users based on their requests.\n\nIf you use this dataset in your research or applications, please cite it as follows:"
] |
[
47,
11,
270,
83,
102
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #size_categories-100K<n<1M #license-apache-2.0 #region-us \n# Function Recommendation Dataset Readme## Description\n\nThis dataset is based on the \"glaiveai/glaive-function-calling\" repository and has been customized to suit my specific requirements. It is designed for fine-tuning a Large Language Model (LLM) on the task of generating function recommendations in a conversational context. The dataset contains 158,738 snippets of conversations between me and my virtual assistant named J.O.S.I.E. (Just an Outstandingly Smart Intelligent Entity), who only assists me with various tasks by recommending functions and providing relevant information.\n\nEach snippet in the dataset represents a conversation between the user and J.O.S.I.E. The conversations typically follow this structure:\n\n1. The conversation begins with an introduction of J.O.S.I.E. and its capabilities.\n\n2. The user asks a question or requests assistance related to a specific task.\n\n3. J.O.S.I.E. responds by recommending a function call that can help fulfill the user's request.\n\n4. The user may ask follow-up questions or request additional recommendations.\n\n5. J.O.S.I.E. continues to provide function recommendations and responses as needed.## Snippet Example\n\nHere is an example snippet from the dataset:\n\n\n\nIn this example, the user asks J.O.S.I.E. to recommend action movies from the 2000s. J.O.S.I.E. responds with a function call to the \"recommend_movies\" function and provides a list of recommended movies as a response."
] |
9ebdeaadd5a47acd404969df14975b76d6470015
|
# Dataset Card for Synthetic Chest Xray
## Dataset Description
This is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described.
- **Paper:** https://arxiv.org/abs/2306.01322
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks
Chexpert classification.
https://stanfordmlgroup.github.io/competitions/chexpert/
## Dataset Structure
- Images
- Chexpert Labels
### Data Splits
We did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.
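A minimal sketch of loading the images and labels and carving out a validation split, since none is predefined (the "train" split name is an assumption):

```python
from datasets import load_dataset

# Split name "train" is an assumption; the card defines no splits.
ds = load_dataset("SinKove/synthetic_chest_xray", split="train")

# The paper used all synthetic images for training and real samples for
# validation/testing; as a stand-in, hold out 10% of the synthetic data.
splits = ds.train_test_split(test_size=0.1, seed=42)
train_ds, val_ds = splits["train"], splits["test"]
```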
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/).
### Personal and Sensitive Information
Following GDPR "Personal data is any information that relates to an identified or identifiable living individual."
We make sure that there is no "personal data" (re-identifiable information) by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for pathology classification in chest X-rays.
### Discussion of Biases
There are biases towards specific pathologies. For example, the "No Findings" label is much more frequent than the other, less common pathology labels.
## Additional Information
### Dataset Curators
We used deep learning to filter the dataset.
We filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.
### Licensing Information
We generated the synthetic data samples based on a generative model trained on the [Mimic-CXR dataset](https://physionet.org/content/mimic-cxr/2.0.0/). Mimic-CXR uses the [PhysioNet Credentialed Health](https://physionet.org/content/mimic-cxr/view-license/2.0.0/) data license.
The real data license explicitly requires that "The LICENSEE will not share access to PhysioNet restricted data with anyone else". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be "PhysioNet restricted data".
This dataset is released under the [Open & Responsible AI license ("OpenRAIL")](https://huggingface.co/blog/open_rail)
### Citation Information
Fernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.
https://arxiv.org/abs/2306.01322
### Contributions
Pedro P. Sanchez and Walter Pinaya uploaded the dataset to Huggingface. All other co-authors of the paper contributed to creating the dataset.
|
SinKove/synthetic_chest_xray
|
[
"task_categories:image-classification",
"size_categories:10K<n<100K",
"license:openrail",
"medical",
"arxiv:2306.01322",
"region:us"
] |
2023-09-02T09:39:37+00:00
|
{"license": "openrail", "size_categories": ["10K<n<100K"], "task_categories": ["image-classification"], "pretty_name": "C", "tags": ["medical"]}
|
2023-09-14T11:46:05+00:00
|
[
"2306.01322"
] |
[] |
TAGS
#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2306.01322 #region-us
|
# Dataset Card for Synthetic Chest Xray
## Dataset Description
This is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described.
- Paper: URL
- Point of Contact: pedro.sanchez@URL
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks
Chexpert classification.
URL
## Dataset Structure
- Images
- Chexpert Labels
### Data Splits
We did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.
## Dataset Creation
We generated the synthetic data samples using the diffusion model finetuned on the Mimic-CXR dataset.
### Personal and Sensitive Information
Following GDPR "Personal data is any information that relates to an identified or identifiable living individual."
We make sure that there is no "personal data" (re-identifiable information) by filtering with a deep learning model trained to identify patients.
## Considerations for Using the Data
### Social Impact of Dataset
We hope that this dataset can be used to enhance the training of AI models for pathology classification in chest X-rays.
### Discussion of Biases
There are biases towards specific pathologies. For example, the "No Findings" label is much more frequent than the other, less common pathology labels.
## Additional Information
### Dataset Curators
We used deep learning to filter the dataset.
We filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.
### Licensing Information
We generated the synthetic data samples based on a generative model trained on the Mimic-CXR dataset. Mimic-CXR uses the PhysioNet Credentialed Health data license.
The real data license explicitly requires that "The LICENSEE will not share access to PhysioNet restricted data with anyone else". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be "PhysioNet restricted data".
This dataset is released under the Open & Responsible AI license ("OpenRAIL")
Fernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.
URL
### Contributions
Pedro P. Sanchez and Walter Pinaya uploaded the dataset to Huggingface. All other co-authors of the paper contributed to creating the dataset.
|
[
"# Dataset Card for Synthetic Chest Xray",
"## Dataset Description\n\nThis is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described.\n\n- Paper: URL\n- Point of Contact: pedro.sanchez@URL",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks\n\nChexpert classification.\n\nURL",
"## Dataset Structure\n\n- Images\n- Chexpert Labels",
"### Data Splits\n\nWe did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.",
"## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the Mimic-CXR dataset.",
"### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for pathology classification in chest X-ray.",
"### Discussion of Biases\n\nThere are biases towards specific pathologies. For example, the \"No Findings\" label is much bigger than other less common pathologies.",
"## Additional Information",
"### Dataset Curators\n\nWe used deep learning to filter the dataset.\n\nWe filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.",
"### Licensing Information\n\nWe generated the synthetic data samples based on generative model trained on the Mimic-CXR dataset. Mimic-CXR uses the PhysioNet Credentialed Health data license.\n\nThe real data license explicitly requires that \"The LICENSEE will not share access to PhysioNet restricted data with anyone else\". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be \"PhysioNet restricted data\".\n\nThis dataset is released under the Open & Responsible AI license (\"OpenRAIL\")\n\n\n\nFernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.\n\nURL",
"### Contributions\n\nPedro P. Sanchez, Walter Pinaya uploaded the dataset to Huggingface. All other co-authors of the papers contributed for creating the dataset."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2306.01322 #region-us \n",
"# Dataset Card for Synthetic Chest Xray",
"## Dataset Description\n\nThis is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described.\n\n- Paper: URL\n- Point of Contact: pedro.sanchez@URL",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks\n\nChexpert classification.\n\nURL",
"## Dataset Structure\n\n- Images\n- Chexpert Labels",
"### Data Splits\n\nWe did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.",
"## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the Mimic-CXR dataset.",
"### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for pathology classification in chest X-ray.",
"### Discussion of Biases\n\nThere are biases towards specific pathologies. For example, the \"No Findings\" label is much bigger than other less common pathologies.",
"## Additional Information",
"### Dataset Curators\n\nWe used deep learning to filter the dataset.\n\nWe filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset.",
"### Licensing Information\n\nWe generated the synthetic data samples based on generative model trained on the Mimic-CXR dataset. Mimic-CXR uses the PhysioNet Credentialed Health data license.\n\nThe real data license explicitly requires that \"The LICENSEE will not share access to PhysioNet restricted data with anyone else\". Here, we ensure that none of the synthetic images can be used to re-identify real Mimic-CXR images. Therefore, we do not consider this synthetic dataset to be \"PhysioNet restricted data\".\n\nThis dataset is released under the Open & Responsible AI license (\"OpenRAIL\")\n\n\n\nFernandez, V., Sanchez, P., Pinaya, W. H. L., Jacenków, G., Tsaftaris, S. A., & Cardoso, J. (2023). Privacy Distillation: Reducing Re-identification Risk of Multimodal Diffusion Models. arXiv preprint arXiv:2306.01322.\n\nURL",
"### Contributions\n\nPedro P. Sanchez, Walter Pinaya uploaded the dataset to Huggingface. All other co-authors of the papers contributed for creating the dataset."
] |
[
47,
12,
65,
32,
12,
12,
39,
34,
64,
8,
32,
40,
5,
52,
237,
42
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-10K<n<100K #license-openrail #medical #arxiv-2306.01322 #region-us \n# Dataset Card for Synthetic Chest Xray## Dataset Description\n\nThis is a synthetic chest X-ray dataset created during the development of the *privacy distillation* paper. In particular, this is the $D_{filter}$ dataset described.\n\n- Paper: URL\n- Point of Contact: pedro.sanchez@URL### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks\n\nChexpert classification.\n\nURL## Dataset Structure\n\n- Images\n- Chexpert Labels### Data Splits\n\nWe did not define data splits. In the paper, all the images were used as training data and real data samples were used as validation and testing data.## Dataset Creation\n\nWe generated the synthetic data samples using the diffusion model finetuned on the Mimic-CXR dataset.### Personal and Sensitive Information\n\nFollowing GDPR \"Personal data is any information that relates to an identified or identifiable living individual.\"\n\nWe make sure that there are not \"personal data\" (re-identifiable information) by filtering with a deep learning model trained for identifying patients.## Considerations for Using the Data### Social Impact of Dataset\n\nWe hope that this dataset can used to enhance AI models training for pathology classification in chest X-ray.### Discussion of Biases\n\nThere are biases towards specific pathologies. For example, the \"No Findings\" label is much bigger than other less common pathologies.## Additional Information### Dataset Curators\n\nWe used deep learning to filter the dataset.\n\nWe filter for re-identification, making sure that none of the images used in the training can be re-identified using samples from this synthetic dataset."
] |
547cc9d76fae125851ecfaee1ed0b34dcee8b194
|
# Dataset Card for Brain imaging generation with Latent Diffusion Models
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Amigo homepage](https://amigos.ai/)
- **Paper:** [Brain imaging generation with Latent Diffusion Models](https://arxiv.org/abs/2209.07162)
- **Point of Contact:** [Walter H. L. Pinaya](mailto:[email protected])
### Dataset Summary
This dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group -
AMIGO (https://amigos.ai/). It consists of 1,000 synthetic T1w images sampled from generative models trained on
data originally from the UK Biobank dataset (https://www.ukbiobank.ac.uk/).
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
- `prompt_age`: a float value used during the sampling to specify the age of the generated brain image (defined in years)
- `prompt_sex`: a string used during the sampling to specify the sex ("M" for male and "F" for female)
- `prompt_ventricular_volume`: a float whose value is used during the sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)
- `prompt_brain_volume`: a float whose value is used during the sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)
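A minimal sketch of loading the dataset and selecting images by their conditioning prompts (the "train" split name is an assumption; the field names come from the list above):

```python
from datasets import load_dataset

# Split name "train" is an assumption; field names come from the data fields above.
ds = load_dataset("SinKove/synthetic_brain_mri", split="train")

# Keep synthetic scans conditioned on female subjects aged 60-70 years.
subset = ds.filter(lambda ex: ex["prompt_sex"] == "F" and 60.0 <= ex["prompt_age"] <= 70.0)
print(len(subset))
```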
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Licensing Information
The "Brain imaging generation with Latent Diffusion Models" dataset is released under the [OpenRAIL License](https://huggingface.co/blog/open_rail).
### Citation Information
```
@inproceedings{pinaya2022brain,
title={Brain imaging generation with latent diffusion models},
author={Pinaya, Walter HL and Tudosiu, Petru-Daniel and Dafflon, Jessica and Da Costa, Pedro F and Fernandez, Virginia and Nachev, Parashkev and Ourselin, Sebastien and Cardoso, M Jorge},
booktitle={MICCAI Workshop on Deep Generative Models},
pages={117--126},
year={2022},
organization={Springer}
}
```
### Contributions
Thanks to [@Warvito](https://github.com/Warvito) for adding this dataset.
|
SinKove/synthetic_brain_mri
|
[
"task_categories:image-classification",
"size_categories:n<1K",
"language:en",
"license:openrail",
"medical",
"brain-data",
"mri",
"arxiv:2209.07162",
"region:us"
] |
2023-09-02T09:40:22+00:00
|
{"language": ["en"], "license": "openrail", "size_categories": ["n<1K"], "task_categories": ["image-classification"], "pretty_name": "Brain imaging generation with Latent Diffusion Models", "tags": ["medical", "brain-data", "mri"]}
|
2023-09-03T16:10:57+00:00
|
[
"2209.07162"
] |
[
"en"
] |
TAGS
#task_categories-image-classification #size_categories-n<1K #language-English #license-openrail #medical #brain-data #mri #arxiv-2209.07162 #region-us
|
# Dataset Card for Brain imaging generation with Latent Diffusion Models
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Dataset Creation
- Curation Rationale
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: Amigo homepage
- Paper: Brain imaging generation with Latent Diffusion Models
- Point of Contact: Walter H. L. Pinaya
### Dataset Summary
This dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group -
AMIGO (URL It consists of 1,000 synthetic T1w images sampled from generative models trained on
data originally from the UK Biobank dataset (URL
### Languages
The language in the dataset is English.
## Dataset Structure
### Data Instances
### Data Fields
- 'prompt_age': a float value used during the sampling to specify the age of the generated brain image (defined in years)
- 'prompt_sex': a string used during the sampling to specify the sex ("M" for male and "F" for female)
- 'prompt_ventricular_volume': a float whose value is used during the sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)
- 'prompt_brain_volume': a float whose value is used during the sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Licensing Information
The "Brain imaging generation with Latent Diffusion Models" dataset is released under the OpenRAIL License.
### Contributions
Thanks to @Warvito for adding this dataset.
|
[
"# Dataset Card for Brain imaging generation with Latent Diffusion Models",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Amigo homepage\n- Paper: Brain imaging generation with Latent Diffusion Models\n- Point of Contact: Walter H. L. Pinaya",
"### Dataset Summary\n\nThis dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group - \nAMIGO (URL It consists on of 1,000 synthetic T1w images sampled from generative models trained on \ndata originally from the UK Biobank dataset (URL",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'prompt_age': a float value used during the sampling to specify the age of the generated brain image (defined in years) \n- 'prompt_sex': a string used during the sampling to specify the sex (\"M\" for male and \"F\" for female)\n- 'prompt_ventricular_volume': a float whose value used during the sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)\n- 'prompt_brain_volume': a float whose value used during the sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Licensing Information\n\nThe \"Brain imaging generation with Latent Diffusion Models\" dataset is released under the OpenRAIL License.",
"### Contributions\n\nThanks to @Warvito for adding this dataset."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-n<1K #language-English #license-openrail #medical #brain-data #mri #arxiv-2209.07162 #region-us \n",
"# Dataset Card for Brain imaging generation with Latent Diffusion Models",
"## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: Amigo homepage\n- Paper: Brain imaging generation with Latent Diffusion Models\n- Point of Contact: Walter H. L. Pinaya",
"### Dataset Summary\n\nThis dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group - \nAMIGO (URL It consists on of 1,000 synthetic T1w images sampled from generative models trained on \ndata originally from the UK Biobank dataset (URL",
"### Languages\n\nThe language in the dataset is English.",
"## Dataset Structure",
"### Data Instances",
"### Data Fields\n\n- 'prompt_age': a float value used during the sampling to specify the age of the generated brain image (defined in years) \n- 'prompt_sex': a string used during the sampling to specify the sex (\"M\" for male and \"F\" for female)\n- 'prompt_ventricular_volume': a float whose value used during the sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)\n- 'prompt_brain_volume': a float whose value used during the sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Licensing Information\n\nThe \"Brain imaging generation with Latent Diffusion Models\" dataset is released under the OpenRAIL License.",
"### Contributions\n\nThanks to @Warvito for adding this dataset."
] |
[
57,
18,
103,
39,
66,
13,
6,
6,
181,
5,
7,
4,
10,
10,
8,
8,
7,
8,
7,
5,
33,
16
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-n<1K #language-English #license-openrail #medical #brain-data #mri #arxiv-2209.07162 #region-us \n# Dataset Card for Brain imaging generation with Latent Diffusion Models## Table of Contents\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: Amigo homepage\n- Paper: Brain imaging generation with Latent Diffusion Models\n- Point of Contact: Walter H. L. Pinaya### Dataset Summary\n\nThis dataset was obtained as part of the Generative Modelling project from the Artificial Medical Intelligence Group - \nAMIGO (URL It consists on of 1,000 synthetic T1w images sampled from generative models trained on \ndata originally from the UK Biobank dataset (URL### Languages\n\nThe language in the dataset is English.## Dataset Structure### Data Instances### Data Fields\n\n- 'prompt_age': a float value used during the sampling to specify the age of the generated brain image (defined in years) \n- 'prompt_sex': a string used during the sampling to specify the sex (\"M\" for male and \"F\" for female)\n- 'prompt_ventricular_volume': a float whose value used during the sampling to specify the volume of ventricular cerebrospinal fluid (in mm^3; based on UKB Data-Field 25004)\n- 'prompt_brain_volume': a float whose value used during the sampling to specify the brain volume normalised for head size (in mm^3; based on UKB Data-Field 25009)## Dataset Creation### Curation Rationale### Source Data"
] |
73fe8bfa8d1c13de1f8f114280d382cfd3f4974f
|
# Dataset Card for "custom_cleaned18k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Jakir057/custom_cleaned18k
|
[
"region:us"
] |
2023-09-02T10:13:23+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "10", "1": "100", "2": "1000", "3": "2", "4": "20", "5": "200", "6": "5", "7": "50", "8": "500"}}}}], "splits": [{"name": "train", "num_bytes": 145908891.0, "num_examples": 16200}, {"name": "test", "num_bytes": 15833757.0, "num_examples": 1800}], "download_size": 157098269, "dataset_size": 161742648.0}}
|
2023-09-02T10:13:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "custom_cleaned18k"
More Information needed
|
[
"# Dataset Card for \"custom_cleaned18k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"custom_cleaned18k\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"custom_cleaned18k\"\n\nMore Information needed"
] |
36cf3e37c4668e15f09141c40717a9b1a9770e07
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj](https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-18T02:00:22.389736](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj/blob/main/results_2023-10-18T02-00-22.389736.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3719588926174497,
"em_stderr": 0.004949726013193945,
"f1": 0.4084679110738261,
"f1_stderr": 0.004843145937750956,
"acc": 0.4480415851620389,
"acc_stderr": 0.010535274120903989
},
"harness|drop|3": {
"em": 0.3719588926174497,
"em_stderr": 0.004949726013193945,
"f1": 0.4084679110738261,
"f1_stderr": 0.004843145937750956
},
"harness|gsm8k|5": {
"acc": 0.1281273692191054,
"acc_stderr": 0.009206398549980031
},
"harness|winogrande|5": {
"acc": 0.7679558011049724,
"acc_stderr": 0.011864149691827948
}
}
```
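The aggregated numbers above can also be pulled programmatically from the "results" configuration; a minimal sketch (the config name "results" and split "latest" come from this card's own description):

```python
from datasets import load_dataset

# Config "results" and split "latest" are documented in this card's summary.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj",
    "results",
    split="latest",
)
```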
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj
|
[
"region:us"
] |
2023-09-02T10:14:26+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj](https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-18T02:00:22.389736](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj/blob/main/results_2023-10-18T02-00-22.389736.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3719588926174497,\n \"em_stderr\": 0.004949726013193945,\n \"f1\": 0.4084679110738261,\n \"f1_stderr\": 0.004843145937750956,\n \"acc\": 0.4480415851620389,\n \"acc_stderr\": 0.010535274120903989\n },\n \"harness|drop|3\": {\n \"em\": 0.3719588926174497,\n \"em_stderr\": 0.004949726013193945,\n \"f1\": 0.4084679110738261,\n \"f1_stderr\": 0.004843145937750956\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1281273692191054,\n \"acc_stderr\": 0.009206398549980031\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7679558011049724,\n \"acc_stderr\": 0.011864149691827948\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|arc:challenge|25_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_18T02_00_22.389736", "path": ["**/details_harness|drop|3_2023-10-18T02-00-22.389736.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-18T02-00-22.389736.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_18T02_00_22.389736", "path": ["**/details_harness|gsm8k|5_2023-10-18T02-00-22.389736.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-18T02-00-22.389736.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": 
["**/details_harness|hellaswag|10_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T11:14:02.105897.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T11:14:02.105897.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T11:14:02.105897.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-09-02T11:14:02.105897.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-02T11:14:02.105897.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T11:14:02.105897.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-02T11:14:02.105897.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_18T02_00_22.389736", "path": ["**/details_harness|winogrande|5_2023-10-18T02-00-22.389736.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-18T02-00-22.389736.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_02T11_14_02.105897", "path": ["results_2023-09-02T11:14:02.105897.parquet"]}, {"split": "2023_10_18T02_00_22.389736", "path": ["results_2023-10-18T02-00-22.389736.parquet"]}, {"split": "latest", "path": ["results_2023-10-18T02-00-22.389736.parquet"]}]}]}
|
2023-10-18T01:00:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
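The snippet below is the loading example carried in this card's metadata; it pulls the `harness_winogrande_5` configuration at its latest results:

```python
from datasets import load_dataset

# Details for each evaluated task live in their own configuration;
# the "train" split always points to the most recent run.
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj",
    "harness_winogrande_5",
    split="train")
```

Each timestamped run is also exposed as its own split, so passing, for example, `split="2023_10_18T02_00_22.389736"` should return that specific run.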
## Latest results
These are the latest results from run 2023-10-18T02:00:22.389736 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
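Reproduced from the card metadata, the aggregated figures for this run are:

```python
{
    "all": {
        "em": 0.3719588926174497,
        "em_stderr": 0.004949726013193945,
        "f1": 0.4084679110738261,
        "f1_stderr": 0.004843145937750956,
        "acc": 0.4480415851620389,
        "acc_stderr": 0.010535274120903989
    },
    "harness|drop|3": {
        "em": 0.3719588926174497,
        "em_stderr": 0.004949726013193945,
        "f1": 0.4084679110738261,
        "f1_stderr": 0.004843145937750956
    },
    "harness|gsm8k|5": {
        "acc": 0.1281273692191054,
        "acc_stderr": 0.009206398549980031
    },
    "harness|winogrande|5": {
        "acc": 0.7679558011049724,
        "acc_stderr": 0.011864149691827948
    }
}
```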
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T02:00:22.389736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-18T02:00:22.389736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
43,
31,
191,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-huangyt_FINETUNE2_3w-q_k_v_o_proj on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-18T02:00:22.389736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |