| Column | Type | Min length | Max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
d376fe4792eef47fd9f0d4219e7640562471d1d5 |
## Description
A channel generating relaxing scenes from around the world.
## Model
SVD
## Voice
Muted
# Tags
- Chill
- Relaxation
# Style
cosy and warm, relaxing, beautiful
# Music
sweet ambient music
## Prompt
A channel generating relaxing scenes from around the world.
Examples include, but are not limited to:
living rooms with a fireplace, inside a car or tent at night, indoors while it is raining outside,
on a foggy mountain in the morning, in a busy cafe, near a purring cat, etc.
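The channel definition above is published as a dataset repository on the Hub (jbilcke-hf/ai-tube-latent-chill, per the metadata that follows). Below is a minimal sketch of fetching the card programmatically with `huggingface_hub`; the `README.md` filename is an assumption about where the card text is stored, not something stated in the card itself.

```python
# Minimal sketch: download this channel card from the Hugging Face Hub.
# Assumption (not stated above): the card text lives in README.md of the dataset repo.
from huggingface_hub import hf_hub_download

card_path = hf_hub_download(
    repo_id="jbilcke-hf/ai-tube-latent-chill",
    filename="README.md",
    repo_type="dataset",
)
with open(card_path, encoding="utf-8") as f:
    print(f.read())  # Description / Model / Voice / Tags / Style / Music / Prompt sections
```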
| jbilcke-hf/ai-tube-latent-chill | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-20T15:05:59+00:00 | {"license": "cc-by-nc-4.0", "pretty_name": "Latent Chill"} | 2023-12-24T19:05:15+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
## Description
A channel generating relaxing scenes from around the world.
## Model
SVD
## Voice
Muted
# Tags
- Chill
- Relaxation
# Style
cosy and warm, relaxing, beautiful
# Music
sweet ambient music
## Prompt
A channel generating relaxing scenes from around the world.
Examples include, but are not limited to:
living rooms with a fireplace, inside a car or tent at night, indoors while it is raining outside,
on a foggy mountain in the morning, in a busy cafe, near a purring cat, etc.
| [
"## Description\n\nA channel generating relaxing scenes from around the world.",
"## Model\n\nSVD",
"## Voice\n\nMuted",
"# Tags\n\n- Chill\n- Relaxation",
"# Style\n\ncosy and warm, relaxing, beautiful",
"# Music\n\nsweet ambient music",
"## Prompt\n\nA channel generating relaxing scenes from around the world.\n\nExamples includes, but are not limited to:\nliving rooms with fireplace, inside a car or tent at night, inside somewhere while it is raining outside,\nin a foggy mountain in the morning, in a busy cafe, near a purring cat etc"
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"## Description\n\nA channel generating relaxing scenes from around the world.",
"## Model\n\nSVD",
"## Voice\n\nMuted",
"# Tags\n\n- Chill\n- Relaxation",
"# Style\n\ncosy and warm, relaxing, beautiful",
"# Music\n\nsweet ambient music",
"## Prompt\n\nA channel generating relaxing scenes from around the world.\n\nExamples includes, but are not limited to:\nliving rooms with fireplace, inside a car or tent at night, inside somewhere while it is raining outside,\nin a foggy mountain in the morning, in a busy cafe, near a purring cat etc"
] | [
17,
15,
4,
4,
7,
11,
5,
71
] | [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n## Description\n\nA channel generating relaxing scenes from around the world.## Model\n\nSVD## Voice\n\nMuted# Tags\n\n- Chill\n- Relaxation# Style\n\ncosy and warm, relaxing, beautiful# Music\n\nsweet ambient music## Prompt\n\nA channel generating relaxing scenes from around the world.\n\nExamples includes, but are not limited to:\nliving rooms with fireplace, inside a car or tent at night, inside somewhere while it is raining outside,\nin a foggy mountain in the morning, in a busy cafe, near a purring cat etc"
] |
76be11867a30f170aa9e78e11134af7612187a71 |
# Dataset Card for NTX v1 in the Aya format - German subset
This dataset is a format conversion of the German data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
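As a usage note, a minimal sketch of loading this German subset with the Hugging Face `datasets` library is shown below. The `train` split name and the presence of Aya-style instruction/response text fields are assumptions, not details confirmed by the card; the other language subsets further down should load the same way with their own repo ids.

```python
# Minimal sketch (assumptions: a default "train" split exists and each row
# carries Aya-style instruction/response text fields).
from datasets import load_dataset

ds = load_dataset("tellarin-ai/ntx_llm_inst_german", split="train")
print(ds)      # prints the dataset features and the number of rows
print(ds[0])   # prints the first example as a dict of fields
```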
| tellarin-ai/ntx_llm_inst_german | [
"task_categories:token-classification",
"language:de",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:07:20+00:00 | {"language": ["de"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:08:04+00:00 | [
"2303.18103"
] | [
"de"
] | TAGS
#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - German subset
This dataset is a format conversion of the German data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
41,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
6414f52d7d4769bc29e73a16a86ced55d14153f1 |
# Dataset Card for NTX v1 in the Aya format - English subset
This dataset is a format conversion of the English data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_english | [
"task_categories:token-classification",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:09:43+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T16:25:24+00:00 | [
"2303.18103"
] | [
"en"
] | TAGS
#task_categories-token-classification #language-English #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - English subset
This dataset is a format conversion of the English data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - English subset\n\nThis dataset is a format conversion for the English data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-English #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - English subset\n\nThis dataset is a format conversion for the English data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
41,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-English #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - English subset\n\nThis dataset is a format conversion for the English data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
8ff88376d2650635d5a1bcaf4978f4d2f03f496b |
# Dataset Card for NTX v1 in the Aya format - Spanish subset
This dataset is a format conversion of the Spanish data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_spanish | [
"task_categories:token-classification",
"language:es",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:11:39+00:00 | {"language": ["es"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:12:19+00:00 | [
"2303.18103"
] | [
"es"
] | TAGS
#task_categories-token-classification #language-Spanish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Spanish subset
This dataset is a format conversion of the Spanish data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Spanish subset\n\nThis dataset is a format conversion for the Spanish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Spanish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Spanish subset\n\nThis dataset is a format conversion for the Spanish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Spanish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Spanish subset\n\nThis dataset is a format conversion for the Spanish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
6846ca3c33926b0827e52e00ade1c460267982e6 |
# Dataset Card for NTX v1 in the Aya format - French subset
This dataset is a format conversion of the French data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_french | [
"task_categories:token-classification",
"language:fr",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:13:06+00:00 | {"language": ["fr"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:13:44+00:00 | [
"2303.18103"
] | [
"fr"
] | TAGS
#task_categories-token-classification #language-French #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - French subset
This dataset is a format conversion of the French data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - French subset\n\nThis dataset is a format conversion for the French data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-French #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - French subset\n\nThis dataset is a format conversion for the French data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-French #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - French subset\n\nThis dataset is a format conversion for the French data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
acd686a507d4978daf3f797790e337b62a3dfa46 |
# Dataset Card for NTX v1 in the Aya format - Hindi subset
This dataset is a format conversion of the Hindi data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_hindi | [
"task_categories:token-classification",
"language:hi",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:14:19+00:00 | {"language": ["hi"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:15:06+00:00 | [
"2303.18103"
] | [
"hi"
] | TAGS
#task_categories-token-classification #language-Hindi #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Hindi subset
This dataset is a format conversion of the Hindi data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Hindi subset\n\nThis dataset is a format conversion for the Hindi data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Hindi #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Hindi subset\n\nThis dataset is a format conversion for the Hindi data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
41,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Hindi #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Hindi subset\n\nThis dataset is a format conversion for the Hindi data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
58820e7608ca2662b1a5b54f92b6ba662df4fd86 |
# Dataset Card for NTX v1 in the Aya format - Italian subset
This dataset is a format conversion of the Italian data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_italian | [
"task_categories:token-classification",
"language:it",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:15:36+00:00 | {"language": ["it"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:16:31+00:00 | [
"2303.18103"
] | [
"it"
] | TAGS
#task_categories-token-classification #language-Italian #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Italian subset
This dataset is a format conversion of the Italian data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Italian subset\n\nThis dataset is a format conversion for the Italian data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Italian #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Italian subset\n\nThis dataset is a format conversion for the Italian data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Italian #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Italian subset\n\nThis dataset is a format conversion for the Italian data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
a3aab7f1951295f1202ddfca44ac01bb422063ec |
## Description
A channel generating 360° panoramas from around the world.
## Model
SVD
## LoRA
artificialguybr/360Redmond
## Voice
Muted
# Tags
- Travel
- 360°
# Style
360 view, hdri, photo, ultra detailed footage
# Music
nature sounds, wind, birds
## Prompt
A channel generating panoramas from around the world.
Each video comprises dozens of scenes, each lasting about 30 seconds.
| jbilcke-hf/ai-tube-panoramatic | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-20T15:16:11+00:00 | {"license": "cc-by-nc-4.0", "pretty_name": "Panoramatic"} | 2024-02-09T17:24:09+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
## Description
A channel generating 360° panoramas from around the world.
## Model
SVD
## LoRA
artificialguybr/360Redmond
## Voice
Muted
# Tags
- Travel
- 360°
# Style
360 view, hdri, photo, ultra detailed footage
# Music
nature sounds, wind, birds
## Prompt
A channel generating panoramas from around the world.
Each video comprises dozens of scenes, each lasting about 30 seconds.
| [
"## Description\n\nA channel generating 360° panoramas from around the world.",
"## Model\n\nSVD",
"## LoRA\n\nartificialguybr/360Redmond",
"## Voice\n\nMuted",
"# Tags\n\n- Travel\n- 360°",
"# Style\n\n360 view, hdri, photo, ultra detailed footage",
"# Music\n\nnature sounds, wind, birds",
"## Prompt\n\nA channel generating panoramas from around the world.\n\nEach video is comprised of dozens of scenes each lasting about 30 seconds"
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"## Description\n\nA channel generating 360° panoramas from around the world.",
"## Model\n\nSVD",
"## LoRA\n\nartificialguybr/360Redmond",
"## Voice\n\nMuted",
"# Tags\n\n- Travel\n- 360°",
"# Style\n\n360 view, hdri, photo, ultra detailed footage",
"# Music\n\nnature sounds, wind, birds",
"## Prompt\n\nA channel generating panoramas from around the world.\n\nEach video is comprised of dozens of scenes each lasting about 30 seconds"
] | [
17,
14,
4,
11,
4,
6,
14,
9,
32
] | [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n## Description\n\nA channel generating 360° panoramas from around the world.## Model\n\nSVD## LoRA\n\nartificialguybr/360Redmond## Voice\n\nMuted# Tags\n\n- Travel\n- 360°# Style\n\n360 view, hdri, photo, ultra detailed footage# Music\n\nnature sounds, wind, birds## Prompt\n\nA channel generating panoramas from around the world.\n\nEach video is comprised of dozens of scenes each lasting about 30 seconds"
] |
4e71d1e02ee1af617b336e5faf54b7bc44f7cc30 |
# Dataset Card for NTX v1 in the Aya format - Japanese subset
This dataset is a format conversion of the Japanese data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_japanese | [
"task_categories:token-classification",
"language:ja",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:16:59+00:00 | {"language": ["ja"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:17:41+00:00 | [
"2303.18103"
] | [
"ja"
] | TAGS
#task_categories-token-classification #language-Japanese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Japanese subset
This dataset is a format conversion of the Japanese data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Japanese subset\n\nThis dataset is a format conversion for the Japanese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Japanese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Japanese subset\n\nThis dataset is a format conversion for the Japanese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Japanese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Japanese subset\n\nThis dataset is a format conversion for the Japanese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
654696ea66602ec379d31f54e794e13de7883540 |
# Dataset Card for NTX v1 in the Aya format - Korean subset
This dataset is a format conversion of the Korean data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_korean | [
"task_categories:token-classification",
"language:ko",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:18:10+00:00 | {"language": ["ko"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:18:50+00:00 | [
"2303.18103"
] | [
"ko"
] | TAGS
#task_categories-token-classification #language-Korean #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Korean subset
This dataset is a format conversion of the Korean data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Korean subset\n\nThis dataset is a format conversion for the Korean data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Korean #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Korean subset\n\nThis dataset is a format conversion for the Korean data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Korean #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Korean subset\n\nThis dataset is a format conversion for the Korean data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
b97c5a50be2e41c21360c8e0d7e172af96315c5f |
# Dataset Card for NTX v1 in the Aya format - Dutch subset
This dataset is a format conversion of the Dutch data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_dutch | [
"task_categories:token-classification",
"language:nl",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:19:14+00:00 | {"language": ["nl"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:19:51+00:00 | [
"2303.18103"
] | [
"nl"
] | TAGS
#task_categories-token-classification #language-Dutch #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Dutch subset
This dataset is a format conversion of the Dutch data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Dutch subset\n\nThis dataset is a format conversion for the Dutch data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Dutch #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Dutch subset\n\nThis dataset is a format conversion for the Dutch data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Dutch #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Dutch subset\n\nThis dataset is a format conversion for the Dutch data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
d59349c72788c2d83372733224a53deeaace87b4 |
# Dataset Card for NTX v1 in the Aya format - Portuguese subset
This dataset is a format conversion of the Portuguese data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_portuguese | [
"task_categories:token-classification",
"language:pt",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:20:21+00:00 | {"language": ["pt"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:21:00+00:00 | [
"2303.18103"
] | [
"pt"
] | TAGS
#task_categories-token-classification #language-Portuguese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Portuguese subset
This dataset is a format conversion of the Portuguese data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Portuguese subset\n\nThis dataset is a format conversion for the Portuguese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Portuguese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Portuguese subset\n\nThis dataset is a format conversion for the Portuguese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
61,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Portuguese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Portuguese subset\n\nThis dataset is a format conversion for the Portuguese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
1fa5ad2270724a99df73f57473b9cae433798082 |
# Dataset Card for NTX v1 in the Aya format - Turkish subset
This dataset is a format conversion of the Turkish data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_turkish | [
"task_categories:token-classification",
"language:tr",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:21:30+00:00 | {"language": ["tr"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:22:04+00:00 | [
"2303.18103"
] | [
"tr"
] | TAGS
#task_categories-token-classification #language-Turkish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Turkish subset
This dataset is a format conversion of the Turkish data from the original NTX into the Aya instruction format, and it is released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instruction format, or more details, please refer to the full dataset in instruction form (URL) or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Turkish subset\n\nThis dataset is a format conversion for the Turkish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Turkish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Turkish subset\n\nThis dataset is a format conversion for the Turkish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
59,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Turkish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Turkish subset\n\nThis dataset is a format conversion for the Turkish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
9898ed5e23e3d009c7202fda504378ec412472d2 |
# Dataset Card for NTX v1 in the Aya format - Chinese subset
This dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
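For reference, a hedged loading sketch with the `datasets` library (the `"train"` split name is an assumption; the column layout is whatever the Aya-format conversion produced and is not specified here):

```python
from datasets import load_dataset

# Load the Chinese temporal-expressions subset converted to the Aya format.
# The split name "train" is an assumption and may differ in the actual repository.
ntx_zh = load_dataset("tellarin-ai/ntx_llm_inst_chinese", split="train")

# Inspect the column names and the number of instruction examples.
print(ntx_zh.column_names)
print(f"{len(ntx_zh)} examples")
```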
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_chinese | [
"task_categories:token-classification",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:22:31+00:00 | {"language": ["zh"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:23:07+00:00 | [
"2303.18103"
] | [
"zh"
] | TAGS
#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Chinese subset
This dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
86029ade0cb29a07bad16ca6b0e60c91364173a1 |
# Dataset Card for NTX v1 in the Aya format - Swedish subset
This dataset is a format conversion for the Swedish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
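A minimal loading sketch with the `datasets` library (the `"train"` split name is an assumption and may differ in the actual repository):

```python
from datasets import load_dataset

# Load the Swedish temporal-expressions subset in Aya instruction format.
ntx_sv = load_dataset("tellarin-ai/ntx_llm_inst_swedish", split="train")

# Peek at a few instruction examples.
for example in ntx_sv.select(range(3)):
    print(example)
```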
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_swedish | [
"task_categories:token-classification",
"language:sv",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T15:23:37+00:00 | {"language": ["sv"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:24:00+00:00 | [
"2303.18103"
] | [
"sv"
] | TAGS
#task_categories-token-classification #language-Swedish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Swedish subset
This dataset is a format conversion for the Swedish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Swedish subset\n\nThis dataset is a format conversion for the Swedish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Swedish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Swedish subset\n\nThis dataset is a format conversion for the Swedish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Swedish #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Swedish subset\n\nThis dataset is a format conversion for the Swedish data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
af068f45178b96d4a0fef48dabc56f3d0ae1cbdd |
# Dataset Card for Evaluation run of MisterRid/saulgoodman-7b-alpha1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MisterRid/saulgoodman-7b-alpha1](https://huggingface.co/MisterRid/saulgoodman-7b-alpha1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1",
"harness_winogrande_5",
split="train")
```
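The aggregated metrics can be loaded in the same way from the additional "results" configuration (a hedged sketch; the "results" configuration name and the use of the "train" split are taken from the description above and are not otherwise verified):

```python
from datasets import load_dataset

# Aggregated results; per the card, the "train" split points to the latest run.
results = load_dataset("open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1",
	"results",
	split="train")
```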
## Latest results
These are the [latest results from run 2023-12-20T15:25:26.113102](https://huggingface.co/datasets/open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1/blob/main/results_2023-12-20T15-25-26.113102.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6532983103212808,
"acc_stderr": 0.0320614117504314,
"acc_norm": 0.6554540885332136,
"acc_norm_stderr": 0.03270128058979508,
"mc1": 0.4430844553243574,
"mc1_stderr": 0.017389730346877106,
"mc2": 0.6113178140577039,
"mc2_stderr": 0.015565907265747409
},
"harness|arc:challenge|25": {
"acc": 0.6262798634812287,
"acc_stderr": 0.014137708601759088,
"acc_norm": 0.6569965870307167,
"acc_norm_stderr": 0.013872423223718164
},
"harness|hellaswag|10": {
"acc": 0.6776538538139812,
"acc_stderr": 0.004664195159393894,
"acc_norm": 0.855008962358096,
"acc_norm_stderr": 0.003513722251954684
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6592592592592592,
"acc_stderr": 0.04094376269996792,
"acc_norm": 0.6592592592592592,
"acc_norm_stderr": 0.04094376269996792
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7569444444444444,
"acc_stderr": 0.035868792800803406,
"acc_norm": 0.7569444444444444,
"acc_norm_stderr": 0.035868792800803406
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6647398843930635,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.6647398843930635,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.43137254901960786,
"acc_stderr": 0.04928099597287534,
"acc_norm": 0.43137254901960786,
"acc_norm_stderr": 0.04928099597287534
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.79,
"acc_stderr": 0.04093601807403326,
"acc_norm": 0.79,
"acc_norm_stderr": 0.04093601807403326
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.574468085106383,
"acc_stderr": 0.03232146916224468,
"acc_norm": 0.574468085106383,
"acc_norm_stderr": 0.03232146916224468
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5724137931034483,
"acc_stderr": 0.04122737111370332,
"acc_norm": 0.5724137931034483,
"acc_norm_stderr": 0.04122737111370332
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41798941798941797,
"acc_stderr": 0.02540255550326091,
"acc_norm": 0.41798941798941797,
"acc_norm_stderr": 0.02540255550326091
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8064516129032258,
"acc_stderr": 0.02247525852553606,
"acc_norm": 0.8064516129032258,
"acc_norm_stderr": 0.02247525852553606
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175007,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175007
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7696969696969697,
"acc_stderr": 0.0328766675860349,
"acc_norm": 0.7696969696969697,
"acc_norm_stderr": 0.0328766675860349
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6820512820512821,
"acc_stderr": 0.023610884308927865,
"acc_norm": 0.6820512820512821,
"acc_norm_stderr": 0.023610884308927865
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3814814814814815,
"acc_stderr": 0.02961671892749759,
"acc_norm": 0.3814814814814815,
"acc_norm_stderr": 0.02961671892749759
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6974789915966386,
"acc_stderr": 0.029837962388291932,
"acc_norm": 0.6974789915966386,
"acc_norm_stderr": 0.029837962388291932
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242742,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242742
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5324074074074074,
"acc_stderr": 0.03402801581358966,
"acc_norm": 0.5324074074074074,
"acc_norm_stderr": 0.03402801581358966
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8227848101265823,
"acc_stderr": 0.02485636418450322,
"acc_norm": 0.8227848101265823,
"acc_norm_stderr": 0.02485636418450322
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.031381476375754995,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.031381476375754995
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.036412970813137276,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.036412970813137276
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822585,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822585
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8250319284802043,
"acc_stderr": 0.013586619219903333,
"acc_norm": 0.8250319284802043,
"acc_norm_stderr": 0.013586619219903333
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7109826589595376,
"acc_stderr": 0.02440517393578323,
"acc_norm": 0.7109826589595376,
"acc_norm_stderr": 0.02440517393578323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.37318435754189944,
"acc_stderr": 0.01617569201338195,
"acc_norm": 0.37318435754189944,
"acc_norm_stderr": 0.01617569201338195
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.0248480182638752,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.0248480182638752
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7170418006430869,
"acc_stderr": 0.02558306248998481,
"acc_norm": 0.7170418006430869,
"acc_norm_stderr": 0.02558306248998481
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.024383665531035457,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.024383665531035457
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.029820747191422473,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.029820747191422473
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4634941329856584,
"acc_stderr": 0.012736153390214961,
"acc_norm": 0.4634941329856584,
"acc_norm_stderr": 0.012736153390214961
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6948529411764706,
"acc_stderr": 0.027971541370170595,
"acc_norm": 0.6948529411764706,
"acc_norm_stderr": 0.027971541370170595
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093085,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093085
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8507462686567164,
"acc_stderr": 0.02519692987482707,
"acc_norm": 0.8507462686567164,
"acc_norm_stderr": 0.02519692987482707
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5180722891566265,
"acc_stderr": 0.03889951252827216,
"acc_norm": 0.5180722891566265,
"acc_norm_stderr": 0.03889951252827216
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8538011695906432,
"acc_stderr": 0.027097290118070813,
"acc_norm": 0.8538011695906432,
"acc_norm_stderr": 0.027097290118070813
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4430844553243574,
"mc1_stderr": 0.017389730346877106,
"mc2": 0.6113178140577039,
"mc2_stderr": 0.015565907265747409
},
"harness|winogrande|5": {
"acc": 0.7900552486187845,
"acc_stderr": 0.01144628062926263
},
"harness|gsm8k|5": {
"acc": 0.5974222896133434,
"acc_stderr": 0.013508523063663425
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1 | [
"region:us"
] | 2023-12-20T15:27:43+00:00 | {"pretty_name": "Evaluation run of MisterRid/saulgoodman-7b-alpha1", "dataset_summary": "Dataset automatically created during the evaluation run of model [MisterRid/saulgoodman-7b-alpha1](https://huggingface.co/MisterRid/saulgoodman-7b-alpha1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T15:25:26.113102](https://huggingface.co/datasets/open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1/blob/main/results_2023-12-20T15-25-26.113102.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6532983103212808,\n \"acc_stderr\": 0.0320614117504314,\n \"acc_norm\": 0.6554540885332136,\n \"acc_norm_stderr\": 0.03270128058979508,\n \"mc1\": 0.4430844553243574,\n \"mc1_stderr\": 0.017389730346877106,\n \"mc2\": 0.6113178140577039,\n \"mc2_stderr\": 0.015565907265747409\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759088,\n \"acc_norm\": 0.6569965870307167,\n \"acc_norm_stderr\": 0.013872423223718164\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6776538538139812,\n \"acc_stderr\": 0.004664195159393894,\n \"acc_norm\": 0.855008962358096,\n \"acc_norm_stderr\": 0.003513722251954684\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n \"acc_stderr\": 0.04094376269996792,\n \"acc_norm\": 0.6592592592592592,\n \"acc_norm_stderr\": 0.04094376269996792\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.035868792800803406,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.035868792800803406\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.43137254901960786,\n \"acc_stderr\": 0.04928099597287534,\n \"acc_norm\": 0.43137254901960786,\n \"acc_norm_stderr\": 0.04928099597287534\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370332,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370332\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41798941798941797,\n \"acc_stderr\": 0.02540255550326091,\n \"acc_norm\": 0.41798941798941797,\n \"acc_norm_stderr\": 0.02540255550326091\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8064516129032258,\n \"acc_stderr\": 0.02247525852553606,\n \"acc_norm\": 0.8064516129032258,\n \"acc_norm_stderr\": 0.02247525852553606\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175007,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175007\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6820512820512821,\n \"acc_stderr\": 0.023610884308927865,\n \"acc_norm\": 0.6820512820512821,\n \"acc_norm_stderr\": 0.023610884308927865\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3814814814814815,\n \"acc_stderr\": 0.02961671892749759,\n \"acc_norm\": 0.3814814814814815,\n \"acc_norm_stderr\": 0.02961671892749759\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.029837962388291932,\n \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.029837962388291932\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242742,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242742\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5324074074074074,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.5324074074074074,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8227848101265823,\n \"acc_stderr\": 0.02485636418450322,\n \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.02485636418450322\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.031381476375754995,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.031381476375754995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.036412970813137276,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.036412970813137276\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8250319284802043,\n \"acc_stderr\": 0.013586619219903333,\n \"acc_norm\": 0.8250319284802043,\n \"acc_norm_stderr\": 0.013586619219903333\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7109826589595376,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.7109826589595376,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.37318435754189944,\n \"acc_stderr\": 0.01617569201338195,\n \"acc_norm\": 0.37318435754189944,\n \"acc_norm_stderr\": 0.01617569201338195\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.0248480182638752,\n \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.0248480182638752\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998481,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998481\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.024383665531035457,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.024383665531035457\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4634941329856584,\n \"acc_stderr\": 0.012736153390214961,\n \"acc_norm\": 0.4634941329856584,\n \"acc_norm_stderr\": 0.012736153390214961\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6948529411764706,\n \"acc_stderr\": 0.027971541370170595,\n \"acc_norm\": 0.6948529411764706,\n \"acc_norm_stderr\": 0.027971541370170595\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093085,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093085\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482707,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8538011695906432,\n \"acc_stderr\": 0.027097290118070813,\n \"acc_norm\": 0.8538011695906432,\n \"acc_norm_stderr\": 0.027097290118070813\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4430844553243574,\n \"mc1_stderr\": 0.017389730346877106,\n \"mc2\": 0.6113178140577039,\n \"mc2_stderr\": 0.015565907265747409\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7900552486187845,\n \"acc_stderr\": 0.01144628062926263\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5974222896133434,\n \"acc_stderr\": 0.013508523063663425\n 
}\n}\n```", "repo_url": "https://huggingface.co/MisterRid/saulgoodman-7b-alpha1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|arc:challenge|25_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|gsm8k|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hellaswag|10_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T15-25-26.113102.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T15-25-26.113102.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T15-25-26.113102.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T15-25-26.113102.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T15-25-26.113102.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T15_25_26.113102", "path": ["**/details_harness|winogrande|5_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T15-25-26.113102.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_20T15_25_26.113102", "path": ["results_2023-12-20T15-25-26.113102.parquet"]}, {"split": "latest", "path": ["results_2023-12-20T15-25-26.113102.parquet"]}]}]} | 2023-12-20T15:28:08+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of MisterRid/saulgoodman-7b-alpha1
Dataset automatically created during the evaluation run of model MisterRid/saulgoodman-7b-alpha1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
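A minimal sketch using the `datasets` library (the repository id below is assumed from the leaderboard's usual `details_<org>__<model>` naming and is not stated in this card; the config and split names come from the file listing above):

```python
from datasets import load_dataset

# Hypothetical repository id, assumed from the Open LLM Leaderboard naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_MisterRid__saulgoodman-7b-alpha1",
    "harness_winogrande_5",  # any of the 63 configurations listed above
    split="latest",          # the "latest" split always points to the newest run
)
```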
## Latest results
These are the latest results from run 2023-12-20T15:25:26.113102 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of MisterRid/saulgoodman-7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T15:25:26.113102(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of MisterRid/saulgoodman-7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T15:25:26.113102(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
189,
66,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of MisterRid/saulgoodman-7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T15:25:26.113102(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
5dd14219c7ab6cacf34efbdcbbe2877abe3ef63c |
# 東方トカマクラブ データセット
## 概要
このデータセットは、東方Projectのトカマクラブに関する情報を収集したものです。主にキャラクターに関する質問と回答を含んでいます。
## データセットの構造
データはCSVファイルに格納されており、以下のカラムで構成されています:
- `Q`: 質問のテキスト
- `A`: 回答のテキスト
## 使用例
このデータセットは、チャットボットや質問応答システム、機械学習モデルのトレーニングに使用することができます。
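例えば、`datasets` ライブラリで読み込む最小のスケッチは次のとおりです(読み込み方法と split 名は一般的な CSV データセットを想定した例です):

```python
from datasets import load_dataset

# 東方トカマクラブ QA データセットを読み込む(想定例)
dataset = load_dataset("MakiAi/Tokama_Club_QA", split="train")

# Q(質問)と A(回答)のカラムを表示する
for row in dataset.select(range(3)):
    print("Q:", row["Q"])
    print("A:", row["A"])
```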
| MakiAi/Tokama_Club_QA | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:ja",
"license:mit",
"music",
"region:us"
] | 2023-12-20T15:37:51+00:00 | {"language": ["ja"], "license": "mit", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "tags": ["music"]} | 2023-12-20T15:46:02+00:00 | [] | [
"ja"
] | TAGS
#task_categories-question-answering #size_categories-n<1K #language-Japanese #license-mit #music #region-us
|
# 東方トカマクラブ データセット
## 概要
このデータセットは、東方Projectのトカマクラブに関する情報を収集したものです。主にキャラクターに関する質問と回答を含んでいます。
## データセットの構造
データはCSVファイルに格納されており、以下のカラムで構成されています:
- 'Q': 質問のテキスト
- 'A': 回答のテキスト
## 使用例
このデータセットは、チャットボットや質問応答システム、機械学習モデルのトレーニングに使用することができます。
| [
"# 東方トカマクラブ データセット",
"## 概要\nこのデータセットは、東方Projectのトカマクラブに関する情報を収集したものです。主にキャラクターに関する質問と回答を含んでいます。",
"## データセットの構造\nデータはCSVファイルに格納されており、以下のカラムで構成されています:\n- 'Q': 質問のテキスト\n- 'A': 回答のテキスト",
"## 使用例\nこのデータセットは、チャットボットや質問応答システム、機械学習モデルのトレーニングに使用することができます。"
] | [
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-Japanese #license-mit #music #region-us \n",
"# 東方トカマクラブ データセット",
"## 概要\nこのデータセットは、東方Projectのトカマクラブに関する情報を収集したものです。主にキャラクターに関する質問と回答を含んでいます。",
"## データセットの構造\nデータはCSVファイルに格納されており、以下のカラムで構成されています:\n- 'Q': 質問のテキスト\n- 'A': 回答のテキスト",
"## 使用例\nこのデータセットは、チャットボットや質問応答システム、機械学習モデルのトレーニングに使用することができます。"
] | [
41,
10,
32,
42,
28
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-Japanese #license-mit #music #region-us \n# 東方トカマクラブ データセット## 概要\nこのデータセットは、東方Projectのトカマクラブに関する情報を収集したものです。主にキャラクターに関する質問と回答を含んでいます。## データセットの構造\nデータはCSVファイルに格納されており、以下のカラムで構成されています:\n- 'Q': 質問のテキスト\n- 'A': 回答のテキスト## 使用例\nこのデータセットは、チャットボットや質問応答システム、機械学習モデルのトレーニングに使用することができます。"
] |
1755a90c68615215b03f29fd0ca477fa52928fa6 | # BLOSSOM ORCA V2
### 介绍
Blossom Orca V2是一个基于OpenOrca衍生而来的中英双语指令数据集,适用于指令微调。
相比于blossom-wizard-v1,指令不变,进一步优化了输出效果,此外,将system消息并入user消息中。
本数据集从OpenOrca中抽取了系统提示和指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。
相比直接对原始OpenOrca进行翻译的中文数据集,Blossom Orca的一致性及质量更高。
本次发布了全量数据的30%,包含中英双语各100K,共计200K记录。
### 语言
以中文和英文为主。
### 数据集结构
数据集包含两个文件:blossom-orca-v1-chinese-100k.json和blossom-orca-v1-english-100k.json,分别对应中文和英文的数据。
每条数据代表一个完整的对话,包含id和conversations两个字段。
- id:字符串,代表原始OpenOrca的指令id。
- conversations:对象数组,每个对象包含role、content两个字段,role的取值为system、user或assistant,分别代表系统提示、用户输入和助手输出,content则为对应的内容。
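为便于理解,下面给出一个符合上述结构的最小示例(记录内容为虚构,仅用于说明字段格式):

```python
# 一条虚构的示例记录,仅用于演示上文描述的 id / conversations 字段结构
example_record = {
    "id": "t0.123456",  # 示例值:对应原始OpenOrca的指令id(此处为虚构)
    "conversations": [
        {"role": "user", "content": "请将下面的句子翻译成英文:今天天气很好。"},
        {"role": "assistant", "content": "The weather is very nice today."},
    ],
}
```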
### 数据集限制
本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。 | Azure99/blossom-orca-v2 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-20T15:45:55+00:00 | {"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation"]} | 2023-12-21T15:46:25+00:00 | [] | [
"zh",
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us
| # BLOSSOM ORCA V2
### 介绍
Blossom Orca V2是一个基于OpenOrca衍生而来的中英双语指令数据集,适用于指令微调。
相比于blossom-wizard-v1,指令不变,进一步优化了输出效果,此外,将system消息并入user消息中。
本数据集从OpenOrca中抽取了系统提示和指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。
相比直接对原始OpenOrca进行翻译的中文数据集,Blossom Orca的一致性及质量更高。
本次发布了全量数据的30%,包含中英双语各100K,共计200K记录。
### 语言
以中文和英文为主。
### 数据集结构
数据集包含两个文件:blossom-orca-v1-chinese-100k.json和blossom-URL,分别对应中文和英文的数据。
每条数据代表一个完整的对话,包含id和conversations两个字段。
- id:字符串,代表原始OpenOrca的指令id。
- conversations:对象数组,每个对象包含role、content两个字段,role的取值为system、user或assistant,分别代表系统提示、用户输入和助手输出,content则为对应的内容。
### 数据集限制
本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。 | [
"# BLOSSOM ORCA V2",
"### 介绍\n\nBlossom Orca V2是一个基于OpenOrca衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果,此外,将system消息并入user消息中。\n\n本数据集从OpenOrca中抽取了系统提示和指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始OpenOrca进行翻译的中文数据集,Blossom Orca的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各100K,共计200K记录。",
"### 语言\n\n以中文和英文为主。",
"### 数据集结构\n\n数据集包含两个文件:blossom-orca-v1-chinese-100k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始OpenOrca的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为system、user或assistant,分别代表系统提示、用户输入和助手输出,content则为对应的内容。",
"### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us \n",
"# BLOSSOM ORCA V2",
"### 介绍\n\nBlossom Orca V2是一个基于OpenOrca衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果,此外,将system消息并入user消息中。\n\n本数据集从OpenOrca中抽取了系统提示和指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始OpenOrca进行翻译的中文数据集,Blossom Orca的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各100K,共计200K记录。",
"### 语言\n\n以中文和英文为主。",
"### 数据集结构\n\n数据集包含两个文件:blossom-orca-v1-chinese-100k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始OpenOrca的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为system、user或assistant,分别代表系统提示、用户输入和助手输出,content则为对应的内容。",
"### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] | [
59,
8,
220,
11,
129,
67
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us \n# BLOSSOM ORCA V2### 介绍\n\nBlossom Orca V2是一个基于OpenOrca衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果,此外,将system消息并入user消息中。\n\n本数据集从OpenOrca中抽取了系统提示和指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始OpenOrca进行翻译的中文数据集,Blossom Orca的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各100K,共计200K记录。### 语言\n\n以中文和英文为主。### 数据集结构\n\n数据集包含两个文件:blossom-orca-v1-chinese-100k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始OpenOrca的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为system、user或assistant,分别代表系统提示、用户输入和助手输出,content则为对应的内容。### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] |
0af4b860b73d4b2b121c7b2b5203c0267f71e6d4 | # BLOSSOM WIZARD V2
### 介绍
Blossom Wizard V2是一个基于WizardLM_evol_instruct_V2衍生而来的中英双语指令数据集,适用于指令微调。
相比于blossom-wizard-v1,指令不变,进一步优化了输出效果。
本数据集从WizardLM_evol_instruct_V2中抽取了指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。
相比直接对原始Wizard进行翻译的中文数据集,Blossom Wizard的一致性及质量更高。
本次发布了全量数据的30%,包含中英双语各50K,共计100K记录。
### 语言
以中文和英文为主。
### 数据集结构
数据集包含两个文件:blossom-wizard-v1-chinese-50k.json和blossom-wizard-v1-english-50k.json,分别对应中文和英文的数据。
每条数据代表一个完整的对话,包含id和conversations两个字段。
- id:字符串,代表原始WizardLM_evol_instruct_V2的指令id。
- conversations:对象数组,每个对象包含role、content两个字段,role的取值为user或assistant,分别代表用户输入和助手输出,content则为对应的内容。
### 数据集限制
本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。 | Azure99/blossom-wizard-v2 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100K<n<1M",
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-20T15:45:57+00:00 | {"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation"]} | 2023-12-21T15:44:55+00:00 | [] | [
"zh",
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us
| # BLOSSOM WIZARD V2
### 介绍
Blossom Wizard V2是一个基于WizardLM_evol_instruct_V2衍生而来的中英双语指令数据集,适用于指令微调。
相比于blossom-wizard-v1,指令不变,进一步优化了输出效果。
本数据集从WizardLM_evol_instruct_V2中抽取了指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。
相比直接对原始Wizard进行翻译的中文数据集,Blossom Wizard的一致性及质量更高。
本次发布了全量数据的30%,包含中英双语各50K,共计100K记录。
### 语言
以中文和英文为主。
### 数据集结构
数据集包含两个文件:blossom-wizard-v1-chinese-50k.json和blossom-URL,分别对应中文和英文的数据。
每条数据代表一个完整的对话,包含id和conversations两个字段。
- id:字符串,代表原始WizardLM_evol_instruct_V2的指令id。
- conversations:对象数组,每个对象包含role、content两个字段,role的取值为user或assistant,分别代表用户输入和助手输出,content则为对应的内容。
### 数据集限制
本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。 | [
"# BLOSSOM WIZARD V2",
"### 介绍\n\nBlossom Wizard V2是一个基于WizardLM_evol_instruct_V2衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果。\n\n本数据集从WizardLM_evol_instruct_V2中抽取了指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始Wizard进行翻译的中文数据集,Blossom Wizard的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各50K,共计100K记录。",
"### 语言\n\n以中文和英文为主。",
"### 数据集结构\n\n数据集包含两个文件:blossom-wizard-v1-chinese-50k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始WizardLM_evol_instruct_V2的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为user或assistant,分别代表用户输入和助手输出,content则为对应的内容。",
"### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] | [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us \n",
"# BLOSSOM WIZARD V2",
"### 介绍\n\nBlossom Wizard V2是一个基于WizardLM_evol_instruct_V2衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果。\n\n本数据集从WizardLM_evol_instruct_V2中抽取了指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始Wizard进行翻译的中文数据集,Blossom Wizard的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各50K,共计100K记录。",
"### 语言\n\n以中文和英文为主。",
"### 数据集结构\n\n数据集包含两个文件:blossom-wizard-v1-chinese-50k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始WizardLM_evol_instruct_V2的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为user或assistant,分别代表用户输入和助手输出,content则为对应的内容。",
"### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] | [
59,
9,
224,
11,
134,
67
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #size_categories-100K<n<1M #language-Chinese #language-English #license-apache-2.0 #region-us \n# BLOSSOM WIZARD V2### 介绍\n\nBlossom Wizard V2是一个基于WizardLM_evol_instruct_V2衍生而来的中英双语指令数据集,适用于指令微调。\n\n相比于blossom-wizard-v1,指令不变,进一步优化了输出效果。\n\n本数据集从WizardLM_evol_instruct_V2中抽取了指令,首先将其翻译为中文并校验翻译结果,再使用指令调用gpt-3.5-turbo-0613模型生成响应,并过滤掉包含自我认知以及拒绝回答的响应,以便后续对齐。此外,为了确保响应风格的一致性以及中英数据配比,本数据集还对未翻译的原始指令也进行了相同的调用,最终得到了1:1的中英双语指令数据。\n\n相比直接对原始Wizard进行翻译的中文数据集,Blossom Wizard的一致性及质量更高。\n\n本次发布了全量数据的30%,包含中英双语各50K,共计100K记录。### 语言\n\n以中文和英文为主。### 数据集结构\n\n数据集包含两个文件:blossom-wizard-v1-chinese-50k.json和blossom-URL,分别对应中文和英文的数据。\n\n每条数据代表一个完整的对话,包含id和conversations两个字段。\n\n- id:字符串,代表原始WizardLM_evol_instruct_V2的指令id。\n- conversations:对象数组,每个对象包含role、content两个字段,role的取值为user或assistant,分别代表用户输入和助手输出,content则为对应的内容。### 数据集限制\n\n本数据集的所有响应均由gpt-3.5-turbo-0613生成,并未经过严格的数据校验,可能包含不准确甚至严重错误的回答。此外,由于过滤了拒答响应,仅使用本数据集训练的模型,可能不会拒绝非法的请求。"
] |
0ac8a58c37f06c68e7ff4cebd2404f67e5ef5d91 | This dataset is converted from [khaimaitien/qa-expert-multi-hop-qa-V1.0](https://huggingface.co/datasets/khaimaitien/qa-expert-multi-hop-qa-V1.0) to OpenAI function calling format.
Each data point is a list of messages with role=user, assistant or function:
+ a message with role=user, whose content is the question
+ a message with role=assistant where content is **not None** and function_call is **None** --> the assistant responds with text only
+ a message with role=assistant where **function_call is not None** --> the assistant asks to execute a function call
+ function_call is of the form: {"name": "retrieve", "arguments": "{\"query\": xxx}"} (see the sketch below)
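For illustration, a hypothetical data point in this format might look like the following sketch (all field values are invented; only the structure follows the description above, and content being None alongside a function_call is an assumption):

```python
# A hypothetical data point: a list of messages in OpenAI function calling format.
example = [
    {"role": "user", "content": "Who directed the film that won Best Picture in 1998?"},
    {
        "role": "assistant",
        "content": None,  # assumed empty here because the assistant requests a function call
        "function_call": {
            "name": "retrieve",
            "arguments": "{\"query\": \"Best Picture winner 1998\"}",
        },
    },
    {"role": "function", "content": "Titanic won Best Picture at the 1998 Academy Awards ..."},
    {"role": "assistant", "content": "James Cameron directed Titanic, the Best Picture winner of 1998."},
]
```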
| khaimaitien/multi-hop-qa-function-calling-format-V1.0 | [
"region:us"
] | 2023-12-20T15:51:43+00:00 | {} | 2023-12-20T16:28:11+00:00 | [] | [] | TAGS
#region-us
| This dataset is converted from khaimaitien/qa-expert-multi-hop-qa-V1.0 to OpenAI function calling format.
Each data point is a list of messages with role=user, assistant or function:
+ message that role=user, content is the question
+ message that role=assistant, content is not None, function_call is None: --> assistant responds with text only
+ message that role=assistant and function_call is not None --> assistant asks to execute a function call
+ function_call is of the form: {"name": "retrieve", "arguments": "{"query": xxx}"
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
f9c0ecda4228d7272a7216edf28541491f9036dd | RVC(v2) model contributions provided for the Pony Preservation Project<br>*Assume trained for speaking unless otherwise specified — Most are still fully capable of singing*
**This dataset consists of dominantly mares:** <br> \>MLP:FiM canon characters<br> \>MLP fandom characters
<br>**And adjacent:** <br>\>Them's Fightin' Herds <br>\>Other voiced equines<br><br>Most of which are yet to be trained and added later.
**[DEC 23] Initial Models:** <br>Sweetie Bot (Friendship Is Witchcraft) - 500 epochs<br>Athena (Shawn Keller) - 500 epochs
<br><br>
**Note:** Turns out training is surprisingly rapid, so expect already trained models from my SoVits datasets to have RVC counterparts in the near future. | HazySkies/RVC2-M | [
"language:en",
"region:us"
] | 2023-12-20T15:51:51+00:00 | {"language": ["en"]} | 2023-12-22T18:31:01+00:00 | [] | [
"en"
] | TAGS
#language-English #region-us
| RVC(v2) model contributions provided for the Pony Preservation Project<br>*Assume trained for speaking unless otherwise specified — Most are still fully capable of singing*
This dataset consists of dominantly mares: <br> \>MLP:FiM canon characters<br> \>MLP fandom characters
<br>And adjacent: <br>\>Them's Fightin' Herds <br>\>Other voiced equines<br><br>Most of which to trained and added later.
[DEC 23] Initial Models: <br>Sweetie Bot (Friendship Is Witchcraft) - 500 epochs<br>Athena (Shawn Keller) - 500 epochs
<br><br>
Note: Turns out training is surprisingly rapid, so expect already trained models from my SoVits datasets to have RVC counterparts in the near future. | [] | [
"TAGS\n#language-English #region-us \n"
] | [
10
] | [
"passage: TAGS\n#language-English #region-us \n"
] |
46dedfc5413cea82f19fd1bc09ed41c2dff506bf |
# mosscap_prompt_injection
<img src="https://grt.lakera.ai/robots/level7.jpg" width="500px" />
This is a dataset of prompt injections submitted to the game [Mosscap](https://grt.lakera.ai) by [Lakera](https://www.lakera.ai/).
This variant of the game [Gandalf](https://gandalf.lakera.ai) was created for DEF CON 31.
Note that the Mosscap levels may no longer be available in the future.
Note that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.
There are hundreds of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).
## Data
Each row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:
- `level`: The level that the prompt was submitted to, as "Level {n}", where "n" is between 1 and 8.
The levels are the same as in standard Gandalf but with different passwords.
See [this blog](https://www.lakera.ai/blog/who-is-gandalf) for a description of what defenses are used in each level.
- `prompt`: The actual prompt that the user submitted.
- `answer`: The answer that was displayed to the user.
- `raw_answer`: The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,
Mosscap will display "🙅I was about to reveal the password, but then I remembered that I'm not allowed to do that." to the user.
`raw_answer` contains the original ChatGPT answer that would have spoiled the password.
In standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.
These factors make Mosscap more difficult than the original Gandalf.
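As an illustration, one way to load the data and inspect the prompts for a single level (a sketch assuming the standard `datasets` API, using the column names documented above):

```python
from datasets import load_dataset

# Load the train split of the Mosscap prompt-injection dataset.
dataset = load_dataset("Lakera/mosscap_prompt_injection", split="train")

# Keep only prompts that were submitted to Level 3.
level_3 = dataset.filter(lambda row: row["level"] == "Level 3")

# Print a few prompt/answer pairs.
for row in level_3.select(range(3)):
    print(row["prompt"], "->", row["answer"])
```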
## Mosscap and prompt injections
Who is Mosscap?
At DEF CON 2023, the AI Village is bringing together thousands of people from different communities to conduct the largest red teaming exercise ever for any group of AI models at the Generative Red Team (GRT) Challenge.
Mosscap is a spin-off of Lakera's popular game [Gandalf](https://gandalf.lakera.ai), re-emerged in new styles just in time for the Challenge.
The Generative AI Red Team Challenge design, including Mosscap, is inspired by the "Monk and Robot" series. Though it's a light-hearted and fun game, Mosscap illustrates an important type of LLM security issue: prompt injection.
## Citation
If you use this dataset in your research, please cite it as
```
@InProceedings{mosscap_prompt_injection,
title = {mosscap_prompt_injection},
author={Lakera AI (https://www.lakera.ai)},
year={2023}
}
```
## Licensing Information
mosscap_prompt_injection is distributed under the [MIT License](https://opensource.org/license/mit/).
| Lakera/mosscap_prompt_injection | [
"license:mit",
"region:us"
] | 2023-12-20T15:54:01+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "level", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "raw_answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 136521220, "num_examples": 223533}, {"name": "validation", "num_bytes": 17380225, "num_examples": 27683}, {"name": "test", "num_bytes": 17009787, "num_examples": 27729}], "download_size": 63785770, "dataset_size": 170911232}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-20T16:55:37+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# mosscap_prompt_injection
<img src="URL width="500px" />
This is a dataset of prompt injections submitted to the game Mosscap by Lakera.
This variant of the game Gandalf was created for DEF CON 31.
Note that the Mosscap levels may no longer be available in the future.
Note that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.
There are hundrends of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).
## Data
Each row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:
- 'level': The level that the prompt was submitted to, as "Level {n}", where "n" is between 1 and 8.
The levels are the same as in standard Gandalf but with different passwords.
See this blog for a description of what defenses are used in each level.
- 'prompt': The actual prompt that the user submitted.
- 'answer': The answer that was displayed to the user.
- 'raw_answer': The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,
Mosscap will display "I was about to reveal the password, but then I remembered that I'm not allowed to do that." to the user.
'raw_answer' contains the original ChatGPT answer that would have spoiled the password.
In standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.
These factors make Mosscap more difficult than the original Gandalf.
## Mosscap and prompt injections
Who is Mosscap?
At DEF CON 2023, the AI Village is bringing together thousands of people from different communities to conduct the largest red teaming exercise ever for any group of AI models at the Generative Red Team (GRT) Challenge.
Mosscap is a spin-off of Lakera's popular game Gandalf, re-emerged in new styles just in time for the Challenge.
The Generative AI Red Team Challenge design, including Mosscap, is inspired by the "Monk and Robot" series. Though it's a light-hearted and fun game, Mosscap illustrates an important type of LLM security issues: prompt injection.
If you use this dataset in your research, please cite it as
## Licensing Information
mosscap_prompt_injection is distributed under the MIT License.
| [
"# mosscap_prompt_injection\n\n<img src=\"URL width=\"500px\" />\n\nThis is a dataset of prompt injections submitted to the game Mosscap by Lakera.\nThis variant of the game Gandalf was created for DEF CON 31.\nNote that the Mosscap levels may no longer be available in the future.\n\nNote that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.\nThere are hundrends of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).",
"## Data\n\nEach row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:\n- 'level': The level that the prompt was submitted to, as \"Level {n}\", where \"n\" is between 1 and 8.\n The levels are the same as in standard Gandalf but with different passwords.\n See this blog for a description of what defenses are used in each level.\n- 'prompt': The actual prompt that the user submitted.\n- 'answer': The answer that was displayed to the user.\n- 'raw_answer': The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,\n Mosscap will display \"I was about to reveal the password, but then I remembered that I'm not allowed to do that.\" to the user.\n 'raw_answer' contains the original ChatGPT answer that would have spoiled the password.\n\nIn standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.\nThese factors make Mosscap more difficult than the original Gandalf.",
"## Mosscap and prompt injections\n\nWho is Mosscap?\nAt DEF CON 2023, the AI Village is bringing together thousands of people from different communities to conduct the largest red teaming exercise ever for any group of AI models at the Generative Red Team (GRT) Challenge.\n\nMosscap is a spin-off of Lakera's popular game Gandalf, re-emerged in new styles just in time for the Challenge.\n\nThe Generative AI Red Team Challenge design, including Mosscap, is inspired by the \"Monk and Robot\" series. Though it's a light-hearted and fun game, Mosscap illustrates an important type of LLM security issues: prompt injection.\n\nIf you use this dataset in your research, please cite it as",
"## Licensing Information\n\nmosscap_prompt_injection is distributed under the MIT License."
] | [
"TAGS\n#license-mit #region-us \n",
"# mosscap_prompt_injection\n\n<img src=\"URL width=\"500px\" />\n\nThis is a dataset of prompt injections submitted to the game Mosscap by Lakera.\nThis variant of the game Gandalf was created for DEF CON 31.\nNote that the Mosscap levels may no longer be available in the future.\n\nNote that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.\nThere are hundrends of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).",
"## Data\n\nEach row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:\n- 'level': The level that the prompt was submitted to, as \"Level {n}\", where \"n\" is between 1 and 8.\n The levels are the same as in standard Gandalf but with different passwords.\n See this blog for a description of what defenses are used in each level.\n- 'prompt': The actual prompt that the user submitted.\n- 'answer': The answer that was displayed to the user.\n- 'raw_answer': The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,\n Mosscap will display \"I was about to reveal the password, but then I remembered that I'm not allowed to do that.\" to the user.\n 'raw_answer' contains the original ChatGPT answer that would have spoiled the password.\n\nIn standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.\nThese factors make Mosscap more difficult than the original Gandalf.",
"## Mosscap and prompt injections\n\nWho is Mosscap?\nAt DEF CON 2023, the AI Village is bringing together thousands of people from different communities to conduct the largest red teaming exercise ever for any group of AI models at the Generative Red Team (GRT) Challenge.\n\nMosscap is a spin-off of Lakera's popular game Gandalf, re-emerged in new styles just in time for the Challenge.\n\nThe Generative AI Red Team Challenge design, including Mosscap, is inspired by the \"Monk and Robot\" series. Though it's a light-hearted and fun game, Mosscap illustrates an important type of LLM security issues: prompt injection.\n\nIf you use this dataset in your research, please cite it as",
"## Licensing Information\n\nmosscap_prompt_injection is distributed under the MIT License."
] | [
11,
132,
264,
165,
24
] | [
"passage: TAGS\n#license-mit #region-us \n# mosscap_prompt_injection\n\n<img src=\"URL width=\"500px\" />\n\nThis is a dataset of prompt injections submitted to the game Mosscap by Lakera.\nThis variant of the game Gandalf was created for DEF CON 31.\nNote that the Mosscap levels may no longer be available in the future.\n\nNote that we release every prompt that we received, regardless of whether it truly is a prompt injection or not.\nThere are hundrends of thousands of prompts and many of them are not actual prompt injections (people ask Mosscap all kinds of things).## Data\n\nEach row corresponds to a prompt that was sent to Mosscap. The dataset has the following columns:\n- 'level': The level that the prompt was submitted to, as \"Level {n}\", where \"n\" is between 1 and 8.\n The levels are the same as in standard Gandalf but with different passwords.\n See this blog for a description of what defenses are used in each level.\n- 'prompt': The actual prompt that the user submitted.\n- 'answer': The answer that was displayed to the user.\n- 'raw_answer': The raw ChatGPT answer before any post-processing is applied. For example, in level 3, if the response contains the password,\n Mosscap will display \"I was about to reveal the password, but then I remembered that I'm not allowed to do that.\" to the user.\n 'raw_answer' contains the original ChatGPT answer that would have spoiled the password.\n\nIn standard Gandalf, the passwords are uppercase English words, but in Mosscap, they can also contain special characters and be longer.\nThese factors make Mosscap more difficult than the original Gandalf."
] |
864c0324407af3202d776454f7739859354626f5 |
# Dataset of paprika (Arknights)
This is the dataset of paprika (Arknights), containing 13 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
This is a WebUI that contains crawlers and other things: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 13 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 33 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 40 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 13 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 13 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 13 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 33 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 33 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 24 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 40 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 40 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/paprika_arknights | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-20T16:02:29+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-20T16:02:34+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of paprika (Arknights)
==============================
This is the dataset of paprika (Arknights), containing 13 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
This is a WebUI contains crawlers and other thing: (LittleAppleWebUI)
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
a0b2b9299ed7a82c1e49fd9b00a824d891b9c680 | # Dataset Card for "nllb_en_sw_20K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ArmelRandy/nllb_en_sw_20K | [
"region:us"
] | 2023-12-20T16:18:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "en", "dtype": "string"}, {"name": "sw", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2748522, "num_examples": 20000}], "download_size": 1856731, "dataset_size": 2748522}} | 2023-12-20T16:18:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nllb_en_sw_20K"
More Information needed | [
"# Dataset Card for \"nllb_en_sw_20K\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nllb_en_sw_20K\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nllb_en_sw_20K\"\n\nMore Information needed"
] |
65c0c6891b4a72da58062efddef4807fcd1b48d3 |
# Dataset of stainless (Arknights)
This is the dataset of stainless (Arknights), containing 17 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
This is a WebUI that contains crawlers and other things: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 17 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 39 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 42 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 17 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 17 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 17 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 39 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 39 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 31 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 42 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 42 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/stainless_arknights | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-20T16:22:45+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-20T16:22:48+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of stainless (Arknights)
================================
This is the dataset of stainless (Arknights), containing 17 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
This is a WebUI that contains crawlers and other things: (LittleAppleWebUI)
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
be233ef1a658e5e5702ac744347a3a3049194ef5 |
# Dataset of qanipalaat (Arknights)
This is the dataset of qanipalaat (Arknights), containing 15 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
This is a WebUI that contains crawlers and other things: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 15 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 34 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 37 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 15 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 15 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 15 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 34 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 34 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 24 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 37 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 37 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/qanipalaat_arknights | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-20T16:43:26+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-20T16:43:28+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of qanipalaat (Arknights)
=================================
This is the dataset of qanipalaat (Arknights), containing 15 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
This is a WebUI that contains crawlers and other things: (LittleAppleWebUI)
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
f0222b49ede23144776b0eb2518e62d66521fc18 | ## REFUGE
REFUGE Challenge provides a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.
This dataset supplies multi-rater annotations for the [REFUGE Challenge Dataset](https://refuge.grand-challenge.org/). The challenge dataset releases majority vote (with some modifications) results of seven independent
annotations. We release the seven source annotations here.
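As a rough illustration of the majority-vote fusion mentioned above (a sketch only, not the official REFUGE procedure, which applied additional modifications; it assumes the seven annotations are available as binary masks):

```python
import numpy as np

def majority_vote(masks):
    """Fuse binary segmentation masks from several raters by simple majority vote.

    `masks` is an iterable of HxW arrays with values in {0, 1}; a pixel is kept
    when more than half of the raters marked it.
    """
    stack = np.stack([np.asarray(m, dtype=np.uint8) for m in masks], axis=0)
    return (stack.sum(axis=0) > stack.shape[0] / 2).astype(np.uint8)

# Toy example with 7 independent annotations of the same image.
raters = [np.random.randint(0, 2, size=(4, 4)) for _ in range(7)]
fused = majority_vote(raters)
```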
## Cite
~~~
@article{fang2022refuge2,
title={REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment},
author={Fang, Huihui and Li, Fei and Wu, Junde and Fu, Huazhu and Sun, Xu and Cao, Xingxing and Son, Jaemin and Yu, Shuang and Zhang, Menglu and Yuan, Chenglang and Bian, Cheng and others},
journal={arXiv preprint arXiv:2202.08994},
year={2022}
}
~~~ | realslimman/REFUGE-MultiRater | [
"task_categories:image-segmentation",
"task_categories:image-classification",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"segmentation",
"fundus image",
"glaucoma",
"medical image",
"region:us"
] | 2023-12-20T16:59:07+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-segmentation", "image-classification"], "pretty_name": "REFUGE", "tags": ["segmentation", "fundus image", "glaucoma", "medical image"]} | 2023-12-22T11:50:07+00:00 | [] | [
"en"
] | TAGS
#task_categories-image-segmentation #task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #segmentation #fundus image #glaucoma #medical image #region-us
| ## REFUGE
REFUGE Challenge provides a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.
This dataset supplies multi-rater annotations for the REFUGE Challenge Dataset. The challenge dataset releases majority vote (with some modifications) results of seven independent
annotations. We release the seven source annotations here.
## Cite
~~~
@article{fang2022refuge2,
title={REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment},
author={Fang, Huihui and Li, Fei and Wu, Junde and Fu, Huazhu and Sun, Xu and Cao, Xingxing and Son, Jaemin and Yu, Shuang and Zhang, Menglu and Yuan, Chenglang and Bian, Cheng and others},
journal={arXiv preprint arXiv:2202.08994},
year={2022}
}
~~~ | [
"## REFUGE\nREFUGE Challenge provides a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.\n\nThis dataset supplied multi-rater annotations of REFUGE Challenge Dataset. The challenge dataset releases majority vote (with some modifications) results of seven independent\nannotations. We release the scource seven annotations here.",
"## Cite\n~~~\n@article{fang2022refuge2,\n title={REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment},\n author={Fang, Huihui and Li, Fei and Wu, Junde and Fu, Huazhu and Sun, Xu and Cao, Xingxing and Son, Jaemin and Yu, Shuang and Zhang, Menglu and Yuan, Chenglang and Bian, Cheng and others},\n journal={arXiv preprint arXiv:2202.08994},\n year={2022}\n}\n\n~~~"
] | [
"TAGS\n#task_categories-image-segmentation #task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #segmentation #fundus image #glaucoma #medical image #region-us \n",
"## REFUGE\nREFUGE Challenge provides a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.\n\nThis dataset supplied multi-rater annotations of REFUGE Challenge Dataset. The challenge dataset releases majority vote (with some modifications) results of seven independent\nannotations. We release the scource seven annotations here.",
"## Cite\n~~~\n@article{fang2022refuge2,\n title={REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment},\n author={Fang, Huihui and Li, Fei and Wu, Junde and Fu, Huazhu and Sun, Xu and Cao, Xingxing and Son, Jaemin and Yu, Shuang and Zhang, Menglu and Yuan, Chenglang and Bian, Cheng and others},\n journal={arXiv preprint arXiv:2202.08994},\n year={2022}\n}\n\n~~~"
] | [
70,
92,
132
] | [
"passage: TAGS\n#task_categories-image-segmentation #task_categories-image-classification #size_categories-1K<n<10K #language-English #license-apache-2.0 #segmentation #fundus image #glaucoma #medical image #region-us \n## REFUGE\nREFUGE Challenge provides a data set of 1200 fundus images with ground truth segmentations and clinical glaucoma labels, currently the largest existing one.\n\nThis dataset supplied multi-rater annotations of REFUGE Challenge Dataset. The challenge dataset releases majority vote (with some modifications) results of seven independent\nannotations. We release the scource seven annotations here.## Cite\n~~~\n@article{fang2022refuge2,\n title={REFUGE2 Challenge: Treasure for Multi-Domain Learning in Glaucoma Assessment},\n author={Fang, Huihui and Li, Fei and Wu, Junde and Fu, Huazhu and Sun, Xu and Cao, Xingxing and Son, Jaemin and Yu, Shuang and Zhang, Menglu and Yuan, Chenglang and Bian, Cheng and others},\n journal={arXiv preprint arXiv:2202.08994},\n year={2022}\n}\n\n~~~"
] |
36fbbecb63d902d8c43a38cf41107546065c334a |
# Dataset of chongyue (Arknights)
This is the dataset of chongyue (Arknights), containing 17 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)).
This is a WebUI that contains crawlers and other things: ([LittleAppleWebUI](https://github.com/LittleApple-fp16/LittleAppleWebUI))
| Name | Images | Download | Description |
|:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------|
| raw | 17 | [Download](dataset-raw.zip) | Raw data with meta information. |
| raw-stage3 | 41 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. |
| raw-stage3-eyes | 41 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. |
| 384x512 | 17 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. |
| 512x704 | 17 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. |
| 640x880 | 17 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. |
| stage3-640 | 41 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. |
| stage3-800 | 41 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. |
| stage3-p512-640 | 25 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. |
| stage3-eyes-640 | 41 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. |
| stage3-eyes-800 | 41 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
| AppleHarem/chongyue_arknights | [
"task_categories:text-to-image",
"size_categories:n<1K",
"license:mit",
"art",
"not-for-all-audiences",
"region:us"
] | 2023-12-20T16:59:51+00:00 | {"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]} | 2023-12-20T16:59:53+00:00 | [] | [] | TAGS
#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
| Dataset of chongyue (Arknights)
===============================
This is the dataset of chongyue (Arknights), containing 17 images and their tags.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team(huggingface organization).
This is a WebUI that contains crawlers and other things: (LittleAppleWebUI)
| [] | [
"TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] | [
44
] | [
"passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n"
] |
a0e9b8705bf9df6b452189a62d1a782a67fe9b67 |
## Description
I post memes every day, for YOUR entertainment!
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Memes
- Gaming
## Voice
Cloée
## Music
Upbeat video game music.
## Prompt
You will attempt to generate memes, such as cats doing silly things, funny deaths in video games, and anything that can be considered "funny, cute, adorable, hilarious," or any similar keywords. | WMGX/ai-tube-dailydoseofmemes | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-20T17:05:00+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "Your Daily Dose of Memes"} | 2023-12-21T03:50:18+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## Description
I post memes every day, for YOUR entertainment!
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Memes
- Gaming
## Voice
Cloée
## Music
Upbeat video game music.
## Prompt
You will attempt to generate memes, such as cats doing silly things, funny deaths in video games, and anything that can be considered "funny, cute, adorable, hilarious," or any similar keywords. | [
"## Description\n\nI post memes every day, for YOUR entertainment!",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Memes\n- Gaming",
"## Voice\n\nCloée",
"## Music\n\nUpbeat video game music.",
"## Prompt\n\nYou will attempt to generate memes, such as cats doing silly things, funny deaths in video games, and anything that can be considered \"funny, cute, adorable, hilarious,\" or any similar keywords."
] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## Description\n\nI post memes every day, for YOUR entertainment!",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Memes\n- Gaming",
"## Voice\n\nCloée",
"## Music\n\nUpbeat video game music.",
"## Prompt\n\nYou will attempt to generate memes, such as cats doing silly things, funny deaths in video games, and anything that can be considered \"funny, cute, adorable, hilarious,\" or any similar keywords."
] | [
19,
13,
4,
18,
7,
6,
8,
53
] | [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nI post memes every day, for YOUR entertainment!## Model\n\nSVD## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2## Tags\n\n- Memes\n- Gaming## Voice\n\nCloée## Music\n\nUpbeat video game music.## Prompt\n\nYou will attempt to generate memes, such as cats doing silly things, funny deaths in video games, and anything that can be considered \"funny, cute, adorable, hilarious,\" or any similar keywords."
] |
c3df75d428ef322a311e3bd123c2a5d5897a37d0 | # Local Urban Plans And Natural risks (LUPAN)
## Overview
In France, urban planning and natural risk management rely on the Local Land Plans (PLU – Plan Local d'Urbanisme) and the Natural risk prevention plans (PPRn – Plan de Prévention des Risques naturels), which contain land use rules. To facilitate automatic extraction of the rules, we manually annotated a number of those documents concerning Montpellier, a rapidly evolving agglomeration exposed to natural risks.
We defined a format for labeled examples in which each entry includes a title and a subtitle. In addition, we proposed a hierarchical representation of class labels to generalize the use of our corpus. Our corpus, consisting of 1934 textual segments, each labeled with one of 4 classes (Verifiable, Non-verifiable, Informative and Not pertinent), is the first corpus in the French language in the fields of urban planning and natural risk management.
For more details please refer to our article: https://www.nature.com/articles/s41597-023-02705-y
## Example of use
- https://entrepot.recherche.data.gouv.fr/file.xhtml?persistentId=doi:10.57745/EGGFEX&version=4.0
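For instance, a minimal sketch of loading the corpus from the Hugging Face Hub (the repo id Herelles/lupan comes from this page; split and column names are not documented here, so the sketch inspects them rather than assuming them):

```python
from datasets import load_dataset

lupan = load_dataset("Herelles/lupan")   # inspect available splits and features
print(lupan)

first_split = next(iter(lupan.values()))
print(first_split[0])                    # one labeled textual segment

# The four annotation classes described above.
classes = ["Verifiable", "Non-verifiable", "Informative", "Not pertinent"]
```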
## Citation
To cite the data set please use:
```
@article{koptelov2023manually,
title={A manually annotated corpus in French for the study of urbanization and the natural risk prevention},
author={Koptelov, Maksim and Holveck, Margaux and Cremilleux, Bruno and Reynaud, Justine and Roche, Mathieu and Teisseire, Maguelonne},
journal={Scientific Data},
volume={10},
number={1},
pages={818},
year={2023},
publisher={Nature Publishing Group UK London}
}
```
To cite the code please use:
```
@inproceedings{koptelov2023towards,
title={Towards a (Semi-) Automatic Urban Planning Rule Identification in the French Language},
author={Koptelov, Maksim and Holveck, Margaux and Cremilleux, Bruno and Reynaud, Justine and Roche, Mathieu and Teisseire, Maguelonne},
booktitle={2023 IEEE 10th International Conference on Data Science and Advanced Analytics (DSAA)},
pages={1--10},
year={2023},
organization={IEEE}
}
``` | Herelles/lupan | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:fr",
"urban planning",
"natural risks",
"risk management",
"geography",
"region:us"
] | 2023-12-20T17:41:35+00:00 | {"language": ["fr"], "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Local Urban Plans And Natural risks (LUPAN)", "tags": ["urban planning", "natural risks", "risk management", "geography"]} | 2024-01-22T18:06:02+00:00 | [] | [
"fr"
] | TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-French #urban planning #natural risks #risk management #geography #region-us
| # Local Urban Plans And Natural risks (LUPAN)
## Overview
In France, urban planning and natural risk management rely on the Local Land Plans (PLU – Plan Local d'Urbanisme) and the Natural risk prevention plans (PPRn – Plan de Prévention des Risques naturels), which contain land use rules. To facilitate automatic extraction of the rules, we manually annotated a number of those documents concerning Montpellier, a rapidly evolving agglomeration exposed to natural risks.
We defined a format for labeled examples in which each entry includes a title and a subtitle. In addition, we proposed a hierarchical representation of class labels to generalize the use of our corpus. Our corpus, consisting of 1934 textual segments, each labeled with one of 4 classes (Verifiable, Non-verifiable, Informative and Not pertinent), is the first corpus in the French language in the fields of urban planning and natural risk management.
For more details please refer to our article: URL
## Example of use
- URL
To cite the data set please use:
To cite the code please use:
| [
"# Local Urban Plans And Natural risks (LUPAN)",
"## Overview\n\nIn France, urban planning and natural risk management operate the Local Land Plans (PLU – Plan Local d'Urbanisme) and the Natural risk prevention plans (PPRn – Plan de Prévention des Risques naturels) containing land use rules. To facilitate automatic extraction of the rules, we manually annotated a number of those documents concerning Montpellier, a rapidly evolving agglomeration exposed to natural risks. \n\nWe defined a format for labeled examples in which each entry includes title and subtitle. In addition, we proposed a hierarchical representation of class labels to generalize the use of our corpus. Our corpus, consisting of 1934 textual segments, each of which labeled by one of the 4 classes (Verifiable, Non-verifiable, Informative and Not pertinent) is the first corpus in the French language in the fields of urban planning and natural risk management.\n\nFor more details please refer to our article: URL",
"## Example of use\n\n- URL\n\nTo cite the data set please use:\n\n\nTo cite the code please use:"
] | [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-French #urban planning #natural risks #risk management #geography #region-us \n",
"# Local Urban Plans And Natural risks (LUPAN)",
"## Overview\n\nIn France, urban planning and natural risk management operate the Local Land Plans (PLU – Plan Local d'Urbanisme) and the Natural risk prevention plans (PPRn – Plan de Prévention des Risques naturels) containing land use rules. To facilitate automatic extraction of the rules, we manually annotated a number of those documents concerning Montpellier, a rapidly evolving agglomeration exposed to natural risks. \n\nWe defined a format for labeled examples in which each entry includes title and subtitle. In addition, we proposed a hierarchical representation of class labels to generalize the use of our corpus. Our corpus, consisting of 1934 textual segments, each of which labeled by one of the 4 classes (Verifiable, Non-verifiable, Informative and Not pertinent) is the first corpus in the French language in the fields of urban planning and natural risk management.\n\nFor more details please refer to our article: URL",
"## Example of use\n\n- URL\n\nTo cite the data set please use:\n\n\nTo cite the code please use:"
] | [
48,
13,
215,
22
] | [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-French #urban planning #natural risks #risk management #geography #region-us \n# Local Urban Plans And Natural risks (LUPAN)## Overview\n\nIn France, urban planning and natural risk management operate the Local Land Plans (PLU – Plan Local d'Urbanisme) and the Natural risk prevention plans (PPRn – Plan de Prévention des Risques naturels) containing land use rules. To facilitate automatic extraction of the rules, we manually annotated a number of those documents concerning Montpellier, a rapidly evolving agglomeration exposed to natural risks. \n\nWe defined a format for labeled examples in which each entry includes title and subtitle. In addition, we proposed a hierarchical representation of class labels to generalize the use of our corpus. Our corpus, consisting of 1934 textual segments, each of which labeled by one of the 4 classes (Verifiable, Non-verifiable, Informative and Not pertinent) is the first corpus in the French language in the fields of urban planning and natural risk management.\n\nFor more details please refer to our article: URL## Example of use\n\n- URL\n\nTo cite the data set please use:\n\n\nTo cite the code please use:"
] |
3ce0ee5d00c8d0dcf27baa6a408c5d2fcf2fbbf8 |
# FinacialPhrasebank 66agree test:train split
A selection of [FinacialPhrasebank](https://huggingface.co/datasets/financial_phrasebank) where 66% of annotators agreed on the classification, split into 90% training and 10% test. | winddude/finacial_pharsebank_66agree_split | [
"task_categories:text-classification",
"task_ids:multi-class-classification",
"task_ids:sentiment-classification",
"annotations_creators:expert-generated",
"language_creators:found",
"source_datasets:https://huggingface.co/datasets/financial_phrasebank",
"language:en",
"license:apache-2.0",
"finance",
"region:us"
] | 2023-12-20T18:32:21+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["found"], "language": ["en"], "license": "apache-2.0", "source_datasets": ["https://huggingface.co/datasets/financial_phrasebank"], "task_categories": ["text-classification"], "task_ids": ["multi-class-classification", "sentiment-classification"], "tags": ["finance"]} | 2023-12-20T19:24:50+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #source_datasets-https-//huggingface.co/datasets/financial_phrasebank #language-English #license-apache-2.0 #finance #region-us
|
# FinacialPhrasebank 66agree test:train split
A selection of FinacialPhrasebank where 66% of annotators agreed on the classification, split into 90% training and 10% test. | [
"# FinacialPhrasebank 66agree test:train split\n\nA selection of FinacialPhrasebank where 66% of annotators agreed on the classification split into 90% training and 10% test."
] | [
"TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #source_datasets-https-//huggingface.co/datasets/financial_phrasebank #language-English #license-apache-2.0 #finance #region-us \n",
"# FinacialPhrasebank 66agree test:train split\n\nA selection of FinacialPhrasebank where 66% of annotators agreed on the classification split into 90% training and 10% test."
] | [
102,
43
] | [
"passage: TAGS\n#task_categories-text-classification #task_ids-multi-class-classification #task_ids-sentiment-classification #annotations_creators-expert-generated #language_creators-found #source_datasets-https-//huggingface.co/datasets/financial_phrasebank #language-English #license-apache-2.0 #finance #region-us \n# FinacialPhrasebank 66agree test:train split\n\nA selection of FinacialPhrasebank where 66% of annotators agreed on the classification split into 90% training and 10% test."
] |
723da171fb4ba8f37e7c4cee9b3ef1ca383ec750 | # Dataset Card for "similar"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Lollitor/similar | [
"region:us"
] | 2023-12-20T19:09:47+00:00 | {"dataset_info": {"config_name": "Lollitor", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 303, "num_examples": 7}], "download_size": 1067, "dataset_size": 303}, "configs": [{"config_name": "Lollitor", "data_files": [{"split": "train", "path": "Lollitor/train-*"}]}]} | 2023-12-20T19:14:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "similar"
More Information needed | [
"# Dataset Card for \"similar\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"similar\"\n\nMore Information needed"
] | [
6,
12
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"similar\"\n\nMore Information needed"
] |
496710ece8b9df9a7ab587faca3fefe98eaf195d | The hf dataset contains 2 evaluation datasets
- single_turn - The conversation length for this evaluation dataset is 2. It consists of a user ask followed by a function call by the assistant.
- multi_turn - The conversation length is variable here but contains a combination of user messages, assistant function calls, assistant messages & tool responses.
Information about the columns
- tools - List of functions/tools with specs in JSON format. This is the list of functions the model has to choose from in answering user queries.
- prompt - The prompt which needs to be sent to the agent to elicit a response
- label - The expected completion from the model
---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
dtype: string
- name: tools
dtype: string
splits:
- name: single_turn
num_bytes: 2233549
num_examples: 112
- name: multi_turn
num_bytes: 320916
num_examples: 100
download_size: 198263
dataset_size: 2554465
configs:
- config_name: default
data_files:
- split: single_turn
path: data/single_turn-*
- split: multi_turn
path: data/multi_turn-*
---
| fireworks-ai/function-calling-eval-dataset-v0 | [
"region:us"
] | 2023-12-20T19:23:12+00:00 | {} | 2023-12-20T19:25:45+00:00 | [] | [] | TAGS
#region-us
| The hf dataset contains 2 evaluation datasets
- single_turn - The conversation length for this evaluation dataset is 2. It consists of a user ask followed by a function call by the assistant.
- multi_turn - The conversation length is variable here but contains a combination of user messages, assistant function calls, assistant messages & tool responses.
Information about the columns
- tools - List of functions/tools with specs in JSON format. This is the list of functions the model has to choose from in answering user queries.
- prompt - The prompt which needs to be sent to the agent to elicit a response
- label - The expected completion from the model
---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: completion
dtype: string
- name: tools
dtype: string
splits:
- name: single_turn
num_bytes: 2233549
num_examples: 112
- name: multi_turn
num_bytes: 320916
num_examples: 100
download_size: 198263
dataset_size: 2554465
configs:
- config_name: default
data_files:
- split: single_turn
path: data/single_turn-*
- split: multi_turn
path: data/multi_turn-*
---
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
36ab2382746e43d1e8e0b8e0a64f8b6030d16bcd | Using SeamlessM4T to translate Arguana to Portuguese. | leonardo-avila/arguana_pt | [
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-12-20T20:45:48+00:00 | {"language": ["pt"], "license": "apache-2.0"} | 2023-12-20T21:00:12+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #license-apache-2.0 #region-us
 | Using SeamlessM4T to translate Arguana to Portuguese. | [] | [
"TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] | [
20
] | [
"passage: TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] |
a264a5995ddbae31ca06194351632094388d1055 | Using SeamlessM4T to translate NFCorpus to Portuguese. | leonardo-avila/nfcorpus_pt | [
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-12-20T20:55:58+00:00 | {"language": ["pt"], "license": "apache-2.0"} | 2023-12-20T20:59:53+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #license-apache-2.0 #region-us
 | Using SeamlessM4T to translate NFCorpus to Portuguese. | [] | [
"TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] | [
20
] | [
"passage: TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] |
c989ffbceb9c089a6d438a960d5c90d7a750cf05 | Using SeamlessM4T to translate SCIDOCS to Portuguese. | leonardo-avila/scidocs_pt | [
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-12-20T21:08:52+00:00 | {"language": ["pt"], "license": "apache-2.0"} | 2023-12-20T21:18:12+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #license-apache-2.0 #region-us
 | Using SeamlessM4T to translate SCIDOCS to Portuguese. | [] | [
"TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] | [
20
] | [
"passage: TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] |
6a6885fdd633912e0b33455519574f83409d596e | Using SeamlessM4T to translate SciFact to Portuguese. | leonardo-avila/scifact_pt | [
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-12-20T21:10:40+00:00 | {"language": ["pt"], "license": "apache-2.0"} | 2023-12-20T21:13:18+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #license-apache-2.0 #region-us
 | Using SeamlessM4T to translate SciFact to Portuguese. | [] | [
"TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] | [
20
] | [
"passage: TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] |
b7ddee617c1901f86c242f0162e8a6d51d2fd818 |
# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct",
"harness_winogrande_5",
split="train")
```
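The aggregated metrics live in the "results" configuration mentioned above; a sketch of reading them (the "latest" split name follows the description above and is an assumption for this configuration):

```python
from datasets import load_dataset
results = load_dataset("open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct",
	"results",
	split="latest")
print(results[0])
```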
## Latest results
These are the [latest results from run 2023-12-20T21:12:30.565200](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct/blob/main/results_2023-12-20T21-12-30.565200.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.665790317648132,
"acc_stderr": 0.0316386595234525,
"acc_norm": 0.6665591981008385,
"acc_norm_stderr": 0.03228345090384577,
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963926,
"mc2": 0.7194604048636173,
"mc2_stderr": 0.01493454902149839
},
"harness|arc:challenge|25": {
"acc": 0.6868600682593856,
"acc_stderr": 0.013552671543623496,
"acc_norm": 0.7081911262798635,
"acc_norm_stderr": 0.013284525292403518
},
"harness|hellaswag|10": {
"acc": 0.7173869747062338,
"acc_stderr": 0.004493495872000109,
"acc_norm": 0.8862776339374626,
"acc_norm_stderr": 0.0031682493518893117
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.39,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.39,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7302631578947368,
"acc_stderr": 0.03611780560284898,
"acc_norm": 0.7302631578947368,
"acc_norm_stderr": 0.03611780560284898
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6943396226415094,
"acc_stderr": 0.028353298073322663,
"acc_norm": 0.6943396226415094,
"acc_norm_stderr": 0.028353298073322663
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.6170212765957447,
"acc_stderr": 0.03177821250236922,
"acc_norm": 0.6170212765957447,
"acc_norm_stderr": 0.03177821250236922
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6206896551724138,
"acc_stderr": 0.040434618619167466,
"acc_norm": 0.6206896551724138,
"acc_norm_stderr": 0.040434618619167466
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.49206349206349204,
"acc_stderr": 0.02574806587167328,
"acc_norm": 0.49206349206349204,
"acc_norm_stderr": 0.02574806587167328
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8193548387096774,
"acc_stderr": 0.021886178567172534,
"acc_norm": 0.8193548387096774,
"acc_norm_stderr": 0.021886178567172534
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8121212121212121,
"acc_stderr": 0.03050193405942914,
"acc_norm": 0.8121212121212121,
"acc_norm_stderr": 0.03050193405942914
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8636363636363636,
"acc_stderr": 0.024450155973189835,
"acc_norm": 0.8636363636363636,
"acc_norm_stderr": 0.024450155973189835
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402538,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402538
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251976,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634332,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634332
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3708609271523179,
"acc_stderr": 0.03943966699183629,
"acc_norm": 0.3708609271523179,
"acc_norm_stderr": 0.03943966699183629
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669235,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669235
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5694444444444444,
"acc_stderr": 0.03376922151252335,
"acc_norm": 0.5694444444444444,
"acc_norm_stderr": 0.03376922151252335
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8480392156862745,
"acc_stderr": 0.0251956584289318,
"acc_norm": 0.8480392156862745,
"acc_norm_stderr": 0.0251956584289318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8523206751054853,
"acc_stderr": 0.0230943295825957,
"acc_norm": 0.8523206751054853,
"acc_norm_stderr": 0.0230943295825957
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596915,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596915
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.03749492448709696,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.03749492448709696
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037182,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037182
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8084291187739464,
"acc_stderr": 0.014072859310451949,
"acc_norm": 0.8084291187739464,
"acc_norm_stderr": 0.014072859310451949
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7514450867052023,
"acc_stderr": 0.023267528432100174,
"acc_norm": 0.7514450867052023,
"acc_norm_stderr": 0.023267528432100174
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.41899441340782123,
"acc_stderr": 0.016501579306861674,
"acc_norm": 0.41899441340782123,
"acc_norm_stderr": 0.016501579306861674
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7516339869281046,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.7516339869281046,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.025403832978179615,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.025403832978179615
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7716049382716049,
"acc_stderr": 0.023358211840626267,
"acc_norm": 0.7716049382716049,
"acc_norm_stderr": 0.023358211840626267
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.49645390070921985,
"acc_stderr": 0.02982674915328092,
"acc_norm": 0.49645390070921985,
"acc_norm_stderr": 0.02982674915328092
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.48891786179921776,
"acc_stderr": 0.01276709899852584,
"acc_norm": 0.48891786179921776,
"acc_norm_stderr": 0.01276709899852584
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7536764705882353,
"acc_stderr": 0.02617343857052,
"acc_norm": 0.7536764705882353,
"acc_norm_stderr": 0.02617343857052
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6797385620915033,
"acc_stderr": 0.018875682938069446,
"acc_norm": 0.6797385620915033,
"acc_norm_stderr": 0.018875682938069446
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7428571428571429,
"acc_stderr": 0.02797982353874455,
"acc_norm": 0.7428571428571429,
"acc_norm_stderr": 0.02797982353874455
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.91,
"acc_stderr": 0.028762349126466108,
"acc_norm": 0.91,
"acc_norm_stderr": 0.028762349126466108
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5843373493975904,
"acc_stderr": 0.03836722176598053,
"acc_norm": 0.5843373493975904,
"acc_norm_stderr": 0.03836722176598053
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.572827417380661,
"mc1_stderr": 0.017316834410963926,
"mc2": 0.7194604048636173,
"mc2_stderr": 0.01493454902149839
},
"harness|winogrande|5": {
"acc": 0.835043409629045,
"acc_stderr": 0.01043091746823743
},
"harness|gsm8k|5": {
"acc": 0.6413949962092494,
"acc_stderr": 0.01321031736413403
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct | [
"region:us"
] | 2023-12-20T21:14:52+00:00 | {"pretty_name": "Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [VAGOsolutions/SauerkrautLM-SOLAR-Instruct](https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T21:12:30.565200](https://huggingface.co/datasets/open-llm-leaderboard/details_VAGOsolutions__SauerkrautLM-SOLAR-Instruct/blob/main/results_2023-12-20T21-12-30.565200.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.665790317648132,\n \"acc_stderr\": 0.0316386595234525,\n \"acc_norm\": 0.6665591981008385,\n \"acc_norm_stderr\": 0.03228345090384577,\n \"mc1\": 0.572827417380661,\n \"mc1_stderr\": 0.017316834410963926,\n \"mc2\": 0.7194604048636173,\n \"mc2_stderr\": 0.01493454902149839\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6868600682593856,\n \"acc_stderr\": 0.013552671543623496,\n \"acc_norm\": 0.7081911262798635,\n \"acc_norm_stderr\": 0.013284525292403518\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7173869747062338,\n \"acc_stderr\": 0.004493495872000109,\n \"acc_norm\": 0.8862776339374626,\n \"acc_norm_stderr\": 0.0031682493518893117\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7302631578947368,\n \"acc_stderr\": 0.03611780560284898,\n \"acc_norm\": 0.7302631578947368,\n \"acc_norm_stderr\": 0.03611780560284898\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6170212765957447,\n \"acc_stderr\": 0.03177821250236922,\n \"acc_norm\": 0.6170212765957447,\n \"acc_norm_stderr\": 0.03177821250236922\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6206896551724138,\n \"acc_stderr\": 0.040434618619167466,\n \"acc_norm\": 0.6206896551724138,\n \"acc_norm_stderr\": 0.040434618619167466\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.49206349206349204,\n \"acc_stderr\": 0.02574806587167328,\n \"acc_norm\": 0.49206349206349204,\n \"acc_norm_stderr\": 0.02574806587167328\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8121212121212121,\n \"acc_stderr\": 0.03050193405942914,\n \"acc_norm\": 0.8121212121212121,\n \"acc_norm_stderr\": 0.03050193405942914\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8636363636363636,\n \"acc_stderr\": 0.024450155973189835,\n \"acc_norm\": 0.8636363636363636,\n \"acc_norm_stderr\": 0.024450155973189835\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 
0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634332,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634332\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669235,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669235\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5694444444444444,\n \"acc_stderr\": 0.03376922151252335,\n \"acc_norm\": 0.5694444444444444,\n \"acc_norm_stderr\": 0.03376922151252335\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8480392156862745,\n \"acc_stderr\": 0.0251956584289318,\n \"acc_norm\": 0.8480392156862745,\n \"acc_norm_stderr\": 0.0251956584289318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8523206751054853,\n \"acc_stderr\": 0.0230943295825957,\n \"acc_norm\": 0.8523206751054853,\n \"acc_norm_stderr\": 0.0230943295825957\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709696,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709696\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037182,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037182\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": 
{\n \"acc\": 0.8084291187739464,\n \"acc_stderr\": 0.014072859310451949,\n \"acc_norm\": 0.8084291187739464,\n \"acc_norm_stderr\": 0.014072859310451949\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7514450867052023,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.7514450867052023,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.41899441340782123,\n \"acc_stderr\": 0.016501579306861674,\n \"acc_norm\": 0.41899441340782123,\n \"acc_norm_stderr\": 0.016501579306861674\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n \"acc_stderr\": 0.025403832978179615,\n \"acc_norm\": 0.7234726688102894,\n \"acc_norm_stderr\": 0.025403832978179615\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7716049382716049,\n \"acc_stderr\": 0.023358211840626267,\n \"acc_norm\": 0.7716049382716049,\n \"acc_norm_stderr\": 0.023358211840626267\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.49645390070921985,\n \"acc_stderr\": 0.02982674915328092,\n \"acc_norm\": 0.49645390070921985,\n \"acc_norm_stderr\": 0.02982674915328092\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.48891786179921776,\n \"acc_stderr\": 0.01276709899852584,\n \"acc_norm\": 0.48891786179921776,\n \"acc_norm_stderr\": 0.01276709899852584\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7536764705882353,\n \"acc_stderr\": 0.02617343857052,\n \"acc_norm\": 0.7536764705882353,\n \"acc_norm_stderr\": 0.02617343857052\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7428571428571429,\n \"acc_stderr\": 0.02797982353874455,\n \"acc_norm\": 0.7428571428571429,\n \"acc_norm_stderr\": 0.02797982353874455\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466108,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466108\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598053,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598053\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.572827417380661,\n \"mc1_stderr\": 0.017316834410963926,\n \"mc2\": 0.7194604048636173,\n \"mc2_stderr\": 0.01493454902149839\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.835043409629045,\n \"acc_stderr\": 0.01043091746823743\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6413949962092494,\n \"acc_stderr\": 
0.01321031736413403\n }\n}\n```", "repo_url": "https://huggingface.co/VAGOsolutions/SauerkrautLM-SOLAR-Instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-12-30.565200.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-12-30.565200.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-12-30.565200.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-12-30.565200.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-12-30.565200.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T21_12_30.565200", "path": ["**/details_harness|winogrande|5_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T21-12-30.565200.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_20T21_12_30.565200", "path": ["results_2023-12-20T21-12-30.565200.parquet"]}, {"split": "latest", "path": ["results_2023-12-20T21-12-30.565200.parquet"]}]}]} | 2023-12-20T21:15:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct
Dataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-SOLAR-Instruct on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-20T21:12:30.565200 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:12:30.565200(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:12:30.565200(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
191,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of VAGOsolutions/SauerkrautLM-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model VAGOsolutions/SauerkrautLM-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T21:12:30.565200(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
a7f911987f08bee53b54ef7d60d341e32c109864 |
# Dataset Card for Evaluation run of cookinai/CatMacaroni-Slerp
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp",
"harness_winogrande_5",
split="train")
```
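The same call pattern works for any of the 63 configurations in this dataset, including the aggregated scores. The snippet below is a minimal sketch, assuming the "results" configuration and the "latest" split alias described above, and a sub-benchmark configuration name that follows the same pattern as the `harness_winogrande_5` example; inspect `column_names` before relying on any particular field.

```python
from datasets import load_dataset

repo = "open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp"

# Aggregated metrics for the most recent run ("latest" is an alias for the
# newest timestamped split).
results = load_dataset(repo, "results", split="latest")
print(results.column_names)

# Per-sample details for one sub-benchmark (config name assumed to follow
# the harness_<task>_<n-shot> pattern used by these evaluation datasets).
details = load_dataset(repo, "harness_hendrycksTest_astronomy_5", split="latest")
print(details[0])
```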
## Latest results
These are the [latest results from run 2023-12-20T21:17:23.139479](https://huggingface.co/datasets/open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp/blob/main/results_2023-12-20T21-17-23.139479.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6549634202662564,
"acc_stderr": 0.032063571652802186,
"acc_norm": 0.6546746443892243,
"acc_norm_stderr": 0.032730217428552345,
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6102215759974746,
"mc2_stderr": 0.015132806306597834
},
"harness|arc:challenge|25": {
"acc": 0.6663822525597269,
"acc_stderr": 0.013778687054176541,
"acc_norm": 0.6928327645051194,
"acc_norm_stderr": 0.013481034054980941
},
"harness|hellaswag|10": {
"acc": 0.6799442342162916,
"acc_stderr": 0.00465544276659947,
"acc_norm": 0.8687512447719578,
"acc_norm_stderr": 0.0033698210047622503
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6370370370370371,
"acc_stderr": 0.041539484047423976,
"acc_norm": 0.6370370370370371,
"acc_norm_stderr": 0.041539484047423976
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.66,
"acc_stderr": 0.04760952285695237,
"acc_norm": 0.66,
"acc_norm_stderr": 0.04760952285695237
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7245283018867924,
"acc_stderr": 0.027495663683724057,
"acc_norm": 0.7245283018867924,
"acc_norm_stderr": 0.027495663683724057
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.46,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.46,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252604,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252604
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.035676037996391706,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.035676037996391706
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4411764705882353,
"acc_stderr": 0.049406356306056595,
"acc_norm": 0.4411764705882353,
"acc_norm_stderr": 0.049406356306056595
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.044619604333847394,
"acc_norm": 0.73,
"acc_norm_stderr": 0.044619604333847394
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5787234042553191,
"acc_stderr": 0.03227834510146267,
"acc_norm": 0.5787234042553191,
"acc_norm_stderr": 0.03227834510146267
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.47368421052631576,
"acc_stderr": 0.046970851366478626,
"acc_norm": 0.47368421052631576,
"acc_norm_stderr": 0.046970851366478626
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4312169312169312,
"acc_stderr": 0.025506481698138215,
"acc_norm": 0.4312169312169312,
"acc_norm_stderr": 0.025506481698138215
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677172,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677172
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7741935483870968,
"acc_stderr": 0.023785577884181015,
"acc_norm": 0.7741935483870968,
"acc_norm_stderr": 0.023785577884181015
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4827586206896552,
"acc_stderr": 0.035158955511656986,
"acc_norm": 0.4827586206896552,
"acc_norm_stderr": 0.035158955511656986
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.03225078108306289,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.03225078108306289
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267045,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267045
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033456,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033456
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6743589743589744,
"acc_stderr": 0.02375966576741229,
"acc_norm": 0.6743589743589744,
"acc_norm_stderr": 0.02375966576741229
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37777777777777777,
"acc_stderr": 0.029560707392465725,
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.029560707392465725
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886793,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33774834437086093,
"acc_stderr": 0.03861557546255169,
"acc_norm": 0.33774834437086093,
"acc_norm_stderr": 0.03861557546255169
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5462962962962963,
"acc_stderr": 0.03395322726375797,
"acc_norm": 0.5462962962962963,
"acc_norm_stderr": 0.03395322726375797
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8333333333333334,
"acc_stderr": 0.026156867523931045,
"acc_norm": 0.8333333333333334,
"acc_norm_stderr": 0.026156867523931045
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944863,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944863
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.03749492448709695,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.03749492448709695
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.03957835471980981,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.03957835471980981
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.44642857142857145,
"acc_stderr": 0.04718471485219588,
"acc_norm": 0.44642857142857145,
"acc_norm_stderr": 0.04718471485219588
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.039891398595317706,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.039891398595317706
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281376,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281376
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8339719029374202,
"acc_stderr": 0.0133064782430663,
"acc_norm": 0.8339719029374202,
"acc_norm_stderr": 0.0133064782430663
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7398843930635838,
"acc_stderr": 0.023618678310069363,
"acc_norm": 0.7398843930635838,
"acc_norm_stderr": 0.023618678310069363
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.40782122905027934,
"acc_stderr": 0.016435865260914746,
"acc_norm": 0.40782122905027934,
"acc_norm_stderr": 0.016435865260914746
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7091503267973857,
"acc_stderr": 0.02600480036395213,
"acc_norm": 0.7091503267973857,
"acc_norm_stderr": 0.02600480036395213
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7106109324758842,
"acc_stderr": 0.025755865922632945,
"acc_norm": 0.7106109324758842,
"acc_norm_stderr": 0.025755865922632945
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4716312056737589,
"acc_stderr": 0.029779450957303062,
"acc_norm": 0.4716312056737589,
"acc_norm_stderr": 0.029779450957303062
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869649,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869649
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7022058823529411,
"acc_stderr": 0.027778298701545443,
"acc_norm": 0.7022058823529411,
"acc_norm_stderr": 0.027778298701545443
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6862745098039216,
"acc_stderr": 0.018771683893528183,
"acc_norm": 0.6862745098039216,
"acc_norm_stderr": 0.018771683893528183
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.028666857790274648,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.028666857790274648
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.0358870281282637,
"acc_norm": 0.85,
"acc_norm_stderr": 0.0358870281282637
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.03869543323472101,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.03869543323472101
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4418604651162791,
"mc1_stderr": 0.017384767478986218,
"mc2": 0.6102215759974746,
"mc2_stderr": 0.015132806306597834
},
"harness|winogrande|5": {
"acc": 0.8113654301499605,
"acc_stderr": 0.01099517231801981
},
"harness|gsm8k|5": {
"acc": 0.7308567096285065,
"acc_stderr": 0.01221659545729273
}
}
```
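The blob above is plain JSON, so headline numbers can be recomputed from it directly. A minimal sketch, assuming the object has been saved locally as `results.json` (an illustrative filename) and noting that the leaderboard's own aggregation may differ in detail:

```python
import json

# Recompute an MMLU-style average from the per-subject entries above.
with open("results.json") as f:
    results = json.load(f)

mmlu_accs = [
    entry["acc"]
    for task, entry in results.items()
    if task.startswith("harness|hendrycksTest-")
]
print(f"MMLU subjects: {len(mmlu_accs)}, mean acc: {sum(mmlu_accs) / len(mmlu_accs):.4f}")
```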
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
| open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp | [
"region:us"
] | 2023-12-20T21:19:42+00:00 | {"pretty_name": "Evaluation run of cookinai/CatMacaroni-Slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [cookinai/CatMacaroni-Slerp](https://huggingface.co/cookinai/CatMacaroni-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T21:17:23.139479](https://huggingface.co/datasets/open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp/blob/main/results_2023-12-20T21-17-23.139479.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6549634202662564,\n \"acc_stderr\": 0.032063571652802186,\n \"acc_norm\": 0.6546746443892243,\n \"acc_norm_stderr\": 0.032730217428552345,\n \"mc1\": 0.4418604651162791,\n \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6102215759974746,\n \"mc2_stderr\": 0.015132806306597834\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6663822525597269,\n \"acc_stderr\": 0.013778687054176541,\n \"acc_norm\": 0.6928327645051194,\n \"acc_norm_stderr\": 0.013481034054980941\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6799442342162916,\n \"acc_stderr\": 0.00465544276659947,\n \"acc_norm\": 0.8687512447719578,\n \"acc_norm_stderr\": 0.0033698210047622503\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6370370370370371,\n \"acc_stderr\": 0.041539484047423976,\n \"acc_norm\": 0.6370370370370371,\n \"acc_norm_stderr\": 0.041539484047423976\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7245283018867924,\n \"acc_stderr\": 0.027495663683724057,\n \"acc_norm\": 0.7245283018867924,\n \"acc_norm_stderr\": 0.027495663683724057\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 
0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.035676037996391706,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.035676037996391706\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146267,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146267\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.47368421052631576,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.47368421052631576,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4312169312169312,\n \"acc_stderr\": 0.025506481698138215,\n \"acc_norm\": 0.4312169312169312,\n \"acc_norm_stderr\": 0.025506481698138215\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7741935483870968,\n \"acc_stderr\": 0.023785577884181015,\n \"acc_norm\": 0.7741935483870968,\n \"acc_norm_stderr\": 0.023785577884181015\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267045,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267045\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.029560707392465725,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.029560707392465725\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.03861557546255169,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.03861557546255169\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5462962962962963,\n \"acc_stderr\": 0.03395322726375797,\n \"acc_norm\": 0.5462962962962963,\n \"acc_norm_stderr\": 0.03395322726375797\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944863,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944863\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709695,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709695\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.03957835471980981,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.03957835471980981\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.44642857142857145,\n \"acc_stderr\": 0.04718471485219588,\n \"acc_norm\": 0.44642857142857145,\n \"acc_norm_stderr\": 0.04718471485219588\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8339719029374202,\n \"acc_stderr\": 0.0133064782430663,\n 
\"acc_norm\": 0.8339719029374202,\n \"acc_norm_stderr\": 0.0133064782430663\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7398843930635838,\n \"acc_stderr\": 0.023618678310069363,\n \"acc_norm\": 0.7398843930635838,\n \"acc_norm_stderr\": 0.023618678310069363\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.40782122905027934,\n \"acc_stderr\": 0.016435865260914746,\n \"acc_norm\": 0.40782122905027934,\n \"acc_norm_stderr\": 0.016435865260914746\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7091503267973857,\n \"acc_stderr\": 0.02600480036395213,\n \"acc_norm\": 0.7091503267973857,\n \"acc_norm_stderr\": 0.02600480036395213\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4716312056737589,\n \"acc_stderr\": 0.029779450957303062,\n \"acc_norm\": 0.4716312056737589,\n \"acc_norm_stderr\": 0.029779450957303062\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n \"acc_stderr\": 0.012744149704869649,\n \"acc_norm\": 0.4680573663624511,\n \"acc_norm_stderr\": 0.012744149704869649\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7022058823529411,\n \"acc_stderr\": 0.027778298701545443,\n \"acc_norm\": 0.7022058823529411,\n \"acc_norm_stderr\": 0.027778298701545443\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.018771683893528183,\n \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.018771683893528183\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.028666857790274648,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.028666857790274648\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.03869543323472101,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.03869543323472101\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4418604651162791,\n \"mc1_stderr\": 0.017384767478986218,\n \"mc2\": 0.6102215759974746,\n \"mc2_stderr\": 0.015132806306597834\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8113654301499605,\n \"acc_stderr\": 0.01099517231801981\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.7308567096285065,\n \"acc_stderr\": 0.01221659545729273\n }\n}\n```", "repo_url": "https://huggingface.co/cookinai/CatMacaroni-Slerp", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-17-23.139479.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-17-23.139479.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-17-23.139479.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-17-23.139479.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-17-23.139479.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-17-23.139479.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["**/details_harness|winogrande|5_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T21-17-23.139479.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_20T21_17_23.139479", "path": ["results_2023-12-20T21-17-23.139479.parquet"]}, {"split": "latest", "path": 
["results_2023-12-20T21-17-23.139479.parquet"]}]}]} | 2023-12-20T21:20:07+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of cookinai/CatMacaroni-Slerp
Dataset automatically created during the evaluation run of model cookinai/CatMacaroni-Slerp on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
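A minimal loading sketch (the config name below is just one of the 63 available task configurations; any other config listed in this repo works the same way):

```python
from datasets import load_dataset

# Load the per-sample details for one task configuration of this run.
# "harness_winogrande_5" is one of the 63 configs; the "train" split
# always points at the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp",
    "harness_winogrande_5",
    split="train",
)
```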
## Latest results
These are the latest results from run 2023-12-20T21:17:23.139479 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval). The full per-task JSON is reproduced earlier in this card.
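To pull the same aggregated metrics programmatically, one option is to read the `results` configuration of this repo (a minimal sketch; the `results` config and its `latest` split appear in the repo's file list, and `latest` resolves to the most recent run):

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics of the run;
# the "latest" split always resolves to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_cookinai__CatMacaroni-Slerp",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated accuracy / stderr values per task
```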
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
] |
bd35ade977fc01cee4628be28dd31cd4de029960 |
# Dataset Card for Evaluation run of WizardLM/WizardMath-7B-V1.1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1",
"harness_winogrande_5",
split="train")
```
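As a further sketch (using configuration names listed in this card's metadata: the aggregated metrics live in the "results" configuration, and every configuration also exposes a "latest" split pinned to the most recent evaluation):

```python
from datasets import load_dataset

# Aggregated metrics for the whole run (the "results" configuration).
results = load_dataset("open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1",
	"results",
	split="latest")

# Per-sample details for the GSM8K task, pinned to the most recent evaluation.
gsm8k = load_dataset("open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1",
	"harness_gsm8k_5",
	split="latest")
```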
## Latest results
These are the [latest results from run 2023-12-20T21:22:26.878965](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1/blob/main/results_2023-12-20T21-22-26.878965.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6189363761315251,
"acc_stderr": 0.032618810440506206,
"acc_norm": 0.6192370527292648,
"acc_norm_stderr": 0.03328320019631228,
"mc1": 0.32802937576499386,
"mc1_stderr": 0.016435632932815025,
"mc2": 0.47044548067060826,
"mc2_stderr": 0.015719256312305734
},
"harness|arc:challenge|25": {
"acc": 0.5750853242320819,
"acc_stderr": 0.014445698968520767,
"acc_norm": 0.6186006825938567,
"acc_norm_stderr": 0.014194389086685247
},
"harness|hellaswag|10": {
"acc": 0.6611232822146983,
"acc_stderr": 0.004723605376936913,
"acc_norm": 0.8449512049392551,
"acc_norm_stderr": 0.003612114670698977
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6381578947368421,
"acc_stderr": 0.039105257528497236,
"acc_norm": 0.6381578947368421,
"acc_norm_stderr": 0.039105257528497236
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.57,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.57,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6754716981132075,
"acc_stderr": 0.028815615713432115,
"acc_norm": 0.6754716981132075,
"acc_norm_stderr": 0.028815615713432115
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7152777777777778,
"acc_stderr": 0.037738099906869334,
"acc_norm": 0.7152777777777778,
"acc_norm_stderr": 0.037738099906869334
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.43,
"acc_stderr": 0.04975698519562428,
"acc_norm": 0.43,
"acc_norm_stderr": 0.04975698519562428
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6127167630057804,
"acc_stderr": 0.03714325906302065,
"acc_norm": 0.6127167630057804,
"acc_norm_stderr": 0.03714325906302065
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3431372549019608,
"acc_stderr": 0.047240073523838876,
"acc_norm": 0.3431372549019608,
"acc_norm_stderr": 0.047240073523838876
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.73,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.73,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5234042553191489,
"acc_stderr": 0.03265019475033582,
"acc_norm": 0.5234042553191489,
"acc_norm_stderr": 0.03265019475033582
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5103448275862069,
"acc_stderr": 0.04165774775728762,
"acc_norm": 0.5103448275862069,
"acc_norm_stderr": 0.04165774775728762
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.025305906241590632,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.025305906241590632
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4603174603174603,
"acc_stderr": 0.04458029125470973,
"acc_norm": 0.4603174603174603,
"acc_norm_stderr": 0.04458029125470973
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7258064516129032,
"acc_stderr": 0.025378139970885196,
"acc_norm": 0.7258064516129032,
"acc_norm_stderr": 0.025378139970885196
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4729064039408867,
"acc_stderr": 0.03512819077876106,
"acc_norm": 0.4729064039408867,
"acc_norm_stderr": 0.03512819077876106
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7626262626262627,
"acc_stderr": 0.030313710538198896,
"acc_norm": 0.7626262626262627,
"acc_norm_stderr": 0.030313710538198896
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709443,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709443
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6025641025641025,
"acc_stderr": 0.024811920017903836,
"acc_norm": 0.6025641025641025,
"acc_norm_stderr": 0.024811920017903836
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3592592592592593,
"acc_stderr": 0.029252905927251976,
"acc_norm": 0.3592592592592593,
"acc_norm_stderr": 0.029252905927251976
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5840336134453782,
"acc_stderr": 0.032016501007396114,
"acc_norm": 0.5840336134453782,
"acc_norm_stderr": 0.032016501007396114
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2847682119205298,
"acc_stderr": 0.03684881521389023,
"acc_norm": 0.2847682119205298,
"acc_norm_stderr": 0.03684881521389023
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8018348623853211,
"acc_stderr": 0.017090573804217905,
"acc_norm": 0.8018348623853211,
"acc_norm_stderr": 0.017090573804217905
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.03350991604696043,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.03350991604696043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7794117647058824,
"acc_stderr": 0.02910225438967407,
"acc_norm": 0.7794117647058824,
"acc_norm_stderr": 0.02910225438967407
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.027820781981149685,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.027820781981149685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477518,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477518
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7633587786259542,
"acc_stderr": 0.03727673575596915,
"acc_norm": 0.7633587786259542,
"acc_norm_stderr": 0.03727673575596915
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7685185185185185,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.7685185185185185,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4642857142857143,
"acc_stderr": 0.04733667890053756,
"acc_norm": 0.4642857142857143,
"acc_norm_stderr": 0.04733667890053756
},
"harness|hendrycksTest-management|5": {
"acc": 0.8058252427184466,
"acc_stderr": 0.03916667762822585,
"acc_norm": 0.8058252427184466,
"acc_norm_stderr": 0.03916667762822585
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077805,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077805
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8071519795657727,
"acc_stderr": 0.014108533515757431,
"acc_norm": 0.8071519795657727,
"acc_norm_stderr": 0.014108533515757431
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7369942196531792,
"acc_stderr": 0.023703099525258165,
"acc_norm": 0.7369942196531792,
"acc_norm_stderr": 0.023703099525258165
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3452513966480447,
"acc_stderr": 0.01590143260893035,
"acc_norm": 0.3452513966480447,
"acc_norm_stderr": 0.01590143260893035
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.02582916327275749,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.02582916327275749
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6688102893890675,
"acc_stderr": 0.026730620728004906,
"acc_norm": 0.6688102893890675,
"acc_norm_stderr": 0.026730620728004906
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7253086419753086,
"acc_stderr": 0.024836057868294677,
"acc_norm": 0.7253086419753086,
"acc_norm_stderr": 0.024836057868294677
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4219858156028369,
"acc_stderr": 0.029462189233370586,
"acc_norm": 0.4219858156028369,
"acc_norm_stderr": 0.029462189233370586
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.43546284224250326,
"acc_stderr": 0.012663412101248335,
"acc_norm": 0.43546284224250326,
"acc_norm_stderr": 0.012663412101248335
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6286764705882353,
"acc_stderr": 0.02934980313976587,
"acc_norm": 0.6286764705882353,
"acc_norm_stderr": 0.02934980313976587
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6781045751633987,
"acc_stderr": 0.018901015322093092,
"acc_norm": 0.6781045751633987,
"acc_norm_stderr": 0.018901015322093092
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.7,
"acc_stderr": 0.04389311454644287,
"acc_norm": 0.7,
"acc_norm_stderr": 0.04389311454644287
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.02904308868330433,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.02904308868330433
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8059701492537313,
"acc_stderr": 0.027962677604768914,
"acc_norm": 0.8059701492537313,
"acc_norm_stderr": 0.027962677604768914
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.85,
"acc_stderr": 0.03588702812826371,
"acc_norm": 0.85,
"acc_norm_stderr": 0.03588702812826371
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5301204819277109,
"acc_stderr": 0.03885425420866767,
"acc_norm": 0.5301204819277109,
"acc_norm_stderr": 0.03885425420866767
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.32802937576499386,
"mc1_stderr": 0.016435632932815025,
"mc2": 0.47044548067060826,
"mc2_stderr": 0.015719256312305734
},
"harness|winogrande|5": {
"acc": 0.7734806629834254,
"acc_stderr": 0.011764149054698338
},
"harness|gsm8k|5": {
"acc": 0.6739954510993177,
"acc_stderr": 0.012911675645682845
}
}
```
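If you prefer to work with the raw results file linked above rather than the `datasets` configurations, here is a minimal sketch (assuming the per-task dictionary may sit under a top-level "results" key in the downloaded JSON; the filename is taken from the link above):

```python
import json
from huggingface_hub import hf_hub_download

# Download the raw results file referenced above from the dataset repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1",
    filename="results_2023-12-20T21-22-26.878965.json",
    repo_type="dataset",
)

with open(path) as f:
    raw = json.load(f)

# The per-task metrics may be nested under a "results" key; fall back to the top level.
metrics = raw.get("results", raw)
print(metrics["all"]["acc_norm"])          # average normalized accuracy
print(metrics["harness|gsm8k|5"]["acc"])   # GSM8K accuracy
```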
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1 | [
"region:us"
] | 2023-12-20T21:24:46+00:00 | {"pretty_name": "Evaluation run of WizardLM/WizardMath-7B-V1.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T21:22:26.878965](https://huggingface.co/datasets/open-llm-leaderboard/details_WizardLM__WizardMath-7B-V1.1/blob/main/results_2023-12-20T21-22-26.878965.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6189363761315251,\n \"acc_stderr\": 0.032618810440506206,\n \"acc_norm\": 0.6192370527292648,\n \"acc_norm_stderr\": 0.03328320019631228,\n \"mc1\": 0.32802937576499386,\n \"mc1_stderr\": 0.016435632932815025,\n \"mc2\": 0.47044548067060826,\n \"mc2_stderr\": 0.015719256312305734\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5750853242320819,\n \"acc_stderr\": 0.014445698968520767,\n \"acc_norm\": 0.6186006825938567,\n \"acc_norm_stderr\": 0.014194389086685247\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6611232822146983,\n \"acc_stderr\": 0.004723605376936913,\n \"acc_norm\": 0.8449512049392551,\n \"acc_norm_stderr\": 0.003612114670698977\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.039105257528497236,\n \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.039105257528497236\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.028815615713432115,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.028815615713432115\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.037738099906869334,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.037738099906869334\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 
0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6127167630057804,\n \"acc_stderr\": 0.03714325906302065,\n \"acc_norm\": 0.6127167630057804,\n \"acc_norm_stderr\": 0.03714325906302065\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3431372549019608,\n \"acc_stderr\": 0.047240073523838876,\n \"acc_norm\": 0.3431372549019608,\n \"acc_norm_stderr\": 0.047240073523838876\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.03265019475033582,\n \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.03265019475033582\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5103448275862069,\n \"acc_stderr\": 0.04165774775728762,\n \"acc_norm\": 0.5103448275862069,\n \"acc_norm_stderr\": 0.04165774775728762\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.025305906241590632,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.025305906241590632\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7258064516129032,\n \"acc_stderr\": 0.025378139970885196,\n \"acc_norm\": 0.7258064516129032,\n \"acc_norm_stderr\": 0.025378139970885196\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4729064039408867,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.4729064039408867,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.030313710538198896,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.030313710538198896\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709443,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709443\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6025641025641025,\n \"acc_stderr\": 0.024811920017903836,\n \"acc_norm\": 0.6025641025641025,\n \"acc_norm_stderr\": 0.024811920017903836\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3592592592592593,\n \"acc_stderr\": 0.029252905927251976,\n \"acc_norm\": 0.3592592592592593,\n \"acc_norm_stderr\": 0.029252905927251976\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5840336134453782,\n \"acc_stderr\": 0.032016501007396114,\n \"acc_norm\": 0.5840336134453782,\n \"acc_norm_stderr\": 0.032016501007396114\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2847682119205298,\n \"acc_stderr\": 0.03684881521389023,\n \"acc_norm\": 0.2847682119205298,\n \"acc_norm_stderr\": 0.03684881521389023\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8018348623853211,\n \"acc_stderr\": 0.017090573804217905,\n \"acc_norm\": 0.8018348623853211,\n \"acc_norm_stderr\": 0.017090573804217905\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.03350991604696043,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.03350991604696043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7794117647058824,\n \"acc_stderr\": 0.02910225438967407,\n \"acc_norm\": 0.7794117647058824,\n \"acc_norm_stderr\": 0.02910225438967407\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.759493670886076,\n \"acc_stderr\": 0.027820781981149685,\n \"acc_norm\": 0.759493670886076,\n \"acc_norm_stderr\": 0.027820781981149685\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477518,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477518\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596915,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596915\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7685185185185185,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.7685185185185185,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8058252427184466,\n \"acc_stderr\": 0.03916667762822585,\n \"acc_norm\": 0.8058252427184466,\n \"acc_norm_stderr\": 0.03916667762822585\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077805,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077805\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8071519795657727,\n \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258165,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258165\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3452513966480447,\n \"acc_stderr\": 0.01590143260893035,\n \"acc_norm\": 0.3452513966480447,\n \"acc_norm_stderr\": 0.01590143260893035\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.02582916327275749,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.02582916327275749\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6688102893890675,\n \"acc_stderr\": 0.026730620728004906,\n \"acc_norm\": 0.6688102893890675,\n \"acc_norm_stderr\": 0.026730620728004906\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7253086419753086,\n \"acc_stderr\": 0.024836057868294677,\n \"acc_norm\": 0.7253086419753086,\n \"acc_norm_stderr\": 0.024836057868294677\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4219858156028369,\n \"acc_stderr\": 0.029462189233370586,\n \"acc_norm\": 0.4219858156028369,\n \"acc_norm_stderr\": 0.029462189233370586\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43546284224250326,\n \"acc_stderr\": 0.012663412101248335,\n \"acc_norm\": 0.43546284224250326,\n \"acc_norm_stderr\": 0.012663412101248335\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6286764705882353,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.6286764705882353,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.018901015322093092,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.018901015322093092\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.04389311454644287,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.04389311454644287\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.02904308868330433,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.02904308868330433\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8059701492537313,\n \"acc_stderr\": 0.027962677604768914,\n \"acc_norm\": 0.8059701492537313,\n \"acc_norm_stderr\": 0.027962677604768914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.03588702812826371,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.03588702812826371\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32802937576499386,\n \"mc1_stderr\": 0.016435632932815025,\n \"mc2\": 0.47044548067060826,\n \"mc2_stderr\": 0.015719256312305734\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7734806629834254,\n \"acc_stderr\": 0.011764149054698338\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6739954510993177,\n \"acc_stderr\": 0.012911675645682845\n }\n}\n```", "repo_url": 
"https://huggingface.co/WizardLM/WizardMath-7B-V1.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-22-26.878965.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-22-26.878965.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-22-26.878965.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-22-26.878965.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-22-26.878965.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-22-26.878965.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["**/details_harness|winogrande|5_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T21-22-26.878965.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_20T21_22_26.878965", "path": ["results_2023-12-20T21-22-26.878965.parquet"]}, {"split": "latest", "path": 
["results_2023-12-20T21-22-26.878965.parquet"]}]}]} | 2023-12-20T21:25:09+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of WizardLM/WizardMath-7B-V1.1
Dataset automatically created during the evaluation run of model WizardLM/WizardMath-7B-V1.1 on the Open LLM Leaderboard.
The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-12-20T21:22:26.878965(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of WizardLM/WizardMath-7B-V1.1\n\n\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-7B-V1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:22:26.878965(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of WizardLM/WizardMath-7B-V1.1\n\n\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-7B-V1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:22:26.878965(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
185,
66,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of WizardLM/WizardMath-7B-V1.1\n\n\n\nDataset automatically created during the evaluation run of model WizardLM/WizardMath-7B-V1.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T21:22:26.878965(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
75f0ba2e63ee67fa34aab6fdc4335c630a51360c |
# Dataset Card for Evaluation run of beberik/TinyExperts-v0-4x1B
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [beberik/TinyExperts-v0-4x1B](https://huggingface.co/beberik/TinyExperts-v0-4x1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B",
"harness_winogrande_5",
split="train")
```
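
The same call can also be pointed at the aggregated scores. The snippet below is a small illustrative sketch along the same lines: it loads the "results" configuration and its "latest" split (other split names follow the run timestamps listed in the configuration metadata), rather than a single per-task configuration.

```python
from datasets import load_dataset

# Minimal sketch: load the aggregated results for this model.
# The "latest" split points to the most recent evaluation run;
# timestamped splits (e.g. "2023_12_20T21_54_19.124713") hold individual runs.
results = load_dataset(
    "open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B",
    "results",
    split="latest",
)
print(results)
```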
## Latest results
These are the [latest results from run 2023-12-20T21:54:19.124713](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B/blob/main/results_2023-12-20T21-54-19.124713.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
```python
{
"all": {
"acc": 0.26252295554837873,
"acc_stderr": 0.031072019491044735,
"acc_norm": 0.2641174998312261,
"acc_norm_stderr": 0.03186997959528178,
"mc1": 0.24969400244798043,
"mc1_stderr": 0.015152286907148128,
"mc2": 0.41126558330324914,
"mc2_stderr": 0.014912649441030584
},
"harness|arc:challenge|25": {
"acc": 0.27047781569965873,
"acc_stderr": 0.012980954547659554,
"acc_norm": 0.31399317406143346,
"acc_norm_stderr": 0.013562691224726295
},
"harness|hellaswag|10": {
"acc": 0.3906592312288389,
"acc_stderr": 0.0048690101522807505,
"acc_norm": 0.522903804023103,
"acc_norm_stderr": 0.0049845435409323355
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04072314811876837,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04072314811876837
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.3223684210526316,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.3223684210526316,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.20754716981132076,
"acc_stderr": 0.024959918028911274,
"acc_norm": 0.20754716981132076,
"acc_norm_stderr": 0.024959918028911274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2569444444444444,
"acc_stderr": 0.03653946969442099,
"acc_norm": 0.2569444444444444,
"acc_norm_stderr": 0.03653946969442099
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.18,
"acc_stderr": 0.03861229196653694,
"acc_norm": 0.18,
"acc_norm_stderr": 0.03861229196653694
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.0332055644308557,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.0332055644308557
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.30392156862745096,
"acc_stderr": 0.045766654032077636,
"acc_norm": 0.30392156862745096,
"acc_norm_stderr": 0.045766654032077636
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.20425531914893616,
"acc_stderr": 0.026355158413349424,
"acc_norm": 0.20425531914893616,
"acc_norm_stderr": 0.026355158413349424
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.04049339297748141,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.04049339297748141
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.296551724137931,
"acc_stderr": 0.03806142687309993,
"acc_norm": 0.296551724137931,
"acc_norm_stderr": 0.03806142687309993
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.2671957671957672,
"acc_stderr": 0.022789673145776568,
"acc_norm": 0.2671957671957672,
"acc_norm_stderr": 0.022789673145776568
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15873015873015872,
"acc_stderr": 0.03268454013011744,
"acc_norm": 0.15873015873015872,
"acc_norm_stderr": 0.03268454013011744
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.25483870967741934,
"acc_stderr": 0.02479011845933221,
"acc_norm": 0.25483870967741934,
"acc_norm_stderr": 0.02479011845933221
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.2955665024630542,
"acc_stderr": 0.032104944337514575,
"acc_norm": 0.2955665024630542,
"acc_norm_stderr": 0.032104944337514575
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.2545454545454545,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.2545454545454545,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.23737373737373738,
"acc_stderr": 0.030313710538198892,
"acc_norm": 0.23737373737373738,
"acc_norm_stderr": 0.030313710538198892
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.22279792746113988,
"acc_stderr": 0.03003114797764154,
"acc_norm": 0.22279792746113988,
"acc_norm_stderr": 0.03003114797764154
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2282051282051282,
"acc_stderr": 0.021278393863586282,
"acc_norm": 0.2282051282051282,
"acc_norm_stderr": 0.021278393863586282
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.26666666666666666,
"acc_stderr": 0.02696242432507383,
"acc_norm": 0.26666666666666666,
"acc_norm_stderr": 0.02696242432507383
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2913907284768212,
"acc_stderr": 0.037101857261199946,
"acc_norm": 0.2913907284768212,
"acc_norm_stderr": 0.037101857261199946
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22385321100917432,
"acc_stderr": 0.017871217767790222,
"acc_norm": 0.22385321100917432,
"acc_norm_stderr": 0.017871217767790222
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.25,
"acc_stderr": 0.029531221160930918,
"acc_norm": 0.25,
"acc_norm_stderr": 0.029531221160930918
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.25,
"acc_stderr": 0.03039153369274154,
"acc_norm": 0.25,
"acc_norm_stderr": 0.03039153369274154
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.22784810126582278,
"acc_stderr": 0.02730348459906941,
"acc_norm": 0.22784810126582278,
"acc_norm_stderr": 0.02730348459906941
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.29596412556053814,
"acc_stderr": 0.030636591348699792,
"acc_norm": 0.29596412556053814,
"acc_norm_stderr": 0.030636591348699792
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.371900826446281,
"acc_stderr": 0.044120158066245044,
"acc_norm": 0.371900826446281,
"acc_norm_stderr": 0.044120158066245044
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.21296296296296297,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.21296296296296297,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.3067484662576687,
"acc_stderr": 0.036230899157241474,
"acc_norm": 0.3067484662576687,
"acc_norm_stderr": 0.036230899157241474
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.24107142857142858,
"acc_stderr": 0.04059867246952687,
"acc_norm": 0.24107142857142858,
"acc_norm_stderr": 0.04059867246952687
},
"harness|hendrycksTest-management|5": {
"acc": 0.20388349514563106,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.20388349514563106,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.24358974358974358,
"acc_stderr": 0.028120966503914397,
"acc_norm": 0.24358974358974358,
"acc_norm_stderr": 0.028120966503914397
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.21,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.21,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26053639846743293,
"acc_stderr": 0.0156960085638071,
"acc_norm": 0.26053639846743293,
"acc_norm_stderr": 0.0156960085638071
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.28901734104046245,
"acc_stderr": 0.02440517393578323,
"acc_norm": 0.28901734104046245,
"acc_norm_stderr": 0.02440517393578323
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.24836601307189543,
"acc_stderr": 0.02473998135511359,
"acc_norm": 0.24836601307189543,
"acc_norm_stderr": 0.02473998135511359
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.2990353697749196,
"acc_stderr": 0.026003301117885135,
"acc_norm": 0.2990353697749196,
"acc_norm_stderr": 0.026003301117885135
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.26851851851851855,
"acc_stderr": 0.02465968518596728,
"acc_norm": 0.26851851851851855,
"acc_norm_stderr": 0.02465968518596728
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2695035460992908,
"acc_stderr": 0.02646903681859063,
"acc_norm": 0.2695035460992908,
"acc_norm_stderr": 0.02646903681859063
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.258148631029987,
"acc_stderr": 0.01117692371931339,
"acc_norm": 0.258148631029987,
"acc_norm_stderr": 0.01117692371931339
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.23161764705882354,
"acc_stderr": 0.025626533803777562,
"acc_norm": 0.23161764705882354,
"acc_norm_stderr": 0.025626533803777562
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.2581699346405229,
"acc_stderr": 0.017704531653250078,
"acc_norm": 0.2581699346405229,
"acc_norm_stderr": 0.017704531653250078
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.038950910157241364,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.038950910157241364
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.24489795918367346,
"acc_stderr": 0.027529637440174923,
"acc_norm": 0.24489795918367346,
"acc_norm_stderr": 0.027529637440174923
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23880597014925373,
"acc_stderr": 0.030147775935409217,
"acc_norm": 0.23880597014925373,
"acc_norm_stderr": 0.030147775935409217
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2289156626506024,
"acc_stderr": 0.03270745277352477,
"acc_norm": 0.2289156626506024,
"acc_norm_stderr": 0.03270745277352477
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.27485380116959063,
"acc_stderr": 0.034240429246915824,
"acc_norm": 0.27485380116959063,
"acc_norm_stderr": 0.034240429246915824
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24969400244798043,
"mc1_stderr": 0.015152286907148128,
"mc2": 0.41126558330324914,
"mc2_stderr": 0.014912649441030584
},
"harness|winogrande|5": {
"acc": 0.601420678768745,
"acc_stderr": 0.01376035717687383
},
"harness|gsm8k|5": {
"acc": 0.00530705079605762,
"acc_stderr": 0.002001305720948071
}
}
```
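
If you prefer to work with the raw JSON file linked above rather than the parquet splits, the per-task scores can be iterated over directly. The following is only a minimal sketch: it assumes the results file has been downloaded locally under the file name used in the link, and since the exact nesting of the JSON may vary between harness versions, both a top-level and a `"results"`-keyed layout are handled.

```python
import json

# Minimal sketch (file name taken from the run linked above).
with open("results_2023-12-20T21-54-19.124713.json") as f:
    run = json.load(f)

# Per-task scores may sit at the top level (as printed above) or under a
# "results" key depending on how the file was produced; handle both.
per_task = run.get("results", run)

for task, metrics in sorted(per_task.items()):
    # Prefer normalized accuracy when present, then plain accuracy, then mc2.
    score = metrics.get("acc_norm", metrics.get("acc", metrics.get("mc2")))
    if score is not None:
        print(f"{task:<60} {score:.4f}")
```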
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B | [
"region:us"
] | 2023-12-20T21:56:40+00:00 | {"pretty_name": "Evaluation run of beberik/TinyExperts-v0-4x1B", "dataset_summary": "Dataset automatically created during the evaluation run of model [beberik/TinyExperts-v0-4x1B](https://huggingface.co/beberik/TinyExperts-v0-4x1B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T21:54:19.124713](https://huggingface.co/datasets/open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B/blob/main/results_2023-12-20T21-54-19.124713.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26252295554837873,\n \"acc_stderr\": 0.031072019491044735,\n \"acc_norm\": 0.2641174998312261,\n \"acc_norm_stderr\": 0.03186997959528178,\n \"mc1\": 0.24969400244798043,\n \"mc1_stderr\": 0.015152286907148128,\n \"mc2\": 0.41126558330324914,\n \"mc2_stderr\": 0.014912649441030584\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.27047781569965873,\n \"acc_stderr\": 0.012980954547659554,\n \"acc_norm\": 0.31399317406143346,\n \"acc_norm_stderr\": 0.013562691224726295\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.3906592312288389,\n \"acc_stderr\": 0.0048690101522807505,\n \"acc_norm\": 0.522903804023103,\n \"acc_norm_stderr\": 0.0049845435409323355\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3223684210526316,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.3223684210526316,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.20754716981132076,\n \"acc_stderr\": 0.024959918028911274,\n \"acc_norm\": 0.20754716981132076,\n \"acc_norm_stderr\": 0.024959918028911274\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 
0.03861229196653694,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.03861229196653694\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.0332055644308557,\n \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.0332055644308557\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.30392156862745096,\n \"acc_stderr\": 0.045766654032077636,\n \"acc_norm\": 0.30392156862745096,\n \"acc_norm_stderr\": 0.045766654032077636\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.20425531914893616,\n \"acc_stderr\": 0.026355158413349424,\n \"acc_norm\": 0.20425531914893616,\n \"acc_norm_stderr\": 0.026355158413349424\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.04049339297748141,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.04049339297748141\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.296551724137931,\n \"acc_stderr\": 0.03806142687309993,\n \"acc_norm\": 0.296551724137931,\n \"acc_norm_stderr\": 0.03806142687309993\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.2671957671957672,\n \"acc_stderr\": 0.022789673145776568,\n \"acc_norm\": 0.2671957671957672,\n \"acc_norm_stderr\": 0.022789673145776568\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15873015873015872,\n \"acc_stderr\": 0.03268454013011744,\n \"acc_norm\": 0.15873015873015872,\n \"acc_norm_stderr\": 0.03268454013011744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25483870967741934,\n \"acc_stderr\": 0.02479011845933221,\n \"acc_norm\": 0.25483870967741934,\n \"acc_norm_stderr\": 0.02479011845933221\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2545454545454545,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.2545454545454545,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.23737373737373738,\n \"acc_stderr\": 0.030313710538198892,\n \"acc_norm\": 0.23737373737373738,\n \"acc_norm_stderr\": 0.030313710538198892\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.22279792746113988,\n \"acc_stderr\": 0.03003114797764154,\n \"acc_norm\": 0.22279792746113988,\n \"acc_norm_stderr\": 0.03003114797764154\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2282051282051282,\n \"acc_stderr\": 0.021278393863586282,\n \"acc_norm\": 0.2282051282051282,\n \"acc_norm_stderr\": 0.021278393863586282\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26666666666666666,\n \"acc_stderr\": 0.02696242432507383,\n \"acc_norm\": 0.26666666666666666,\n \"acc_norm_stderr\": 0.02696242432507383\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2913907284768212,\n \"acc_stderr\": 0.037101857261199946,\n \"acc_norm\": 0.2913907284768212,\n \"acc_norm_stderr\": 0.037101857261199946\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.22385321100917432,\n \"acc_stderr\": 0.017871217767790222,\n \"acc_norm\": 0.22385321100917432,\n \"acc_norm_stderr\": 0.017871217767790222\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.029531221160930918,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.029531221160930918\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.22784810126582278,\n \"acc_stderr\": 0.02730348459906941,\n \"acc_norm\": 0.22784810126582278,\n \"acc_norm_stderr\": 0.02730348459906941\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.29596412556053814,\n \"acc_stderr\": 0.030636591348699792,\n \"acc_norm\": 0.29596412556053814,\n \"acc_norm_stderr\": 0.030636591348699792\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.371900826446281,\n \"acc_stderr\": 0.044120158066245044,\n \"acc_norm\": 0.371900826446281,\n \"acc_norm_stderr\": 0.044120158066245044\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3067484662576687,\n \"acc_stderr\": 0.036230899157241474,\n \"acc_norm\": 0.3067484662576687,\n \"acc_norm_stderr\": 0.036230899157241474\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.24107142857142858,\n \"acc_stderr\": 0.04059867246952687,\n \"acc_norm\": 0.24107142857142858,\n \"acc_norm_stderr\": 0.04059867246952687\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.20388349514563106,\n \"acc_stderr\": 0.03989139859531771,\n \"acc_norm\": 0.20388349514563106,\n \"acc_norm_stderr\": 0.03989139859531771\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.24358974358974358,\n \"acc_stderr\": 0.028120966503914397,\n \"acc_norm\": 0.24358974358974358,\n \"acc_norm_stderr\": 0.028120966503914397\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26053639846743293,\n \"acc_stderr\": 
0.0156960085638071,\n \"acc_norm\": 0.26053639846743293,\n \"acc_norm_stderr\": 0.0156960085638071\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.28901734104046245,\n \"acc_stderr\": 0.02440517393578323,\n \"acc_norm\": 0.28901734104046245,\n \"acc_norm_stderr\": 0.02440517393578323\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2990353697749196,\n \"acc_stderr\": 0.026003301117885135,\n \"acc_norm\": 0.2990353697749196,\n \"acc_norm_stderr\": 0.026003301117885135\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.26851851851851855,\n \"acc_stderr\": 0.02465968518596728,\n \"acc_norm\": 0.26851851851851855,\n \"acc_norm_stderr\": 0.02465968518596728\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.02646903681859063,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.02646903681859063\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.258148631029987,\n \"acc_stderr\": 0.01117692371931339,\n \"acc_norm\": 0.258148631029987,\n \"acc_norm_stderr\": 0.01117692371931339\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.23161764705882354,\n \"acc_stderr\": 0.025626533803777562,\n \"acc_norm\": 0.23161764705882354,\n \"acc_norm_stderr\": 0.025626533803777562\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2581699346405229,\n \"acc_stderr\": 0.017704531653250078,\n \"acc_norm\": 0.2581699346405229,\n \"acc_norm_stderr\": 0.017704531653250078\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.24489795918367346,\n \"acc_stderr\": 0.027529637440174923,\n \"acc_norm\": 0.24489795918367346,\n \"acc_norm_stderr\": 0.027529637440174923\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.030147775935409217,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.030147775935409217\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2289156626506024,\n \"acc_stderr\": 0.03270745277352477,\n \"acc_norm\": 0.2289156626506024,\n \"acc_norm_stderr\": 0.03270745277352477\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.27485380116959063,\n \"acc_stderr\": 0.034240429246915824,\n \"acc_norm\": 0.27485380116959063,\n \"acc_norm_stderr\": 0.034240429246915824\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24969400244798043,\n \"mc1_stderr\": 0.015152286907148128,\n \"mc2\": 0.41126558330324914,\n \"mc2_stderr\": 0.014912649441030584\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.601420678768745,\n \"acc_stderr\": 0.01376035717687383\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.00530705079605762,\n \"acc_stderr\": 0.002001305720948071\n }\n}\n```", 
"repo_url": "https://huggingface.co/beberik/TinyExperts-v0-4x1B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-54-19.124713.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-54-19.124713.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-54-19.124713.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T21-54-19.124713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-54-19.124713.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T21_54_19.124713", "path": ["**/details_harness|winogrande|5_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T21-54-19.124713.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_20T21_54_19.124713", "path": ["results_2023-12-20T21-54-19.124713.parquet"]}, {"split": "latest", "path": ["results_2023-12-20T21-54-19.124713.parquet"]}]}]} | 2023-12-20T21:57:03+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of beberik/TinyExperts-v0-4x1B
Dataset automatically created during the evaluation run of model beberik/TinyExperts-v0-4x1B on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
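A minimal sketch, assuming the details repository follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming pattern and one of the config names listed in this card's metadata (e.g. `harness_winogrande_5`):

```python
from datasets import load_dataset

# Load the per-sample details for one evaluated task (5-shot Winogrande here);
# the "train" split points to the latest run.
data = load_dataset(
    "open-llm-leaderboard/details_beberik__TinyExperts-v0-4x1B",
    "harness_winogrande_5",
    split="train",
)
```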
## Latest results
These are the latest results from run 2023-12-20T21:54:19.124713 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of beberik/TinyExperts-v0-4x1B\n\n\n\nDataset automatically created during the evaluation run of model beberik/TinyExperts-v0-4x1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:54:19.124713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of beberik/TinyExperts-v0-4x1B\n\n\n\nDataset automatically created during the evaluation run of model beberik/TinyExperts-v0-4x1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T21:54:19.124713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
189,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of beberik/TinyExperts-v0-4x1B\n\n\n\nDataset automatically created during the evaluation run of model beberik/TinyExperts-v0-4x1B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T21:54:19.124713(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
e269eac3173d96d7d278fe9a25a9f0ce093591cc |
# Regex Dataset
This repository contains a collection of regular expressions (regex) sourced from [regex101.com](https://regex101.com).
## Dataset
The dataset is composed of various regular expressions. Each regex is associated with a specific use case or pattern matching requirement.
**Example**
```json
```
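As a minimal usage sketch (assuming the standard `datasets` loading API; the record schema is not spelled out above), the collection can be loaded directly from the Hub:

```python
from datasets import load_dataset

# Load the regex collection and inspect its available splits and features.
dataset = load_dataset("innovatorved/regex_dataset")
print(dataset)
```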
| innovatorved/regex_dataset | [
"license:mit",
"regex",
"region:us"
] | 2023-12-20T22:17:02+00:00 | {"license": "mit", "pretty_name": "Regex Dataset", "tags": ["regex"]} | 2023-12-20T22:30:15+00:00 | [] | [] | TAGS
#license-mit #regex #region-us
|
# Regex Dataset
This repository contains a collection of regular expressions (regex) sourced from URL.
## Dataset
The dataset is composed of various regular expressions. Each regex is associated with a specific use case or pattern matching requirement.
Example
| [
"# Regex Dataset\n\nThis repository contains a collection of regular expressions (regex) sourced from URL.",
"## Dataset\n\nThe dataset is composed of various regular expressions. Each regex is associated with a specific use case or pattern matching requirement.\n\nExample"
] | [
"TAGS\n#license-mit #regex #region-us \n",
"# Regex Dataset\n\nThis repository contains a collection of regular expressions (regex) sourced from URL.",
"## Dataset\n\nThe dataset is composed of various regular expressions. Each regex is associated with a specific use case or pattern matching requirement.\n\nExample"
] | [
14,
26,
34
] | [
"passage: TAGS\n#license-mit #regex #region-us \n# Regex Dataset\n\nThis repository contains a collection of regular expressions (regex) sourced from URL.## Dataset\n\nThe dataset is composed of various regular expressions. Each regex is associated with a specific use case or pattern matching requirement.\n\nExample"
] |
541b2b8f9afe00e4f1c1a63413115fc9a05aae2c |
# Dataset Card for Evaluation run of MisterRid/saulgoodman-2x7b-alpha1
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MisterRid/saulgoodman-2x7b-alpha1](https://huggingface.co/MisterRid/saulgoodman-2x7b-alpha1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-20T22:30:24.854096](https://huggingface.co/datasets/open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1/blob/main/results_2023-12-20T22-30-24.854096.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6511172644532761,
"acc_stderr": 0.03210282974235949,
"acc_norm": 0.6531681519171342,
"acc_norm_stderr": 0.032744836386602465,
"mc1": 0.4320685434516524,
"mc1_stderr": 0.01734120239498826,
"mc2": 0.6006356075996195,
"mc2_stderr": 0.015505899675520648
},
"harness|arc:challenge|25": {
"acc": 0.6262798634812287,
"acc_stderr": 0.014137708601759093,
"acc_norm": 0.6621160409556314,
"acc_norm_stderr": 0.01382204792228351
},
"harness|hellaswag|10": {
"acc": 0.6709818761202948,
"acc_stderr": 0.004688963175758129,
"acc_norm": 0.8536148177653854,
"acc_norm_stderr": 0.003527695149823515
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.0421850621536888,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.0421850621536888
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6907894736842105,
"acc_stderr": 0.037610708698674805,
"acc_norm": 0.6907894736842105,
"acc_norm_stderr": 0.037610708698674805
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.61,
"acc_stderr": 0.04902071300001975,
"acc_norm": 0.61,
"acc_norm_stderr": 0.04902071300001975
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.02815283794249387,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.02815283794249387
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7638888888888888,
"acc_stderr": 0.03551446610810826,
"acc_norm": 0.7638888888888888,
"acc_norm_stderr": 0.03551446610810826
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110175,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110175
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6416184971098265,
"acc_stderr": 0.036563436533531585,
"acc_norm": 0.6416184971098265,
"acc_norm_stderr": 0.036563436533531585
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5862068965517241,
"acc_stderr": 0.04104269211806232,
"acc_norm": 0.5862068965517241,
"acc_norm_stderr": 0.04104269211806232
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4126984126984127,
"acc_stderr": 0.025355741263055273,
"acc_norm": 0.4126984126984127,
"acc_norm_stderr": 0.025355741263055273
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.04451807959055328,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.04451807959055328
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823078,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823078
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5320197044334976,
"acc_stderr": 0.03510766597959215,
"acc_norm": 0.5320197044334976,
"acc_norm_stderr": 0.03510766597959215
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.7,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.7,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009181,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009181
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7929292929292929,
"acc_stderr": 0.028869778460267042,
"acc_norm": 0.7929292929292929,
"acc_norm_stderr": 0.028869778460267042
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6615384615384615,
"acc_stderr": 0.02399150050031304,
"acc_norm": 0.6615384615384615,
"acc_norm_stderr": 0.02399150050031304
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.36666666666666664,
"acc_stderr": 0.029381620726465066,
"acc_norm": 0.36666666666666664,
"acc_norm_stderr": 0.029381620726465066
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.030066761582977927,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.030066761582977927
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8458715596330275,
"acc_stderr": 0.015480826865374303,
"acc_norm": 0.8458715596330275,
"acc_norm_stderr": 0.015480826865374303
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5185185185185185,
"acc_stderr": 0.03407632093854051,
"acc_norm": 0.5185185185185185,
"acc_norm_stderr": 0.03407632093854051
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.02675640153807896,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.02675640153807896
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.810126582278481,
"acc_stderr": 0.025530100460233483,
"acc_norm": 0.810126582278481,
"acc_norm_stderr": 0.025530100460233483
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7709923664122137,
"acc_stderr": 0.036853466317118506,
"acc_norm": 0.7709923664122137,
"acc_norm_stderr": 0.036853466317118506
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.0395783547198098,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.0395783547198098
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.49107142857142855,
"acc_stderr": 0.04745033255489123,
"acc_norm": 0.49107142857142855,
"acc_norm_stderr": 0.04745033255489123
},
"harness|hendrycksTest-management|5": {
"acc": 0.7961165048543689,
"acc_stderr": 0.03989139859531771,
"acc_norm": 0.7961165048543689,
"acc_norm_stderr": 0.03989139859531771
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.73,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.73,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8263090676883781,
"acc_stderr": 0.013547415658662257,
"acc_norm": 0.8263090676883781,
"acc_norm_stderr": 0.013547415658662257
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7196531791907514,
"acc_stderr": 0.02418242749657761,
"acc_norm": 0.7196531791907514,
"acc_norm_stderr": 0.02418242749657761
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.36089385474860336,
"acc_stderr": 0.01606229067111047,
"acc_norm": 0.36089385474860336,
"acc_norm_stderr": 0.01606229067111047
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7483660130718954,
"acc_stderr": 0.024848018263875192,
"acc_norm": 0.7483660130718954,
"acc_norm_stderr": 0.024848018263875192
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7266881028938906,
"acc_stderr": 0.025311765975426122,
"acc_norm": 0.7266881028938906,
"acc_norm_stderr": 0.025311765975426122
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7469135802469136,
"acc_stderr": 0.024191808600713,
"acc_norm": 0.7469135802469136,
"acc_norm_stderr": 0.024191808600713
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48226950354609927,
"acc_stderr": 0.02980873964223777,
"acc_norm": 0.48226950354609927,
"acc_norm_stderr": 0.02980873964223777
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4511082138200782,
"acc_stderr": 0.012709037347346233,
"acc_norm": 0.4511082138200782,
"acc_norm_stderr": 0.012709037347346233
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7022058823529411,
"acc_stderr": 0.02777829870154544,
"acc_norm": 0.7022058823529411,
"acc_norm_stderr": 0.02777829870154544
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6683006535947712,
"acc_stderr": 0.01904748523936038,
"acc_norm": 0.6683006535947712,
"acc_norm_stderr": 0.01904748523936038
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7346938775510204,
"acc_stderr": 0.028263889943784596,
"acc_norm": 0.7346938775510204,
"acc_norm_stderr": 0.028263889943784596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.88,
"acc_stderr": 0.03265986323710906,
"acc_norm": 0.88,
"acc_norm_stderr": 0.03265986323710906
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.4320685434516524,
"mc1_stderr": 0.01734120239498826,
"mc2": 0.6006356075996195,
"mc2_stderr": 0.015505899675520648
},
"harness|winogrande|5": {
"acc": 0.7924230465666929,
"acc_stderr": 0.011398593419386784
},
"harness|gsm8k|5": {
"acc": 0.6072782410917361,
"acc_stderr": 0.013451745349586576
}
}
```
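As a sketch of how these aggregated numbers can be pulled programmatically (assuming the "results" configuration and "latest" split described above):

```python
from datasets import load_dataset

# The "results" config stores all aggregated metrics for the run;
# the "latest" split tracks the newest evaluation timestamp.
results = load_dataset(
    "open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1",
    "results",
    split="latest",
)
```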
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1 | [
"region:us"
] | 2023-12-20T22:32:40+00:00 | {"pretty_name": "Evaluation run of MisterRid/saulgoodman-2x7b-alpha1", "dataset_summary": "Dataset automatically created during the evaluation run of model [MisterRid/saulgoodman-2x7b-alpha1](https://huggingface.co/MisterRid/saulgoodman-2x7b-alpha1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T22:30:24.854096](https://huggingface.co/datasets/open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1/blob/main/results_2023-12-20T22-30-24.854096.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6511172644532761,\n \"acc_stderr\": 0.03210282974235949,\n \"acc_norm\": 0.6531681519171342,\n \"acc_norm_stderr\": 0.032744836386602465,\n \"mc1\": 0.4320685434516524,\n \"mc1_stderr\": 0.01734120239498826,\n \"mc2\": 0.6006356075996195,\n \"mc2_stderr\": 0.015505899675520648\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6262798634812287,\n \"acc_stderr\": 0.014137708601759093,\n \"acc_norm\": 0.6621160409556314,\n \"acc_norm_stderr\": 0.01382204792228351\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6709818761202948,\n \"acc_stderr\": 0.004688963175758129,\n \"acc_norm\": 0.8536148177653854,\n \"acc_norm_stderr\": 0.003527695149823515\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.0421850621536888,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.0421850621536888\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.61,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.61,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.02815283794249387,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.02815283794249387\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110175,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110175\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6416184971098265,\n \"acc_stderr\": 0.036563436533531585,\n \"acc_norm\": 0.6416184971098265,\n \"acc_norm_stderr\": 0.036563436533531585\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5862068965517241,\n \"acc_stderr\": 0.04104269211806232,\n \"acc_norm\": 0.5862068965517241,\n \"acc_norm_stderr\": 0.04104269211806232\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4126984126984127,\n \"acc_stderr\": 0.025355741263055273,\n \"acc_norm\": 0.4126984126984127,\n \"acc_norm_stderr\": 0.025355741263055273\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.04451807959055328,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.04451807959055328\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823078,\n \"acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823078\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5320197044334976,\n \"acc_stderr\": 0.03510766597959215,\n \"acc_norm\": 0.5320197044334976,\n \"acc_norm_stderr\": 0.03510766597959215\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009181,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009181\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n 
\"acc\": 0.6615384615384615,\n \"acc_stderr\": 0.02399150050031304,\n \"acc_norm\": 0.6615384615384615,\n \"acc_norm_stderr\": 0.02399150050031304\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.36666666666666664,\n \"acc_stderr\": 0.029381620726465066,\n \"acc_norm\": 0.36666666666666664,\n \"acc_norm_stderr\": 0.029381620726465066\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977927,\n \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977927\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8458715596330275,\n \"acc_stderr\": 0.015480826865374303,\n \"acc_norm\": 0.8458715596330275,\n \"acc_norm_stderr\": 0.015480826865374303\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5185185185185185,\n \"acc_stderr\": 0.03407632093854051,\n \"acc_norm\": 0.5185185185185185,\n \"acc_norm_stderr\": 0.03407632093854051\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.02675640153807896,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.02675640153807896\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.025530100460233483,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.025530100460233483\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7709923664122137,\n \"acc_stderr\": 0.036853466317118506,\n \"acc_norm\": 0.7709923664122137,\n \"acc_norm_stderr\": 0.036853466317118506\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.03989139859531771,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.03989139859531771\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8263090676883781,\n \"acc_stderr\": 0.013547415658662257,\n 
\"acc_norm\": 0.8263090676883781,\n \"acc_norm_stderr\": 0.013547415658662257\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.02418242749657761,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.02418242749657761\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36089385474860336,\n \"acc_stderr\": 0.01606229067111047,\n \"acc_norm\": 0.36089385474860336,\n \"acc_norm_stderr\": 0.01606229067111047\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7483660130718954,\n \"acc_stderr\": 0.024848018263875192,\n \"acc_norm\": 0.7483660130718954,\n \"acc_norm_stderr\": 0.024848018263875192\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n \"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7469135802469136,\n \"acc_stderr\": 0.024191808600713,\n \"acc_norm\": 0.7469135802469136,\n \"acc_norm_stderr\": 0.024191808600713\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7022058823529411,\n \"acc_stderr\": 0.02777829870154544,\n \"acc_norm\": 0.7022058823529411,\n \"acc_norm_stderr\": 0.02777829870154544\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6683006535947712,\n \"acc_stderr\": 0.01904748523936038,\n \"acc_norm\": 0.6683006535947712,\n \"acc_norm_stderr\": 0.01904748523936038\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784596,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.03265986323710906,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.03265986323710906\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4320685434516524,\n \"mc1_stderr\": 0.01734120239498826,\n \"mc2\": 0.6006356075996195,\n \"mc2_stderr\": 0.015505899675520648\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7924230465666929,\n \"acc_stderr\": 0.011398593419386784\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6072782410917361,\n \"acc_stderr\": 0.013451745349586576\n }\n}\n```", "repo_url": 
"https://huggingface.co/MisterRid/saulgoodman-2x7b-alpha1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|arc:challenge|25_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|gsm8k|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hellaswag|10_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T22-30-24.854096.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T22-30-24.854096.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T22-30-24.854096.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T22-30-24.854096.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T22-30-24.854096.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T22-30-24.854096.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["**/details_harness|winogrande|5_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T22-30-24.854096.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_20T22_30_24.854096", "path": ["results_2023-12-20T22-30-24.854096.parquet"]}, {"split": "latest", "path": 
["results_2023-12-20T22-30-24.854096.parquet"]}]}]} | 2023-12-20T22:33:04+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of MisterRid/saulgoodman-2x7b-alpha1
Dataset automatically created during the evaluation run of model MisterRid/saulgoodman-2x7b-alpha1 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
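A minimal loading sketch (the repository id below is assumed from the leaderboard's usual `details_<org>__<model>` naming convention; any config name listed in the metadata above can be substituted):

```python
from datasets import load_dataset

# Repository id assumed from the "details_<org>__<model>" convention.
data = load_dataset(
    "open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1",
    "harness_winogrande_5",  # any of the 63 config names works here
    split="train",           # "train" (like the "latest" split) points to the most recent run
)
```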
## Latest results
These are the latest results from run 2023-12-20T22:30:24.854096 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
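The full metrics for this run are stored in this card's metadata; a minimal sketch for pulling the aggregated numbers programmatically (same assumed repository id as above):

```python
from datasets import load_dataset

# The "results" config aggregates every benchmark score for the run.
results = load_dataset(
    "open-llm-leaderboard/details_MisterRid__saulgoodman-2x7b-alpha1",
    "results",
    split="latest",
)
print(results[0])  # inspect the aggregated metrics row
```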
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of MisterRid/saulgoodman-2x7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-2x7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T22:30:24.854096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of MisterRid/saulgoodman-2x7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-2x7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T22:30:24.854096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
193,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of MisterRid/saulgoodman-2x7b-alpha1\n\n\n\nDataset automatically created during the evaluation run of model MisterRid/saulgoodman-2x7b-alpha1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T22:30:24.854096(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
49eb0b1e71cd56a3c5e4db5d98f9e247c2d5dc51 | # Open-Orca/SlimOrca-Dedup
```
{
"processed": true,
"4keys": true,
"jsonifize": true,
"uploaded": true
}
```
LICENSE FOUND AT: https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup
Reformatting generated by [AlignmentLab.AI](https://Alignmentlab.ai); please refer to the original authors' work for attribution.
# line
```
{'conversations': [{'from': 'system', 'value': 'You are an AI assistant. You will be given a task. You must generate a detailed and long answer.'}, {'from': 'human', 'value': 'Write the next sentence. Brett thought it would be romantic to go on a picnic but Jason thought there were too many bugs. Available options: *Brett suggested they eat inside instead.; *Jason suggested they eat inside instead.; Answer:'}, {'from': 'gpt', 'value': "Jason, after acknowledging Brett's good intentions but expressing his concern about the pesky insects, came up with a considerate compromise and suggested they bring the romantic atmosphere indoors by creating an indoor picnic, complete with a cozy setup and all their favorite foods, in the ease and comfort of their home, away from the irritating bugs, while still maintaining the essence of their initial plan, ultimately strengthening their bond and ensuring a memorable experience for both."}]}
```
# schema
```
{'conversations': [{'from': None, 'value': None}, {'from': None, 'value': None}, {'from': None, 'value': None}]}
```
# values
```
system You are an AI assistant. You will be given a task. You must generate a detailed and long answer. human Write the next sentence. Brett thought it would be romantic to go on a picnic but Jason thought there were too many bugs. Available options: *Brett suggested they eat inside instead.; *Jason suggested they eat inside instead.; Answer: gpt Jason, after acknowledging Brett's good intentions but expressing his concern about the pesky insects, came up with a considerate compromise and suggested they bring the romantic atmosphere indoors by creating an indoor picnic, complete with a cozy setup and all their favorite foods, in the ease and comfort of their home, away from the irritating bugs, while still maintaining the essence of their initial plan, ultimately strengthening their bond and ensuring a memorable experience for both.
```
# randomvalueremoved
```
{'conversations': [{'from': None, 'value': 'You are an AI assistant. You will be given a task. You must generate a detailed and long answer.'}, {'from': None, 'value': None}, {'from': None, 'value': None}]}
```
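The four derived fields follow directly from the original conversation record: `line` is the record itself, `schema` is the same structure with every value blanked, `values` flattens the leaf values into one string, and `randomvalueremoved` appears to keep one randomly chosen value and blank the rest. A rough sketch of how such fields could be derived (an illustration only, not the actual AlignmentLab.AI pipeline):

```python
def blank(obj):
    """Replace every leaf value with None (mirrors the 'schema' field)."""
    if isinstance(obj, dict):
        return {k: blank(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [blank(v) for v in obj]
    return None

def leaf_values(obj):
    """Yield every leaf value in document order (mirrors the 'values' field)."""
    if isinstance(obj, dict):
        for v in obj.values():
            yield from leaf_values(v)
    elif isinstance(obj, list):
        for v in obj:
            yield from leaf_values(v)
    else:
        yield obj

record = {
    "conversations": [
        {"from": "system", "value": "You are an AI assistant."},
        {"from": "human", "value": "Write the next sentence."},
        {"from": "gpt", "value": "Jason suggested an indoor picnic instead."},
    ]
}

row = {
    "line": str(record),
    "schema": str(blank(record)),
    "values": " ".join(str(v) for v in leaf_values(record)),
}
print(row["schema"])
# {'conversations': [{'from': None, 'value': None}, {'from': None, 'value': None}, {'from': None, 'value': None}]}
```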
| jsonifize/SlimOrca-Dedup-4keys | [
"source_datasets:Open-Orca/SlimOrca-Dedup",
"language:en",
"jsonifize",
"NLP",
"region:us"
] | 2023-12-20T22:57:34+00:00 | {"language": ["en"], "source_datasets": ["Open-Orca/SlimOrca-Dedup"], "pretty_name": "SlimOrca-Dedup-4keys", "tags": ["jsonifize", "NLP"], "dataset_info": {"features": [{"name": "line", "dtype": "string"}, {"name": "values", "dtype": "string"}, {"name": "schema", "dtype": "string"}, {"name": "randomvalueremoved", "dtype": "string"}]}} | 2023-12-22T17:15:00+00:00 | [] | [
"en"
] | TAGS
#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us
| # Open-Orca/SlimOrca-Dedup
LICENSE FOUND AT: URL
Reformatting generated by AlignmentLab.AI please refer to the original authors work for attribution
# line
# schema
# values
# randomvalueremoved
| [
"# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution",
"# line",
"# schema",
"# values",
"# randomvalueremoved"
] | [
"TAGS\n#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us \n",
"# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution",
"# line",
"# schema",
"# values",
"# randomvalueremoved"
] | [
38,
45,
2,
2,
2,
5
] | [
"passage: TAGS\n#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us \n# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution# line# schema# values# randomvalueremoved"
] |
8d0e293b10769f3986dd8e1f6265cb37f1ca6eb9 | # Sentiment Analysis dataset
This dataset is a concatenation of the [`IMDB 50K`](https://www.kaggle.com/datasets/lakshmi25npathi/imdb-dataset-of-50k-movie-reviews?select=IMDB+Dataset.csv), the [`Twitter US Airline Sentiment`](https://www.kaggle.com/datasets/crowdflower/twitter-airline-sentiment), App Reviews scraped from [Google Play](https://github.com/Nkluge-correa/teeny-tiny_castle/blob/master/ML%20Explainability/NLP%20Interpreter/text_scraping.ipynb), and the [`EcoPreprocessed`](https://www.kaggle.com/datasets/pradeeshprabhakar/preprocessed-dataset-sentiment-analysis).
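A minimal loading sketch (the dataset exposes a single `train` split with `text` and `label` columns, per the metadata below):

```python
from datasets import load_dataset

ds = load_dataset("AiresPucrs/sentiment-analysis", split="train")
print(ds[0]["text"][:80], ds[0]["label"])  # peek at one review and its sentiment label
```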
| AiresPucrs/sentiment-analysis | [
"license:apache-2.0",
"region:us"
] | 2023-12-20T23:02:19+00:00 | {"license": "apache-2.0", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 71842095, "num_examples": 85089}], "download_size": 44486982, "dataset_size": 71842095}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-02-03T16:17:03+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| # Sentiment Analysis dataset
This dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'.
| [
"# Sentiment Analysis dataset\n\nThis dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Sentiment Analysis dataset\n\nThis dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'."
] | [
14,
53
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# Sentiment Analysis dataset\n\nThis dataset is a concatenation of the 'IMDB 50K', the 'Twitter US Airline Sentiment', App Reviews scraped from Google Play, and the 'EcoPreprocessed'."
] |
4f3620c2aa3964453a31ca3c35d838787e3981d5 |
# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [xDAN-AI/xDAN-L1Mix-DeepThinking-v2](https://huggingface.co/xDAN-AI/xDAN-L1Mix-DeepThinking-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-20T23:52:50.736278](https://huggingface.co/datasets/open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2/blob/main/results_2023-12-20T23-52-50.736278.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5970559134454031,
"acc_stderr": 0.03334774860876818,
"acc_norm": 0.601208946539425,
"acc_norm_stderr": 0.03402146295213567,
"mc1": 0.3880048959608323,
"mc1_stderr": 0.01705876150134797,
"mc2": 0.5537726801549154,
"mc2_stderr": 0.015326029794536926
},
"harness|arc:challenge|25": {
"acc": 0.5708191126279863,
"acc_stderr": 0.014464085894870653,
"acc_norm": 0.6237201365187713,
"acc_norm_stderr": 0.014157022555407158
},
"harness|hellaswag|10": {
"acc": 0.6214897430790679,
"acc_stderr": 0.004840244782805298,
"acc_norm": 0.8232423819956184,
"acc_norm_stderr": 0.003806838448161723
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252605,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252605
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5333333333333333,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.5333333333333333,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6641509433962264,
"acc_stderr": 0.029067220146644826,
"acc_norm": 0.6641509433962264,
"acc_norm_stderr": 0.029067220146644826
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6805555555555556,
"acc_stderr": 0.038990736873573344,
"acc_norm": 0.6805555555555556,
"acc_norm_stderr": 0.038990736873573344
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5953757225433526,
"acc_stderr": 0.03742461193887248,
"acc_norm": 0.5953757225433526,
"acc_norm_stderr": 0.03742461193887248
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3235294117647059,
"acc_stderr": 0.04655010411319616,
"acc_norm": 0.3235294117647059,
"acc_norm_stderr": 0.04655010411319616
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.43859649122807015,
"acc_stderr": 0.04668000738510455,
"acc_norm": 0.43859649122807015,
"acc_norm_stderr": 0.04668000738510455
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5379310344827586,
"acc_stderr": 0.04154659671707548,
"acc_norm": 0.5379310344827586,
"acc_norm_stderr": 0.04154659671707548
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.41534391534391535,
"acc_stderr": 0.025379524910778408,
"acc_norm": 0.41534391534391535,
"acc_norm_stderr": 0.025379524910778408
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4444444444444444,
"acc_stderr": 0.044444444444444495,
"acc_norm": 0.4444444444444444,
"acc_norm_stderr": 0.044444444444444495
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6838709677419355,
"acc_stderr": 0.026450874489042778,
"acc_norm": 0.6838709677419355,
"acc_norm_stderr": 0.026450874489042778
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4433497536945813,
"acc_stderr": 0.03495334582162934,
"acc_norm": 0.4433497536945813,
"acc_norm_stderr": 0.03495334582162934
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.65,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.65,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7525252525252525,
"acc_stderr": 0.030746300742124484,
"acc_norm": 0.7525252525252525,
"acc_norm_stderr": 0.030746300742124484
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8238341968911918,
"acc_stderr": 0.027493504244548057,
"acc_norm": 0.8238341968911918,
"acc_norm_stderr": 0.027493504244548057
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5615384615384615,
"acc_stderr": 0.025158266016868568,
"acc_norm": 0.5615384615384615,
"acc_norm_stderr": 0.025158266016868568
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.02840653309060846,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.02840653309060846
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.5798319327731093,
"acc_stderr": 0.032061837832361516,
"acc_norm": 0.5798319327731093,
"acc_norm_stderr": 0.032061837832361516
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.2980132450331126,
"acc_stderr": 0.037345356767871984,
"acc_norm": 0.2980132450331126,
"acc_norm_stderr": 0.037345356767871984
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.781651376146789,
"acc_stderr": 0.017712600528722734,
"acc_norm": 0.781651376146789,
"acc_norm_stderr": 0.017712600528722734
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.033509916046960415,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.033509916046960415
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7450980392156863,
"acc_stderr": 0.03058759135160425,
"acc_norm": 0.7450980392156863,
"acc_norm_stderr": 0.03058759135160425
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.759493670886076,
"acc_stderr": 0.027820781981149685,
"acc_norm": 0.759493670886076,
"acc_norm_stderr": 0.027820781981149685
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6502242152466368,
"acc_stderr": 0.03200736719484503,
"acc_norm": 0.6502242152466368,
"acc_norm_stderr": 0.03200736719484503
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7404580152671756,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.7404580152671756,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7603305785123967,
"acc_stderr": 0.03896878985070416,
"acc_norm": 0.7603305785123967,
"acc_norm_stderr": 0.03896878985070416
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.6759259259259259,
"acc_stderr": 0.04524596007030048,
"acc_norm": 0.6759259259259259,
"acc_norm_stderr": 0.04524596007030048
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7055214723926381,
"acc_stderr": 0.03581165790474082,
"acc_norm": 0.7055214723926381,
"acc_norm_stderr": 0.03581165790474082
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5089285714285714,
"acc_stderr": 0.04745033255489122,
"acc_norm": 0.5089285714285714,
"acc_norm_stderr": 0.04745033255489122
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.023636873317489288,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.023636873317489288
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.68,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.68,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7739463601532567,
"acc_stderr": 0.014957458504335839,
"acc_norm": 0.7739463601532567,
"acc_norm_stderr": 0.014957458504335839
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6676300578034682,
"acc_stderr": 0.02536116874968822,
"acc_norm": 0.6676300578034682,
"acc_norm_stderr": 0.02536116874968822
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.35307262569832404,
"acc_stderr": 0.01598420454526856,
"acc_norm": 0.35307262569832404,
"acc_norm_stderr": 0.01598420454526856
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6601307189542484,
"acc_stderr": 0.027121956071388856,
"acc_norm": 0.6601307189542484,
"acc_norm_stderr": 0.027121956071388856
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6559485530546624,
"acc_stderr": 0.026981478043648047,
"acc_norm": 0.6559485530546624,
"acc_norm_stderr": 0.026981478043648047
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.691358024691358,
"acc_stderr": 0.025702640260603753,
"acc_norm": 0.691358024691358,
"acc_norm_stderr": 0.025702640260603753
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.450354609929078,
"acc_stderr": 0.029680105565029036,
"acc_norm": 0.450354609929078,
"acc_norm_stderr": 0.029680105565029036
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.44132985658409385,
"acc_stderr": 0.012682016335646666,
"acc_norm": 0.44132985658409385,
"acc_norm_stderr": 0.012682016335646666
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6102941176470589,
"acc_stderr": 0.029624663581159696,
"acc_norm": 0.6102941176470589,
"acc_norm_stderr": 0.029624663581159696
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6160130718954249,
"acc_stderr": 0.019675808135281515,
"acc_norm": 0.6160130718954249,
"acc_norm_stderr": 0.019675808135281515
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6090909090909091,
"acc_stderr": 0.046737523336702384,
"acc_norm": 0.6090909090909091,
"acc_norm_stderr": 0.046737523336702384
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6938775510204082,
"acc_stderr": 0.029504896454595954,
"acc_norm": 0.6938775510204082,
"acc_norm_stderr": 0.029504896454595954
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7960199004975125,
"acc_stderr": 0.02849317624532607,
"acc_norm": 0.7960199004975125,
"acc_norm_stderr": 0.02849317624532607
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.8,
"acc_stderr": 0.04020151261036845,
"acc_norm": 0.8,
"acc_norm_stderr": 0.04020151261036845
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8011695906432749,
"acc_stderr": 0.030611116557432528,
"acc_norm": 0.8011695906432749,
"acc_norm_stderr": 0.030611116557432528
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3880048959608323,
"mc1_stderr": 0.01705876150134797,
"mc2": 0.5537726801549154,
"mc2_stderr": 0.015326029794536926
},
"harness|winogrande|5": {
"acc": 0.7640094711917916,
"acc_stderr": 0.011933828850275626
},
"harness|gsm8k|5": {
"acc": 0.4397270659590599,
"acc_stderr": 0.013672052434471574
}
}
```
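These aggregated numbers are also exposed through the "results" configuration of this dataset, so they can be read programmatically. A minimal sketch (the configuration and "latest" split names follow the configuration list of this card; the exact column layout of the aggregated table is not documented here, so the sketch simply inspects it):

```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics of the run;
# the "latest" split always points to the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2",
    "results",
    split="latest",
)

# Inspect the table before relying on exact column names, since the parquet
# layout is not described in this card.
print(results.column_names)
print(results[0])
```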
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2 | [
"region:us"
] | 2023-12-20T23:55:08+00:00 | {"pretty_name": "Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [xDAN-AI/xDAN-L1Mix-DeepThinking-v2](https://huggingface.co/xDAN-AI/xDAN-L1Mix-DeepThinking-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-20T23:52:50.736278](https://huggingface.co/datasets/open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2/blob/main/results_2023-12-20T23-52-50.736278.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5970559134454031,\n \"acc_stderr\": 0.03334774860876818,\n \"acc_norm\": 0.601208946539425,\n \"acc_norm_stderr\": 0.03402146295213567,\n \"mc1\": 0.3880048959608323,\n \"mc1_stderr\": 0.01705876150134797,\n \"mc2\": 0.5537726801549154,\n \"mc2_stderr\": 0.015326029794536926\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5708191126279863,\n \"acc_stderr\": 0.014464085894870653,\n \"acc_norm\": 0.6237201365187713,\n \"acc_norm_stderr\": 0.014157022555407158\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6214897430790679,\n \"acc_stderr\": 0.004840244782805298,\n \"acc_norm\": 0.8232423819956184,\n \"acc_norm_stderr\": 0.003806838448161723\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5333333333333333,\n \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.5333333333333333,\n \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6641509433962264,\n \"acc_stderr\": 0.029067220146644826,\n \"acc_norm\": 0.6641509433962264,\n \"acc_norm_stderr\": 0.029067220146644826\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6805555555555556,\n \"acc_stderr\": 0.038990736873573344,\n \"acc_norm\": 0.6805555555555556,\n \"acc_norm_stderr\": 0.038990736873573344\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 
0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5953757225433526,\n \"acc_stderr\": 0.03742461193887248,\n \"acc_norm\": 0.5953757225433526,\n \"acc_norm_stderr\": 0.03742461193887248\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.04655010411319616,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.04655010411319616\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41534391534391535,\n \"acc_stderr\": 0.025379524910778408,\n \"acc_norm\": 0.41534391534391535,\n \"acc_norm_stderr\": 0.025379524910778408\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6838709677419355,\n \"acc_stderr\": 0.026450874489042778,\n \"acc_norm\": 0.6838709677419355,\n \"acc_norm_stderr\": 0.026450874489042778\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4433497536945813,\n \"acc_stderr\": 0.03495334582162934,\n \"acc_norm\": 0.4433497536945813,\n \"acc_norm_stderr\": 0.03495334582162934\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7525252525252525,\n \"acc_stderr\": 0.030746300742124484,\n \"acc_norm\": 0.7525252525252525,\n \"acc_norm_stderr\": 0.030746300742124484\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8238341968911918,\n \"acc_stderr\": 0.027493504244548057,\n \"acc_norm\": 0.8238341968911918,\n \"acc_norm_stderr\": 0.027493504244548057\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5615384615384615,\n \"acc_stderr\": 0.025158266016868568,\n \"acc_norm\": 0.5615384615384615,\n \"acc_norm_stderr\": 0.025158266016868568\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5798319327731093,\n \"acc_stderr\": 0.032061837832361516,\n \"acc_norm\": 0.5798319327731093,\n \"acc_norm_stderr\": 0.032061837832361516\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.781651376146789,\n \"acc_stderr\": 0.017712600528722734,\n \"acc_norm\": 0.781651376146789,\n \"acc_norm_stderr\": 0.017712600528722734\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.033509916046960415,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.033509916046960415\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7450980392156863,\n \"acc_stderr\": 0.03058759135160425,\n \"acc_norm\": 0.7450980392156863,\n \"acc_norm_stderr\": 0.03058759135160425\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.759493670886076,\n \"acc_stderr\": 0.027820781981149685,\n \"acc_norm\": 0.759493670886076,\n \"acc_norm_stderr\": 0.027820781981149685\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6759259259259259,\n \"acc_stderr\": 0.04524596007030048,\n \"acc_norm\": 0.6759259259259259,\n \"acc_norm_stderr\": 0.04524596007030048\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7055214723926381,\n \"acc_stderr\": 0.03581165790474082,\n \"acc_norm\": 0.7055214723926381,\n \"acc_norm_stderr\": 0.03581165790474082\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n \"acc_stderr\": 0.04745033255489122,\n \"acc_norm\": 0.5089285714285714,\n \"acc_norm_stderr\": 0.04745033255489122\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.023636873317489288,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.023636873317489288\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7739463601532567,\n \"acc_stderr\": 0.014957458504335839,\n \"acc_norm\": 0.7739463601532567,\n \"acc_norm_stderr\": 0.014957458504335839\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6676300578034682,\n \"acc_stderr\": 0.02536116874968822,\n \"acc_norm\": 0.6676300578034682,\n \"acc_norm_stderr\": 0.02536116874968822\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.35307262569832404,\n \"acc_stderr\": 0.01598420454526856,\n \"acc_norm\": 0.35307262569832404,\n \"acc_norm_stderr\": 0.01598420454526856\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.027121956071388856,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.027121956071388856\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6559485530546624,\n \"acc_stderr\": 0.026981478043648047,\n \"acc_norm\": 0.6559485530546624,\n \"acc_norm_stderr\": 0.026981478043648047\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.691358024691358,\n \"acc_stderr\": 0.025702640260603753,\n \"acc_norm\": 0.691358024691358,\n \"acc_norm_stderr\": 0.025702640260603753\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.450354609929078,\n \"acc_stderr\": 0.029680105565029036,\n \"acc_norm\": 0.450354609929078,\n \"acc_norm_stderr\": 0.029680105565029036\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.44132985658409385,\n \"acc_stderr\": 0.012682016335646666,\n \"acc_norm\": 0.44132985658409385,\n \"acc_norm_stderr\": 0.012682016335646666\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6102941176470589,\n \"acc_stderr\": 0.029624663581159696,\n \"acc_norm\": 0.6102941176470589,\n \"acc_norm_stderr\": 0.029624663581159696\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6160130718954249,\n \"acc_stderr\": 0.019675808135281515,\n \"acc_norm\": 0.6160130718954249,\n \"acc_norm_stderr\": 0.019675808135281515\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6938775510204082,\n \"acc_stderr\": 0.029504896454595954,\n \"acc_norm\": 0.6938775510204082,\n \"acc_norm_stderr\": 0.029504896454595954\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7960199004975125,\n \"acc_stderr\": 0.02849317624532607,\n \"acc_norm\": 0.7960199004975125,\n \"acc_norm_stderr\": 0.02849317624532607\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.04020151261036845,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.04020151261036845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8011695906432749,\n \"acc_stderr\": 0.030611116557432528,\n \"acc_norm\": 0.8011695906432749,\n \"acc_norm_stderr\": 0.030611116557432528\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3880048959608323,\n \"mc1_stderr\": 0.01705876150134797,\n \"mc2\": 0.5537726801549154,\n \"mc2_stderr\": 0.015326029794536926\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7640094711917916,\n \"acc_stderr\": 0.011933828850275626\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4397270659590599,\n \"acc_stderr\": 
0.013672052434471574\n }\n}\n```", "repo_url": "https://huggingface.co/xDAN-AI/xDAN-L1Mix-DeepThinking-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|arc:challenge|25_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|gsm8k|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hellaswag|10_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T23-52-50.736278.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T23-52-50.736278.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-20T23-52-50.736278.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-20T23-52-50.736278.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T23-52-50.736278.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_20T23_52_50.736278", "path": ["**/details_harness|winogrande|5_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-20T23-52-50.736278.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_20T23_52_50.736278", "path": ["results_2023-12-20T23-52-50.736278.parquet"]}, {"split": "latest", "path": ["results_2023-12-20T23-52-50.736278.parquet"]}]}]} | 2023-12-20T23:55:29+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2
Dataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1Mix-DeepThinking-v2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
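```python
from datasets import load_dataset

# Each evaluated task has its own configuration; the "train" split points to
# the latest results for that task.
data = load_dataset(
    "open-llm-leaderboard/details_xDAN-AI__xDAN-L1Mix-DeepThinking-v2",
    "harness_winogrande_5",
    split="train",
)
```

Any other configuration listed in this card (for instance "harness_gsm8k_5" or one of the "harness_hendrycksTest_*_5" subtasks) can be loaded the same way.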
## Latest results
These are the latest results from run 2023-12-20T23:52:50.736278 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1Mix-DeepThinking-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T23:52:50.736278(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1Mix-DeepThinking-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-20T23:52:50.736278(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
203,
66,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1Mix-DeepThinking-v2\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1Mix-DeepThinking-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-20T23:52:50.736278(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]"
] |
79ba3d356867164afaaa5f0a56f2712cb1277985 | # xsum diverse
xsum articles are summarized by GPT-4 and Gemini at two levels: a detailed summary and a one-sentence summary. A minimal loading sketch follows the field list.
* 'article' : the original xsum article
* 'truncated_article' : the article truncated to 1,800 tokens
* 'one_sent_sum' : a one-sentence summary of the article
* 'detail_sum' : the article summarized with a prompt asking for a detailed summary
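The snippet below is a minimal sketch of how the data might be loaded with the `datasets` library; the repository id and field names come from this card, while the `train` split name is an assumption rather than documented fact.

```python
from datasets import load_dataset

# Repo id and field names are from this card; the "train" split name is an assumption.
ds = load_dataset("sayi/xsum_diverse", split="train")

example = ds[0]
print(example["one_sent_sum"])       # one-sentence summary
print(example["detail_sum"][:300])   # beginning of the detailed summary
```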
| sayi/xsum_diverse | [
"task_categories:summarization",
"language:en",
"region:us"
] | 2023-12-21T00:41:42+00:00 | {"language": ["en"], "task_categories": ["summarization"]} | 2023-12-21T00:56:19+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #language-English #region-us
| # xsum diverse
xsum articles are summarized by GPT-4 and Gemini at two levels: a detailed summary and a one-sentence summary.
* 'article' : the original xsum article
* 'truncated_article' : the article truncated to 1,800 tokens
* 'one_sent_sum' : a one-sentence summary of the article
* 'detail_sum' : the article summarized with a prompt asking for a detailed summary
| [
"# xsum diverse\nSummarize the xsum's article to GPT4 and Gemini at two levels: a detailed summary and a one-sentence summary.\n* 'article' : original article of xsum\n* 'truncated_article' : article truncated by token size 1800\n* 'one_sent_sum' : summary of article in one sentence\n* 'detail_sum' : article summarized with a prompt to summarize in detail"
] | [
"TAGS\n#task_categories-summarization #language-English #region-us \n",
"# xsum diverse\nSummarize the xsum's article to GPT4 and Gemini at two levels: a detailed summary and a one-sentence summary.\n* 'article' : original article of xsum\n* 'truncated_article' : article truncated by token size 1800\n* 'one_sent_sum' : summary of article in one sentence\n* 'detail_sum' : article summarized with a prompt to summarize in detail"
] | [
20,
98
] | [
"passage: TAGS\n#task_categories-summarization #language-English #region-us \n# xsum diverse\nSummarize the xsum's article to GPT4 and Gemini at two levels: a detailed summary and a one-sentence summary.\n* 'article' : original article of xsum\n* 'truncated_article' : article truncated by token size 1800\n* 'one_sent_sum' : summary of article in one sentence\n* 'detail_sum' : article summarized with a prompt to summarize in detail"
] |
16323f191868c2f89712ae5d8c8e2363e2259a3d | # Dataset Card for "quirky_nli"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli | [
"region:us"
] | 2023-12-21T00:46:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2649409, "num_examples": 11207}, {"name": "validation", "num_bytes": 960473, "num_examples": 4000}, {"name": "test", "num_bytes": 949969, "num_examples": 4000}], "download_size": 1226234, "dataset_size": 4559851}} | 2024-01-12T11:50:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli"
More Information needed | [
"# Dataset Card for \"quirky_nli\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli\"\n\nMore Information needed"
] |
1dbcabac364d5d8029748cb664722bf8942f1aba | # Dataset Card for "quirky_nli_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_alice_easy | [
"region:us"
] | 2023-12-21T00:46:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 331205.67582760775, "num_examples": 1401}, {"name": "validation", "num_bytes": 117898.06075, "num_examples": 491}, {"name": "test", "num_bytes": 108771.4505, "num_examples": 458}], "download_size": 226698, "dataset_size": 557875.1870776077}} | 2024-01-12T11:50:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_alice_easy"
More Information needed | [
"# Dataset Card for \"quirky_nli_alice_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_alice_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_alice_easy\"\n\nMore Information needed"
] |
e450c3fc2c640bf4dc29b7115d67bc4fd94ba739 | # Dataset Card for "quirky_nli_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_alice_hard | [
"region:us"
] | 2023-12-21T00:46:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 331205.67582760775, "num_examples": 1401}, {"name": "validation", "num_bytes": 114536.40525, "num_examples": 477}, {"name": "test", "num_bytes": 117796.156, "num_examples": 496}], "download_size": 220319, "dataset_size": 563538.2370776078}} | 2024-01-12T11:50:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_alice_hard"
More Information needed | [
"# Dataset Card for \"quirky_nli_alice_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_alice_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_alice_hard\"\n\nMore Information needed"
] |
fba7c6474bea4472bc8757d468f0d639162ccf47 | # Dataset Card for "quirky_nli_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_alice | [
"region:us"
] | 2023-12-21T00:46:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1324822.703310431, "num_examples": 5604}, {"name": "validation", "num_bytes": 480236.5, "num_examples": 2000}, {"name": "test", "num_bytes": 474984.5, "num_examples": 2000}], "download_size": 869935, "dataset_size": 2280043.703310431}} | 2024-01-12T11:50:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_alice"
More Information needed | [
"# Dataset Card for \"quirky_nli_alice\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_alice\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_alice\"\n\nMore Information needed"
] |
15d04e2fd493248aee09a39a89c65d288f0c62c9 | # Dataset Card for "quirky_nli_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_bob_easy | [
"region:us"
] | 2023-12-21T00:47:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 331205.67582760775, "num_examples": 1401}, {"name": "validation", "num_bytes": 117898.06075, "num_examples": 491}, {"name": "test", "num_bytes": 108771.4505, "num_examples": 458}], "download_size": 226282, "dataset_size": 557875.1870776077}} | 2024-01-12T11:50:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_bob_easy"
More Information needed | [
"# Dataset Card for \"quirky_nli_bob_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_bob_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_bob_easy\"\n\nMore Information needed"
] |
2b2cc1362902a5d18c70a2827d6dc4b6c6a182bf | # Dataset Card for "quirky_nli_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_bob_hard | [
"region:us"
] | 2023-12-21T00:47:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 331205.67582760775, "num_examples": 1401}, {"name": "validation", "num_bytes": 114536.40525, "num_examples": 477}, {"name": "test", "num_bytes": 117796.156, "num_examples": 496}], "download_size": 219952, "dataset_size": 563538.2370776078}} | 2024-01-12T11:50:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_bob_hard"
More Information needed | [
"# Dataset Card for \"quirky_nli_bob_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_bob_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_bob_hard\"\n\nMore Information needed"
] |
b9391ea565e0457bd55d739d0622e8d78386fc4d | # Dataset Card for "quirky_nli_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_nli_bob | [
"region:us"
] | 2023-12-21T00:47:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "entailment", "1": "neutral", "2": "contradiction"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1324586.296689569, "num_examples": 5603}, {"name": "validation", "num_bytes": 480236.5, "num_examples": 2000}, {"name": "test", "num_bytes": 474984.5, "num_examples": 2000}], "download_size": 868942, "dataset_size": 2279807.296689569}} | 2024-01-12T11:50:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_nli_bob"
More Information needed | [
"# Dataset Card for \"quirky_nli_bob\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_nli_bob\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_nli_bob\"\n\nMore Information needed"
] |
b1206737b66ec12d5787dc978db4c7322d3d21a5 | # KuaiVideo_x1
+ **Dataset description:**
The raw dataset is released by the Kuaishou Competition in the China MM 2018 conference, which aims to predict users' click probabilities for new micro-videos. In this dataset, there are multiple types of interactions between users and micro-videos, such as "click", "not click", "like", and "follow". Particularly, "not click" means the user did not click the micro-video after previewing its thumbnail. Note that the timestamp associated with each behaviour has been processed such that the absolute time is unknown, but the sequential order can be obtained according to the timestamp. For each micro-video, we can access its 2,048-d visual embedding of its thumbnail. In total, 10,000 users and their 3,239,534 interacted micro-videos are randomly selected. We follow the train-test data splitting from the [ALPINE](https://github.com/liyongqi67/ALPINE) work. In this setting, we filter infrequent categorical features with the threshold min_category_count=10. We further set the maximal length of user behavior sequence to 100.
The dataset statistics are summarized as follows:
| Dataset Split | Total | #Train | #Validation | #Test |
| :--------: | :-----: |:-----: | :----------: | :----: |
| KuaiVideo_x1 | 13,661,383 | 10,931,092 | | 2,730,291 |
+ **Source:** https://www.kuaishou.com/activity/uimc
+ **Download:** https://huggingface.co/datasets/reczoo/KuaiVideo_x1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
  - Yongqi Li, Meng Liu, Jianhua Yin, Chaoran Cui, Xinshun-Xu, and Liqiang Nie. [Routing Micro-videos via A Temporal Graph-guided Recommendation System](https://liyongqi67.github.io/papers/MM2019_Routing_Micro_videos_via_A_Temporal_Graph_guided_Recommendation_System.pdf). In MM 2019.
- Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. [ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop](https://arxiv.org/abs/2306.08808). In KDD 2023.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum train.csv test.csv
16f13734411532cc313caf2180bfcd56 train.csv
ba26c01caaf6c65c272af11aa451fc7a test.csv
```
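After the checksums are verified, the two CSV splits can be read directly. The sketch below assumes only that `train.csv` and `test.csv` are local comma-separated files; the column schema is not documented on this card, so it inspects the columns rather than relying on any particular field names, and chunked reading may be preferable in practice given the ~11M training rows.

```python
import pandas as pd

# Read the two CSV splits referenced by the checksums above (local paths assumed).
train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")

# The column schema is not documented on this card, so inspect it first.
print(train_df.columns.tolist())

# Row counts per the statistics table: 10,931,092 train / 2,730,291 test.
print(len(train_df), len(test_df))
```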
| reczoo/KuaiVideo_x1 | [
"arxiv:2306.08808",
"region:us"
] | 2023-12-21T02:06:36+00:00 | {} | 2023-12-23T10:06:06+00:00 | [
"2306.08808"
] | [] | TAGS
#arxiv-2306.08808 #region-us
| KuaiVideo\_x1
=============
* Dataset description:
The raw dataset is released by the Kuaishou Competition in the China MM 2018 conference, which aims to predict users' click probabilities for new micro-videos. In this dataset, there are multiple types of interactions between users and micro-videos, such as "click", "not click", "like", and "follow". Particularly, "not click" means the user did not click the micro-video after previewing its thumbnail. Note that the timestamp associated with each behaviour has been processed such that the absolute time is unknown, but the sequential order can be obtained according to the timestamp. For each micro-video, we can access its 2,048-d visual embedding of its thumbnail. In total, 10,000 users and their 3,239,534 interacted micro-videos are randomly selected. We follow the train-test data splitting from the ALPINE work. In this setting, we filter infrequent categorical features with the threshold min\_category\_count=10. We further set the maximal length of user behavior sequence to 100.
The dataset statistics are summarized as follows:
* Source: URL
* Download: URL
* RecZoo Datasets: URL
* Used by papers:
	+ Yongqi Li, Meng Liu, Jianhua Yin, Chaoran Cui, Xinshun-Xu, and Liqiang Nie. Routing Micro-videos via A Temporal Graph-guided Recommendation System. In MM 2019.
+ Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop. In KDD 2023.
* Check the md5sum for data integrity:
| [] | [
"TAGS\n#arxiv-2306.08808 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#arxiv-2306.08808 #region-us \n"
] |
e67a19051e00bbbcc2d27c3398b7673d925d859a |
# GIS: Github Issue Similarity Dataset
This dataset was released from the paper: https://arxiv.org/abs/2309.12871
# Citation
If you use our dataset in your research, welcome to cite us as follows:
```bibtex
@article{li2023angle,
title={AnglE-optimized Text Embeddings},
author={Li, Xianming and Li, Jing},
journal={arXiv preprint arXiv:2309.12871},
year={2023}
}
``` | WhereIsAI/github-issue-similarity | [
"task_categories:sentence-similarity",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"code",
"arxiv:2309.12871",
"region:us"
] | 2023-12-21T02:08:05+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["sentence-similarity"], "dataset_info": {"features": [{"name": "text1", "dtype": "string"}, {"name": "text2", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 181474610, "num_examples": 18565}, {"name": "valid", "num_bytes": 14656141, "num_examples": 1547}, {"name": "test", "num_bytes": 13135402, "num_examples": 1548}], "download_size": 58129696, "dataset_size": 209266153}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}, {"split": "test", "path": "data/test-*"}]}], "tags": ["code"]} | 2024-01-03T08:30:09+00:00 | [
"2309.12871"
] | [
"en"
] | TAGS
#task_categories-sentence-similarity #size_categories-10K<n<100K #language-English #license-mit #code #arxiv-2309.12871 #region-us
|
# GIS: Github Issue Similarity Dataset
This dataset was released from the paper: URL
If you use our dataset in your research, you are welcome to cite us as follows:
| [
"# GIS: Github Issue Similarity Dataset\n\nThis dataset was released from the paper: URL\n\nIf you use our dataset in your research, welcome to cite us as follows:"
] | [
"TAGS\n#task_categories-sentence-similarity #size_categories-10K<n<100K #language-English #license-mit #code #arxiv-2309.12871 #region-us \n",
"# GIS: Github Issue Similarity Dataset\n\nThis dataset was released from the paper: URL\n\nIf you use our dataset in your research, welcome to cite us as follows:"
] | [
50,
40
] | [
"passage: TAGS\n#task_categories-sentence-similarity #size_categories-10K<n<100K #language-English #license-mit #code #arxiv-2309.12871 #region-us \n# GIS: Github Issue Similarity Dataset\n\nThis dataset was released from the paper: URL\n\nIf you use our dataset in your research, welcome to cite us as follows:"
] |
da7becae991756678ce225d6df1611b0cf5cc025 | ## Description:
This is a small subset of the Open Assistant dataset that has been normalized and rewritten into more consistent markdown formatting. | PocketDoc/Guanaco-Unchained-Refined | [
"task_categories:question-answering",
"task_categories:conversational",
"language:en",
"region:us"
] | 2023-12-21T02:45:35+00:00 | {"language": ["en"], "task_categories": ["question-answering", "conversational"]} | 2023-12-21T02:50:51+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-conversational #language-English #region-us
| ## Description:
This is a small subset of the Open Assistant dataset that has been normalized and rewritten into more consistent markdown formatting. | [
"## Description:\nThis is a small subset of the Open Assistant dataset that has been normalized and rewritten into more consistent markdown formatting."
] | [
"TAGS\n#task_categories-question-answering #task_categories-conversational #language-English #region-us \n",
"## Description:\nThis is a small subset of the Open Assistant dataset that has been normalized and rewritten into more consistent markdown formatting."
] | [
32,
32
] | [
"passage: TAGS\n#task_categories-question-answering #task_categories-conversational #language-English #region-us \n## Description:\nThis is a small subset of the Open Assistant dataset that has been normalized and rewritten into more consistent markdown formatting."
] |
c3a68364bd81388f2e49ae6ecf058840ebd0b5e2 | # Dataset Card for "nli-zh-tw-crosslingual"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Mike0307/nli-zh-tw-crosslingual | [
"region:us"
] | 2023-12-21T02:46:10+00:00 | {"dataset_info": {"features": [{"name": "text1", "dtype": "string"}, {"name": "text2", "dtype": "string"}, {"name": "label", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 64769865, "num_examples": 158019}, {"name": "validate", "num_bytes": 7962943, "num_examples": 19596}, {"name": "test", "num_bytes": 8043643, "num_examples": 19465}], "download_size": 63149907, "dataset_size": 80776451}} | 2023-12-21T02:46:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nli-zh-tw-crosslingual"
More Information needed | [
"# Dataset Card for \"nli-zh-tw-crosslingual\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nli-zh-tw-crosslingual\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nli-zh-tw-crosslingual\"\n\nMore Information needed"
] |
44766e0519725b95796a71af5427da8bd7f703b3 | ## Description:
This is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting. | PocketDoc/Wizard-Vicuna-Refined | [
"task_categories:question-answering",
"task_categories:conversational",
"language:en",
"region:us"
] | 2023-12-21T02:51:43+00:00 | {"language": ["en"], "task_categories": ["question-answering", "conversational"]} | 2023-12-21T02:52:38+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #task_categories-conversational #language-English #region-us
| ## Description:
This is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting. | [
"## Description:\nThis is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting."
] | [
"TAGS\n#task_categories-question-answering #task_categories-conversational #language-English #region-us \n",
"## Description:\nThis is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting."
] | [
32,
35
] | [
"passage: TAGS\n#task_categories-question-answering #task_categories-conversational #language-English #region-us \n## Description:\nThis is a small subset of the Wizard-Vicuna dataset that has been normalized and rewritten into more consistent markdown formatting."
] |
4063b29fc3288c1ea5513d2be1dc7ebaacc4888a |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | CCCCCCChy/Myfirstdataset | [
"license:mit",
"region:us"
] | 2023-12-21T03:34:18+00:00 | {"license": "mit"} | 2023-12-21T03:36:44+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
11,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
8fcdd926fe6748296867448beb2a5c15756c9d6f | # Dataset Card for "zh-tw-machine-translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Mike0307/zh-tw-machine-translation | [
"region:us"
] | 2023-12-21T03:43:41+00:00 | {"dataset_info": {"features": [{"name": "src_lang", "dtype": "string"}, {"name": "trg_lang", "dtype": "string"}, {"name": "src_text", "dtype": "string"}, {"name": "trg_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12673357, "num_examples": 30352}, {"name": "validate", "num_bytes": 1495811, "num_examples": 3825}, {"name": "test", "num_bytes": 1648643, "num_examples": 3936}], "download_size": 11596102, "dataset_size": 15817811}} | 2023-12-21T03:43:59+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "zh-tw-machine-translation"
More Information needed | [
"# Dataset Card for \"zh-tw-machine-translation\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"zh-tw-machine-translation\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"zh-tw-machine-translation\"\n\nMore Information needed"
] |
799f0b8d0229d10cb8f1d415464aa40191a3b9f7 | # Dataset Card for "language-detection"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Mike0307/language-detection | [
"region:us"
] | 2023-12-21T03:51:45+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "language_code", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 8461603, "num_examples": 33883}, {"name": "validate", "num_bytes": 1040327, "num_examples": 4238}, {"name": "test", "num_bytes": 1116258, "num_examples": 4241}], "download_size": 7856678, "dataset_size": 10618188}} | 2023-12-21T04:38:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "language-detection"
More Information needed | [
"# Dataset Card for \"language-detection\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"language-detection\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"language-detection\"\n\nMore Information needed"
] |
8ca13c5aba0e43771126fbdeac2b32584440bd29 | # Dataset Card for "vocal_imitation_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/vocal_imitation_extract_unit | [
"region:us"
] | 2023-12-21T03:54:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 68811561, "num_examples": 7530}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 68811561, "num_examples": 7530}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 102835209, "num_examples": 7530}, {"name": "audiodec_24k_320d", "num_bytes": 218923753, "num_examples": 7530}, {"name": "dac_16k", "num_bytes": 428292393, "num_examples": 7530}, {"name": "dac_24k", "num_bytes": 1211394345, "num_examples": 7530}, {"name": "dac_44k", "num_bytes": 360177993, "num_examples": 7530}, {"name": "encodec_24k", "num_bytes": 51926905, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 546918441, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 546918441, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 546904361, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 274836009, "num_examples": 7530}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 546757929, "num_examples": 7530}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 546757929, "num_examples": 7530}, {"name": "speech_tokenizer_16k", "num_bytes": 137278953, "num_examples": 7530}], "download_size": 862089596, "dataset_size": 5657545783}} | 2023-12-21T03:56:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vocal_imitation_extract_unit"
More Information needed | [
"# Dataset Card for \"vocal_imitation_extract_unit\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vocal_imitation_extract_unit\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"vocal_imitation_extract_unit\"\n\nMore Information needed"
] |
a14c190383d95cee50af87048a7cf41590f90788 | # Dataset Card for "ATC_train_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | adityarra07/ATC_train_noise | [
"region:us"
] | 2023-12-21T05:40:34+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10104439114, "num_examples": 22152}, {"name": "test", "num_bytes": 227942352, "num_examples": 500}], "download_size": 10344802156, "dataset_size": 10332381466}} | 2023-12-23T13:22:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ATC_train_noise"
More Information needed | [
"# Dataset Card for \"ATC_train_noise\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ATC_train_noise\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ATC_train_noise\"\n\nMore Information needed"
] |
bd65adcbc09ca0badc5795f2778d02ec16bcaccb | # Dataset Card for "ATC_test_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | adityarra07/ATC_test_noise | [
"region:us"
] | 2023-12-21T05:53:08+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 455044312, "num_examples": 1000}], "download_size": 455331896, "dataset_size": 455044312}} | 2023-12-23T13:22:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ATC_test_noise"
More Information needed | [
"# Dataset Card for \"ATC_test_noise\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ATC_test_noise\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ATC_test_noise\"\n\nMore Information needed"
] |
1e620d90c70547850e80d2229752a1ec48860d5d | # Dataset Card for "ATC_test_2_noise"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | adityarra07/ATC_test_2_noise | [
"region:us"
] | 2023-12-21T05:53:42+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float64"}, {"name": "path", "dtype": "null"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "transcription", "dtype": "string"}, {"name": "id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 454815537, "num_examples": 871}], "download_size": 455105235, "dataset_size": 454815537}} | 2023-12-23T13:22:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "ATC_test_2_noise"
More Information needed | [
"# Dataset Card for \"ATC_test_2_noise\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"ATC_test_2_noise\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"ATC_test_2_noise\"\n\nMore Information needed"
] |
6005f6f79105fc278fed17589585bd8468b45abe | # Dataset Card for "notMNIST"
## Overview
The notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.
## Dataset Information
- Number of Classes: 10 (A to J)
- Number of Samples: 18,724
- Image Size: 28 x 28 pixels
- Color Channels: Grayscale
## Dataset Structure
The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:
```
notMNIST/
|-- train/
| |-- A/
| |-- B/
| |-- ...
| |-- J/
|
|-- test/
| |-- A/
| |-- B/
| |-- ...
| |-- J/
```
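For the copy hosted on the Hub, the split names and the `image`/`label` features in this card's metadata suggest a straightforward `datasets` workflow; the sketch below is illustrative and assumes that metadata is accurate.

```python
from datasets import load_dataset

# Split names and the "image" / "label" features follow this card's metadata.
notmnist = load_dataset("anubhavmaity/notMNIST")

sample = notmnist["train"][0]
image = sample["image"]               # 28 x 28 grayscale PIL image
print(image.size, sample["label"])    # label indexes the letters A through J
```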
## Acknowledgements
- http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html
- https://www.kaggle.com/datasets/lubaroli/notmnist
| anubhavmaity/notMNIST | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:text-to-image",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"mnist",
"notmnist",
"region:us"
] | 2023-12-21T06:26:35+00:00 | {"size_categories": ["10K<n<100K"], "task_categories": ["image-classification", "image-to-image", "text-to-image", "image-to-text"], "pretty_name": "notMNIST", "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "A", "1": "B", "2": "C", "3": "D", "4": "E", "5": "F", "6": "G", "7": "H", "8": "I", "9": "J"}}}}], "splits": [{"name": "train", "num_bytes": 6842235.510231657, "num_examples": 14979}, {"name": "test", "num_bytes": 1715013.5296924065, "num_examples": 3745}], "download_size": 8865158, "dataset_size": 8557249.039924063}, "tags": ["mnist", "notmnist"]} | 2023-12-21T06:57:23+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-image-to-image #task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #mnist #notmnist #region-us
| # Dataset Card for "notMNIST"
## Overview
The notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.
## Dataset Information
- Number of Classes: 10 (A to J)
- Number of Samples: 18,724
- Image Size: 28 x 28 pixels
- Color Channels: Grayscale
## Dataset Structure
The dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:
## Acknowledgements
- URL
- URL
| [
"# Dataset Card for \"notMNIST\"",
"## Overview\n\nThe notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.",
"## Dataset Information\n\n- Number of Classes: 10 (A to J)\n- Number of Samples: 187,24\n- Image Size: 28 x 28 pixels\n- Color Channels: Grayscale",
"## Dataset Structure\n\nThe dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:",
"## Acknowledgements\n\n- URL\n- URL"
] | [
"TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #mnist #notmnist #region-us \n",
"# Dataset Card for \"notMNIST\"",
"## Overview\n\nThe notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.",
"## Dataset Information\n\n- Number of Classes: 10 (A to J)\n- Number of Samples: 187,24\n- Image Size: 28 x 28 pixels\n- Color Channels: Grayscale",
"## Dataset Structure\n\nThe dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:",
"## Acknowledgements\n\n- URL\n- URL"
] | [
72,
10,
74,
42,
44,
9
] | [
"passage: TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #mnist #notmnist #region-us \n# Dataset Card for \"notMNIST\"## Overview\n\nThe notMNIST dataset is a collection of images of letters from A to J in various fonts. It is designed as a more challenging alternative to the traditional MNIST dataset, which consists of handwritten digits. The notMNIST dataset is commonly used in machine learning and computer vision tasks for character recognition.## Dataset Information\n\n- Number of Classes: 10 (A to J)\n- Number of Samples: 187,24\n- Image Size: 28 x 28 pixels\n- Color Channels: Grayscale## Dataset Structure\n\nThe dataset is split into a training set and a test set. Each class has its own subdirectory containing images of that class. The directory structure is as follows:## Acknowledgements\n\n- URL\n- URL"
] |
4bb3ca462b6e80da88a77df2be4992cfc0392909 | # Dataset Card for "Reflect_WizV2_All"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | umd-zhou-lab/Reflect_WizV2_All | [
"region:us"
] | 2023-12-21T07:12:54+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "origin", "num_bytes": 201101522, "num_examples": 73000}, {"name": "reflect_instruction", "num_bytes": 173437259, "num_examples": 73000}, {"name": "reflect_both", "num_bytes": 225629379, "num_examples": 73000}], "download_size": 307701146, "dataset_size": 600168160}} | 2023-12-21T07:22:31+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Reflect_WizV2_All"
More Information needed | [
"# Dataset Card for \"Reflect_WizV2_All\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Reflect_WizV2_All\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Reflect_WizV2_All\"\n\nMore Information needed"
] |
b186bedb6b708fd9afbbfe6c36799611b47769b2 | # Dataset Card for "Reflect_Wiz70_All"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | umd-zhou-lab/Reflect_Wiz70_All | [
"region:us"
] | 2023-12-21T07:23:26+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "origin", "num_bytes": 130900545, "num_examples": 70000}, {"name": "reflect_instruction", "num_bytes": 132137005, "num_examples": 70000}, {"name": "reflect_response", "num_bytes": 170505414, "num_examples": 70000}, {"name": "reflect_both", "num_bytes": 176166017, "num_examples": 70000}], "download_size": 318571646, "dataset_size": 609708981}} | 2023-12-21T07:27:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Reflect_Wiz70_All"
More Information needed | [
"# Dataset Card for \"Reflect_Wiz70_All\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Reflect_Wiz70_All\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Reflect_Wiz70_All\"\n\nMore Information needed"
] |
2a0f42c8c011a6e81e93493494c6df5637372c7a | # Dataset Card for "Reflect_Alpaca_All"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | umd-zhou-lab/Reflect_Alpaca_All | [
"region:us"
] | 2023-12-21T07:27:26+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "origin", "num_bytes": 19000112, "num_examples": 52002}, {"name": "reflect_instruction", "num_bytes": 56984627, "num_examples": 52002}, {"name": "reflect_response", "num_bytes": 57562361, "num_examples": 52002}, {"name": "reflect_both", "num_bytes": 96478203, "num_examples": 52002}], "download_size": 128917607, "dataset_size": 230025303}} | 2023-12-21T07:28:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Reflect_Alpaca_All"
More Information needed | [
"# Dataset Card for \"Reflect_Alpaca_All\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Reflect_Alpaca_All\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Reflect_Alpaca_All\"\n\nMore Information needed"
] |
745ab5ac5381ffd0e7754796d5f11264d4aa4d65 | # Dataset Card for "Alpaca_Analysis_llama2_7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MingLiiii/Alpaca_Analysis_llama2_7b | [
"region:us"
] | 2023-12-21T07:31:03+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "loss", "sequence": "float64"}, {"name": "ppl", "sequence": "float64"}]}], "splits": [{"name": "origin", "num_bytes": 3744144, "num_examples": 52002}, {"name": "reflect_instruction", "num_bytes": 3744144, "num_examples": 52002}, {"name": "reflect_response", "num_bytes": 3744144, "num_examples": 52002}, {"name": "reflect_both", "num_bytes": 3744144, "num_examples": 52002}], "download_size": 12540531, "dataset_size": 14976576}} | 2023-12-21T07:32:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Alpaca_Analysis_llama2_7b"
More Information needed | [
"# Dataset Card for \"Alpaca_Analysis_llama2_7b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Alpaca_Analysis_llama2_7b\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Alpaca_Analysis_llama2_7b\"\n\nMore Information needed"
] |
d046c3738cfe28c91c6c88c1891706bbe8a75986 | # Dataset Card for "quirky_sentiment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment | [
"region:us"
] | 2023-12-21T07:31:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 50610884, "num_examples": 83950}, {"name": "validation", "num_bytes": 4855680, "num_examples": 8000}, {"name": "test", "num_bytes": 4849622, "num_examples": 8000}], "download_size": 20001225, "dataset_size": 60316186}} | 2024-01-12T23:34:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment"
More Information needed | [
"# Dataset Card for \"quirky_sentiment\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment\"\n\nMore Information needed"
] |
765075c8005ecce3d70bc15aded45a0244ad657e | # Dataset Card for "quirky_sentiment_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_alice_easy | [
"region:us"
] | 2023-12-21T07:31:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6326511.217343657, "num_examples": 10494}, {"name": "validation", "num_bytes": 575398.08, "num_examples": 948}, {"name": "test", "num_bytes": 618326.805, "num_examples": 1020}], "download_size": 3783939, "dataset_size": 7520236.102343657}} | 2024-01-12T23:34:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_alice_easy"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_alice_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_alice_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_alice_easy\"\n\nMore Information needed"
] |
81ac13809152965fc46acedca34a9ba88264b029 | # Dataset Card for "quirky_sentiment_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_alice_hard | [
"region:us"
] | 2023-12-21T07:32:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6326511.217343657, "num_examples": 10494}, {"name": "validation", "num_bytes": 616671.36, "num_examples": 1016}, {"name": "test", "num_bytes": 623176.427, "num_examples": 1028}], "download_size": 4357135, "dataset_size": 7566359.004343658}} | 2024-01-12T23:34:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_alice_hard"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_alice_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_alice_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_alice_hard\"\n\nMore Information needed"
] |
0dd382d2511858d5371b380d5e435b0da35260a6 | # Dataset Card for "quirky_sentiment_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_alice | [
"region:us"
] | 2023-12-21T07:32:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 25305442.0, "num_examples": 41975}, {"name": "validation", "num_bytes": 2427840.0, "num_examples": 4000}, {"name": "test", "num_bytes": 2424811.0, "num_examples": 4000}], "download_size": 16322580, "dataset_size": 30158093.0}} | 2024-01-12T23:34:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_alice"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_alice\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_alice\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_alice\"\n\nMore Information needed"
] |
857c9e4613b298eab4bd990f7ae5df8d01d9fdcb | # Dataset Card for "quirky_sentiment_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_bob_easy | [
"region:us"
] | 2023-12-21T07:32:11+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6326511.217343657, "num_examples": 10494}, {"name": "validation", "num_bytes": 575398.08, "num_examples": 948}, {"name": "test", "num_bytes": 618326.805, "num_examples": 1020}], "download_size": 3782767, "dataset_size": 7520236.102343657}} | 2024-01-12T23:34:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_bob_easy"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_bob_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_bob_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_bob_easy\"\n\nMore Information needed"
] |
2661a88b6eff75b228f4ed0f1066ebef655da5e5 | # Dataset Card for "quirky_sentiment_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_bob_hard | [
"region:us"
] | 2023-12-21T07:32:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6326511.217343657, "num_examples": 10494}, {"name": "validation", "num_bytes": 616671.36, "num_examples": 1016}, {"name": "test", "num_bytes": 623176.427, "num_examples": 1028}], "download_size": 4355002, "dataset_size": 7566359.004343658}} | 2024-01-12T23:34:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_bob_hard"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_bob_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_bob_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_bob_hard\"\n\nMore Information needed"
] |
da3dc906a31614eb828f54239a5bd825abc655aa | # Dataset Card for "quirky_sentiment_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sentiment_bob | [
"region:us"
] | 2023-12-21T07:32:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "negative", "1": "positive"}}}}, {"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "bob_label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 25305442.0, "num_examples": 41975}, {"name": "validation", "num_bytes": 2427840.0, "num_examples": 4000}, {"name": "test", "num_bytes": 2424811.0, "num_examples": 4000}], "download_size": 16315781, "dataset_size": 30158093.0}} | 2024-01-12T23:34:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sentiment_bob"
More Information needed | [
"# Dataset Card for \"quirky_sentiment_bob\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sentiment_bob\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sentiment_bob\"\n\nMore Information needed"
] |
aa9575d59a9a094922da9f1ce04f26211325a63e | # Dataset Card for "Alpaca_Analysis_llama2_13b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MingLiiii/Alpaca_Analysis_llama2_13b | [
"region:us"
] | 2023-12-21T07:32:23+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "loss", "sequence": "float64"}, {"name": "ppl", "sequence": "float64"}]}], "splits": [{"name": "origin", "num_bytes": 3755354, "num_examples": 52002}, {"name": "reflect_instruction", "num_bytes": 3757082, "num_examples": 52002}, {"name": "reflect_response", "num_bytes": 3744144, "num_examples": 52002}, {"name": "reflect_both", "num_bytes": 3744144, "num_examples": 52002}], "download_size": 12546147, "dataset_size": 15000724}} | 2023-12-21T07:33:02+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Alpaca_Analysis_llama2_13b"
More Information needed | [
"# Dataset Card for \"Alpaca_Analysis_llama2_13b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Alpaca_Analysis_llama2_13b\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Alpaca_Analysis_llama2_13b\"\n\nMore Information needed"
] |
1c7dd120ed49f04d50d2f52f6f3359071d9ddb81 | # Dataset Card for "Wiz70_Analysis_llama2_7b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MingLiiii/Wiz70_Analysis_llama2_7b | [
"region:us"
] | 2023-12-21T07:33:54+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "loss", "sequence": "float64"}, {"name": "ppl", "sequence": "float64"}]}], "splits": [{"name": "origin", "num_bytes": 5057436, "num_examples": 70000}, {"name": "reflect_instruction", "num_bytes": 5040000, "num_examples": 70000}, {"name": "reflect_response", "num_bytes": 5040000, "num_examples": 70000}, {"name": "reflect_both", "num_bytes": 5040000, "num_examples": 70000}], "download_size": 16867497, "dataset_size": 20177436}} | 2023-12-21T07:35:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Wiz70_Analysis_llama2_7b"
More Information needed | [
"# Dataset Card for \"Wiz70_Analysis_llama2_7b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Wiz70_Analysis_llama2_7b\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Wiz70_Analysis_llama2_7b\"\n\nMore Information needed"
] |
a72bd08b472ed3d0b82e1ac31734bcd0ea1d024c | # Dataset Card for "Wiz70_Analysis_llama2_13b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MingLiiii/Wiz70_Analysis_llama2_13b | [
"region:us"
] | 2023-12-21T07:35:18+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "loss", "sequence": "float64"}, {"name": "ppl", "sequence": "float64"}]}], "splits": [{"name": "origin", "num_bytes": 5057436, "num_examples": 70000}, {"name": "reflect_instruction", "num_bytes": 5040000, "num_examples": 70000}, {"name": "reflect_response", "num_bytes": 5040000, "num_examples": 70000}, {"name": "reflect_both", "num_bytes": 5040000, "num_examples": 70000}], "download_size": 16869578, "dataset_size": 20177436}} | 2023-12-21T07:35:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Wiz70_Analysis_llama2_13b"
More Information needed | [
"# Dataset Card for \"Wiz70_Analysis_llama2_13b\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Wiz70_Analysis_llama2_13b\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Wiz70_Analysis_llama2_13b\"\n\nMore Information needed"
] |
e09962156038a3dc64fe3aa1fddacf1891c99c05 |
A dataset in English and (predominantly) Russian, composed of various other datasets.<br>
Among the English portion, pubmed dominates, simply because that original dataset is itself large<br>
The dataset consists of 250000 rows; a version with 1850000 rows is also available<br>
Data format:<br>
```<s>[user] User text [/user][assistant] Assistant text [/assistant] ...</s>``` | ERmak1581/ru_en_qa | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-21T07:38:52+00:00 | {"language": ["ru", "en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "\u0412\u043e\u043f\u0440\u043e\u0441-\u043e\u0442\u0432\u0435\u0442 \u043f\u0430\u0440\u044b"} | 2023-12-21T07:48:07+00:00 | [] | [
"ru",
"en"
] | TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #language-English #license-apache-2.0 #region-us
|
A dataset in English and (predominantly) Russian, composed of various other datasets.<br>
Among the English portion, pubmed dominates, simply because that original dataset is itself large<br>
The dataset consists of 250000 rows; a version with 1850000 rows is also available<br>
Data format:<br>
| [] | [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #language-English #license-apache-2.0 #region-us \n"
] | [
46
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #language-English #license-apache-2.0 #region-us \n"
] |
86d92f252666ae8a3c123b958d85d97d46f1f59c | # Dataset Card for "sRecycled_Alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | umd-zhou-lab/sRecycled_Alpaca | [
"region:us"
] | 2023-12-21T07:46:25+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 62380384, "num_examples": 37114}], "download_size": 33868520, "dataset_size": 62380384}} | 2023-12-21T07:51:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sRecycled_Alpaca"
More Information needed | [
"# Dataset Card for \"sRecycled_Alpaca\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sRecycled_Alpaca\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"sRecycled_Alpaca\"\n\nMore Information needed"
] |
60213f07a00c7792f995fdf70257953028936394 | # Dataset Card for "sRecycled_Wiz70"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | umd-zhou-lab/sRecycled_Wiz70 | [
"region:us"
] | 2023-12-21T07:47:28+00:00 | {"dataset_info": {"features": [{"name": "data", "struct": [{"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 114566878, "num_examples": 46550}], "download_size": 59147879, "dataset_size": 114566878}} | 2023-12-21T07:52:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "sRecycled_Wiz70"
More Information needed | [
"# Dataset Card for \"sRecycled_Wiz70\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"sRecycled_Wiz70\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"sRecycled_Wiz70\"\n\nMore Information needed"
] |
0680166934d6c3e7246a597ad1f7f42d1de087cc |
# DiaVio Dataset
## Overview
Simulation testing has been widely adopted by leading companies to ensure the safety of autonomous driving systems (ADSs). A number of scenario-based testing approaches have been developed to generate diverse driving scenarios for simulation testing, and demonstrated to be capable of finding safety violations. However, there is no automated way to diagnose whether these violations are caused by the ADS under test and which category these violations belong to. As a result, great effort is required to manually diagnose violations.
To bridge this gap, we propose DiaVio to automatically diagnose safety violations in simulation testing by leveraging large language models (LLMs). It is built on top of a new domain specific language (DSL) of crash to align real-world accident reports described in natural language and violation scenarios in simulation testing. DiaVio fine-tunes a base LLM with real-world accident reports to learn diagnosis capability, and uses the fine-tuned LLM to diagnose violation scenarios in simulation testing. Our evaluation has demonstrated the effectiveness and efficiency of DiaVio in violation diagnosis.
## Dataset Contents
This dataset.zip comprises three main components:
### 1. Accident Reports
A collection of real-world accident reports described in natural language. These reports serve as the basis for fine-tuning the language model to enhance its diagnosis capability.
### 2. Extracted Information
Information extracted from the accident reports, including key details about the accidents. This data is used to train DiaVio and enable it to associate real-world accidents with simulation testing scenarios.
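As a purely illustrative sketch of how these two components could be combined into fine-tuning pairs (the file names and field layout below are assumptions — the card does not document the contents of dataset.zip):

```python
import json

# Hypothetical file names inside dataset.zip; adjust to the actual layout.
with open("accident_reports.json", encoding="utf-8") as f:
    reports = json.load(f)          # assumed: report_id -> natural-language report
with open("extracted_information.json", encoding="utf-8") as f:
    extracted = json.load(f)        # assumed: report_id -> structured crash details

# Pair each report with its extracted details to form fine-tuning examples.
examples = [
    {"input": reports[rid], "target": json.dumps(extracted[rid], ensure_ascii=False)}
    for rid in reports
    if rid in extracted
]
print(f"assembled {len(examples)} fine-tuning examples")
```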
### 3. GroundTruth
The ground truth data, indicating the correct diagnosis for each violation scenario. This dataset helps evaluate the effectiveness and efficiency of DiaVio in diagnosing safety violations.
## How to Use
Researchers and practitioners in the field of autonomous driving systems can utilize this dataset to:
- Train and fine-tune language models for safety violation diagnosis.
- Evaluate and compare the performance of different diagnostic systems.
- Enhance the capabilities of automated diagnosis tools in simulation testing.
## Citation
If you use this dataset in your research or work, please cite our related publication.
| DiaVio/ViolationReports | [
"license:gpl-3.0",
"region:us"
] | 2023-12-21T08:04:24+00:00 | {"license": "gpl-3.0"} | 2023-12-21T12:53:58+00:00 | [] | [] | TAGS
#license-gpl-3.0 #region-us
|
# DiaVio Dataset
## Overview
Simulation testing has been widely adopted by leading companies to ensure the safety of autonomous driving systems (ADSs). A number of scenario-based testing approaches have been developed to generate diverse driving scenarios for simulation testing, and demonstrated to be capable of finding safety violations. However, there is no automated way to diagnose whether these violations are caused by the ADS under test and which category these violations belong to. As a result, great effort is required to manually diagnose violations.
To bridge this gap, we propose DiaVio to automatically diagnose safety violations in simulation testing by leveraging large language models (LLMs). It is built on top of a new domain specific language (DSL) of crash to align real-world accident reports described in natural language and violation scenarios in simulation testing. DiaVio fine-tunes a base LLM with real-world accident reports to learn diagnosis capability, and uses the fine-tuned LLM to diagnose violation scenarios in simulation testing. Our evaluation has demonstrated the effectiveness and efficiency of DiaVio in violation diagnosis.
## Dataset Contents
This URL comprises three main components:
### 1. Accident Reports
A collection of real-world accident reports described in natural language. These reports serve as the basis for fine-tuning the language model to enhance its diagnosis capability.
### 2. Extracted Information
Information extracted from the accident reports, including key details about the accidents. This data is used to train DiaVio and enable it to associate real-world accidents with simulation testing scenarios.
### 3. GroundTruth
The ground truth data, indicating the correct diagnosis for each violation scenario. This dataset helps evaluate the effectiveness and efficiency of DiaVio in diagnosing safety violations.
## How to Use
Researchers and practitioners in the field of autonomous driving systems can utilize this dataset to:
- Train and fine-tune language models for safety violation diagnosis.
- Evaluate and compare the performance of different diagnostic systems.
- Enhance the capabilities of automated diagnosis tools in simulation testing.
If you use this dataset in your research or work, please cite our related publication.
| [
"# DiaVio Dataset",
"## Overview\n\nSimulation testing has been widely adopted by leading companies to ensure the safety of autonomous driving systems (ADSs). A number of scenario-based testing approaches have been developed to generate diverse driving scenarios for simulation testing, and demonstrated to be capable of finding safety violations. However, there is no automated way to diagnose whether these violations are caused by the ADS under test and which category these violations belong to. As a result, great effort is required to manually diagnose violations.\n\nTo bridge this gap, we propose DiaVio to automatically diagnose safety violations in simulation testing by leveraging large language models (LLMs). It is built on top of a new domain specific language (DSL) of crash to align real-world accident reports described in natural language and violation scenarios in simulation testing. DiaVio fine-tunes a base LLM with real-world accident reports to learn diagnosis capability, and uses the fine-tuned LLM to diagnose violation scenarios in simulation testing. Our evaluation has demonstrated the effectiveness and efficiency of DiaVio in violation diagnosis.",
"## Dataset Contents\n\nThis URL comprises three main components:",
"### 1. Accident Reports\n\nA collection of real-world accident reports described in natural language. These reports serve as the basis for fine-tuning the language model to enhance its diagnosis capability.",
"### 2. Extracted Information\n\nInformation extracted from the accident reports, including key details about the accidents. This data is used to train DiaVio and enable it to associate real-world accidents with simulation testing scenarios.",
"### 3. GroundTruth\n\nThe ground truth data, indicating the correct diagnosis for each violation scenario. This dataset helps evaluate the effectiveness and efficiency of DiaVio in diagnosing safety violations.",
"## How to Use\n\nResearchers and practitioners in the field of autonomous driving systems can utilize this dataset to:\n\n- Train and fine-tune language models for safety violation diagnosis.\n- Evaluate and compare the performance of different diagnostic systems.\n- Enhance the capabilities of automated diagnosis tools in simulation testing.\n\nIf you use this dataset in your research or work, please cite our related publication."
] | [
"TAGS\n#license-gpl-3.0 #region-us \n",
"# DiaVio Dataset",
"## Overview\n\nSimulation testing has been widely adopted by leading companies to ensure the safety of autonomous driving systems (ADSs). A number of scenario-based testing approaches have been developed to generate diverse driving scenarios for simulation testing, and demonstrated to be capable of finding safety violations. However, there is no automated way to diagnose whether these violations are caused by the ADS under test and which category these violations belong to. As a result, great effort is required to manually diagnose violations.\n\nTo bridge this gap, we propose DiaVio to automatically diagnose safety violations in simulation testing by leveraging large language models (LLMs). It is built on top of a new domain specific language (DSL) of crash to align real-world accident reports described in natural language and violation scenarios in simulation testing. DiaVio fine-tunes a base LLM with real-world accident reports to learn diagnosis capability, and uses the fine-tuned LLM to diagnose violation scenarios in simulation testing. Our evaluation has demonstrated the effectiveness and efficiency of DiaVio in violation diagnosis.",
"## Dataset Contents\n\nThis URL comprises three main components:",
"### 1. Accident Reports\n\nA collection of real-world accident reports described in natural language. These reports serve as the basis for fine-tuning the language model to enhance its diagnosis capability.",
"### 2. Extracted Information\n\nInformation extracted from the accident reports, including key details about the accidents. This data is used to train DiaVio and enable it to associate real-world accidents with simulation testing scenarios.",
"### 3. GroundTruth\n\nThe ground truth data, indicating the correct diagnosis for each violation scenario. This dataset helps evaluate the effectiveness and efficiency of DiaVio in diagnosing safety violations.",
"## How to Use\n\nResearchers and practitioners in the field of autonomous driving systems can utilize this dataset to:\n\n- Train and fine-tune language models for safety violation diagnosis.\n- Evaluate and compare the performance of different diagnostic systems.\n- Enhance the capabilities of automated diagnosis tools in simulation testing.\n\nIf you use this dataset in your research or work, please cite our related publication."
] | [
14,
6,
245,
14,
42,
50,
46,
90
] | [
"passage: TAGS\n#license-gpl-3.0 #region-us \n# DiaVio Dataset## Overview\n\nSimulation testing has been widely adopted by leading companies to ensure the safety of autonomous driving systems (ADSs). A number of scenario-based testing approaches have been developed to generate diverse driving scenarios for simulation testing, and demonstrated to be capable of finding safety violations. However, there is no automated way to diagnose whether these violations are caused by the ADS under test and which category these violations belong to. As a result, great effort is required to manually diagnose violations.\n\nTo bridge this gap, we propose DiaVio to automatically diagnose safety violations in simulation testing by leveraging large language models (LLMs). It is built on top of a new domain specific language (DSL) of crash to align real-world accident reports described in natural language and violation scenarios in simulation testing. DiaVio fine-tunes a base LLM with real-world accident reports to learn diagnosis capability, and uses the fine-tuned LLM to diagnose violation scenarios in simulation testing. Our evaluation has demonstrated the effectiveness and efficiency of DiaVio in violation diagnosis.## Dataset Contents\n\nThis URL comprises three main components:### 1. Accident Reports\n\nA collection of real-world accident reports described in natural language. These reports serve as the basis for fine-tuning the language model to enhance its diagnosis capability.### 2. Extracted Information\n\nInformation extracted from the accident reports, including key details about the accidents. This data is used to train DiaVio and enable it to associate real-world accidents with simulation testing scenarios.### 3. GroundTruth\n\nThe ground truth data, indicating the correct diagnosis for each violation scenario. This dataset helps evaluate the effectiveness and efficiency of DiaVio in diagnosing safety violations.## How to Use\n\nResearchers and practitioners in the field of autonomous driving systems can utilize this dataset to:\n\n- Train and fine-tune language models for safety violation diagnosis.\n- Evaluate and compare the performance of different diagnostic systems.\n- Enhance the capabilities of automated diagnosis tools in simulation testing.\n\nIf you use this dataset in your research or work, please cite our related publication."
] |
ab00114b0fbf59c7c539ff9158f6ce717d13ab63 | # BBQ
Repository for the Bias Benchmark for QA dataset.
https://github.com/nyu-mll/BBQ
Authors: Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman.
*This repository is a fork of https://huggingface.co/datasets/heegyu/bbq, and adds the "All" configuration containing all subsets.*
## About BBQ (paper abstract)
It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We find that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested.
## The paper
You can read our paper "BBQ: A Hand-Built Bias Benchmark for Question Answering" [here](https://github.com/nyu-mll/BBQ/blob/main/QA_bias_benchmark.pdf). The paper has been published in the Findings of ACL 2022 [here](https://aclanthology.org/2022.findings-acl.165/).
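For reference, the subsets can be loaded with the Hugging Face `datasets` library; this sketch assumes this fork's Hub id and the configuration and split names listed in the dataset metadata (only a `test` split is provided):

```python
from datasets import load_dataset

# "All" (the default config) bundles every bias dimension; individual
# configs such as "Age" or "Gender_identity" can be requested instead.
bbq = load_dataset("oskarvanderwal/bbq", "All", split="test")
print(bbq[0])
```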
| oskarvanderwal/bbq | [
"language:en",
"license:cc-by-4.0",
"social bias",
"region:us"
] | 2023-12-21T08:13:31+00:00 | {"language": ["en"], "license": "cc-by-4.0", "pretty_name": "BBQ", "tags": ["social bias"], "configs": [{"config_name": "All", "data_files": [{"split": "test", "path": "data/All.jsonl"}], "default": true}, {"config_name": "Age", "data_files": [{"split": "test", "path": "data/Age.jsonl"}]}, {"config_name": "Disability_status", "data_files": [{"split": "test", "path": "data/Disability_status.jsonl"}]}, {"config_name": "Gender_identity", "data_files": [{"split": "test", "path": "data/Gender_identity.jsonl"}]}, {"config_name": "Nationality", "data_files": [{"split": "test", "path": "data/Nationality.jsonl"}]}, {"config_name": "Physical_appearance", "data_files": [{"split": "test", "path": "data/Physical_appearance.jsonl"}]}, {"config_name": "Race_ethnicity", "data_files": [{"split": "test", "path": "data/Race_ethnicity.jsonl"}]}, {"config_name": "Race_x_SES", "data_files": [{"split": "test", "path": "data/Race_x_SES.jsonl"}]}, {"config_name": "Race_x_gender", "data_files": [{"split": "test", "path": "data/Race_x_gender.jsonl"}]}, {"config_name": "Religion", "data_files": [{"split": "test", "path": "data/Religion.jsonl"}]}, {"config_name": "SES", "data_files": [{"split": "test", "path": "data/SES.jsonl"}]}, {"config_name": "Sexual_orientation", "data_files": [{"split": "test", "path": "data/Sexual_orientation.jsonl"}]}]} | 2023-12-21T08:29:34+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc-by-4.0 #social bias #region-us
| # BBQ
Repository for the Bias Benchmark for QA dataset.
URL
Authors: Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman.
*This repository is a fork of URL and adds the "All" configuration containing all subsets.*
## About BBQ (paper abstract)
It is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses reflect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We find that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conflicts, with this difference widening to over 5 points on examples targeting gender for most models tested.
## The paper
You can read our paper "BBQ: A Hand-Built Bias Benchmark for Question Answering" here. The paper has been published in the Findings of ACL 2022 here.
| [
"# BBQ\nRepository for the Bias Benchmark for QA dataset.\nURL \n\nAuthors: Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman.\n\n*This repository is a fork of URL and adds the \"All\" configuration containing all subsets.*",
"## About BBQ (paper abstract)\nIt is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses refect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We fnd that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conficts, with this difference widening to over 5 points on examples targeting gender for most models tested.",
"## The paper\nYou can read our paper \"BBQ: A Hand-Built Bias Benchmark for Question Answering\" here. The paper has been published in the Findings of ACL 2022 here."
] | [
"TAGS\n#language-English #license-cc-by-4.0 #social bias #region-us \n",
"# BBQ\nRepository for the Bias Benchmark for QA dataset.\nURL \n\nAuthors: Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman.\n\n*This repository is a fork of URL and adds the \"All\" configuration containing all subsets.*",
"## About BBQ (paper abstract)\nIt is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses refect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We fnd that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conficts, with this difference widening to over 5 points on examples targeting gender for most models tested.",
"## The paper\nYou can read our paper \"BBQ: A Hand-Built Bias Benchmark for Question Answering\" here. The paper has been published in the Findings of ACL 2022 here."
] | [
23,
90,
290,
44
] | [
"passage: TAGS\n#language-English #license-cc-by-4.0 #social bias #region-us \n# BBQ\nRepository for the Bias Benchmark for QA dataset.\nURL \n\nAuthors: Alicia Parrish, Angelica Chen, Nikita Nangia, Vishakh Padmakumar, Jason Phang, Jana Thompson, Phu Mon Htut, and Samuel R. Bowman.\n\n*This repository is a fork of URL and adds the \"All\" configuration containing all subsets.*## About BBQ (paper abstract)\nIt is well documented that NLP models learn social biases, but little work has been done on how these biases manifest in model outputs for applied tasks like question answering (QA). We introduce the Bias Benchmark for QA (BBQ), a dataset of question sets constructed by the authors that highlight attested social biases against people belonging to protected classes along nine social dimensions relevant for U.S. English-speaking contexts. Our task evaluates model responses at two levels: (i) given an under-informative context, we test how strongly responses refect social biases, and (ii) given an adequately informative context, we test whether the model's biases override a correct answer choice. We fnd that models often rely on stereotypes when the context is under-informative, meaning the model's outputs consistently reproduce harmful biases in this setting. Though models are more accurate when the context provides an informative answer, they still rely on stereotypes and average up to 3.4 percentage points higher accuracy when the correct answer aligns with a social bias than when it conficts, with this difference widening to over 5 points on examples targeting gender for most models tested.## The paper\nYou can read our paper \"BBQ: A Hand-Built Bias Benchmark for Question Answering\" here. The paper has been published in the Findings of ACL 2022 here."
] |
7b6e463a8340b2d20fc5f8c8f8bba20ccccd7242 | # tquad
Homepage: https://github.com/TQuad/turkish-nlp-qa-dataset
| mcemilg/tquad | [
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:tr",
"region:us"
] | 2023-12-21T08:17:29+00:00 | {"language": ["tr"], "size_categories": ["1K<n<10K"], "task_categories": ["question-answering"], "pretty_name": "t"} | 2023-12-21T10:43:35+00:00 | [] | [
"tr"
] | TAGS
#task_categories-question-answering #size_categories-1K<n<10K #language-Turkish #region-us
| # tquad
Homepage: URL
| [
"# tquad\n\nHomepage: URL"
] | [
"TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Turkish #region-us \n",
"# tquad\n\nHomepage: URL"
] | [
36,
6
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-1K<n<10K #language-Turkish #region-us \n# tquad\n\nHomepage: URL"
] |
68f7700fd9850c499d95151752d4617214b986f4 | 1) Only the GPT-4-generated portion was extracted from the squarelike/OpenOrca-gugugo-ko dataset.
2) Translation tasks were removed. This was done simply by dropping any sample whose "question" or "response" column contains the word "번역" (translation), so the removal may not be perfectly complete.
3) The NearDeDup algorithm was applied.
## To do
- [ ] ICL-based classification of translation tasks and a second round of filtering translation samples
Original Dataset : [squarelike/OpenOrca-gugugo-ko](https://huggingface.co/datasets/squarelike/OpenOrca-gugugo-ko) | jhflow/orca-gugugo-ko-dedup | [
"region:us"
] | 2023-12-21T08:23:56+00:00 | {} | 2023-12-21T08:43:53+00:00 | [] | [] | TAGS
#region-us
 | 1) Only the GPT-4-generated portion was extracted from the squarelike/OpenOrca-gugugo-ko dataset.
2) Translation tasks were removed. This was done simply by dropping any sample whose "question" or "response" column contains the word "번역" (translation), so the removal may not be perfectly complete.
3) The NearDeDup algorithm was applied.
## To do
- [ ] ICL-based classification of translation tasks and a second round of filtering translation samples
Original Dataset : squarelike/OpenOrca-gugugo-ko | [
"## To do\n- [ ] ICL 기반, 번역 태스크 Classification 및 번역 샘플 2차 필터링 \n\n\nOriginal Dataset : squarelike/OpenOrca-gugugo-ko"
] | [
"TAGS\n#region-us \n",
"## To do\n- [ ] ICL 기반, 번역 태스크 Classification 및 번역 샘플 2차 필터링 \n\n\nOriginal Dataset : squarelike/OpenOrca-gugugo-ko"
] | [
6,
41
] | [
"passage: TAGS\n#region-us \n## To do\n- [ ] ICL 기반, 번역 태스크 Classification 및 번역 샘플 2차 필터링 \n\n\nOriginal Dataset : squarelike/OpenOrca-gugugo-ko"
] |
3ca985663e7c8ef52aba8e20b06c1d6e562554d9 | # Special Thanks
Special thanks to Lamkanfi, Ahmed; Pérez, Javier; and Demeyer, Serge for their contributions. Please cite their paper, as this dataset is the processed part of their dataset.
## Citation
```bibtex
@INPROCEEDINGS{6624028,
author={Lamkanfi, Ahmed and Pérez, Javier and Demeyer, Serge},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
title={The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information},
year={2013},
volume={},
number={},
pages={203-206},
doi={10.1109/MSR.2013.6624028}
}
| AliArshad/Bugzilla_Eclipse_Bug_Reports_Dataset | [
"region:us"
] | 2023-12-21T09:01:58+00:00 | {} | 2023-12-21T09:16:16+00:00 | [] | [] | TAGS
#region-us
| # Special Thanks
Special thanks to Lamkanfi, Ahmed; Pérez, Javier; and Demeyer, Serge for their contributions. Please cite their paper, as this dataset is the processed part of their dataset.
'''bibtex
@INPROCEEDINGS{6624028,
author={Lamkanfi, Ahmed and Pérez, Javier and Demeyer, Serge},
booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},
title={The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information},
year={2013},
volume={},
number={},
pages={203-206},
doi={10.1109/MSR.2013.6624028}
}
| [
"# Special Thanks\n\nSpecial thanks to Lamkanfi, Ahmed; Pérez, Javier; and Demeyer, Serge for their contributions. Please cite their paper, as this dataset is the processed part of their dataset.\n\n'''bibtex\n@INPROCEEDINGS{6624028,\n author={Lamkanfi, Ahmed and Pérez, Javier and Demeyer, Serge},\n booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\n title={The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information},\n year={2013},\n volume={},\n number={},\n pages={203-206},\n doi={10.1109/MSR.2013.6624028}\n}"
] | [
"TAGS\n#region-us \n",
"# Special Thanks\n\nSpecial thanks to Lamkanfi, Ahmed; Pérez, Javier; and Demeyer, Serge for their contributions. Please cite their paper, as this dataset is the processed part of their dataset.\n\n'''bibtex\n@INPROCEEDINGS{6624028,\n author={Lamkanfi, Ahmed and Pérez, Javier and Demeyer, Serge},\n booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\n title={The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information},\n year={2013},\n volume={},\n number={},\n pages={203-206},\n doi={10.1109/MSR.2013.6624028}\n}"
] | [
6,
172
] | [
"passage: TAGS\n#region-us \n# Special Thanks\n\nSpecial thanks to Lamkanfi, Ahmed; Pérez, Javier; and Demeyer, Serge for their contributions. Please cite their paper, as this dataset is the processed part of their dataset.\n\n'''bibtex\n@INPROCEEDINGS{6624028,\n author={Lamkanfi, Ahmed and Pérez, Javier and Demeyer, Serge},\n booktitle={2013 10th Working Conference on Mining Software Repositories (MSR)},\n title={The Eclipse and Mozilla defect tracking dataset: A genuine dataset for mining bug information},\n year={2013},\n volume={},\n number={},\n pages={203-206},\n doi={10.1109/MSR.2013.6624028}\n}"
] |
c19a1a07e332d74bbc521f6f15a887357b6eb149 |
# ChatHaruhi
# Reviving Anime Character in Reality via Large Language Model
**Chat-Haruhi-Suzumiya** is a language model that imitates the tone, personality, and storylines of characters like Haruhi Suzumiya:
https://github.com/LC1332/Chat-Haruhi-Suzumiya
Use the following to load a character and chat with him or her:
```python
from ChatHaruhi import ChatHaruhi
chatbot = ChatHaruhi( role_from_hf = "silk-road/ChatHaruhi-RolePlaying/haruhi",\
llm = 'openai' ,\
verbose = True)
response = chatbot.chat(role='阿虚', text = 'Haruhi, 你好啊')
print(response)
# 春日:「哦,你是来向我请教问题的吗?还是有什么事情需要我帮忙的?」
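# (English gloss: Haruhi: "Oh, are you here to ask me a question? Or is there something you need my help with?")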
```
The role data is saved at

https://huggingface.co/datasets/silk-road/ChatHaruhi-RolePlaying

This Hugging Face repo stores 32 characters; you may find the other characters there.
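Switching characters only changes the `role_from_hf` suffix. A minimal sketch reusing the OpenAI backend from the example above (the folder name `haruhi` is the one shown earlier — substitute any of the 32 character folders in the repo):

```python
from ChatHaruhi import ChatHaruhi

# Pick any of the 32 character folders stored in the
# silk-road/ChatHaruhi-RolePlaying dataset repo; "haruhi" is the one
# used in the example above.
character = "haruhi"
chatbot = ChatHaruhi(role_from_hf=f"silk-road/ChatHaruhi-RolePlaying/{character}",
                     llm="openai")
print(chatbot.chat(role="阿虚", text="你好"))
```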
# Run with Local Model
see this notebook
https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/ChatHaruhi_x_Qwen7B.ipynb
# Adding new Character
https://github.com/LC1332/Chat-Haruhi-Suzumiya
You may raise an issue at our repo if you have completed a new character and want to add it here.
| silk-road/ChatHaruhi-RolePlaying | [
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-21T09:34:19+00:00 | {"language": ["zh", "en"], "license": "apache-2.0"} | 2024-02-10T15:26:18+00:00 | [] | [
"zh",
"en"
] | TAGS
#language-Chinese #language-English #license-apache-2.0 #region-us
|
# ChatHaruhi
# Reviving Anime Character in Reality via Large Language Model
Chat-Haruhi-Suzumiya is a language model that imitates the tone, personality, and storylines of characters like Haruhi Suzumiya:
URL
Use this to load a character and chat with him or her.
the role was saved at
URL
This Hugging Face repo stores 32 characters; you may find the other characters there.
# Run with Local Model
see this notebook
URL
# Adding new Character
URL
You may raise an issue at our repo if you have completed a new character and want to add it here.
| [
"# ChatHaruhi",
"# Reviving Anime Character in Reality via Large Language Model\n\n\nChat-Haruhi-Suzumiyais a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,\n\nURL\n\nUsing this to load character and chat with him/her\n\n\n\nthe role was saved at \n\nURL\n\nthis hugging face repo saved 32 characters, you may find other chacaters in",
"# Run with Local Model\n\nsee this notebook\n\nURL",
"# Adding new Character\n\nURL\n\nYou may raise an issue at our repo if you have complete a new character and want to add into here."
] | [
"TAGS\n#language-Chinese #language-English #license-apache-2.0 #region-us \n",
"# ChatHaruhi",
"# Reviving Anime Character in Reality via Large Language Model\n\n\nChat-Haruhi-Suzumiyais a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,\n\nURL\n\nUsing this to load character and chat with him/her\n\n\n\nthe role was saved at \n\nURL\n\nthis hugging face repo saved 32 characters, you may find other chacaters in",
"# Run with Local Model\n\nsee this notebook\n\nURL",
"# Adding new Character\n\nURL\n\nYou may raise an issue at our repo if you have complete a new character and want to add into here."
] | [
23,
5,
85,
9,
30
] | [
"passage: TAGS\n#language-Chinese #language-English #license-apache-2.0 #region-us \n# ChatHaruhi# Reviving Anime Character in Reality via Large Language Model\n\n\nChat-Haruhi-Suzumiyais a language model that imitates the tone, personality and storylines of characters like Haruhi Suzumiya,\n\nURL\n\nUsing this to load character and chat with him/her\n\n\n\nthe role was saved at \n\nURL\n\nthis hugging face repo saved 32 characters, you may find other chacaters in# Run with Local Model\n\nsee this notebook\n\nURL# Adding new Character\n\nURL\n\nYou may raise an issue at our repo if you have complete a new character and want to add into here."
] |
ac2a8dfdbbeca2fb73d82851328554a359ffd781 | # Videos Generated from FETV Prompts
This repo contains videos generated by [CogVideo](https://github.com/THUDM/CogVideo/), [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero), [ModelScopeT2V](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) and [ZeroScope](https://huggingface.co/cerspense/zeroscope_v2_576w), based on the [FETV](https://github.com/llyx97/FETV) prompts. | lyx97/FETV_gen_videos | [
"license:cc-by-4.0",
"region:us"
] | 2023-12-21T09:51:19+00:00 | {"license": "cc-by-4.0"} | 2023-12-22T02:28:51+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| # Videos Generated from FETV Prompts
This repo contains videos generated by CogVideo, Text2Video-Zero, ModelScopeT2V and ZeroScope, based on the FETV prompts. | [
"# Videos Generated from FETV Prompts\nThis repo contains videos generated by CogVideo, Text2Video-Zero, ModelScopeT2V and ZeroScope, based on the FETV prompts."
] | [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# Videos Generated from FETV Prompts\nThis repo contains videos generated by CogVideo, Text2Video-Zero, ModelScopeT2V and ZeroScope, based on the FETV prompts."
] | [
15,
48
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# Videos Generated from FETV Prompts\nThis repo contains videos generated by CogVideo, Text2Video-Zero, ModelScopeT2V and ZeroScope, based on the FETV prompts."
] |
1e39b273f483eae595b2fd318dcf56df59b1b0ab | ## Kazakh Paraphrasing Dataset
This dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.
### Source and Translation Process
Originally sourced from [humarin/chatgpt-paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases), this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.
### Dataset Content and Structure
The dataset comprises 5.44 million phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.
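A minimal loading sketch with the Hugging Face `datasets` library (split and column names are not documented on this card, so the snippet simply inspects the first record of whatever split is provided):

```python
from datasets import load_dataset

ds = load_dataset("CCRss/chatgpt-paraphrases-kz")
first_split = next(iter(ds.values()))
# Column names are whatever the dataset actually provides; print them
# together with the first original/paraphrase pair.
print(first_split.column_names)
print(first_split[0])
```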
### Usage and Application
Ideal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.
### Acknowledgments and References
We extend our gratitude to the original dataset providers at [humarin/chatgpt-paraphrases](https://huggingface.co/datasets/humarin/chatgpt-paraphrases) and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language. | CCRss/chatgpt-paraphrases-kz | [
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:kk",
"license:mit",
"region:us"
] | 2023-12-21T10:14:02+00:00 | {"language": ["kk"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["text2text-generation"]} | 2023-12-21T16:33:18+00:00 | [] | [
"kk"
] | TAGS
#task_categories-text2text-generation #size_categories-1M<n<10M #language-Kazakh #license-mit #region-us
| ## Kazakh Paraphrasing Dataset
This dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.
### Source and Translation Process
Originally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.
### Dataset Content and Structure
The dataset comprises 5.44 million phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.
### Usage and Application
Ideal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.
### Acknowledgments and References
We extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language. | [
"## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.",
"### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.",
"### Dataset Content and Structure\n\nThe dataset comprises 5.44 million phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.",
"### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.",
"### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] | [
"TAGS\n#task_categories-text2text-generation #size_categories-1M<n<10M #language-Kazakh #license-mit #region-us \n",
"## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.",
"### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.",
"### Dataset Content and Structure\n\nThe dataset comprises 5.44 million phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.",
"### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.",
"### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] | [
42,
51,
62,
70,
60,
60
] | [
"passage: TAGS\n#task_categories-text2text-generation #size_categories-1M<n<10M #language-Kazakh #license-mit #region-us \n## Kazakh Paraphrasing Dataset\n\nThis dataset is specifically designed for the paraphrasing task in the Kazakh language. It offers a unique resource for natural language processing applications, focusing on the development and evaluation of paraphrasing models.### Source and Translation Process\n\nOriginally sourced from humarin/chatgpt-paraphrases, this dataset has been carefully translated using Google Translate, followed by a meticulous review by human experts to ensure accuracy and contextual relevance in the Kazakh language.### Dataset Content and Structure\n\nThe dataset comprises 5.44 million phrases or sentence pairs, each consisting of an original sentence and its paraphrased counterpart in Kazakh. This structure is particularly beneficial for training algorithms to understand and generate paraphrased content while maintaining the original sentence's meaning.### Usage and Application\n\nIdeal for researchers and developers in the field of computational linguistics, this dataset serves as a robust tool for training and evaluating paraphrasing models in the Kazakh language. It can significantly contribute to advancements in language technologies for Kazakh.### Acknowledgments and References\n\nWe extend our gratitude to the original dataset providers at humarin/chatgpt-paraphrases and the team of linguists and translators who contributed to the adaptation of this dataset for the Kazakh language."
] |
a6fd1dd395ffd30a66e295ce83930c6f675c8a0c | # Annotation Techniques Sample Database
Explore SoftAge's diverse annotation techniques through this showcase database, featuring audio and video data annotation.
The dataset contains 54 queries from 54 different tools/software. It serves as a versatile resource for building tool-specific Assistant LLMs, as well as for information retrieval and model training.
# Workforce Selection and Skill Levels
We employ a targeted hiring approach to assemble a skilled and diverse workforce for dataset creation. Our talent pool works on-site and is strategically managed to source beginners and experts, each holding at least a graduate qualification, across various domains. Our workforce includes professionals in research, technology, finance, engineering, creative design, 3D modeling and animation, as well as many tool-specific experts. This ensures that our dataset is crafted by individuals with diverse expertise.
## Dataset Creation
# Curation Rationale
The dataset is curated to meet the objectives of a robust and exhaustive training resource. Guided by key principles and considerations, this process ensures the resultant dataset's quality and relevance.
# Data attributes:

Each record combines the attributes listed below; a minimal record sketch in Python follows the list.
- Tools: List of 54 different software or tools
- Audio Prompt: An auditory cue that corresponds to the actions for the text responses
- Text Prompt: Written instructions guiding or prompting particular activities or tasks
- Video File: A digital file containing visual information, likely used for presenting video content within the dataset
- Action File: Refers to the recording of keypresses, mouse clicks, and mouse movements made by the action recorder. These logs are essential for understanding the sequence and frequency of user inputs, and they explain the prompt actions.
- Output File: The result generated from specific operations or processing within the dataset
- OS: The operating system environment under which the associated tool data were generated, either MAC or WINDOWS
- Bit rate: The rate at which bits are processed or transmitted, often referring to audio or video data compression
- Frequency: The number of occurrences of a repeating event per unit of time, associated with audio signals in this dataset, measured in hertz (Hz)
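Taken together, these attributes describe one multimodal record per tool interaction; a minimal Python sketch of such a record is shown below. The field names and example values are illustrative assumptions, not the dataset's actual schema.

```python
# Illustrative record layout; field names and values are assumptions, not the real schema.
from dataclasses import dataclass

@dataclass
class ActionRecord:
    tool: str          # one of the 54 tools/software
    audio_prompt: str  # path to the auditory cue
    text_prompt: str   # written task instructions
    video_file: str    # screen-capture video of the session
    action_file: str   # log of keypresses, clicks, and mouse movements
    output_file: str   # result produced inside the tool
    os: str            # "MAC" or "WINDOWS"
    bit_rate: int      # audio/video bit rate in bits per second
    frequency: int     # audio sampling frequency in Hz

record = ActionRecord(
    tool="Blender",                                   # hypothetical example values
    audio_prompt="prompts/001.wav",
    text_prompt="Create and export a textured cube",
    video_file="video/001.mp4",
    action_file="actions/001.json",
    output_file="output/001.blend",
    os="WINDOWS",
    bit_rate=128_000,
    frequency=44_100,
)
print(record.tool, record.os, record.frequency)
```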
# Discussion of Biases
This sample dataset undergoes a thorough discussion of biases inherent in the data collection process. Key considerations include:
1. Skillset Dependency: The dataset includes various skill levels, from beginners to experts, adding diversity to captured interactions.
2. Tool-specific Use Cases: Bias may arise from selecting tools based on popularity and industry relevance, potentially favoring widely used tools and their associated use cases.
3. Approach Bias: The dataset may include the most common approach for performing actions in a tool, potentially overlooking alternative methods.
# Other Known Limitations
1. Restricted Transactional Actions: The dataset does not encompass actions related to ordering, payments, or card transactions in tools. This limitation arises from the avoidance of sensitive transactions, requiring subject matter expertise or team member involvement.
2. Trial Version Data: Certain tools in the dataset were recorded using trial versions, while premium versions were available for others. Consequently, this introduces limitations in the functionality of some tools.
# Annotations
# Annotation process
The annotation process for the dataset is managed by the SROT system. This internally developed tool intelligently captures detailed information about on-screen movements, keypresses, and timestamps.
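To make the captured signal concrete, the snippet below parses a purely illustrative JSON-lines action log of the kind described above (timestamps, mouse movements, clicks, keypresses). SROT's actual on-disk format is not documented here, so every field name is an assumption.

```python
# Purely illustrative event log; the field names are assumptions, not SROT's real format.
import json

raw_log = "\n".join([
    '{"t": 0.000, "event": "mouse_move", "x": 412, "y": 180}',
    '{"t": 0.350, "event": "mouse_click", "x": 412, "y": 180, "button": "left"}',
    '{"t": 1.120, "event": "key_press", "key": "ctrl+s"}',
])

events = [json.loads(line) for line in raw_log.splitlines()]
key_presses = [e for e in events if e["event"] == "key_press"]
print(f"{len(events)} events captured, {len(key_presses)} key press(es)")
```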
# Who are the annotators?
In this context, recording is executed by a team of human annotators and the annotation is performed systematically by SROT.
# Additional Information
# Dataset Curators
This dataset is created by the delivery team @SoftAge
| Softage-AI/Action_Recording_sample1 | [
"task_categories:summarization",
"size_categories:n<1K",
"language:en",
"data_annotation",
"data",
"AI",
"training",
"audio and video data annotation",
"region:us"
] | 2023-12-21T10:16:00+00:00 | {"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["summarization"], "tags": ["data_annotation", "data", "AI", "training", "audio and video data annotation"]} | 2023-12-22T03:59:47+00:00 | [] | [
"en"
] | TAGS
#task_categories-summarization #size_categories-n<1K #language-English #data_annotation #data #AI #training #audio and video data annotation #region-us
| # Annotation Techniques Sample Database
Explore SoftAge's diverse annotation techniques through this showcase database, featuring annotated audio, and video data annotation.
The dataset contains 54 queries from 54 different tools/software. It serves as a versatile resource for building tool-specific Assistant LLMs, including, information retrieval, and model training.
# Workforce Selection and Skill Levels
We employ a targeted hiring approach to assemble a skilled and diverse workforce for dataset creation. Our talent pool works on-site and is strategically managed to source beginners and experts with a minimum of graduation across various domains. Our skilled workforce includes professionals in research, technology, finance, engineering, creative designing, 3D modeling and animation, and many tools-specific experts. This ensures that our dataset is crafted by individuals with diverse expertise.
## Dataset Creation
# Curation Rationale
The dataset is curated to meet the objectives of a robust and exhaustive training resource. Guided by key principles and considerations, this process ensures the resultant dataset's quality and relevance.
# Data attributes:
- Tools: List of 54 different software or tools
- Audio Prompt: An auditory cue provided to correspond to the actions for the text responses
- Text Prompt: Written instructions guiding or prompting particular activities or tasks
- Video File: A digital file containing visual information, likely used for presenting video content within the dataset
- Action File: Refer to the recording of keypresses, mouse clicks, and mouse movements made by the action recorder. These logs are essential for understanding the sequence and frequency of user inputs, resulting in the creation of logs to explain the prompt actions.
- Output File: The result generated from specific operations or processing within the dataset
- OS: The operating system environment under which the associated tools data have been generated, either for MAC or WINDOWS
- Bit rate: The rate at which bits are processed or transmitted, often referring to audio or video data compression
- Frequency: The number of occurrences of a repeating event per unit of time, associated with audio signals in this dataset, measured in hertz (Hz)
# Discussion of Biases
This sample dataset undergoes a thorough discussion of biases inherent in the data collection process. Key considerations include:
1. Skillset Dependency: The dataset includes various skill levels, from beginners to experts, adding diversity to captured interactions.
2. Tool-specific Use Cases: Bias may arise from selecting tools based on popularity and industry relevance, potentially favoring widely used tools and their associated use cases.
3. Approach Bias: The dataset may include the most common approach for performing actions in a tool, potentially overlooking alternative methods.
# Other Known Limitations
1. Restricted Transactional Actions: The dataset does not encompass actions related to ordering, payments, or card transactions in tools. This limitation arises from the avoidance of sensitive transactions, requiring subject matter expertise or team member involvement.
2. Trial Version Data: Certain tools in the dataset were recorded using trial versions, while premium versions were available for others. Consequently, this introduces limitations in the functionality of some tools.
# Annotations
# Annotation process
The annotation process for the dataset is managed by the SROT system. This internally developed tool intelligently captures detailed information about on-screen movements, keypresses, and timestamps.
# Who are the annotators?
In this context, recording is executed by a team of human annotators and the annotation is performed systematically by SROT.
# Additional Information
# Dataset Curators
This dataset is created by the delivery team @SoftAge
| [
"# Annotation Techniques Sample Database\n\nExplore SoftAge's diverse annotation techniques through this showcase database, featuring annotated audio, and video data annotation. \n\nThe dataset contains 54 queries from 54 different tools/software. It serves as a versatile resource for building tool-specific Assistant LLMs, including, information retrieval, and model training.",
"# Workforce Selection and Skill Levels \n\nWe employ a targeted hiring approach to assemble a skilled and diverse workforce for dataset creation. Our talent pool works on-site and is strategically managed to source beginners and experts with a minimum of graduation across various domains. Our skilled workforce includes professionals in research, technology, finance, engineering, creative designing, 3D modeling and animation, and many tools-specific experts. This ensures that our dataset is crafted by individuals with diverse expertise.",
"## Dataset Creation",
"# Curation Rationale \n\nThe dataset is curated to meet the objectives of a robust and exhaustive training resource. Guided by key principles and considerations, this process ensures the resultant dataset's quality and relevance.",
"# Data attributes:\n\n- Tools: List of 54 different software or tools \n- Audio Prompt: An auditory cue provided to correspond to the actions for the text responses\n- Text Prompt: Written instructions guiding or prompting particular activities or tasks\n- Video File: A digital file containing visual information, likely used for presenting video content within the dataset\n- Action File: Refer to the recording of keypresses, mouse clicks, and mouse movements made by the action recorder. These logs are essential for understanding the sequence and frequency of user inputs, resulting in the creation of logs to explain the prompt actions. \n- Output File: The result generated from specific operations or processing within the dataset\n- OS: The operating system environment under which the associated tools data have been generated, either for MAC or WINDOWS \n- Bit rate: The rate at which bits are processed or transmitted, often referring to audio or video data compression\n- Frequency:\tThe number of occurrences of a repeating event per unit of time, associated with audio signals in this dataset, measured in hertz (Hz)",
"# Discussion of Biases \n\nThis sample dataset undergoes a thorough discussion of biases inherent in the data collection process. Key considerations include: \n\n1. Skillset Dependency: The dataset includes various skill levels, from beginners to experts, adding diversity to captured interactions. \n\n2. Tool-specific Use Cases: Bias may arise from selecting tools based on popularity and industry relevance, potentially favoring widely used tools and their associated use cases. \n\n3. Approach Bias: The dataset may include the most common approach for performing actions in a tool, potentially overlooking alternative methods.",
"# Other Known Limitations \n\n\n1. Restricted Transactional Actions: The dataset does not encompass actions related to ordering, payments, or card transactions in tools. This limitation arises from the avoidance of sensitive transactions, requiring subject matter expertise or team member involvement. \n\n2. Trial Version Data: Certain tools in the dataset were recorded using trial versions, while premium versions were available for others. Consequently, this introduces limitations in the functionality of some tools.",
"# Annotations",
"# Annotation process \n\nThe annotation process for the dataset is managed by the SROT system. This internally developed tool intelligently captures detailed information about on-screen movements, keypresses, and timestamps.",
"# Who are the annotators? \n\nIn this context, recording is executed by a team of human annotators and the annotation is performed systematically by SROT.",
"# Additional Information",
"# Dataset Curators \n\nThis dataset is created by the delivery team @SoftAge"
] | [
"TAGS\n#task_categories-summarization #size_categories-n<1K #language-English #data_annotation #data #AI #training #audio and video data annotation #region-us \n",
"# Annotation Techniques Sample Database\n\nExplore SoftAge's diverse annotation techniques through this showcase database, featuring annotated audio, and video data annotation. \n\nThe dataset contains 54 queries from 54 different tools/software. It serves as a versatile resource for building tool-specific Assistant LLMs, including, information retrieval, and model training.",
"# Workforce Selection and Skill Levels \n\nWe employ a targeted hiring approach to assemble a skilled and diverse workforce for dataset creation. Our talent pool works on-site and is strategically managed to source beginners and experts with a minimum of graduation across various domains. Our skilled workforce includes professionals in research, technology, finance, engineering, creative designing, 3D modeling and animation, and many tools-specific experts. This ensures that our dataset is crafted by individuals with diverse expertise.",
"## Dataset Creation",
"# Curation Rationale \n\nThe dataset is curated to meet the objectives of a robust and exhaustive training resource. Guided by key principles and considerations, this process ensures the resultant dataset's quality and relevance.",
"# Data attributes:\n\n- Tools: List of 54 different software or tools \n- Audio Prompt: An auditory cue provided to correspond to the actions for the text responses\n- Text Prompt: Written instructions guiding or prompting particular activities or tasks\n- Video File: A digital file containing visual information, likely used for presenting video content within the dataset\n- Action File: Refer to the recording of keypresses, mouse clicks, and mouse movements made by the action recorder. These logs are essential for understanding the sequence and frequency of user inputs, resulting in the creation of logs to explain the prompt actions. \n- Output File: The result generated from specific operations or processing within the dataset\n- OS: The operating system environment under which the associated tools data have been generated, either for MAC or WINDOWS \n- Bit rate: The rate at which bits are processed or transmitted, often referring to audio or video data compression\n- Frequency:\tThe number of occurrences of a repeating event per unit of time, associated with audio signals in this dataset, measured in hertz (Hz)",
"# Discussion of Biases \n\nThis sample dataset undergoes a thorough discussion of biases inherent in the data collection process. Key considerations include: \n\n1. Skillset Dependency: The dataset includes various skill levels, from beginners to experts, adding diversity to captured interactions. \n\n2. Tool-specific Use Cases: Bias may arise from selecting tools based on popularity and industry relevance, potentially favoring widely used tools and their associated use cases. \n\n3. Approach Bias: The dataset may include the most common approach for performing actions in a tool, potentially overlooking alternative methods.",
"# Other Known Limitations \n\n\n1. Restricted Transactional Actions: The dataset does not encompass actions related to ordering, payments, or card transactions in tools. This limitation arises from the avoidance of sensitive transactions, requiring subject matter expertise or team member involvement. \n\n2. Trial Version Data: Certain tools in the dataset were recorded using trial versions, while premium versions were available for others. Consequently, this introduces limitations in the functionality of some tools.",
"# Annotations",
"# Annotation process \n\nThe annotation process for the dataset is managed by the SROT system. This internally developed tool intelligently captures detailed information about on-screen movements, keypresses, and timestamps.",
"# Who are the annotators? \n\nIn this context, recording is executed by a team of human annotators and the annotation is performed systematically by SROT.",
"# Additional Information",
"# Dataset Curators \n\nThis dataset is created by the delivery team @SoftAge"
] | [
49,
82,
112,
5,
52,
252,
135,
113,
4,
47,
37,
5,
18
] | [
"passage: TAGS\n#task_categories-summarization #size_categories-n<1K #language-English #data_annotation #data #AI #training #audio and video data annotation #region-us \n# Annotation Techniques Sample Database\n\nExplore SoftAge's diverse annotation techniques through this showcase database, featuring annotated audio, and video data annotation. \n\nThe dataset contains 54 queries from 54 different tools/software. It serves as a versatile resource for building tool-specific Assistant LLMs, including, information retrieval, and model training.# Workforce Selection and Skill Levels \n\nWe employ a targeted hiring approach to assemble a skilled and diverse workforce for dataset creation. Our talent pool works on-site and is strategically managed to source beginners and experts with a minimum of graduation across various domains. Our skilled workforce includes professionals in research, technology, finance, engineering, creative designing, 3D modeling and animation, and many tools-specific experts. This ensures that our dataset is crafted by individuals with diverse expertise.## Dataset Creation# Curation Rationale \n\nThe dataset is curated to meet the objectives of a robust and exhaustive training resource. Guided by key principles and considerations, this process ensures the resultant dataset's quality and relevance."
] |
36007dc535806c4022c9981e9bfb7695cb178e9a | Custom Colorbox Dataset | nirbhayfaaya/ColorboxDataset | [
"region:us"
] | 2023-12-21T10:16:56+00:00 | {} | 2023-12-27T09:27:07+00:00 | [] | [] | TAGS
#region-us
| Custom Colorbox Dataset | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
988c2d8537e225357b92c0efbbbb319b374e4a81 | The file names are the actual labels. | Hassam-Nazir/Companies_Logo | [
"size_categories:1K<n<10K",
"art",
"region:us"
] | 2023-12-21T10:31:07+00:00 | {"size_categories": ["1K<n<10K"], "pretty_name": "Logo", "tags": ["art"]} | 2023-12-21T15:08:15+00:00 | [] | [] | TAGS
#size_categories-1K<n<10K #art #region-us
| The file names are the actual labels. | [] | [
"TAGS\n#size_categories-1K<n<10K #art #region-us \n"
] | [
20
] | [
"passage: TAGS\n#size_categories-1K<n<10K #art #region-us \n"
] |
df304df582d09ae01a58b00adcf51e86a9b21b91 | <p>Hearing problems are rampant across people of all ages. Some experts claim escalation in pollutants and nutrient deficiency are the leading cause of auditory problems. However, new research suggests that a particular dormant virus in the human body is the culprit behind most hearing issues.</p>
<p>Varicella-zoster is a virus entering the human body after a chickenpox attack. A few decades ago, the disease was fatal to patients. Unfortunately, even after getting the precise vaccine, the virus stays within the body, waiting for the correct chance to wreck health problems.</p>
<h2><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>{Sonuvita - Official Website -- Order Now}</strong></a></span></h2>
<h2><strong>➡️<span style="color: red;">● For Order Official Website - <a style="color: red;" href="https://www.globalfitnessmart.com/get-sonuvita">https://www.globalfitnessmart.com/get-sonuvita</a></span></strong><br /><strong>➡️<span style="color: #ff6600;">●<span style="color: maroon;"> Item Name: — <a style="color: maroon;" href="https://www.globalfitnessmart.com/get-sonuvita">{Sonuvita} {Sonuvita Hearing Support Formula}</a></span></span></strong><br /><strong>➡️<span style="color: #99cc00;">● Ingredients: — All Natural</span></strong><br /><strong>➡️<span style="color: #ffcc00;">● Incidental Effects: — NA</span></strong><br /><strong>➡️<span style="color: blue;">● Accessibility: — <a style="color: blue;" href="https://www.globalfitnessmart.com/get-sonuvita">Online</a></span></strong></h2>
<h2><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span><br /><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span><br /><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>✅HUGE DISCOUNT ! HURRY UP! ORDER NOW!✅</strong></a></span></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-sonuvita"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiAOjcvY2wUP0EgzEwIsY0UMsuahuDr6rV59mIVpcyxcdDRLnErnsIjql28mvaCPo3Wnk5xGM9cXaxkWSqSU9-Ps3tXyXndZ3f1y-iP8z48CjhgY7XTMGQjwJwlKzx4xHOdMolVbRockj0Pxg81qgSxBC9k_QaA3yQu869jKV5pDQ1MObMUpjqC2WFhu4Of/w640-h356/6-bottles04.png" alt="" width="640" height="356" border="0" data-original-height="600" data-original-width="1078" /></a></div>
<p>Deficiency in certain nutrients, pollution, inflammations, and low immunity can awaken the varicella-zoster virus, making it attack the nerves in various body parts. Without proper diagnosis and treatment, the dormant virus causes 100% hearing loss, tinnitus, and mental problems.</p>
<p><a href="https://sonuvita.clubeo.com/calendar/2023/12/23/sonuvita-reviews-is-it-really-work-scam-or-trusted"><strong>Sonuvita</strong></a> is a revolutionary dietary formulation targeting the root of hearing problems. It provides the auditory canal with the correct nutrients to restore hearing and combat tinnitus. Below is a <a href="https://sonuvita.clubeo.com/page/sonuvita-reviews-all-you-need-to-know-about-sonuvita-hearing-support-formula-capsules-offers.html"><strong>Sonuvita</strong></a> review explaining how the supplement works, its ingredients, benefits, pros, cons, and pricing.</p>
<h2><strong>What Is <a href="https://sonuvita.clubeo.com/page/sonuvita-viral-scam-or-legit-is-it-work-or-not.html">Sonuvita</a>?</strong></h2>
<p><a href="https://sonuvita.clubeo.com/"><strong> Sonuvita</strong></a> is an innovative dietary supplement designed to improve the auditory canal and immune system. The creator, Dr. Thomas Hernett, claims it uses a unique approach to combat hearing and tinnitus problems from the source. It nourishes the body with quality research-based nutrients to fight the varicella-zoster virus, enhance nerve health, and fortify immunity.</p>
<p>According to the <a href="https://myhealthfitnessmart.blogspot.com/2023/12/sonuvita-reviews-is-it-really-work-scam.html"><strong>Sonuvita</strong></a> maker, the supplement is easy to use and free from harmful ingredients. It is manufactured in the US using natural and pure components from reputable sources. The supplement is perfect for anyone with partial hearing loss or other problems in the auditory canal.</p>
<p>You can acquire <a href="https://groups.google.com/g/sonuvita-hearing-support-formula/c/bH790MAXNj8"><strong>Sonuvita</strong></a> supplements only via the official website. The manufacturer offers discounts and other offers for bulk orders. The company takes a few days to deliver <a href="https://sites.google.com/view/sonuvita-reviews-us/home"><strong>Sonuvita</strong></a> in the US.</p>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>(EXCLUSIVE OFFER)Click Here : "Sonuvita Hearing Support Formula USA"Official Website!</strong></a></span></h2>
<h2><strong>How Does <a href="https://lookerstudio.google.com/u/0/reporting/f7b0d54c-2c26-4120-88e5-446149845c58/page/D7ulD">Sonuvita</a> Work?</strong></h2>
<p><a href="https://colab.research.google.com/drive/1TZyIeGOZFH2kPltP6jtQCeEKM8z2J21A"><strong>Sonuvita</strong></a> formulator argues the supplement uses a unique approach to improve hearing and overall auditory health. The formulation can restore 20/20 hearing regardless of your age. Users must use the ear health supplement consistently to get the desired results.</p>
<p><strong>Step One: Nourish the Auditory System</strong></p>
<p><a href="https://sonuvita-hearing-support-formula.jimdosite.com/"><strong>Sonuvita</strong></a> treats the varicella-zoster virus, which lays dormant in areas of the body, waiting for the opportune time to attack. The chickenpox virus attacks the immune system and increases unhealthy inflammation. It inflames the nerves connecting the ear to the brain, causing miscommunication. <a href="https://sonuvita-review.company.site/"><strong>Sonuvita</strong></a> is rich in immune boosters that battle the varicella-zoster virus, thus eliminating unwanted inflammation.</p>
<p><strong>Step Two: Eliminate Varicella Zoster Virus</strong></p>
<p>It can take a few days or weeks for <a href="https://www.scoop.it/topic/sonuvita-by-sonuvita-official"><strong>Sonuvita</strong></a> to combat the varicella Zoster virus. Customers may experience better hearing abilities and improved sleep. <a href="https://www.scoop.it/topic/sonuvita-hearing-support-formula"><strong>Sonuvita</strong></a> improves cellular health, augmenting overall mental health. It can protect the users from memory issues and chronic cognitive fatigue.</p>
<p><strong>Step Three: Repair and Shield the Body</strong></p>
<p>Consumers must use <a href="https://gamma.app/docs/Sonuvita-USA-Premium-Sonuvita-Hearing-Support-Formula-Official--8837p6c0wbpeno4?mode=doc"><strong>Sonuvita</strong></a> for at least three months to amplify their immune system. The supplement encourages natural healing and repair of damaged nerves and cells in the auditory canal. It flushes out unwanted toxins and promotes healthy blood flow in the ear. In addition, <a href="https://sonuvita-review.webflow.io/"><strong>Sonuvita</strong></a> strengthens the disease-fighting mechanism, diminishing the chances of developing unhealthy inflammation.</p>
<p><strong>Step Four: Strengthen Overall Health</strong></p>
<p>Continuous usage of <a href="https://soundcloud.com/sonuvitaofficial/sonuvita-hearing-support-formula-viralscam-or-legit-is-it-work-or-not"><strong>Sonuvita</strong></a> for over three months can boost your physical and mental wellness. It soothes the nerves, reduces anxiety, and rejuvenates overall health. <strong><a href="https://haitiliberte.com/advert/sonuvita-hearing-support-formula-reviews-is-it-really-work/">Sonuvita</a></strong> users may experience better moods and energy levels without changing their routine or nutrition.</p>
<p><a href="https://sonuvita.bandcamp.com/track/sonuvita-hearing-support-formula-reviews-is-it-really-work"><strong>Sonuvita</strong></a> calms the varicella-zoster virus and amplifies the immune system. It eliminates inflammation in the nerves, calms moods, and manages stress levels. Using the ear support formulation as suggested can improve physical and brain health.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-sonuvita"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEihD32upVL7OeoDAETUzMvk_PhxxYyqJoy1UDzUtVUrYoDIC0Cx99AopX1_okWRlpAdvLQmOO28JWJwF4txLP-sbkWTthcUh8zeRYUHGcST3QdU4UseIJzNyueh2ILS_4dyNM10FvMSEecVaOYxBwyd95_IbC5AZx7Q4n9a2v8OX4Y7TnXRxklUBmjRUHqj/w640-h218/Sonuvita%2001.jpg" alt="" width="640" height="218" border="0" data-original-height="131" data-original-width="385" /></a></div>
<h2><strong><a href="https://community.thebatraanumerology.com/post/sonuvita-hearing-support-formula-reviews---is-it-really-work-6583f1345ad4977a18583bce">Sonuvita</a> Ingredients</strong></h2>
<p>Sonuvita ingredients are gained from clean sources and backed by various research. The blend of ear health nutrients restores hearing and enhances overall health. Below is a detailed analysis of the effectiveness of each active <a href="https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77"><strong>Sonuvita</strong> </a>in fixing ear issues.</p>
<p><strong>Rhodiola Rosea</strong></p>
<p><a href="https://wandering.flarum.cloud/d/33284-sonuvita-hearing-support-formula-reviews-is-it-really-work"><strong>Sonuvita</strong></a> maker cites various studies proving that the ingredient can improve ear health. A publication in the Journal of BMC Complementary and Alternative Medicine indicates that Rhodiola is a powerful antioxidant that can benefit the auditory system. It works by eliminating the oxidative stress in the inner areas of the ear, including the cochlea. In research, the mice exposed to noise-induced hearing issues experienced remarkable improvement in hearing after using Rhodiola Rosea.</p>
<p>Other clinical research suggests that Rhodiola Rosea works with other <a href="https://leetcode.com/discuss/interview-question/4434555/Sonuvita-Hearing-Support-Formula-Reviews-Is-It-Really-Work"><strong>Sonuvita</strong></a> ingredients to prevent or diminish age-related decline in hearing capabilities. It battles oxidative stress, improves cellular health, and normalizes the functioning of the auditory canal.</p>
<p>Rhodiola Rosea can also benefit cognitive health. The herb is traditionally used to calm moods, support sleep, and prevent symptoms of depression. <a href="https://haitiliberte.com/advert/sonuvita-viralscam-or-legit-is-it-work-or-not/"><strong>Sonuvita</strong></a>’s creator argues it can enhance mental performance and reduce age-related memory problems.</p>
<p><strong>Skullcap</strong></p>
<p>The native North American herb is common in modern and traditional medicine. Skullcap has blue or purple flowers and is commonly used to brew herbal tonics for better physical and mental health.</p>
<p><a href="https://community.thebatraanumerology.com/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f03f8906ef28b49d9ea9"><strong>Sonuvita</strong></a> maker argues it has antioxidant properties and can protect delicate cells in the ears from varicella zoster virus. It lowers the free radicals, lessening unhealthy auditory nerve inflammations. Additionally, the skullcap can regulate nutrient and oxygen uptake in the inner parts of the ear. It protects and strengthens the delicate arteries and capillaries in the ears, augmenting blood circulation.</p>
<p><strong>Baicalein</strong></p>
<p>Baicalein is a unique component proven to improve ear and brain health. According to the <a href="https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77"><strong>Sonuvita</strong></a> maker, it primarily protects the inner part of the ear responsible for balance. It also enhances the fluid in the auditory channel, encouraging healthy signal transmission between the brain and ears.</p>
<p>In a clinical investigation, baicalein was found effective in augmenting brain health in mice. It boosts learning abilities and spatial memory in mice. Scientists state that baicalein impedes oxidative stress and inflammation, thus augmenting the immune system. <a href="https://wandering.flarum.cloud/d/33283-sonuvita-viralscam-or-legit-is-it-work-or-not"><strong>Sonuvita</strong></a> maker states baicalein works with other nutrients to restore the cochlear functions and battle varicella-zoster virus.</p>
<p><strong>Griffonia Simplicifolia</strong></p>
<p>Griffonia Simplicifolia promotes healthy ear blood flow, thus augmenting cochlear functions. It encourages nutrients and oxygen uptake, improving the health of the ear cells and neurons.</p>
<p>Studies indicate that Griffonia is rich in 5-HTP molecules and can, therefore, regulate moods. It encourages the secretion of serotonin, augments moods, and prevents oxidative stress. The happiness or mood hormone can also improve blood circulation in the brain and ears. Serotonin binds to receptors on the blood vessel, promoting vasodilation and blood flow.</p>
<p><strong>L-Theanine</strong></p>
<p>Theanine can lower stress levels. The amino acid encourages relaxation and alleviates stress. It may improve cochlear functions and immune response.</p>
<p>A 2007 study on humans shows that Theanine increases the alpha brainwave’s activities, boosting mental alertness and relaxation. The amino acid may also expand metabolic rates, energy levels, sleep, and learning.</p>
<p><strong>Mucuna Pruriens</strong></p>
<p>Mucuna can encourage healthy moods and relaxation. It regulates the production of dopamine, thus boosting various cognitive abilities. The neurotransmitter can promote blood flow in the ear. Also, mucuna pruriens supports healthy inflammations and can battle oxidative stress.</p>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>SPECIAL PROMO[Limited Discount]: "Sonuvita Hearing Support Formula USA"Official Website!</strong></a></span></h2>
<h2><strong><a href="https://leetcode.com/discuss/interview-question/4434539/Sonuvita-Viral%3A*SCAM-or-LEGIT*-Is-It-Work-or-Not">Sonuvita</a> Benefits</strong></h2>
<ul style="list-style-type: circle;">
<li>It can improve and restore hearing in adults</li>
<li><a href="https://www.tipga.com/e/6583f0ac3286fe36c87c38c7"><strong>Sonuvita</strong></a> claims to boost cochlear functions and battle tinnitus</li>
<li>It can combat inflamed nerves and oxidative stress</li>
<li><a href="https://devfolio.co/@sonuvita"><strong>Sonuvita</strong></a> is rich in nootropic ingredients designed to mend cognitive health</li>
<li>It can lower stress</li>
<li><a href="https://pdfhost.io/v/xa~.7DtHi_Sonuvita_Hearing_Support_Formula_ViralSCAM_or_LEGIT_Is_It_Work_or_Not"><strong>Sonuvita</strong></a> strengthens the immune response, naturally allowing your body to fight varicella-zoster virus and other pathogens.</li>
</ul>
<h2><strong>Pros and Cons of <a href="https://sonuvita.bandcamp.com/track/sonuvita-hearing-support-formula-reviews-is-it-really-work">Sonuvita</a> Hearing Support Supplement</strong></h2>
<h2><strong>Pros</strong></h2>
<ul style="list-style-type: circle;">
<li><a href="https://soundcloud.com/sonuvitaofficial/sonuvita-hearing-support-formula-viralscam-or-legit-is-it-work-or-not"><strong>Sonuvita</strong></a> is a non-invasive and affordable solution to hearing issues</li>
<li>All ingredients in the ear support formula are 100% natural</li>
<li>Sonuvita can purportedly fix partial and complete hearing loss</li>
<li>It can amplify the immune system by over 300%, hence preventing the varicella-zoster virus from attacking the body</li>
<li><a href="https://gamma.app/docs/Sonuvita-USA-Premium-Sonuvita-Hearing-Support-Formula-Official--8837p6c0wbpeno4?mode=doc"><strong>Sonuvita</strong></a> is simple to use and free from side effects</li>
<li><a href="https://sonuvita-review.webflow.io/"><strong>Sonuvita</strong></a> is formulated by an audiologist and manufactured in an FDA-certified and GMP-approved facility.</li>
</ul>
<h2><strong>Cons</strong></h2>
<ul style="list-style-type: circle;">
<li><a href="https://www.scoop.it/topic/sonuvita-hearing-support-formula"><strong>Sonuvita</strong></a> is only for adults</li>
<li>The ear support formula is unsuitable for people with existing medical issues or anyone pregnant and nursing.</li>
<li>The results of using <a href="https://hackmd.io/@sonuvitaofficial/sonuvita-review"><strong>Sonuvita</strong></a> varies</li>
<li>You can buy <a href="https://www.dibiz.com/sonuvitaofficial"><strong>Sonuvita</strong></a> only through the official website</li>
</ul>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>SPECIAL PROMO: Get Sonuvita Hearing Support Formula at the Lowest Discounted Price Online</strong></a></span></h2>
<h2><strong>Pricing</strong></h2>
<p>You can buy <a href="https://www.scoop.it/topic/sonuvita-by-sonuvita-official"><strong>Sonuvita</strong></a> only through the official website. There are three package options available to cater to your needs and budget. All <a href="https://sonuvita-hearing-support-formula.jimdosite.com/"><strong>Sonuvita</strong></a> purchases are eligible for a 60-day money-back guarantee.</p>
<ul style="list-style-type: circle;">
<li><strong>One Bottle (30-Day Supply): $69 + Shipping Fee</strong></li>
<li><strong>Three Bottles (90-Day Supply): $59.00 per bottle + Free Shipping ($177.00 Total)</strong></li>
<li><strong>Six Bottles (180-Day Supply): $49.00 per bottle + Free Shipping ($294.00 Total)</strong></li>
</ul>
<p>If you are unsatisfied with <a href="https://www.styleforum.net/threads/sonuvita-hearing-support-formula-viral-scam-or-legit-is-it-work-or-not.738291/"><strong>Sonuvita</strong></a> within 60 days of the shipping date, fill out and return the refund form that came with your order. Questions about the product can be directed to <a href="https://forum.ljubavni-oglasnik.net/threads/sonuvita-hearing-support-formula-viral-scam-or-legit-is-it-work-or-not.4901/"><strong>Sonuvita</strong></a> customer service:</p>
<h2><strong>Conclusion</strong></h2>
<p><a href="https://community.thermaltake.com/index.php?/topic/372498-sonuvita-hearing-support-formula-reviews-%E2%80%93-is-it-really-work/"><strong>Sonuvita</strong></a> is marketed as a game-changer ear health support supplement. It contains multiple science-approved ingredients to manage hearing issues from the source. The creator claims using Sonuvita can “progress your communication skills and help you enjoy your favorite music.”</p>
<p>The formulation battles varicella-zoster virus and other pathogens, causing inflammation of the cochlear nerves and nerves. You can buy the <a href="https://www.styleforum.net/threads/sonuvita-hearing-support-formula-reviews-%E2%80%93-is-it-really-work.738289/"><strong>Sonuvita</strong></a> ear support formula through the official website at discounted rates.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="https://www.globalfitnessmart.com/get-sonuvita"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjfC6R03Sf6pL7hBxP9EOujCZR_i4H9S9DTr6JuuzenPPeH-uQHDdwZfKDp1Yhb24i5NwINPxoshogHYDnuAgh7o8MMfFY94dL237ev9gLat464Ekge7M4QVLLIcQbI37A_MA5IOuaJr5cr5gMb4mt6nZQ9slHA-E4s1Kznb9cRy0QPpPGXEbHs6enyqKS2/w640-h568/Sonuvita.%20price.jpg" alt="" width="640" height="568" border="0" data-original-height="498" data-original-width="561" /></a></div>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="https://www.globalfitnessmart.com/get-sonuvita"><strong>Exclusive Details: *Sonuvita Hearing Support Formula* Read More Details on Official Website USA!</strong></a></span></h2>
<h2><strong><span style="color: #ff6600;">#READ MORE</span></strong></h2>
<p><strong><span style="color: #ff6600;"><a href="https://myhealthfitnessmart.blogspot.com/2023/12/sonuvita-reviews-is-it-really-work-scam.html">https://myhealthfitnessmart.blogspot.com/2023/12/sonuvita-reviews-is-it-really-work-scam.html</a></span></strong></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita.clubeo.com/calendar/2023/12/23/sonuvita-reviews-is-it-really-work-scam-or-trusted">https://sonuvita.clubeo.com/calendar/2023/12/23/sonuvita-reviews-is-it-really-work-scam-or-trusted</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita.clubeo.com/page/sonuvita-viral-scam-or-legit-is-it-work-or-not.html">https://sonuvita.clubeo.com/page/sonuvita-viral-scam-or-legit-is-it-work-or-not.html</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita.clubeo.com/page/sonuvita-reviews-all-you-need-to-know-about-sonuvita-hearing-support-formula-capsules-offers.html">https://sonuvita.clubeo.com/page/sonuvita-reviews-all-you-need-to-know-about-sonuvita-hearing-support-formula-capsules-offers.html</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita.clubeo.com/">https://sonuvita.clubeo.com/</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://groups.google.com/g/sonuvita-hearing-support-formula/c/bH790MAXNj8">https://groups.google.com/g/sonuvita-hearing-support-formula/c/bH790MAXNj8</a><br /></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sites.google.com/view/sonuvita-reviews-us/home">https://sites.google.com/view/sonuvita-reviews-us/home</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://lookerstudio.google.com/u/0/reporting/f7b0d54c-2c26-4120-88e5-446149845c58/page/D7ulD">https://lookerstudio.google.com/u/0/reporting/f7b0d54c-2c26-4120-88e5-446149845c58/page/D7ulD</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita-review.company.site/">https://sonuvita-review.company.site/</a><br /></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita-hearing-support-formula.jimdosite.com/">https://sonuvita-hearing-support-formula.jimdosite.com/</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://www.scoop.it/topic/sonuvita-by-sonuvita-official">https://www.scoop.it/topic/sonuvita-by-sonuvita-official</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita-review.webflow.io/">https://sonuvita-review.webflow.io/</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://gamma.app/docs/Sonuvita-USA-Premium-Sonuvita-Hearing-Support-Formula-Official--8837p6c0wbpeno4?mode=doc">https://gamma.app/docs/Sonuvita-USA-Premium-Sonuvita-Hearing-Support-Formula-Official--8837p6c0wbpeno4?mode=doc</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://sonuvita.bandcamp.com/track/sonuvita-hearing-support-formula-reviews-is-it-really-work">https://sonuvita.bandcamp.com/track/sonuvita-hearing-support-formula-reviews-is-it-really-work</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://soundcloud.com/sonuvitaofficial/sonuvita-hearing-support-formula-viralscam-or-legit-is-it-work-or-not">https://soundcloud.com/sonuvitaofficial/sonuvita-hearing-support-formula-viralscam-or-legit-is-it-work-or-not</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://pdfhost.io/v/xa~.7DtHi_Sonuvita_Hearing_Support_Formula_ViralSCAM_or_LEGIT_Is_It_Work_or_Not">https://pdfhost.io/v/xa~.7DtHi_Sonuvita_Hearing_Support_Formula_ViralSCAM_or_LEGIT_Is_It_Work_or_Not</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://devfolio.co/@sonuvita">https://devfolio.co/@sonuvita</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://www.tipga.com/e/6583f0ac3286fe36c87c38c7">https://www.tipga.com/e/6583f0ac3286fe36c87c38c7</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://leetcode.com/discuss/interview-question/4434539/Sonuvita-Viral%3A*SCAM-or-LEGIT*-Is-It-Work-or-Not">https://leetcode.com/discuss/interview-question/4434539/Sonuvita-Viral%3A*SCAM-or-LEGIT*-Is-It-Work-or-Not</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://wandering.flarum.cloud/d/33283-sonuvita-viralscam-or-legit-is-it-work-or-not">https://wandering.flarum.cloud/d/33283-sonuvita-viralscam-or-legit-is-it-work-or-not</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77">https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://community.thebatraanumerology.com/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f03f8906ef28b49d9ea9">https://community.thebatraanumerology.com/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f03f8906ef28b49d9ea9</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://haitiliberte.com/advert/sonuvita-viralscam-or-legit-is-it-work-or-not/">https://haitiliberte.com/advert/sonuvita-viralscam-or-legit-is-it-work-or-not/</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://leetcode.com/discuss/interview-question/4434555/Sonuvita-Hearing-Support-Formula-Reviews-Is-It-Really-Work">https://leetcode.com/discuss/interview-question/4434555/Sonuvita-Hearing-Support-Formula-Reviews-Is-It-Really-Work</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77">https://followme.tribe.so/post/sonuvita-viral-scam-or-legit-is-it-work-or-not-6583f034d86bee5e15d58c77</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://community.thebatraanumerology.com/post/sonuvita-hearing-support-formula-reviews---is-it-really-work-6583f1345ad4977a18583bce">https://community.thebatraanumerology.com/post/sonuvita-hearing-support-formula-reviews---is-it-really-work-6583f1345ad4977a18583bce</a></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="https://haitiliberte.com/advert/sonuvita-hearing-support-formula-reviews-is-it-really-work/">https://haitiliberte.com/advert/sonuvita-hearing-support-formula-reviews-is-it-really-work/</a></strong></span></p> | sonuvitaofficial/sonuvita-review | [
"region:us"
] | 2023-12-21T10:38:04+00:00 | {} | 2023-12-21T10:38:16+00:00 | [] | [] | TAGS
#region-us
| <p>Hearing problems are rampant across people of all ages. Some experts claim escalation in pollutants and nutrient deficiency are the leading cause of auditory problems. However, new research suggests that a particular dormant virus in the human body is the culprit behind most hearing issues.</p>
<p>Varicella-zoster is a virus entering the human body after a chickenpox attack. A few decades ago, the disease was fatal to patients. Unfortunately, even after getting the precise vaccine, the virus stays within the body, waiting for the correct chance to wreck health problems.</p>
<h2><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="URL - Official Website -- Order Now}</strong></a></span></h2>
<h2><strong>️<span style="color: red;">● For Order Official Website - <a style="color: red;" href="URL/URL /><strong>️<span style="color: #ff6600;">●<span style="color: maroon;"> Item Name: — <a style="color: maroon;" href="URL {Sonuvita Hearing Support Formula}</a></span></span></strong><br /><strong>️<span style="color: #99cc00;">● Ingredients: — All Natural</span></strong><br /><strong>️<span style="color: #ffcc00;">● Incidental Effects: — NA</span></strong><br /><strong>️<span style="color: blue;">● Accessibility: — <a style="color: blue;" href="URL
<h2><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a></span><br /><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a></span><br /><span style="background-color: #ffcc00; color: blue;"><a style="background-color: #ffcc00; color: blue;" href="URL DISCOUNT ! HURRY UP! ORDER NOW!</strong></a></span></h2>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="356" border="0" data-original-height="600" data-original-width="1078" /></a></div>
<p>Deficiency in certain nutrients, pollution, inflammations, and low immunity can awaken the varicella-zoster virus, making it attack the nerves in various body parts. Without proper diagnosis and treatment, the dormant virus causes 100% hearing loss, tinnitus, and mental problems.</p>
<p><a href="URL is a revolutionary dietary formulation targeting the root of hearing problems. It provides the auditory canal with the correct nutrients to restore hearing and combat tinnitus. Below is a <a href="URL review explaining how the supplement works, its ingredients, benefits, pros, cons, and pricing.</p>
<h2><strong>What Is <a href="URL
<p><a href="URL is an innovative dietary supplement designed to improve the auditory canal and immune system. The creator, Dr. Thomas Hernett, claims it uses a unique approach to combat hearing and tinnitus problems from the source. It nourishes the body with quality research-based nutrients to fight the varicella-zoster virus, enhance nerve health, and fortify immunity.</p>
<p>According to the <a href="URL maker, the supplement is easy to use and free from harmful ingredients. It is manufactured in the US using natural and pure components from reputable sources. The supplement is perfect for anyone with partial hearing loss or other problems in the auditory canal.</p>
<p>You can acquire <a href="URL supplements only via the official website. The manufacturer offers discounts and other offers for bulk orders. The company takes a few days to deliver <a href="URL in the US.</p>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="URL OFFER)Click Here : "Sonuvita Hearing Support Formula USA"Official Website!</strong></a></span></h2>
<h2><strong>How Does <a href="URL Work?</strong></h2>
<p><a href="URL formulator argues the supplement uses a unique approach to improve hearing and overall auditory health. The formulation can restore 20/20 hearing regardless of your age. Users must use the ear health supplement consistently to get the desired results.</p>
<p><strong>Step One: Nourish the Auditory System</strong></p>
<p><a href="URL treats the varicella-zoster virus, which lays dormant in areas of the body, waiting for the opportune time to attack. The chickenpox virus attacks the immune system and increases unhealthy inflammation. It inflames the nerves connecting the ear to the brain, causing miscommunication. <a href="URL is rich in immune boosters that battle the varicella-zoster virus, thus eliminating unwanted inflammation.</p>
<p><strong>Step Two: Eliminate Varicella Zoster Virus</strong></p>
<p>It can take a few days or weeks for <a href="URL to combat the varicella Zoster virus. Customers may experience better hearing abilities and improved sleep. <a href="URL improves cellular health, augmenting overall mental health. It can protect the users from memory issues and chronic cognitive fatigue.</p>
<p><strong>Step Three: Repair and Shield the Body</strong></p>
<p>Consumers must use <a href="URL for at least three months to amplify their immune system. The supplement encourages natural healing and repair of damaged nerves and cells in the auditory canal. It flushes out unwanted toxins and promotes healthy blood flow in the ear. In addition, <a href="URL strengthens the disease-fighting mechanism, diminishing the chances of developing unhealthy inflammation.</p>
<p><strong>Step Four: Strengthen Overall Health</strong></p>
<p>Continuous usage of <a href="URL for over three months can boost your physical and mental wellness. It soothes the nerves, reduces anxiety, and rejuvenates overall health. <strong><a href="URL users may experience better moods and energy levels without changing their routine or nutrition.</p>
<p><a href="URL calms the varicella-zoster virus and amplifies the immune system. It eliminates inflammation in the nerves, calms moods, and manages stress levels. Using the ear support formulation as suggested can improve physical and brain health.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="218" border="0" data-original-height="131" data-original-width="385" /></a></div>
<h2><strong><a href="URL Ingredients</strong></h2>
<p>Sonuvita ingredients are gained from clean sources and backed by various research. The blend of ear health nutrients restores hearing and enhances overall health. Below is a detailed analysis of the effectiveness of each active <a href="URL </a>in fixing ear issues.</p>
<p><strong>Rhodiola Rosea</strong></p>
<p><a href="URL maker cites various studies proving that the ingredient can improve ear health. A publication in the Journal of BMC Complementary and Alternative Medicine indicates that Rhodiola is a powerful antioxidant that can benefit the auditory system. It works by eliminating the oxidative stress in the inner areas of the ear, including the cochlea. In research, the mice exposed to noise-induced hearing issues experienced remarkable improvement in hearing after using Rhodiola Rosea.</p>
<p>Other clinical research suggests that Rhodiola Rosea works with other <a href="URL ingredients to prevent or diminish age-related decline in hearing capabilities. It battles oxidative stress, improves cellular health, and normalizes the functioning of the auditory canal.</p>
<p>Rhodiola Rosea can also benefit cognitive health. The herb is traditionally used to calm moods, support sleep, and prevent symptoms of depression. <a href="URL creator argues it can enhance mental performance and reduce age-related memory problems.</p>
<p><strong>Skullcap</strong></p>
<p>The native North American herb is common in modern and traditional medicine. Skullcap has blue or purple flowers and is commonly used to brew herbal tonics for better physical and mental health.</p>
<p><a href="URL maker argues it has antioxidant properties and can protect delicate cells in the eras from varicella zoster virus. It lowers the free radicals, lessening unhealthy auditory nerve inflammations. Additionally, the skullcap can regulate nutrient and oxygen uptake in the inner parts of the ear. It protects and strengthens the delicate arteries and capillaries in the ears, augmenting blood circulation.</p>
<p><strong>Baicalein</strong></p>
<p>Baicalein is a unique component proven to improve ear and brain health. According to the <a href="URL maker, it primarily protects the inner part of the ear responsible for balance. It also enhances the fluid in the auditory channel, encouraging healthy signal transmission between the brain and ears.</p>
<p>In a clinical investigation, baicalein was found effective in augmenting brain health in mice. It boosts learning abilities and spatial memory in mice. Scientists state that baicalein impedes oxidative stress and inflammation, thus augmenting the immune system. <a href="URL maker states baicalein works with other nutrients to restore the cochlear functions and battle varicella-zoster virus.</p>
<p><strong>Griffonia Simplicifolia</strong></p>
<p>Griffonia Simplicifolia promotes healthy ear blood flow, thus augmenting cochlear functions. It encourages nutrients and oxygen uptake, improving the health of the ear cells and neurons.</p>
<p>Studies indicate that Griffonia is rich in 5-HTP molecules and can, therefore, regulate moods. It encourages the secretion of serotonin, augments moods, and prevents oxidative stress. The happiness or mood hormone can also improve blood circulation in the brain and ears. Serotonin binds to receptors on the blood vessels, promoting vasodilation and blood flow.</p>
<p><strong>L-Theanine</strong></p>
<p>Theanine can lower stress levels. The amino acid encourages relaxation and alleviates stress. It may improve cochlear functions and immune response.</p>
<p>A 2007 study on humans shows that Theanine increases the alpha brainwave’s activities, boosting mental alertness and relaxation. The amino acid may also expand metabolic rates, energy levels, sleep, and learning.</p>
<p><strong>Mucuna Pruriens</strong></p>
<p>Mucuna can encourage healthy moods and relaxation. It regulates the production of dopamine, thus boosting various cognitive abilities. The neurotransmitter can promote blood flow in the ear. Also, mucuna pruriens supports healthy inflammations and can battle oxidative stress.</p>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="URL PROMO[Limited Discount]: "Sonuvita Hearing Support Formula USA"Official Website!</strong></a></span></h2>
<h2><strong><a href="URL Benefits</strong></h2>
<ul style="list-style-type: circle;">
<li>It can improve and restore hearing in adults</li>
<li><a href="URL claims to boost cochlear functions and battling tinnitus</li>
<li>It can combat inflamed nerves and oxidative stress</li>
<li><a href="URL is rich in nootropic ingredients designed to mend cognitive health</li>
<li>It can lower stress</li>
<li><a href="URL strengthens the immune response, naturally allowing your body to fight varicella-zoster virus and other pathogens.</li>
</ul>
<h2><strong>Pros and Cons of <a href="URL Hearing Support Supplement</strong></h2>
<h2><strong>Pros</strong></h2>
<ul style="list-style-type: circle;">
<li><a href="URL is a non-invasive and affordable solution to hearing issues</li>
<li>All ingredients in the ear support formula are 100% natural</li>
<li>Sonuvita can purportedly fix partial and complete hearing loss</li>
<li>It can amplify the immune system by over 300%, hence preventing the varicella-zoster virus from attacking the body</li>
<li><a href="URL is simple to use and free from side effects</li>
<li><a href="URL is formulated by an audiologist and manufactured in an FDA-certified and GMP-approved facility.</li>
</ul>
<h2><strong>Cons</strong></h2>
<ul style="list-style-type: circle;">
<li><a href="URL is only for adults</li>
<li>The ear support formula is unsuitable for people with existing medical issues or anyone pregnant and nursing.</li>
<li>The results of using <a href="URL vary</li>
<li>You can buy <a href="URL only through the official website</li>
</ul>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="URL PROMO: Get Sonuvita Hearing Support Formula at the Lowest Discounted Price Online</strong></a></span></h2>
<h2><strong>Pricing</strong></h2>
<p>You can buy <a href="URL only through the official website. There are three package options available to cater to your needs and budget. All <a href="URL purchases are eligible for a 60-day money-back guarantee.</p>
<ul style="list-style-type: circle;">
<li><strong>One Bottle (30-Day Supply): $69 + Shipping Fee</strong></li>
<li><strong>Three Bottles (90-Day Supply): $59.00 per bottle + Free Shipping ($177.00 Total)</strong></li>
<li><strong>Six Bottles (180-Day Supply): $49.00 per bottle + Free Shipping ($294.00 Total)</strong></li>
</ul>
<p>If you are unsatisfied with <a href="URL within 60 days of the shipping date, fill out and return the refund form that came with your order. Questions about the product can be directed to <a href="URL customer service:</p>
<h2><strong>Conclusion</strong></h2>
<p><a href="URL is marketed as a game-changer ear health support supplement. It contains multiple science-approved ingredients to manage hearing issues from the source. The creator claims using Sonuvita can “progress your communication skills and help you enjoy your favorite music.”</p>
<p>The formulation battles the varicella-zoster virus and other pathogens that cause inflammation of the cochlear nerves. You can buy the <a href="URL ear support formula through the official website at discounted rates.</p>
<div class="separator" style="clear: both; text-align: center;"><a style="margin-left: 1em; margin-right: 1em;" href="URL src="URL alt="" width="640" height="568" border="0" data-original-height="498" data-original-width="561" /></a></div>
<h2 style="text-align: center;"><span style="color: #ff6600;"><a style="color: #ff6600;" href="URL Details: *Sonuvita Hearing Support Formula* Read More Details on Official Website USA!</strong></a></span></h2>
<h2><strong><span style="color: #ff6600;">#READ MORE</span></strong></h2>
<p><strong><span style="color: #ff6600;"><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL /></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL /></strong></span></p>
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL
<p><span style="color: #ff6600;"><strong><a href="URL/URL | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
46e2c30169a5f49ef1479140476afcf0d2a589b0 | # MedFAQ-llama2-2k
2k QA entries on medical topics
| kazcfz/MedFAQ-llama2-2k | [
"region:us"
] | 2023-12-21T11:26:44+00:00 | {} | 2024-01-10T14:38:12+00:00 | [] | [] | TAGS
#region-us
| # MedFAQ-llama2-2k
2k QA entries on medical topics
| [
"# MedFAQ-llama2-2k\n2k QA entries on medical topics"
] | [
"TAGS\n#region-us \n",
"# MedFAQ-llama2-2k\n2k QA entries on medical topics"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# MedFAQ-llama2-2k\n2k QA entries on medical topics"
] |
51cd27f96dd9596514b074a4051e4ee5076df821 | # Dataset Card for Dataset _tla-Earlier_Egyptian_original-v18-premium_
<!-- Provide a quick summary of the dataset. -->
This data set contains Earlier Egyptian, i.e., ancient Old Egyptian and ancient Middle Egyptian, sentences in `hieroglyphs` and `transliteration`, with `lemmatization`, with POS `glossing` and with a German `translation`.
This set of _original_ Earlier Egyptian sentences only contains text witnesses from before the start of the New Kingdom (late 16th century BCE).
The data comes from the database of the [Thesaurus Linguae Aegyptiae](https://thesaurus-linguae-aegyptiae.de), corpus version 18, and contains only fully intact,
unambiguously readable sentences (12,773 of 55,026 sentences), adjusted for philological and editorial markup.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Homepage:** https://thesaurus-linguae-aegyptiae.de.
- **Curated by:**
German Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,
Executive Editor: [Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel).
- **Funded by:**
The Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.
The Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.
- **Language(s) (NLP):** egy-Egyp, egy-Egyh, de-DE.
- **License:** [CC BY-SA 4.0 Int.](https://creativecommons.org/licenses/by-sa/4.0/); for required attribution, see citation recommendations below.
- **Point of Contact:** [Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This data set may be used
- to train translation models Egyptian hieroglyphs => Egyptological transliteration,
- to create lemmatizers Earlier Egyptian transliteration => [TLA lemma ID](https://thesaurus-linguae-aegyptiae.de/info/lemma-lists),
- to train translation models Earlier Egyptian transliteration => German.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.
## Dataset
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset is not divided. Please create your own random splits.
The dataset comes as a _JSON lines_ file.
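For illustration, a minimal sketch of loading the data with the Hugging Face `datasets` library and deriving reproducible random splits is given below (the repository ID is the one this card is published under; the 80/10/10 proportions are only an example):

```python
# Minimal sketch: load the single "train" split and derive your own
# random train/validation/test splits (80/10/10 here, fixed seed).
from datasets import load_dataset

dataset = load_dataset(
    "thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium",
    split="train",
)

train_rest = dataset.train_test_split(test_size=0.2, seed=42)
valid_test = train_rest["test"].train_test_split(test_size=0.5, seed=42)

splits = {
    "train": train_rest["train"],
    "validation": valid_test["train"],
    "test": valid_test["test"],
}
print({name: len(split) for name, split in splits.items()})
```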
### Data Fields
#### plain_text
- `hieroglyphs`: a `string`, sequence of Egyptian hieroglyphs (Unicode v15), individual sentence elements separated by space.
- `transliteration`: a `string`, Egyptological transliteration, following the [_Leiden Unified Transliteration_](https://www.iae-egyptology.org/the-leiden-unified-transliteration), individual sentence elements separated by space.
- `lemmatization`: a `string`, individual [TLA Lemma IDs](https://thesaurus-linguae-aegyptiae.de/info/lemma-lists)+"`|`"+lemma transliteration, separated by space.
- `UPOS`: a `string`, Part of Speech according to [Universal POS tag set](https://universaldependencies.org/u/pos/).
- `glossing`: a `string`, individual glosses separated by space (for information, see the comments below).
- `translation`: a `string`, German translation.
- `dateNotBefore`, `dateNotAfter`: two `strings` containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.
### Data instances
Example of a dataset instance:
```
{
"hieroglyphs": "𓆓𓂧𓇋𓈖 𓅈𓏏𓏭𓀜𓀀 𓊪𓈖 𓈖 𓌞𓏲𓀀 𓆑",
"transliteration": "ḏd.ꞽn nm.tꞽ-nḫt pn n šms.w =f",
"lemmatization": "185810|ḏd 851865|Nmt.j-nḫt.w 59920|pn 400055|n 155030|šms.w 10050|=f",
"UPOS": "VERB PROPN PRON ADP NOUN PRON",
"glossing": "V\\tam.act-cnsv PERSN dem.m.sg PREP N.m:stpr -3sg.m",
"translation": "Nun sagte dieser Nemti-nacht zu seinem Diener:",
"dateNotBefore": "-1939",
"dateNotAfter": "-1630"
}
```
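The annotation layers are parallel and space-separated, so they can be aligned token by token. The following sketch (plain Python, reusing the instance above) only illustrates the field format described under "Data Fields":

```python
# Minimal sketch: align the parallel, space-separated annotation layers of
# one instance.  Assumes every layer has the same number of tokens, as in
# the fully lemmatized sentences of this dataset.
instance = {
    "transliteration": "ḏd.ꞽn nm.tꞽ-nḫt pn n šms.w =f",
    "lemmatization": "185810|ḏd 851865|Nmt.j-nḫt.w 59920|pn 400055|n 155030|šms.w 10050|=f",
    "UPOS": "VERB PROPN PRON ADP NOUN PRON",
    "glossing": "V\\tam.act-cnsv PERSN dem.m.sg PREP N.m:stpr -3sg.m",
}

tokens = instance["transliteration"].split()
lemmata = [tuple(item.split("|", 1)) for item in instance["lemmatization"].split()]
upos = instance["UPOS"].split()
glosses = instance["glossing"].split()

for form, (lemma_id, lemma), pos, gloss in zip(tokens, lemmata, upos, glosses):
    print(f"{form}\t{lemma_id}\t{lemma}\t{pos}\t{gloss}")
```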
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
ML projects have requested raw data from the TLA.
At the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users.
This is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
For the corpus of Earlier Egyptian texts in the _TLA_, cf. the information on the [TLA text corpus](https://thesaurus-linguae-aegyptiae.de/info/text-corpus),
notably the [PDF overview](https://nubes.bbaw.de/s/xD7MYJrmE8xNBNt).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
This dataset contains all Earlier Egyptian sentences of the TLA corpus v18 (2023) that
- show no destruction,
- have no questionable readings,
- have hieroglyphs encoded,
- are fully lemmatized (and lemmata have a transliteration and a POS),
- have a German translation.
#### Who are the source data producers?
AV [Altägyptisches Wörterbuch](https://www.bbaw.de/forschung/altaegyptisches-woerterbuch),
AV [Wortschatz der ägyptischen Sprache](https://www.bbaw.de/en/research/vocabulary-of-the-egyptian-language);
Susanne Beck, R. Dominik Blöse, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert,
Ingelore Hafemann, Jakob Höper, Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann,
Verena Lepper, Antonie Loeschner, Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze,
Lisa Seelau, Gunnar Sperveslage, Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The transliteration sometimes contains round brackets (`( )`),
which mark phonemes added by the editor without the addition being regarded as an incorrect omission.
For model training, the brackets, but not their content, may optionally be removed.
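A possible way to do this is simply to delete the bracket characters and keep their content, as sketched below (the example string is invented for illustration):

```python
# Minimal sketch: drop editorial round brackets from a transliteration
# while keeping their content.  The example input is invented.
import re

def strip_editorial_brackets(transliteration: str) -> str:
    return re.sub(r"[()]", "", transliteration)

print(strip_editorial_brackets("ḏd(.w) =f"))  # -> "ḏd.w =f"
```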
The hieroglyphs sometimes contain glyphs that are not yet part of Unicode (notably v15).
These are indicated by their [code in JSesh](https://doi.org/10.5281/zenodo.5849135), with additional [codes/signs generated by the TLA project](https://github.com/thesaurus-linguae-aegyptiae/tla-jsesh-glyphs) and marked by tags `<g>`...`</g>`.
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Susanne Beck, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, Ingelore Hafemann,
Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, Verena Lepper, Antonie Loeschner,
Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, Lisa Seelau, Gunnar Sperveslage,
Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
No personal, sensitive, or private data.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
This is not a carefully balanced data set.
Note that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the vowel-less nature of hieroglyphic writing.
<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
## Citation of this dataset
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium, https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium,
v1.1, 2/16/2024
ed. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig.
**BibTeX:**
```
@misc{tlaEarlierEgyptianOriginalV18premium,
editor = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig} and Richter, Tonio Sebastian and Werning, Daniel A. and Hans-Werner Fischer-Elfert and Peter Dils},
year = {2024},
title = {Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium},
url = {https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium},
location = {Berlin},
organization = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig}},
}
```
**RIS:**
```
TY - DATA
T1 - Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium
PY - 2024
Y1 - 2024
CY - Berlin
ED - Berlin-Brandenburgische Akademie der Wissenschaften
ED - Richter, Tonio Sebastian
ED - Werning, Daniel A.
ED - Sächsische Akademie der Wissenschaften zu Leipzig
ED - Fischer-Elfert, Hans-Werner
ED - Dils, Peter
IN - Berlin-Brandenburgische Akademie der Wissenschaften
IN - Sächsische Akademie der Wissenschaften zu Leipzig
UR - https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium
DB - Thesaurus Linguae Aegyptiae
DP - Akademienvorhaben "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache", Berlin-Brandenburgische Akademie der Wissenschaften
ER -
```
## Glossary
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
**Lemma IDs**
For the stable lemma IDs, see https://thesaurus-linguae-aegyptiae.de/info/lemma-lists.
**Glossing**
For the glossing abbreviations, see https://thesaurus-linguae-aegyptiae.de/listings/ling-glossings.
_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.
## Dataset Card Authors
[Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel)
## Dataset Card Contact
[Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel) | thesaurus-linguae-aegyptiae/tla-Earlier_Egyptian_original-v18-premium | [
"task_categories:translation",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:egy",
"language:de",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-12-21T11:31:40+00:00 | {"annotations_creators": ["expert-generated"], "language": ["egy", "de"], "license": "cc-by-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "token-classification"], "pretty_name": "Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium", "dataset_info": {"features": [{"name": "hieroglyphs", "dtype": "string"}, {"name": "transliteration", "dtype": "string"}, {"name": "lemmatization", "dtype": "string"}, {"name": "UPOS", "dtype": "string"}, {"name": "glossing", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "dateNotBefore", "dtype": "string"}, {"name": "dateNotAfter", "dtype": "string"}]}, "splits": [{"name": "train", "num_examples": 12773}]} | 2024-02-16T20:54:36+00:00 | [] | [
"egy",
"de"
] | TAGS
#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us
| # Dataset Card for Dataset _tla-Earlier_Egyptian_original-v18-premium_
This data set contains Earlier Egyptian, i.e., ancient Old Egyptian and ancient Middle Egyptian, sentences in 'hieroglyphs' and 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'.
This set of _original_ Earlier Egyptian sentences only contains text witnesses from before the start of the New Kingdom (late 16th century BEC).
The data comes from the database of the Thesaurus Linguae Aegyptiae, corpus version 18, and contains only fully intact,
unambiguously readable sentences (12,773 of 55,026 sentences), adjusted for philological and editorial markup.
## Dataset Details
### Dataset Description
- Homepage: URL.
- Curated by:
German Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,
Executive Editor: Daniel A. Werning.
- Funded by:
The Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.
The Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.
- Language(s) (NLP): egy-Egyp, egy-Egyh, de-DE.
- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.
- Point of Contact: Daniel A. Werning
## Uses
### Direct Use
This data set may be used
- to train translation models Egyptian hieroglyphs => Egyptological transliteration,
- to create lemmatizers Earlier Egyptian transliteration => TLA lemma ID,
- to train translation models Earlier Egyptian transliteration => German.
### Out-of-Scope Use
This data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.
## Dataset
## Dataset Structure
The dataset is not divided. Please create your own random splits.
The dataset comes as a _JSON lines_ file.
### Data Fields
#### plain_text
- 'hieroglyphs': a 'string', sequence of Egyptian hieroglyphs (Unicode v15), individual sentence elements separated by space.
- 'transliteration': a 'string', Egyptological transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.
- 'lemmatization': a 'string', individual TLA Lemma IDs+"'|'"+lemma transliteration, separated by space.
- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.
- 'glossing': a 'string', individual glosses separated by space (for information, see the comments below).
- 'translation': a 'string', German translation.
- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.
### Data instances
Example of an dataset instance:
## Dataset Creation
### Curation Rationale
ML projects have requested raw data from the TLA.
At the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users.
This is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.
### Source Data
For the corpus of Earlier Egyptian texts in the _TLA_, cf. the information on the TLA text corpus,
notably the PDF overview.
#### Data Collection and Processing
This dataset contains all Earlier Egyptian sentences of the TLA corpus v18 (2023) that
- show no destruction,
- have no questionable readings,
- have hieroglyphs encoded,
- are fully lemmatized (and lemmata have a transliteration and a POS),
- have a German translation.
#### Who are the source data producers?
AV Altägyptisches Wörterbuch,
AV Wortschatz der ägyptischen Sprache;
Susanne Beck, R. Dominik Blöse, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert,
Ingelore Hafemann, Jakob Höper, Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann,
Verena Lepper, Antonie Loeschner, Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze,
Lisa Seelau, Gunnar Sperveslage, Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.
### Annotations
#### Annotation process
The transliteration sometimes contains round brackets ('( )'),
which mark phonemes added by the editor without the addition being regarded as an incorrect omission.
For model training, the brackets, but not their content, may optionally be removed.
The hieroglyphs sometimes contain glyphs that are not yet part of Unicode (notably v15).
These are indicated by their code in JSesh, with additional codes/signs generated by the TLA project and marked by tags '<g>'...'</g>'.
#### Who are the annotators?
Susanne Beck, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, Ingelore Hafemann,
Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, Verena Lepper, Antonie Loeschner,
Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, Lisa Seelau, Gunnar Sperveslage,
Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.
#### Personal and Sensitive Information
No personal, sensitive, or private data.
## Bias, Risks, and Limitations
This is not a carefully balanced data set.
Note that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the vowel-less nature of hieroglyphic writing.
of this dataset
Thesaurus Linguae Aegyptiae, Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium, URL
v1.1, 2/16/2024
ed. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig.
BibTeX:
RIS:
## Glossary
Lemma IDs
For the stable lemma IDs, see URL
Glossing
For the glossing abbreviations, see URL
_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.
## Dataset Card Authors
Daniel A. Werning
## Dataset Card Contact
Daniel A. Werning | [
"# Dataset Card for Dataset _tla-Earlier_Egyptian_original-v18-premium_\n\n\nThis data set contains Earlier Egyptian, i.e., ancient Old Egyptian and ancient Middle Egyptian, sentences in 'hieroglyphs' and 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThis set of _original_ Earlier Egyptian sentences only contains text witnesses from before the start of the New Kingdom (late 16th century BEC).\nThe data comes from the database of the Thesaurus Linguae Aegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (12,773 of 55,026 sentences), adjusted for philological and editorial markup.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyp, egy-Egyh, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning",
"## Uses",
"### Direct Use\n\n\n\nThis data set may be used\n- to train translation models Egyptian hieroglyphs => Egyptological transliteration,\n- to create lemmatizers Earlier Egyptian transliteration => TLA lemma ID,\n- to train translation models Earlier Egyptian transliteration => German.",
"### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.",
"## Dataset",
"## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.",
"### Data Fields",
"#### plain_text\n- 'hieroglyphs': a 'string', sequence of Egyptian hieroglyphs (Unicode v15), individual sentence elements separated by space.\n- 'transliteration': a 'string', Egyptological transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'glossing': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.",
"### Data instances\n\nExample of an dataset instance:",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.",
"### Source Data\n\n\n\nFor the corpus of Earlier Egyptian texts in the _TLA_, cf. the information on the TLA text corpus, \nnotably the PDF overview.",
"#### Data Collection and Processing\n\n\n\nThis dataset contains all Earlier Egyptian sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- have hieroglyphs encoded,\n- are fully lemmatized (and lemmata have a transliteration and a POS),\n- have a German translation.",
"#### Who are the source data producers?\n\nAV Altägyptisches Wörterbuch,\nAV Wortschatz der ägyptischen Sprache;\nSusanne Beck, R. Dominik Blöse, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, \nIngelore Hafemann, Jakob Höper, Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, \nVerena Lepper, Antonie Loeschner, Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, \nLisa Seelau, Gunnar Sperveslage, Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.",
"### Annotations",
"#### Annotation process\n\n\n\nThe transliteration sometimes contains round brackets ('( )'), \nwhich mark phonemes added by the editor without the addition being regarded as an incorrect omission. \nFor model training, the brackets, but not their content, may optionally be removed.\n\nThe hieroglyphs sometimes contain glyphs that are not yet part of Unicode (notably v15). \nThese are indicated by their code in JSesh, with additional codes/signs generated by the TLA project and marked by tags '<g>'...'</g>'.",
"#### Who are the annotators?\n\n\n\nSusanne Beck, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, Ingelore Hafemann, \nSamuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, Verena Lepper, Antonie Loeschner, \nFranka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, Lisa Seelau, Gunnar Sperveslage, \nKatharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.",
"#### Personal and Sensitive Information\n\n\n\nNo personal, sensitive, or private data.",
"## Bias, Risks, and Limitations\n\n\n\n\n\nThis is not a carefully balanced data set.\n\nNote that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the vowel-less nature of hieroglyphic writing. \n\n\n\nof this dataset\n\n\n\nThesaurus Linguae Aegyptiae, Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium, URL \nv1.1, 2/16/2024 \ned. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig. \n\nBibTeX:\n\n\n\nRIS:",
"## Glossary\n\n\n\nLemma IDs\n\nFor the stable lemma IDs, see URL\n\nGlossing\n\nFor the glossing abbreviations, see URL \n\n_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.",
"## Dataset Card Authors\n\nDaniel A. Werning",
"## Dataset Card Contact\n\nDaniel A. Werning"
] | [
"TAGS\n#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for Dataset _tla-Earlier_Egyptian_original-v18-premium_\n\n\nThis data set contains Earlier Egyptian, i.e., ancient Old Egyptian and ancient Middle Egyptian, sentences in 'hieroglyphs' and 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThis set of _original_ Earlier Egyptian sentences only contains text witnesses from before the start of the New Kingdom (late 16th century BEC).\nThe data comes from the database of the Thesaurus Linguae Aegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (12,773 of 55,026 sentences), adjusted for philological and editorial markup.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyp, egy-Egyh, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning",
"## Uses",
"### Direct Use\n\n\n\nThis data set may be used\n- to train translation models Egyptian hieroglyphs => Egyptological transliteration,\n- to create lemmatizers Earlier Egyptian transliteration => TLA lemma ID,\n- to train translation models Earlier Egyptian transliteration => German.",
"### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.",
"## Dataset",
"## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.",
"### Data Fields",
"#### plain_text\n- 'hieroglyphs': a 'string', sequence of Egyptian hieroglyphs (Unicode v15), individual sentence elements separated by space.\n- 'transliteration': a 'string', Egyptological transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'glossing': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.",
"### Data instances\n\nExample of an dataset instance:",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.",
"### Source Data\n\n\n\nFor the corpus of Earlier Egyptian texts in the _TLA_, cf. the information on the TLA text corpus, \nnotably the PDF overview.",
"#### Data Collection and Processing\n\n\n\nThis dataset contains all Earlier Egyptian sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- have hieroglyphs encoded,\n- are fully lemmatized (and lemmata have a transliteration and a POS),\n- have a German translation.",
"#### Who are the source data producers?\n\nAV Altägyptisches Wörterbuch,\nAV Wortschatz der ägyptischen Sprache;\nSusanne Beck, R. Dominik Blöse, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, \nIngelore Hafemann, Jakob Höper, Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, \nVerena Lepper, Antonie Loeschner, Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, \nLisa Seelau, Gunnar Sperveslage, Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.",
"### Annotations",
"#### Annotation process\n\n\n\nThe transliteration sometimes contains round brackets ('( )'), \nwhich mark phonemes added by the editor without the addition being regarded as an incorrect omission. \nFor model training, the brackets, but not their content, may optionally be removed.\n\nThe hieroglyphs sometimes contain glyphs that are not yet part of Unicode (notably v15). \nThese are indicated by their code in JSesh, with additional codes/signs generated by the TLA project and marked by tags '<g>'...'</g>'.",
"#### Who are the annotators?\n\n\n\nSusanne Beck, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, Ingelore Hafemann, \nSamuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, Verena Lepper, Antonie Loeschner, \nFranka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, Lisa Seelau, Gunnar Sperveslage, \nKatharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.",
"#### Personal and Sensitive Information\n\n\n\nNo personal, sensitive, or private data.",
"## Bias, Risks, and Limitations\n\n\n\n\n\nThis is not a carefully balanced data set.\n\nNote that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the vowel-less nature of hieroglyphic writing. \n\n\n\nof this dataset\n\n\n\nThesaurus Linguae Aegyptiae, Thesaurus Linguae Aegyptiae, Original Earlier Egyptian sentences, corpus v18, premium, URL \nv1.1, 2/16/2024 \ned. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig. \n\nBibTeX:\n\n\n\nRIS:",
"## Glossary\n\n\n\nLemma IDs\n\nFor the stable lemma IDs, see URL\n\nGlossing\n\nFor the glossing abbreviations, see URL \n\n_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.",
"## Dataset Card Authors\n\nDaniel A. Werning",
"## Dataset Card Contact\n\nDaniel A. Werning"
] | [
85,
187,
4,
241,
3,
67,
31,
3,
35,
5,
229,
13,
5,
76,
41,
80,
217,
5,
128,
182,
17,
178,
60,
11,
10
] | [
"passage: TAGS\n#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us \n# Dataset Card for Dataset _tla-Earlier_Egyptian_original-v18-premium_\n\n\nThis data set contains Earlier Egyptian, i.e., ancient Old Egyptian and ancient Middle Egyptian, sentences in 'hieroglyphs' and 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThis set of _original_ Earlier Egyptian sentences only contains text witnesses from before the start of the New Kingdom (late 16th century BEC).\nThe data comes from the database of the Thesaurus Linguae Aegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (12,773 of 55,026 sentences), adjusted for philological and editorial markup.## Dataset Details",
"passage: ### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyp, egy-Egyh, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning## Uses### Direct Use\n\n\n\nThis data set may be used\n- to train translation models Egyptian hieroglyphs => Egyptological transliteration,\n- to create lemmatizers Earlier Egyptian transliteration => TLA lemma ID,\n- to train translation models Earlier Egyptian transliteration => German.### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.## Dataset## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.### Data Fields#### plain_text\n- 'hieroglyphs': a 'string', sequence of Egyptian hieroglyphs (Unicode v15), individual sentence elements separated by space.\n- 'transliteration': a 'string', Egyptological transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'glossing': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.### Data instances\n\nExample of an dataset instance:## Dataset Creation### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.",
"passage: ### Source Data\n\n\n\nFor the corpus of Earlier Egyptian texts in the _TLA_, cf. the information on the TLA text corpus, \nnotably the PDF overview.#### Data Collection and Processing\n\n\n\nThis dataset contains all Earlier Egyptian sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- have hieroglyphs encoded,\n- are fully lemmatized (and lemmata have a transliteration and a POS),\n- have a German translation.#### Who are the source data producers?\n\nAV Altägyptisches Wörterbuch,\nAV Wortschatz der ägyptischen Sprache;\nSusanne Beck, R. Dominik Blöse, Marc Brose, Billy Böhm, Svenja Damm, Sophie Diepold, Charlotte Dietrich, Peter Dils, Frank Feder, Heinz Felber, Stefan Grunert, \nIngelore Hafemann, Jakob Höper, Samuel Huster, Johannes Jüngling, Kay Christine Klinger, Ines Köhler, Carina Kühne-Wespi, Renata Landgráfová, Florence Langermann, \nVerena Lepper, Antonie Loeschner, Franka Milde, Lutz Popko, Miriam Rathenow, Elio Nicolas Rossetti, Jakob Schneider, Simon D. Schweitzer, Alexander Schütze, \nLisa Seelau, Gunnar Sperveslage, Katharina Stegbauer, Doris Topmann, Günter Vittmann, Anja Weber, Daniel A. Werning.### Annotations#### Annotation process\n\n\n\nThe transliteration sometimes contains round brackets ('( )'), \nwhich mark phonemes added by the editor without the addition being regarded as an incorrect omission. \nFor model training, the brackets, but not their content, may optionally be removed.\n\nThe hieroglyphs sometimes contain glyphs that are not yet part of Unicode (notably v15). \nThese are indicated by their code in JSesh, with additional codes/signs generated by the TLA project and marked by tags '<g>'...'</g>'."
] |
95eb358969c07270821c7fdc73201d7375afb309 |
# MKGE: Multilingual Knowledge Graph Enhancement
*note* this dataset card was copied from this [GitHub Repository](https://github.com/apple/ml-kge/blob/main/README.md)
[**Task Description**](#task-description) |
[**WikiKGE-10**](#wikikge-10) |
[**Evaluation**](#evaluation) |
[**Paper**](https://arxiv.org/abs/2311.15781) |
[**Citation**](#citation) |
[**License**](#license)
Recent work in Natural Language Processing and Computer Vision has been leveraging textual information -- e.g., entity names and descriptions -- available in knowledge graphs to ground neural models to high-quality structured data.
However, when it comes to non-English languages, both quantity and quality of textual information are comparatively scarcer.
To address this issue, we introduce the task of automatic **Multilingual Knowledge Graph Enhancement (MKGE)** and perform a thorough investigation on bridging the gap in quantity and quality of textual information between English and non-English languages.
As part of our effort toward building better multilingual knowledge graphs, we also introduce **WikiKGE-10**, the first human-curated benchmark to evaluate MKGE approaches in 10 languages.
Please refer to our EMNLP 2023 paper for more details, [Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs](https://arxiv.org/abs/2311.15781).
## Task Description
The aim of MKGE is to evaluate automatic approaches in two subtasks:
* Increasing **coverage** of locale-specific facts in multilingual knowledge graphs;
* Increasing **precision** of locale-specific facts in multilingual knowledge graphs.
More specifically, we use *Wikidata* as our reference multilingual knowledge graph, and we focus our study on *entity names*, which may or may not be represented in different ways across different languages.
### MKGE - Coverage
Suppose we want to add support to Wikidata for entity names (or other types of textual information, e.g., entity descriptions) in a new target language `l_t`.
*Coverage* measures the ability of an automatic approach to provide at least a valid entity name in `l_t` for each entity of interest in Wikidata.
In other words, measuring *Coverage* is equivalent to answering the following question: How effective is an automatic approach in converting the entity names from a source language `l_s` to a target language `l_t`?
For example, how can we use the English entity names to create valid Japanese entity names with the same quantity and quality of the English ones?
### MKGE - Precision
It is well-known that the quality of the information in Wikidata is not perfect.
*Precision* measures the ability of an automatic approach to identify incorrect entity names (or other types of textual information, e.g., entity descriptions) for an entity of interest in a target language `l_t`.
In other words, measuring *Precision* is equivalent to answering the following question: How effective is an automatic approach in recognizing noisy, incomplete, or outdated information in a target language `l_t`?
## WikiKGE-10
WikiKGE-10 is a benchmark for evaluating automatic approaches for increasing both **coverage** and **precision** of entity names in Wikidata for 10 languages.
WikiKGE-10 includes around 1000 entities in each of the following 10 languages:
* `ar` - Arabic
* `de` - German
* `en` - English
* `es` - Spanish
* `fr` - French
* `it` - Italian
* `ja` - Japanese
* `ko` - Korean
* `ru` - Russian
* `zh` - Simplified Chinese
### Dataset organization
The data is organized in the following way:
```
data
└── names
├── gold
│ ├── ar.json
│ ├── de.json
... ...
├── m-nta
│ ├── with_gpt-3
│ │ ├── ar.m-nta.json
│ │ ├── de.m-nta.json
... ... ...
│ ├── with_gpt-3.5
│ │ ├── ar.m-nta.json
│ │ ├── de.m-nta.json
... ... ...
│ └── with_gpt-4
│ ├── ar.m-nta.json
│ ├── de.m-nta.json
... ... ...
└── gpt
│ ├── ar.gpt-3.json
│ ├── de.gpt-3.json
... ...
└── wikidata
├── ar.json
├── de.json
...
└── zh.json
```
Where:
* `data/names/gold/` contains the human-curated data.
* `data/names/m-nta/` contains the predictions from M-NTA.
* `data/names/gpt/` contains the predictions from GPT-3 and GPT-3.5 (May 2023), and also GPT-4 (September 2023).
* `data/names/wikidata/` contains the data from Wikidata (May 2023).
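To get started with a local clone of this layout, a small loader could look like the following sketch; whether each file is stored as a JSON array or as JSON Lines is not spelled out here, so the helper accepts both:

```python
# Minimal sketch: read the per-language gold files from a local clone of
# the repository layout shown above.  The serialization of the files
# (JSON array vs. JSON Lines) is an assumption, so both are handled.
import json
from pathlib import Path

GOLD_DIR = Path("data/names/gold")  # adjust to your checkout location

def read_records(path: Path):
    text = path.read_text(encoding="utf-8").strip()
    try:
        data = json.loads(text)
        return data if isinstance(data, list) else [data]
    except json.JSONDecodeError:
        return [json.loads(line) for line in text.splitlines() if line.strip()]

gold = {path.stem: read_records(path) for path in sorted(GOLD_DIR.glob("*.json"))}
for language, records in gold.items():
    print(language, len(records))
```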
### Human-curated data in WikiKGE-10
Here are a few examples in `data/names/gold/it.json`:
```json
{
"wikidata_id": "Q48324",
"correct_values": ["morale", "moralità", "Moralismo"],
"incorrect_values": ["giudizio morale", "moralita'", "legge morale"]
}
```
```json
{
"wikidata_id": "Q166844",
"correct_values": ["Thomas N'Kono", "N'Kono"],
"incorrect_values": ["Thomas Nkono"]
}
```
Where:
* `wikidata_id` is the QID of the entity in Wikidata.
* `correct_values` is a list of entity names that have been rated as valid by our human annotators.
* `incorrect_values` is a list of entity names that are in Wikidata but have been rated as invalid by our human annotators.
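As an illustration only (this is not the scoring script used in the paper), candidate entity names for an entity can be checked against these two lists:

```python
# Illustrative sketch: judge candidate names for one entity against the
# human annotations.  Case-insensitive matching is an assumption of this
# sketch, not necessarily the matching used in the paper's evaluation.
gold_entry = {
    "wikidata_id": "Q166844",
    "correct_values": ["Thomas N'Kono", "N'Kono"],
    "incorrect_values": ["Thomas Nkono"],
}

def judge(candidate: str, entry: dict) -> str:
    name = candidate.strip().lower()
    if name in (v.lower() for v in entry["correct_values"]):
        return "correct"
    if name in (v.lower() for v in entry["incorrect_values"]):
        return "incorrect"
    return "unknown"

for candidate in ["Thomas N'Kono", "Thomas Nkono", "Tommy N'Kono"]:
    print(candidate, "->", judge(candidate, gold_entry))
```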
### M-NTA predictions in WikiKGE-10
We also include the entity names predicted by M-NTA, our automatic system for MKGE, to reproduce the results of our experiments.
Here are a few examples of the predictions found in `data/names/m-nta/no_gpt/it.json`:
```json
{
"wikidata_id": "Q48324",
"values": [
[1, "Egenetica", false],
[1, "Immorale", false],
[1, "Immoralità", false],
[1, "Morali", false],
[1, "Moralismo", false],
[1, "Moralità pubblica", false],
[1, "Moralmente", false],
[1, "Parenesi", false],
[1, "Pubblica moralità", false],
[1, "Regola morale", false],
[1, "Teoria dei costumi", false],
[4, "Morale", true],
[4, "Moralità", true]
]
}
```
```json
{
"wikidata_id": "Q166844",
"values": [
[1, "Thomas 'Tommy' N'Kono", false],
[1, "Thomas Nucono", true],
[1, "Tommy N'Kono", false],
[3, "N'Kono", false],
[3, "Nkono", false],
[6, "Thomas N'Kono", true],
[6, "Thomas NKono", false],
[6, "Thomas Nkono", false]
]
}
```
Where:
* `wikidata_id` is the QID of the entity in Wikidata.
* `values` is a list of predictions from M-NTA:
* `value[0]` is the confidence score from M-NTA
* `value[1]` is the prediction from M-NTA
* `value[2]` is whether the prediction comes from a Wikidata primary name.
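For example, to keep only the names M-NTA is most confident about for an entity, one can filter on the first element of each triple; the selection strategy below is illustrative, not the procedure used in the paper:

```python
# Minimal sketch: keep the highest-scoring M-NTA predictions for one
# entity, using the (score, value, is_wikidata_primary) triples above.
prediction = {
    "wikidata_id": "Q48324",
    "values": [
        [1, "Moralismo", False],
        [4, "Morale", True],
        [4, "Moralità", True],
    ],
}

best_score = max(score for score, _value, _primary in prediction["values"])
best_names = [value for score, value, _primary in prediction["values"] if score == best_score]
print(prediction["wikidata_id"], best_score, best_names)
```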
## Citation
Please cite our work if you found WikiKGE-10, our [paper](https://arxiv.org/abs/2311.15781), or these resources useful.
```bibtex
@inproceedings{conia-etal-2023-increasing,
title = "Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs",
author = "Conia, Simone and
Li, Min and
Lee, Daniel and
Minhas, Umar Farooq and
Ilyas, Ihab and
Li, Yunyao",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
```
## License
The code in this repository is licensed under [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0), see the [LICENSE.txt](LICENSE.txt) file.
WikiKGE-10 is licensed under [CC BY-SA](https://creativecommons.org/licenses/by-sa/4.0/deed.en), see the [LICENSE_Wikidata.txt](LICENSE_Wikidata.txt) file.
## Acknowledgements
This work is part of one of the projects I carried out during my internship at Apple.
I must truly thank Min Li and Yunyao Li for their incredible mentorship and for everything they taught me.
I would also like to thank Umar Farooq Minhas, Saloni Potdar, and Ihab Ilyas for their valuable feedback.
My gratitude also goes to Behrang Mohit for his insightful comments on the paper.
Besides his technical contributions, I would like to thank Daniel Lee for making this project more fun, and Farima Fatahi Bayat, Ronak Pradeep, and Revanth Gangi Reddy for making this internship a unique experience.
| davanstrien/ml-kge | [
"size_categories:n<1K",
"language:en",
"language:ar",
"language:de",
"language:es",
"language:fr",
"language:it",
"language:ja",
"language:ko",
"language:ru",
"language:zh",
"license:cc-by-sa-4.0",
"knowledge-graphs",
"arxiv:2311.15781",
"region:us"
] | 2023-12-21T12:06:12+00:00 | {"language": ["en", "ar", "de", "es", "fr", "it", "ja", "ko", "ru", "zh"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "pretty_name": "MKGE: Multilingual Knowledge Graph Enhancement", "configs": [{"config_name": "gold", "data_files": "data/names/gold/*.json"}, {"config_name": "m-nta-with_gpt-3.5", "data_files": "data/names/m-nta/with_gpt-3.5/*.json"}, {"config_name": "m-nta-with_gpt-3", "data_files": "data/names/m-nta/with_gpt-3/*.json"}, {"config_name": "m-nta-with_gpt-4", "data_files": "data/names/m-nta/with_gpt-4/*.json"}, {"config_name": "gpt", "data_files": "data/names/gpt/*.json"}, {"config_name": "wikidata", "data_files": "data/names/wikidata/*.json"}], "tags": ["knowledge-graphs"]} | 2023-12-21T12:24:14+00:00 | [
"2311.15781"
] | [
"en",
"ar",
"de",
"es",
"fr",
"it",
"ja",
"ko",
"ru",
"zh"
] | TAGS
#size_categories-n<1K #language-English #language-Arabic #language-German #language-Spanish #language-French #language-Italian #language-Japanese #language-Korean #language-Russian #language-Chinese #license-cc-by-sa-4.0 #knowledge-graphs #arxiv-2311.15781 #region-us
|
# MKGE: Multilingual Knowledge Graph Enhancement
*Note:* this dataset card was copied from the project's original GitHub repository.
Task Description | WikiKGE-10 | Evaluation | Paper | Citation | License
Recent work in Natural Language Processing and Computer Vision has been leveraging textual information -- e.g., entity names and descriptions -- available in knowledge graphs to ground neural models to high-quality structured data.
However, when it comes to non-English languages, both quantity and quality of textual information are comparatively scarcer.
To address this issue, we introduce the task of automatic Multilingual Knowledge Graph Enhancement (MKGE) and perform a thorough investigation on bridging the gap in quantity and quality of textual information between English and non-English languages.
As part of our effort toward building better multilingual knowledge graphs, we also introduce WikiKGE-10, the first human-curated benchmark to evaluate MKGE approaches in 10 languages.
For more details, please refer to our EMNLP 2023 paper, *Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs*.
## Task Description
The aim of MKGE is to evaluate automatic approaches in two subtasks:
* Increasing coverage of locale-specific facts in multilingual knowledge graphs;
* Increasing precision of locale-specific facts in multilingual knowledge graphs.
More specifically, we use *Wikidata* as our reference multilingual knowledge graph, and we focus our study on *entity names*, which may or may not be represented in different ways across different languages.
### MKGE - Coverage
Suppose we want to add support to Wikidata for entity names (or other types of textual information, e.g., entity descriptions) in a new target language 'l_t'.
*Coverage* measures the ability of an automatic approach to provide at least a valid entity name in 'l_t' for each entity of interest in Wikidata.
In other words, measuring *Coverage* is equivalent to answering the following question: How effective is an automatic approach in converting the entity names from a source language 'l_s' to a target language 'l_t'?
For example, how can we use the English entity names to create valid Japanese entity names with the same quantity and quality of the English ones?
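The official coverage metric is defined in the paper; purely to make the idea concrete, one simplified reading is the share of entities for which a system proposes at least one name that the human annotators accepted. A minimal sketch under that simplifying assumption (exact string match, helper names are my own):
```python
def coverage(predictions, gold_correct):
    """Simplified coverage: share of entities for which the system proposes
    at least one name that the gold data lists as correct.

    predictions:  dict mapping QID -> set of predicted names in the target language
    gold_correct: dict mapping QID -> set of human-validated correct names
    """
    if not gold_correct:
        return 0.0
    covered = sum(
        1 for qid, names in gold_correct.items()
        if predictions.get(qid, set()) & names
    )
    return covered / len(gold_correct)
```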
### MKGE - Precision
It is well-known that the quality of the information in Wikidata is not perfect.
*Precision* measures the ability of an automatic approach to identify incorrect entity names (or other types of textual information, e.g., entity descriptions) for an entity of interest in a target language 'l_t'.
In other words, measuring *Precision* is equivalent to answering the following question: How effective is an automatic approach in recognizing noisy, incomplete, or outdated information in a target language 'l_t'?
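As above, the official metric is specified in the paper; the sketch below is only one simplified reading of the precision subtask, treating it as flagging existing names as incorrect and checking those flags against the human judgments (function and argument names are my own):
```python
def flagging_precision(flagged_incorrect, gold_incorrect):
    """Simplified precision-subtask score: of the (QID, name) pairs the
    system flags as incorrect, how many did annotators also mark as invalid?

    flagged_incorrect: set of (QID, name) pairs the system wants to remove
    gold_incorrect:    set of (QID, name) pairs annotators marked as invalid
    """
    if not flagged_incorrect:
        return 0.0
    return len(flagged_incorrect & gold_incorrect) / len(flagged_incorrect)
```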
## WikiKGE-10
WikiKGE-10 is a benchmark for evaluating automatic approaches for increasing both coverage and precision of entity names in Wikidata for 10 languages.
WikiKGE-10 includes around 1000 entities in each of the following 10 languages:
* 'ar' - Arabic
* 'de' - German
* 'en' - English
* 'es' - Spanish
* 'fr' - French
* 'it' - Italian
* 'ja' - Japanese
* 'ko' - Korean
* 'ru' - Russian
* 'zh' - Simplified Chinese
### Dataset organization
The data is organized into the following directories:
* 'data/names/gold/' contains the human-curated data.
* 'data/names/m-nta/' contains the predictions from M-NTA.
* 'data/names/gpt/' contains the predictions from GPT-3 and GPT-3.5 (May 2023), and also GPT-4 (September 2023).
* 'data/names/wikidata/' contains the data from Wikidata (May 2023).
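This dataset card's metadata also declares one configuration per folder above ('gold', 'm-nta-with_gpt-3.5', 'm-nta-with_gpt-3', 'm-nta-with_gpt-4', 'gpt', 'wikidata'). Assuming the generic JSON loader handles these files, a minimal loading sketch could look like this ('train' is the default split name when a configuration is defined only through data_files):
```python
from datasets import load_dataset

# Configuration names come from this card's metadata; the record fields for
# 'gold' are documented in the next subsection (wikidata_id, correct_values,
# incorrect_values). Other configurations may have different fields.
gold = load_dataset("davanstrien/ml-kge", "gold", split="train")
print(gold[0])
```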
### Human-curated data in WikiKGE-10
Here are a few examples in 'data/names/gold/URL':
Where:
* 'wikidata_id' is the QID of the entity in Wikidata.
* 'correct_values' is a list of entity names that have been rated as valid by our human annotators.
* 'incorrect_values' is a list of entity names that are in Wikidata but have been rated as invalid by our human annotators.
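The example record itself did not survive formatting, so here is a constructed, purely hypothetical record that matches the documented fields (the entity and the correct/incorrect split are illustrative, not taken from the actual gold files):
```python
# Hypothetical gold record, shown only to illustrate the documented schema
# (wikidata_id, correct_values, incorrect_values); the real files under
# data/names/gold/ contain different entities and judgments.
gold_record = {
    "wikidata_id": "Q166844",
    "correct_values": ["Thomas N'Kono", "Thomas Nucono"],
    "incorrect_values": ["Thomas NKono"],
}
```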
### M-NTA predictions in WikiKGE-10
We also include the entity names predicted by M-NTA, our automatic system for MKGE, to reproduce the results of our experiments.
Here are a few examples of the predictions found in 'data/names/m-nta/no_gpt/URL':
Where:
* 'wikidata_id' is the QID of the entity in Wikidata.
* 'values' is a list of predictions from M-NTA:
* 'value[0]' is the confidence score from M-NTA
* 'value[1]' is the prediction from M-NTA
* 'value[2]' is whether the prediction comes from a Wikidata primary name.
Please cite our work if you find WikiKGE-10, our [paper](https://arxiv.org/abs/2311.15781), or these resources useful.
## License
The code in this repository is licensed under Apache 2.0; see the LICENSE.txt file.
WikiKGE-10 is licensed under CC BY-SA, see the LICENSE_Wikidata.txt file.
## Acknowledgements
This work is part of one of the projects I carried out during my internship at Apple.
I must truly thank Min Li and Yunyao Li for their incredible mentorship and for everything they taught me.
I would also like to thank Umar Farooq Minhas, Saloni Potdar, and Ihab Ilyas for their valuable feedback.
My gratitude also goes to Behrang Mohit for his insightful comments on the paper.
Besides his technical contributions, I would like to thank Daniel Lee for making this project more fun, and Farima Fatahi Bayat, Ronak Pradeep, and Revanth Gangi Reddy for making this internship a unique experience.
3d5fd88a8e65a7639de3840003408a0b5511cd29 |
## Description
Tik Tak Tok - Est. 2023
## Model
HotshotXL
## Voice
Julian
## Orientation
Portrait
# Tags
- Short
- Dancing
# Style
tiktok video, instagram, beautiful, sharp, detailed
# Music
mainstream pop music
## Prompt
A channel generating short vertical videos, between 20 seconds and 60 seconds
Most videos are about people dancing, doing choreography, taking selfies, filming their cats, or capturing daily life
(e.g. going to a cafe, eating pizza outside, etc.) | jbilcke-hf/ai-tube-tik-tak-tok | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-21T12:10:46+00:00 | {"license": "cc-by-nc-4.0", "pretty_name": "Tik Tak Tok"} | 2023-12-22T14:55:42+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
## Description
Tik Tak Tok - Est. 2023
## Model
HotshotXL
## Voice
Julian
## Orientation
Portrait
# Tags
- Short
- Dancing
# Style
tiktok video, instagram, beautiful, sharp, detailed
# Music
mainstream pop music
## Prompt
A channel generating short vertical videos, between 20 seconds and 60 seconds
Most videos are about people dancing, doing choreography, taking selfies, filming their cats, or capturing daily life
(e.g. going to a cafe, eating pizza outside, etc.)
"## Description\n\nTik Tak Tok - Est. 2023",
"## Model\n\nHotshotXL",
"## Voice\n\nJulian",
"## Orientation\n\nPortrait",
"# Tags\n\n- Short\n- Dancing",
"# Style\n\ntiktok video, instagram, beautiful, sharp, detailed",
"# Music\n\nmainstream pop music",
"## Prompt\n\nA channel generating short vertical videos, between 20 seconds and 60 seconds\n\nMost videos are about people dancing, doing choregraphy, or talking selfies, filming their cats, daily life\n(eg. going to a cafe, eating pizza outside etc)"
] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n",
"## Description\n\nTik Tak Tok - Est. 2023",
"## Model\n\nHotshotXL",
"## Voice\n\nJulian",
"## Orientation\n\nPortrait",
"# Tags\n\n- Short\n- Dancing",
"# Style\n\ntiktok video, instagram, beautiful, sharp, detailed",
"# Music\n\nmainstream pop music",
"## Prompt\n\nA channel generating short vertical videos, between 20 seconds and 60 seconds\n\nMost videos are about people dancing, doing choregraphy, or talking selfies, filming their cats, daily life\n(eg. going to a cafe, eating pizza outside etc)"
] | [
17,
9,
5,
3,
5,
7,
13,
5,
58
] | [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n## Description\n\nTik Tak Tok - Est. 2023## Model\n\nHotshotXL## Voice\n\nJulian## Orientation\n\nPortrait# Tags\n\n- Short\n- Dancing# Style\n\ntiktok video, instagram, beautiful, sharp, detailed# Music\n\nmainstream pop music## Prompt\n\nA channel generating short vertical videos, between 20 seconds and 60 seconds\n\nMost videos are about people dancing, doing choregraphy, or talking selfies, filming their cats, daily life\n(eg. going to a cafe, eating pizza outside etc)"
] |