Columns of this dump (with the min–max length statistics from the original header):

| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | list | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | list | 0 | 25 |
| languages | list | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | list | 0 | 352 |
| processed_texts | list | 1 | 353 |
| tokens_length | list | 1 | 353 |
| input_texts | list | 1 | 40 |
9d9faf160facc3dc11e4974cfc84814e516d9e86
|
[GitHub](https://github.com/uhh-lt/AmharicHateSpeech)
# Introduction
The Amharic Hate Speech data was collected via the Twitter API between October 1, 2020 and November 30, 2022, reflecting the socio-political dynamics of Ethiopia in the Twitter space. We used the [WebAnno](http://ltdemos.informatik.uni-hamburg.de/codebookanno-cba/) tool for data annotation; each tweet was annotated by two native speakers and curated by a more experienced adjudicator to determine the gold label. A total of 15.1k tweets across three class labels, namely Hate, Offensive, and Normal, are presented. Read our papers for more details about the dataset (see below).
# Amharic Hate Speech Data Annotation: Lab-Controlled Annotation
The dataset is annotated by two annotators and a curator to determine the gold labels.
For more details, you can read our paper entitled:
1. [Exploring Amharic Hate Speech data Collection and Classification Approaches](https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2023-ayele-et-al-hate-ranlp.pdf)
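The two-annotator-plus-adjudicator scheme described above can be sketched as follows. This is an illustrative assumption about the adjudication logic, not code from the release; the function names and the rule that agreement is final are hypothetical.

```python
from typing import Callable

# The three class labels named in the card.
LABELS = {"Hate", "Offensive", "Normal"}

def gold_label(ann1: str, ann2: str,
               adjudicate: Callable[[str, str], str]) -> str:
    """Sketch of gold-label determination for one tweet:
    two native-speaker annotations, resolved by an adjudicator
    when they disagree (illustrative, not the official pipeline)."""
    assert ann1 in LABELS and ann2 in LABELS
    if ann1 == ann2:
        return ann1            # annotators agree: label is final
    label = adjudicate(ann1, ann2)  # disagreement: adjudicator decides
    assert label in LABELS
    return label
```

For example, `gold_label("Hate", "Normal", adjudicate)` returns whatever the adjudicator chooses, while matching annotations short-circuit to the shared label.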
|
uhhlt/amharichatespeechranlp
|
[
"task_categories:text-classification",
"language:amh",
"am",
"region:us"
] |
2023-09-22T07:00:15+00:00
|
{"language": ["amh"], "task_categories": ["text-classification"], "pretty_name": "Amharic Hate Speech Dataset", "tags": ["am"]}
|
2023-09-22T18:12:49+00:00
|
[] |
[
"amh"
] |
04475cfb1160d6b37bb1bcdd4b003ffe0bff6766
|
# Dataset Card for "luftVerteilen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftVerteilen-50-undersampled
|
[
"region:us"
] |
2023-09-22T07:01:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Auslass", "1": "Raum", "2": "VolumenstromreglerAbluft", "3": "VolumenstromreglerRaum", "4": "VolumenstromreglerZuluft"}}}}, {"name": "Score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 60732.34410511364, "num_examples": 237}, {"name": "test", "num_bytes": 91259, "num_examples": 352}, {"name": "valid", "num_bytes": 91259, "num_examples": 352}], "download_size": 99040, "dataset_size": 243250.34410511365}}
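The `dataset_info` metadata above encodes both the integer-to-name label mapping and the split sizes. A minimal sketch of reading them back out of the JSON (the excerpt below reproduces only the relevant keys):

```python
import json

# Excerpt of the dataset_info metadata shown above: the class_label
# mapping for the "label" feature and the per-split example counts.
metadata = json.loads("""
{"dataset_info": {"features": [
    {"name": "label", "dtype": {"class_label": {"names": {
        "0": "Auslass", "1": "Raum", "2": "VolumenstromreglerAbluft",
        "3": "VolumenstromreglerRaum", "4": "VolumenstromreglerZuluft"}}}}],
  "splits": [
    {"name": "train", "num_examples": 237},
    {"name": "test", "num_examples": 352},
    {"name": "valid", "num_examples": 352}]}}
""")

# Integer id -> class name, and split name -> number of examples.
label_names = metadata["dataset_info"]["features"][0]["dtype"]["class_label"]["names"]
split_sizes = {s["name"]: s["num_examples"]
               for s in metadata["dataset_info"]["splits"]}
```

Note the undersampling is visible in the numbers: the train split (237 examples) is smaller than test and valid (352 each).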
|
2023-09-22T07:01:50+00:00
|
[] |
[] |
a23c13ee93adf5f2f1cdff9599fdcebc1a060c90
|
# Dataset Card for "luftVerteilen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftVerteilen-100-undersampled
|
[
"region:us"
] |
2023-09-22T07:01:51+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Auslass", "1": "Raum", "2": "VolumenstromreglerAbluft", "3": "VolumenstromreglerRaum", "4": "VolumenstromreglerZuluft"}}}}, {"name": "Score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 103270.61044034091, "num_examples": 403}, {"name": "test", "num_bytes": 91259, "num_examples": 352}, {"name": "valid", "num_bytes": 91259, "num_examples": 352}], "download_size": 111225, "dataset_size": 285788.61044034094}}
|
2023-09-22T07:01:55+00:00
|
[] |
[] |
f7c8f55977258db37ad1075fd4523198c24e8152
|
# Dataset Card for "luftVerteilen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftVerteilen-200-undersampled
|
[
"region:us"
] |
2023-09-22T07:01:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Auslass", "1": "Raum", "2": "VolumenstromreglerAbluft", "3": "VolumenstromreglerRaum", "4": "VolumenstromreglerZuluft"}}}}, {"name": "Score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 180146.99538352274, "num_examples": 703}, {"name": "test", "num_bytes": 91259, "num_examples": 352}, {"name": "valid", "num_bytes": 91259, "num_examples": 352}], "download_size": 132465, "dataset_size": 362664.9953835227}}
|
2023-09-22T07:01:59+00:00
|
[] |
[] |
370089df2faeaa6e0088c175db0b8ef3b3112588
|
# Dataset Card for "luftBereitstellen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftBereitstellen-50-undersampled
|
[
"region:us"
] |
2023-09-22T07:11:49+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AbluftAllgemein", "1": "Abluftfilter", "2": "Abluftklappe", "3": "Abluftventilator", "4": "Au\u00dfenluftfilter", "5": "Au\u00dfenluftklappe", "6": "Befeuchter", "7": "Erhitzer", "8": "Filter", "9": "Fortluftklappe", "10": "Ger\u00e4tAllgemein", "11": "Kaeltemengenzaehler", "12": "KlappenAllgemein", "13": "K\u00fchler", "14": "Regler", "15": "Umluft", "16": "Ventilator", "17": "W\u00e4rmemengenz\u00e4hler", "18": "W\u00e4rmer\u00fcckgewinnung", "19": "ZuluftAllgemein", "20": "Zuluftfilter", "21": "Zuluftklappe", "22": "Zuluftventilator"}}}}], "splits": [{"name": "train", "num_bytes": 208830.91202313424, "num_examples": 982}, {"name": "test", "num_bytes": 238179, "num_examples": 1124}, {"name": "valid", "num_bytes": 238179, "num_examples": 1124}], "download_size": 227690, "dataset_size": 685188.9120231343}}
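The luftBereitstellen variants use a larger, 23-way `class_label` feature. A sketch of decoding integer label ids back to the HVAC component names listed in the metadata above (the example ids in the usage note are made up for illustration):

```python
# The 23 class names from the class_label mapping above, in id order
# (id 0 = "AbluftAllgemein", ..., id 22 = "Zuluftventilator").
names = ["AbluftAllgemein", "Abluftfilter", "Abluftklappe", "Abluftventilator",
         "Außenluftfilter", "Außenluftklappe", "Befeuchter", "Erhitzer",
         "Filter", "Fortluftklappe", "GerätAllgemein", "Kaeltemengenzaehler",
         "KlappenAllgemein", "Kühler", "Regler", "Umluft", "Ventilator",
         "Wärmemengenzähler", "Wärmerückgewinnung", "ZuluftAllgemein",
         "Zuluftfilter", "Zuluftklappe", "Zuluftventilator"]

def decode(label_ids):
    """Map integer class ids to their human-readable names."""
    return [names[i] for i in label_ids]
```

For instance, `decode([0, 22])` yields `["AbluftAllgemein", "Zuluftventilator"]`.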
|
2023-09-22T07:11:53+00:00
|
[] |
[] |
352785059d0b4c6f0052c6c3295f959853c8f861
|
# Dataset Card for "luftBereitstellen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftBereitstellen-100-undersampled
|
[
"region:us"
] |
2023-09-22T07:11:54+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AbluftAllgemein", "1": "Abluftfilter", "2": "Abluftklappe", "3": "Abluftventilator", "4": "Au\u00dfenluftfilter", "5": "Au\u00dfenluftklappe", "6": "Befeuchter", "7": "Erhitzer", "8": "Filter", "9": "Fortluftklappe", "10": "Ger\u00e4tAllgemein", "11": "Kaeltemengenzaehler", "12": "KlappenAllgemein", "13": "K\u00fchler", "14": "Regler", "15": "Umluft", "16": "Ventilator", "17": "W\u00e4rmemengenz\u00e4hler", "18": "W\u00e4rmer\u00fcckgewinnung", "19": "ZuluftAllgemein", "20": "Zuluftfilter", "21": "Zuluftklappe", "22": "Zuluftventilator"}}}}], "splits": [{"name": "train", "num_bytes": 378107.292848404, "num_examples": 1778}, {"name": "test", "num_bytes": 238179, "num_examples": 1124}, {"name": "valid", "num_bytes": 238179, "num_examples": 1124}], "download_size": 280245, "dataset_size": 854465.292848404}}
|
2023-09-22T07:11:59+00:00
|
[] |
[] |
6f176b6d33a9dc70e8283f632fef335362decb9e
|
# Dataset Card for "luftBereitstellen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/luftBereitstellen-200-undersampled
|
[
"region:us"
] |
2023-09-22T07:12:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "AbluftAllgemein", "1": "Abluftfilter", "2": "Abluftklappe", "3": "Abluftventilator", "4": "Au\u00dfenluftfilter", "5": "Au\u00dfenluftklappe", "6": "Befeuchter", "7": "Erhitzer", "8": "Filter", "9": "Fortluftklappe", "10": "Ger\u00e4tAllgemein", "11": "Kaeltemengenzaehler", "12": "KlappenAllgemein", "13": "K\u00fchler", "14": "Regler", "15": "Umluft", "16": "Ventilator", "17": "W\u00e4rmemengenz\u00e4hler", "18": "W\u00e4rmer\u00fcckgewinnung", "19": "ZuluftAllgemein", "20": "Zuluftfilter", "21": "Zuluftklappe", "22": "Zuluftventilator"}}}}], "splits": [{"name": "train", "num_bytes": 594806.5793571349, "num_examples": 2797}, {"name": "test", "num_bytes": 238179, "num_examples": 1124}, {"name": "valid", "num_bytes": 238179, "num_examples": 1124}], "download_size": 347666, "dataset_size": 1071164.5793571349}}
|
2023-09-22T07:12:04+00:00
|
[] |
[] |
1dff1214e883f8fdf1fa60369803c407d03315be
|
# Dataset Card for Dataset on Antisemitism on Twitter/X
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from annotationportal.com.
### Languages
English
## Dataset Structure
‘TweetID’: Represents the tweet ID.
‘Username’: Represents the username of the account that published the tweet.
‘Text’: Represents the full text of the tweet (not pre-processed).
‘CreateDate’: Represents the date the tweet was created.
‘Biased’: Represents the label assigned by our annotators, indicating whether the tweet is antisemitic or non-antisemitic.
‘Keyword’: Represents the keyword that was used in the query. The keyword can appear in the text, including mentioned names, or in the username.
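The six columns above can be summarized as a typed record. This is a sketch only; the class name and the string types are assumptions, not part of the release.

```python
from dataclasses import dataclass

@dataclass
class TweetRow:
    """One row of the dataset, following the documented columns
    (illustrative types; the release ships these as table columns)."""
    TweetID: str      # the tweet ID
    Username: str     # account that published the tweet
    Text: str         # full, unprocessed tweet text
    CreateDate: str   # date the tweet was created
    Biased: str       # annotator label: antisemitic / non-antisemitic
    Keyword: str      # query keyword (may match the text or the username)
```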
## Dataset Creation
This dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages.
The dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from annotationportal.com.
### Annotations
#### Annotation process
We annotated the tweets, considering the text, images, videos, and links, in their “natural” context, including threads. We used a detailed annotation guideline based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs, each of which addresses different forms and tropes of antisemitism. We created an online annotation tool (https://annotationportal.com) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form (see Figure 1). It automatically saves each annotation, including the time spent labeling each tweet.
The Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its “natural” environment.
The portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as "confident antisemitic," "probably antisemitic," "probably not antisemitic," "confident not antisemitic," or "don’t know." The annotation guideline, including the definition, is linked in a PDF document.
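The five drop-down options naturally collapse to the binary antisemitic/not label released in the ‘Biased’ column. The mapping below is an illustrative assumption: the card does not state how “don’t know” answers are resolved, so they are kept as `None` here rather than guessed.

```python
from typing import Optional

# Assumed collapse of the portal's five-way choice to a binary label.
# "don't know" is deliberately left unresolved (None), since the card
# does not describe how such cases are adjudicated.
BINARY = {
    "confident antisemitic": True,
    "probably antisemitic": True,
    "probably not antisemitic": False,
    "confident not antisemitic": False,
    "don't know": None,
}

def to_binary(choice: str) -> Optional[bool]:
    """Collapse one drop-down selection to antisemitic (True) / not (False)."""
    return BINARY[choice]
```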
#### Who are the annotators?
All annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.
## Considerations for Using the Data
### Social Impact of Dataset
One of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.
We focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.
Our annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.
## Additional Information
### Dataset Curators
Gunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer
### Citation Information
Jikeli, Gunther, Sameer Karali, Daniel Miehling, and Katharina Soemer (2023): Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets. https://arxiv.org/abs/2304.14599
|
ISCA-IUB/AntisemitismOnTwitter
|
[
"language:en",
"arxiv:2304.14599",
"region:us"
] |
2023-09-22T07:18:44+00:00
|
{"language": ["en"]}
|
2023-09-22T07:39:09+00:00
|
[
"2304.14599"
] |
[
"en"
] |
TAGS
#language-English #arxiv-2304.14599 #region-us
|
# Dataset Card for Dataset on Antisemitism on Twitter/X
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
The ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from URL.
### Languages
English
## Dataset Structure
‘TweetID’: Represents the tweet ID.
‘Username’: Represents the username who published the tweet.
‘Text’: Represents the full text of the tweet (not pre-processed).
‘CreateDate’: Represents the date the tweet was created.
‘Biased’: Represents the labeled by our annotations if the tweet is antisemitic or non-antisemitic.
‘Keyword’: Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or the username.
## Dataset Creation
This dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages.
The dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from URL.
### Annotations
#### Annotation process
We annotated the tweets, considering the text, images, videos, and links, in their “natural” context, including threads. We used a detailed annotation guideline, based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations5 and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs. Each of the paragraphs addresses different forms and tropes of antisemitism. We created an online annotation tool (URL) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form, see Figure 1. It automatically saves each annotation, including the time spent labeling each tweet.
The Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its “natural” environment.
The portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as "confident antisemitic," "probably antisemitic," "probably not antisemitic," "confident not antisemitic," or "don’t know." The annotation guideline, including the definition, is linked in a PDF document.
#### Who are the annotators?
All annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.
## Considerations for Using the Data
### Social Impact of Dataset
One of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.
We focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.
Our annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.
## Additional Information
### Dataset Curators
Gunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer
Jikeli,Gunther, Sameer Karali, Daniel Miehling, and Katharina Soemer (2023): Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets. URL
|
[
"# Dataset Card for Dataset on Antisemitism on Twitter/X",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from URL.",
"### Languages\n\nEnglish",
"## Dataset Structure\n\n‘TweetID’: Represents the tweet ID. \n\n‘Username’: Represents the username who published the tweet. \n\n‘Text’: Represents the full text of the tweet (not pre-processed).\n\n‘CreateDate’: Represents the date the tweet was created. \n\n‘Biased’: Represents the labeled by our annotations if the tweet is antisemitic or non-antisemitic. \n\n‘Keyword’: Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or the username.",
"## Dataset Creation\n\nThis dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages. \n\nThe dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from URL.",
"### Annotations",
"#### Annotation process\n\nWe annotated the tweets, considering the text, images, videos, and links, in their “natural” context, including threads. We used a detailed annotation guideline, based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations5 and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs. Each of the paragraphs addresses different forms and tropes of antisemitism. We created an online annotation tool (URL) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form, see Figure 1. It automatically saves each annotation, including the time spent labeling each tweet.\nThe Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its “natural” environment.\nThe portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as \"confident antisemitic,\" \"probably antisemitic,\" \"probably not antisemitic,\" \"confident not antisemitic,\" or \"don’t know.\" The annotation guideline, including the definition, is linked in a PDF document.",
"#### Who are the annotators?\nAll annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOne of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.\nWe focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.\nOur annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.",
"## Additional Information",
    "### Dataset Curators\n\nGunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer\n\n\n\n\nJikeli, Gunther, Sameer Karali, Daniel Miehling, and Katharina Soemer (2023): Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets. URL"
] |
[
"TAGS\n#language-English #arxiv-2304.14599 #region-us \n",
"# Dataset Card for Dataset on Antisemitism on Twitter/X",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThe ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from URL.",
"### Languages\n\nEnglish",
    "## Dataset Structure\n\n‘TweetID’: Represents the tweet ID. \n\n‘Username’: Represents the username who published the tweet. \n\n‘Text’: Represents the full text of the tweet (not pre-processed).\n\n‘CreateDate’: Represents the date the tweet was created. \n\n‘Biased’: Represents the label assigned by our annotators, indicating whether the tweet is antisemitic or non-antisemitic. \n\n‘Keyword’: Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or the username.",
"## Dataset Creation\n\nThis dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages. \n\nThe dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from URL.",
"### Annotations",
"#### Annotation process\n\nWe annotated the tweets, considering the text, images, videos, and links, in their “natural” context, including threads. We used a detailed annotation guideline, based on the IHRA Definition, which has been endorsed and recommended by more than 30 governments and international organizations5 and is frequently used to monitor and record antisemitic incidents. We divided the definition into 12 paragraphs. Each of the paragraphs addresses different forms and tropes of antisemitism. We created an online annotation tool (URL) to make labeling easier, more consistent, and less prone to errors, including in the process of recording the annotations. The portal displays the tweet and a clickable annotation form, see Figure 1. It automatically saves each annotation, including the time spent labeling each tweet.\nThe Annotation Portal retrieves live tweets by referencing their ID number. Our annotators first look at the tweet, and if they are unsure of the meaning, they are prompted to look at the entire thread, replies, likes, links, and comments. A click on the visualized tweet opens a new tab in the browser, displaying the message on the Twitter page in its “natural” environment.\nThe portal is designed to help annotators consistently label messages as antisemitic or not according to the IHRA definition. After verifying that the message is still live and in English, they select from a drop-down menu where they classify the message as \"confident antisemitic,\" \"probably antisemitic,\" \"probably not antisemitic,\" \"confident not antisemitic,\" or \"don’t know.\" The annotation guideline, including the definition, is linked in a PDF document.",
"#### Who are the annotators?\nAll annotators are familiar with the definition and have been trained on test samples. They have also taken at least one academic course on antisemitism or have done research on antisemitism. We consider them to be expert annotators. Eight such expert annotators of different religions and genders labeled the 18 samples, two for each sample in alternating configurations.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nOne of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets.\nWe focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords.\nOur annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection.",
"## Additional Information",
    "### Dataset Curators\n\nGunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer\n\n\n\n\nJikeli, Gunther, Sameer Karali, Daniel Miehling, and Katharina Soemer (2023): Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets. URL"
] |
[
18,
16,
24,
78,
5,
138,
120,
5,
381,
89,
8,
231,
5,
79
] |
[
    "passage: TAGS\n#language-English #arxiv-2304.14599 #region-us \n# Dataset Card for Dataset on Antisemitism on Twitter/X## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThe ISCA project has compiled this dataset using an annotation portal, which was used to label tweets as either antisemitic or non-antisemitic, among other labels. Please note that the annotation was done with live data, including images and the context, such as threads. The original data was sourced from URL.### Languages\n\nEnglish## Dataset Structure\n\n‘TweetID’: Represents the tweet ID. \n\n‘Username’: Represents the username who published the tweet. \n\n‘Text’: Represents the full text of the tweet (not pre-processed).\n\n‘CreateDate’: Represents the date the tweet was created. \n\n‘Biased’: Represents the label assigned by our annotators, indicating whether the tweet is antisemitic or non-antisemitic. \n\n‘Keyword’: Represents the keyword that was used in the query. The keyword can be in the text, including mentioned names, or the username.## Dataset Creation\n\nThis dataset contains 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021. The dataset is drawn from representative samples during this period with relevant keywords. 1,250 tweets (18%) meet the IHRA definition of antisemitic messages. \n\nThe dataset has been compiled within the ISCA project using an annotation portal to label tweets as either antisemitic or non-antisemitic. The original data was sourced from URL.### Annotations"
] |
d737aee8758709b2a063b4e20af49456363c6a24
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_identity"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/three_styles_prompted_250_512x512_50perclass_identity
|
[
"region:us"
] |
2023-09-22T07:21:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "style_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4334353.0, "num_examples": 150}, {"name": "val", "num_bytes": 4317601.0, "num_examples": 150}], "download_size": 0, "dataset_size": 8651954.0}}
|
2023-09-22T12:24:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_identity"
More Information needed
|
[
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_identity\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_identity\"\n\nMore Information needed"
] |
[
6,
35
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_identity\"\n\nMore Information needed"
] |
b54e313ca240cbc2ef9700600d11f4b0ae977b53
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_proposed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/three_styles_prompted_250_512x512_50perclass_proposed
|
[
"region:us"
] |
2023-09-22T07:21:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "style_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4334433.0, "num_examples": 150}, {"name": "val", "num_bytes": 4317601.0, "num_examples": 150}], "download_size": 8827337, "dataset_size": 8652034.0}}
|
2023-09-22T07:21:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_proposed"
More Information needed
|
[
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_proposed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_proposed\"\n\nMore Information needed"
] |
[
6,
35
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_proposed\"\n\nMore Information needed"
] |
0e7ef556a8a850af8c8af1e56d2870cf4650c4ce
|
# Dataset Card for "Irene-Audio-vectors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
YangZhou/Irene-Audio-vectors
|
[
"region:us"
] |
2023-09-22T07:22:26+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 28357763.0, "num_examples": 24}, {"name": "validation", "num_bytes": 28357763.0, "num_examples": 24}], "download_size": 49222290, "dataset_size": 56715526.0}}
|
2023-09-26T04:33:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Irene-Audio-vectors"
More Information needed
|
[
"# Dataset Card for \"Irene-Audio-vectors\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Irene-Audio-vectors\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Irene-Audio-vectors\"\n\nMore Information needed"
] |
2f0a0a72e342f1227cc927b4cd9a00613875ef7f
|
# Dataset Card for "static-analysis-eval"
A dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub),
where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep).
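Since each record pairs one file with a single detected CWE, working with this set reduces to simple per-class bookkeeping. A minimal sketch of tallying the CWE distribution, using made-up rows shaped like the dataset's `file_name`/`cwe` columns (the file names and CWE IDs below are illustrative, not actual dataset entries):

```python
from collections import Counter

# Made-up rows mirroring the dataset's columns; each evaluation file
# carries exactly one Semgrep finding, tagged with its CWE class.
rows = [
    {"file_name": "app/views.py", "cwe": "CWE-89"},
    {"file_name": "util/shell.py", "cwe": "CWE-78"},
    {"file_name": "api/query.py", "cwe": "CWE-89"},
]

# Tally how many evaluation programs fall under each CWE class.
cwe_counts = Counter(row["cwe"] for row in rows)
print(cwe_counts.most_common())  # [('CWE-89', 2), ('CWE-78', 1)]
```

The same grouping applied to the real 76 rows gives a quick view of which vulnerability classes dominate the benchmark.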
|
patched-codes/static-analysis-eval
|
[
"region:us"
] |
2023-09-22T07:24:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "file_name", "dtype": "string"}, {"name": "cwe", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 87854, "num_examples": 76}], "download_size": 53832, "dataset_size": 87854}}
|
2023-10-02T08:09:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "static-analysis-eval"
A dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub),
where each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep).
|
[
"# Dataset Card for \"static-analysis-eval\"\n\nA dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub), \nwhere each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep)."
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"static-analysis-eval\"\n\nA dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub), \nwhere each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep)."
] |
[
6,
64
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"static-analysis-eval\"\n\nA dataset of 76 Python programs taken from real Python open source projects (top 1000 on GitHub), \nwhere each program is a file that has exactly 1 vulnerability as detected by a particular static analyzer (Semgrep)."
] |
54475a340565b631a821661ae7b8f2501569f42a
|
This data is used to finetune the model on ScanRefer.
# V0
A total of 22,735 references were collected, with the corresponding box located for each.
# V1
A total of 36,665 references were collected, together with their corresponding boxes and object names.
# same_box
Entries in the ScanRefer dataset where different references correspond to the same box.
# same_reference
Entries in the ScanRefer dataset where the same reference corresponds to different boxes; these may occur in different scenes or in the same scene.
The experiments use the V1 version of the data.
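The same_box and same_reference splits amount to grouping (reference, box) pairs and keeping the groups with more than one member. A minimal sketch of that grouping on made-up pairs (the sentences and box IDs below are illustrative, not real ScanRefer annotations):

```python
from collections import defaultdict

# Hypothetical (reference, box_id) annotations standing in for ScanRefer rows.
pairs = [
    ("the chair next to the desk", "box_07"),
    ("the brown chair",            "box_07"),  # different reference, same box
    ("the chair next to the desk", "box_12"),  # same reference, different box
]

by_box = defaultdict(set)
by_ref = defaultdict(set)
for ref, box in pairs:
    by_box[box].add(ref)
    by_ref[ref].add(box)

# Keep only the groups with more than one member.
same_box = {b: refs for b, refs in by_box.items() if len(refs) > 1}
same_reference = {r: boxes for r, boxes in by_ref.items() if len(boxes) > 1}
print(sorted(same_box))        # ['box_07']
print(sorted(same_reference))  # ['the chair next to the desk']
```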
|
hmxiong/ScanRefer_Finetune
|
[
"region:us"
] |
2023-09-22T07:24:58+00:00
|
{}
|
2023-10-18T05:49:59+00:00
|
[] |
[] |
TAGS
#region-us
|
This data is used to finetune the model on ScanRefer.
# V0
A total of 22,735 references were collected, with the corresponding box located for each.
# V1
A total of 36,665 references were collected, together with their corresponding boxes and object names.
# same_box
Entries in the ScanRefer dataset where different references correspond to the same box.
# same_reference
Entries in the ScanRefer dataset where the same reference corresponds to different boxes; these may occur in different scenes or in the same scene.
The experiments use the V1 version of the data.
|
[
"# V0\n一共收集22735个reference,并找到对应的box",
"# V1\n一共收集36665个reference,与之对应的box和object name都收集起来",
"# same_box\nscanrefer数据集中出现的不同的reference但是对应相同的box的数据",
"# same_reference\nscanrefernece数据集中出现的相同的renference对应的不同的box的数据,可能出现在不同的场景中,也可能出现在相同的场景中\n实验中使用的数据为V1版本"
] |
[
"TAGS\n#region-us \n",
"# V0\n一共收集22735个reference,并找到对应的box",
"# V1\n一共收集36665个reference,与之对应的box和object name都收集起来",
"# same_box\nscanrefer数据集中出现的不同的reference但是对应相同的box的数据",
"# same_reference\nscanrefernece数据集中出现的相同的renference对应的不同的box的数据,可能出现在不同的场景中,也可能出现在相同的场景中\n实验中使用的数据为V1版本"
] |
[
6,
15,
21,
19,
41
] |
[
"passage: TAGS\n#region-us \n# V0\n一共收集22735个reference,并找到对应的box# V1\n一共收集36665个reference,与之对应的box和object name都收集起来# same_box\nscanrefer数据集中出现的不同的reference但是对应相同的box的数据# same_reference\nscanrefernece数据集中出现的相同的renference对应的不同的box的数据,可能出现在不同的场景中,也可能出现在相同的场景中\n实验中使用的数据为V1版本"
] |
a671ecf180128f53ffd047982ffad7ecd504f78f
|
# OpenSubtitles STS Dataset for Dutch
OS-STS.nl is an extensive Dutch STS dataset containing over two million sentence pairs and similarity scores.
The dataset is automatically extracted from movie and documentary subtitles sourced from OpenSubtitles2018, a vast parallel corpus of aligned video subtitles.
Recognizing the high prevalence (>10%) of paraphrased statements and question-and-answer pairs in subtitled spoken language, we systematically extract the consecutive parallel sentence pairs from the subtitles that exhibit significant semantic overlap.
## Content of the dataset
The dataset contains Dutch sentence pairs, as well as semantic similarity scores computed from their English translations using sentence-transformers/all-mpnet-base-v2.
<div style="max-width: 480px">

</div>
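Each score is the model's similarity over the English translations of the pair, which at the final step is just the cosine similarity between the two sentence embeddings. A stdlib-only sketch of that scoring step (the short vectors below stand in for real all-mpnet-base-v2 embeddings):

```python
import math

def cosine_similarity(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d "embeddings" standing in for the 768-d model outputs.
score = cosine_similarity([0.2, 0.7, 0.1], [0.25, 0.6, 0.2])
print(round(score, 2))  # 0.98
```

With real embeddings the vectors are produced by encoding both English translations with the model; the scoring step is unchanged.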
**Coming soon**
|
FremyCompany/OS-STS-nl-Dataset
|
[
"task_categories:sentence-similarity",
"size_categories:1M<n<10M",
"language:nl",
"license:other",
"region:us"
] |
2023-09-22T07:28:24+00:00
|
{"language": ["nl"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["sentence-similarity"], "pretty_name": "OpenSubtitles STS Dataset for Dutch"}
|
2023-09-22T07:36:12+00:00
|
[] |
[
"nl"
] |
TAGS
#task_categories-sentence-similarity #size_categories-1M<n<10M #language-Dutch #license-other #region-us
|
# OpenSubtitles STS Dataset for Dutch
URL is an extensive Dutch STS dataset containing over two million sentence pairs and similarity scores.
The dataset is automatically extracted from movie and documentary subtitles sourced from OpenSubtitles2018, a vast parallel corpus of aligned video subtitles.
Recognizing the high prevalence (>10%) of paraphrased statements and question-and-answer pairs in subtitled spoken language, we systematically extract the consecutive parallel sentence pairs from the subtitles that exhibit significant semantic overlap.
## Content of the dataset
The dataset contains Dutch sentence pairs, as well as semantic similarity scores computed from their English translations using sentence-transformers/all-mpnet-base-v2.
<div style="max-width: 480px">
!Coming soon
</div>
Coming soon
|
[
"# OpenSubtitles STS Dataset for Dutch\n\nURL is an extensive Dutch STS dataset containing over two million sentence pairs and similarity scores. \nThe dataset is automatically extracted from movie and documentary subtitles sourced from OpenSubtitles2018, a vast parallel corpus of aligned video subtitles. \n\nRecognizing the high prevalence (>10%) of paraphrased statements and question-and-answer pairs in subtitled spoken language, we systematically extract the consecutive parallel sentence pairs from the subtitles that exhibit significant semantic overlap.",
    "## Content of the dataset\n\nThe dataset contains Dutch sentence pairs, as well as semantic similarity scores computed from their English translations using sentence-transformers/all-mpnet-base-v2.\n\n<div style=\"max-width: 480px\">\n\n !Coming soon\n\n</div>\nComing soon"
] |
[
"TAGS\n#task_categories-sentence-similarity #size_categories-1M<n<10M #language-Dutch #license-other #region-us \n",
"# OpenSubtitles STS Dataset for Dutch\n\nURL is an extensive Dutch STS dataset containing over two million sentence pairs and similarity scores. \nThe dataset is automatically extracted from movie and documentary subtitles sourced from OpenSubtitles2018, a vast parallel corpus of aligned video subtitles. \n\nRecognizing the high prevalence (>10%) of paraphrased statements and question-and-answer pairs in subtitled spoken language, we systematically extract the consecutive parallel sentence pairs from the subtitles that exhibit significant semantic overlap.",
    "## Content of the dataset\n\nThe dataset contains Dutch sentence pairs, as well as semantic similarity scores computed from their English translations using sentence-transformers/all-mpnet-base-v2.\n\n<div style=\"max-width: 480px\">\n\n !Coming soon\n\n</div>\nComing soon"
] |
[
42,
131,
71
] |
[
    "passage: TAGS\n#task_categories-sentence-similarity #size_categories-1M<n<10M #language-Dutch #license-other #region-us \n# OpenSubtitles STS Dataset for Dutch\n\nURL is an extensive Dutch STS dataset containing over two million sentence pairs and similarity scores. \nThe dataset is automatically extracted from movie and documentary subtitles sourced from OpenSubtitles2018, a vast parallel corpus of aligned video subtitles. \n\nRecognizing the high prevalence (>10%) of paraphrased statements and question-and-answer pairs in subtitled spoken language, we systematically extract the consecutive parallel sentence pairs from the subtitles that exhibit significant semantic overlap.## Content of the dataset\n\nThe dataset contains Dutch sentence pairs, as well as semantic similarity scores computed from their English translations using sentence-transformers/all-mpnet-base-v2.\n\n<div style=\"max-width: 480px\">\n\n !Coming soon\n\n</div>\nComing soon"
] |
c2195dcf074f48d40a00c4a9c31a50bced3a72a3
|
# Dataset Card for "ruoh_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Mihaj/ruoh_demo
|
[
"region:us"
] |
2023-09-22T07:30:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "text", "dtype": "string"}, {"name": "mother_tongue", "dtype": "string"}, {"name": "region", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "age", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1600232223.61, "num_examples": 13198}, {"name": "test", "num_bytes": 405584868.6, "num_examples": 3300}], "download_size": 1960524339, "dataset_size": 2005817092.21}}
|
2023-09-22T09:08:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ruoh_demo"
More Information needed
|
[
"# Dataset Card for \"ruoh_demo\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ruoh_demo\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ruoh_demo\"\n\nMore Information needed"
] |
728486cec811c9dc0dedf619cb6bc80fa2c613cc
|
# Dataset Card for "54ae8a8b"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/54ae8a8b
|
[
"region:us"
] |
2023-09-22T07:45:05+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 200, "num_examples": 10}], "download_size": 1374, "dataset_size": 200}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T07:45:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "54ae8a8b"
More Information needed
|
[
"# Dataset Card for \"54ae8a8b\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"54ae8a8b\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"54ae8a8b\"\n\nMore Information needed"
] |
e7666fd5609adaf6e820ab2c93ef8dd140559f5e
|
# Distil Whisper: AMI IHM With Timestamps
This is a variant of the [AMI IHM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghcstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-ihm", "ihm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/ami-ihm-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-22T08:05:01+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "AMI IHM"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
|
# Distil Whisper: AMI IHM With Timestamps
This is a variant of the AMI IHM dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-4.0.
|
[
"# Distil Whisper: AMI IHM With Timestamps\n\nThis is a variant of the AMI IHM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n",
"# Distil Whisper: AMI IHM With Timestamps\n\nThis is a variant of the AMI IHM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
35,
116,
92,
40,
16
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: AMI IHM With Timestamps\n\nThis is a variant of the AMI IHM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0."
] |
6d0abf071b3434cc94555c7ecb428a87bbc3ab53
|
# Distil Whisper: AMI SDM With Timestamps
This is a variant of the [AMI SDM](https://huggingface.co/datasets/edinburghcstr/ami) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/edinburghcstr/ami).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/ami-sdm", "sdm", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/ami-sdm-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-22T08:05:02+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "AMI SDM"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
|
# Distil Whisper: AMI SDM With Timestamps
This is a variant of the AMI SDM dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-4.0.
|
[
"# Distil Whisper: AMI SDM With Timestamps\n\nThis is a variant of the AMI SDM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n",
"# Distil Whisper: AMI SDM With Timestamps\n\nThis is a variant of the AMI SDM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
35,
116,
92,
40,
16
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: AMI SDM With Timestamps\n\nThis is a variant of the AMI SDM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0."
] |
6ca88d1585b48573d5fd5789193f3191e73896e4
|
# Distil Whisper: Common Voice 13 With Timestamps
This is a variant of the [Common Voice 13](https://huggingface.co/datasets/mozilla_foundation/common_voice_13) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/mozilla_foundation/common_voice_13).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/common_voice_13_0", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
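In streaming mode the splits are iterables rather than indexable datasets, so slicing syntax does not apply; `itertools.islice` is the usual way to take the first few samples. A minimal sketch on a stand-in generator (any iterable of sample dicts behaves the same way):

```python
from itertools import islice

# Take the first n samples from any iterable, e.g. a streamed split.
def first_n(iterable, n):
    return list(islice(iterable, n))

stream = ({"id": i} for i in range(1000))  # stand-in for dataset["validation"]
print(first_n(stream, 3))  # [{'id': 0}, {'id': 1}, {'id': 2}]
```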
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
|
distil-whisper/common_voice_13_0-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-09-22T08:05:04+00:00
|
{"language": ["en"], "license": "cc0-1.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "Common Voice 13"}
|
2023-09-25T09:30:12+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us
|
# Distil Whisper: Common Voice 13 With Timestamps
This is a variant of the Common Voice 13 dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc0-1.0.
|
[
"# Distil Whisper: Common Voice 13 With Timestamps\n\nThis is a variant of the Common Voice 13 dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc0-1.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us \n",
"# Distil Whisper: Common Voice 13 With Timestamps\n\nThis is a variant of the Common Voice 13 dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc0-1.0."
] |
[
34,
114,
92,
40,
15
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us \n# Distil Whisper: Common Voice 13 With Timestamps\n\nThis is a variant of the Common Voice 13 dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc0-1.0."
] |
eb064620d61033456d83b6949883b59c219efd8d
|
# Distil Whisper: GigaSpeech With Timestamps
This is a variant of the [GigaSpeech](https://huggingface.co/datasets/speechcolab/gigaspeech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/gigaspeech-l", "l", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
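When sizing a training run it can help to estimate how much audio a split contains. A minimal sketch, assuming each sample carries an `audio` dict with `array` and `sampling_rate` entries (the actual column names may differ), shown on mock samples:

```python
# Sum audio durations across samples and report hours. The "audio" field
# layout is an assumption; adjust to the dataset's real columns.
def total_hours(samples):
    seconds = sum(
        len(s["audio"]["array"]) / s["audio"]["sampling_rate"] for s in samples
    )
    return seconds / 3600

mock_samples = [
    {"audio": {"array": [0.0] * 16000, "sampling_rate": 16000}},  # 1 s
    {"audio": {"array": [0.0] * 48000, "sampling_rate": 16000}},  # 3 s
]
print(total_hours(mock_samples))  # 4 seconds, i.e. roughly 0.0011 hours
```

Because it only iterates, the same function works over a streamed split without downloading the dataset to disk first.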
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/speechcolab/gigaspeech).
|
distil-whisper/gigaspeech-l-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] |
2023-09-22T08:05:06+00:00
|
{"language": ["en"], "license": "other", "task_categories": ["automatic-speech-recognition"], "extra_gated_prompt": "SpeechColab does not own the copyright of the audio files. For researchers and educators who wish to use the audio files for non-commercial research and/or educational purposes, we can provide access through the Hub under certain conditions and terms. \nTerms of Access:\nThe \"Researcher\" has requested permission to use the GigaSpeech database (the \"Database\") at Tsinghua University. In exchange for such permission, Researcher hereby agrees to the following terms and conditions:\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. The SpeechColab team and Tsinghua University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the SpeechColab team and Tsinghua University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted audio files that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. The SpeechColab team and Tsinghua University reserve the right to terminate Researcher's access to the Database at any time.\n6. 
If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n\nPlease also fill out the Google Form https://forms.gle/UuGQAPyscGRrUMLq6 to request access to the GigaSpeech dataset.", "extra_gated_fields": {"Name": "text", "Email": "text", "Organization": "text", "Address": "text", "I hereby confirm that I have requested access via the Google Form provided above": "checkbox", "I accept the terms of access": "checkbox"}}
|
2023-09-25T09:28:51+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-other #region-us
|
# Distil Whisper: GigaSpeech With Timestamps
This is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card.
|
[
"# Distil Whisper: GigaSpeech With Timestamps\n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n",
"# Distil Whisper: GigaSpeech With Timestamps\n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
[
31,
118,
92,
40,
30
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n# Distil Whisper: GigaSpeech With Timestamps\n\nThis is a variant of the GigaSpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
55e45305dc0afbb888af87d46971e149020dcfdd
|
# Distil Whisper: LibriSpeech ASR With Timestamps
This is a variant of the [LibriSpeech ASR](https://huggingface.co/datasets/librispeech_asr) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/librispeech_asr).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all")
# take the first sample of the validation set
sample = dataset["validation.clean"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/librispeech_asr", "all", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation.clean"]))
```
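Since the dataset pairs reference transcripts with Whisper pseudo-labels, a quick sanity check is to measure how far the two disagree. The sketch below implements a simple word error rate (word-level Levenshtein distance); the column names holding the reference and the pseudo-label are not specified here, so it operates on plain strings:

```python
# Simple word error rate: edit distance over words, normalised by
# reference length. Not a replacement for a full WER library, just a check.
def wer(reference, hypothesis):
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```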
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/librispeech_asr-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-22T08:05:08+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "LibriSpeech ASR"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
|
# Distil Whisper: LibriSpeech ASR With Timestamps
This is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-4.0.
|
[
"# Distil Whisper: LibriSpeech ASR With Timestamps\n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n",
"# Distil Whisper: LibriSpeech ASR With Timestamps\n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
35,
120,
92,
40,
16
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: LibriSpeech ASR With Timestamps\n\nThis is a variant of the LibriSpeech ASR dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0."
] |
d77361d7090ee046b9bc919d2bb68c85b245f3f7
|
# Distil Whisper: People's Speech Clean With Timestamps
This is a variant of the [People's Speech Clean](https://huggingface.co/datasets/MLCommons/peoples_speech) dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/MLCommons/peoples_speech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/peoples_speech-clean", "clean", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
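The pseudo-labels include Whisper timestamp predictions. If they are serialised using Whisper's `<|0.00|>` token convention (an assumption — the actual storage format in this dataset may differ), they can be split into timed segments with a small parser:

```python
import re

# Sketch: split a Whisper-style timestamped transcript into
# (start, end, text) segments. The "<|0.00|>" token format follows
# Whisper's convention and is an assumption about this dataset.
def parse_segments(transcript):
    # re.split with a capture group keeps the matched timestamps,
    # so parts alternates: "", t0, text0, t1, text1, t2, ...
    parts = re.split(r"<\|(\d+\.\d+)\|>", transcript)
    segments = []
    for i in range(1, len(parts) - 2, 2):
        start, text, end = float(parts[i]), parts[i + 1].strip(), float(parts[i + 2])
        if text:  # skip the empty span between back-to-back timestamps
            segments.append((start, end, text))
    return segments

example = "<|0.00|> Hello there.<|2.40|><|2.40|> How are you?<|4.80|>"
print(parse_segments(example))  # [(0.0, 2.4, 'Hello there.'), (2.4, 4.8, 'How are you?')]
```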
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-4.0.
|
distil-whisper/peoples_speech-clean-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-4.0",
"region:us"
] |
2023-09-22T08:05:09+00:00
|
{"language": ["en"], "license": "cc-by-4.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "People's Speech Clean"}
|
2023-09-25T09:30:12+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us
|
# Distil Whisper: People's Speech Clean With Timestamps
This is a variant of the People's Speech Clean dataset, augmented to return the pseudo-labelled Whisper
transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-4.0.
|
[
"# Distil Whisper: People's Speech Clean With Timestamps\n\nThis is a variant of the People's Speech Clean dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n",
"# Distil Whisper: People's Speech Clean With Timestamps\n\nThis is a variant of the People's Speech Clean dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-4.0."
] |
[
35,
118,
92,
40,
16
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-4.0 #region-us \n# Distil Whisper: People's Speech Clean With Timestamps\n\nThis is a variant of the People's Speech Clean dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-4.0."
] |
b4c4fa58b27545e308888f5c2e47e1e42c968968
|
# Distil Whisper: SPGISpeech With Timestamps
This is a variant of the [SPGISpeech](https://huggingface.co/datasets/kensho/spgispeech) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/kensho/spgispeech).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/spgispeech", "L")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/spgispeech", "L", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original [dataset card](https://huggingface.co/datasets/kensho/spgispeech).
|
distil-whisper/spgispeech-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:other",
"region:us"
] |
2023-09-22T08:05:10+00:00
|
{"language": ["en"], "license": "other", "task_categories": ["automatic-speech-recognition"], "extra_gated_prompt": "Your access to and use of the information in the Kensho Transcript Dataset (the \u201cContent\u201d), which is provided by Kensho Technologies, LLC, a subsidiary of S&P Global, Inc., (\u201cKensho\u201d), shall be governed by the following terms and conditions of usage (\u201cTerms of Usage\u201d). The Content may be accessed only by persons who have been authorized to use this Content pursuant to their acceptance and acknowledgement of these Terms of Usage (in each case, an \u201cAuthorized User\u201d). By providing your electronic signature at the end of these Terms of Usage, you represent that you are an Authorized User and that you accept these Terms of Usage and agree to be bound by them.\nIf you do not wish to be bound by these Terms of Usage, you must not use this Content. PLEASE READ THESE TERMS OF USAGE CAREFULLY BEFORE USING THIS CONTENT.\nSection 1 \u2013 THE CONTENT\n1.1 The Content is provided for academic research purposes and internal use only and must not be used to: assemble or create a database; construct or facilitate the construction of products which compete with the Content; identify or attempt to identify or contact any individual; or link to another dataset.\nThe Content, which is comprised of public earnings calls in audio and corresponding text format, and all accompanying derived products is proprietary to Kensho and its third-party content providers. You shall not modify the Content; create derivative works based on the Content, rewrite or reprocess the Content except as expressly provided herein. You must not publish, display, transfer or redistribute the Content or any portions or derivative versions thereof to anyone without prior written consent from Kensho. You agree not to contact Kensho or its affiliates concerning individuals whose information may be included in the Content.\n1.2 Disclaimer. 
Content to which you are provided access, either directly or indirectly, from or on this Content will not have been reviewed or monitored by Kensho, and Kensho cannot and does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any such content.\nThe Content is provided for your convenience only and is not a republication or reconfirmation of the opinion or information contained therein. The provision of the Content is without any obligation on the part of Kensho or its third-party content providers to review such or any liability or responsibility arising out of your use thereof. Kensho does not guarantee or make any representation or warranty, either express or implied, as to the accuracy, validity, timeliness, completeness or continued availability of any Content and shall not be liable for any errors, delays, or actions taken in reliance on information. In addition, the Content speaks only as of the date issued and is based on conference calls that may contain projections of other forward-looking statements. You should not rely on the Content as expressing Kensho\u2019s opinion or as representing current information. None of Kensho or the third-party content providers has undertaken, and do not undertake any duty to update any Content or otherwise advise you of any changes in the Content.\n1.3 Ownership of Third-Party Content. You acknowledge that all proprietary rights in the Content that are owned by Kensho or third party content providers shall remain the property of Kensho or such third party content providers, and you shall have no right or interest in such third party content except the rights to use such third party content in accordance with these Terms of Usage. Any additional rights not granted herein shall require a separate, direct agreement with Kensho. 
You acknowledge that the Content and third party content as compiled, prepared, selected and arranged by Kensho or its third party content providers constitutes an expenditure of substantial time, effort and money by Kensho and its third party content providers and constitutes valuable commercial property and/or trade secrets of Kensho and such third party content providers. Kensho retains all rights and remedies afforded under the copyright, trademark, service mark, patent and other laws of the United States and the States thereof, including without limitation any laws designed to protect proprietary or confidential information. You agree that you will not remove or modify any copyright notice, disclosures, disclaimers or other notification or trade name or marks of Kensho or the third party content providers that may appear in the Content or third party content and that any permitted reproduction and/or distribution of the Content or third party content shall contain such notices and/or marks as they appear in the Content or third party content. You may not use Kensho\u2019s or the third-party content providers\u2019 name or trademarks without the prior written consent of Kensho or such third-party content providers. Apart from the rights granted hereunder, no conveyance of ownership, right, title or interest is intended herein. Any additional rights require a separate agreement with Kensho.\n1.4 Posted Guidelines. In addition to these Terms of Usage, when using this Content, you shall be subject to and agree to follow any posted notice, guidelines or rules, which may be posted and amended from time to time. Nothing on this Content shall be considered a recommendation or solicitation to buy or an offer to sell a security to any person in any jurisdiction.\n1.5 Registration Data. 
In consideration of your use of this Content, you and/or your employer agree to: (a) provide true, accurate, current and complete Registration Data (as defined below in Section 3.1) to Kensho as prompted by the registration form completed prior to accessing the Content and (b) maintain and promptly update the Registration Data and to keep the same true, accurate, current and complete.\n1.6 Right to Terminate User Access. Kensho reserves the right to limit, restrict and immediately terminate your access to and use of this Content at any time, in whole or in part, in its sole discretion and without notice.\nSection 2 - DISCLAIMER OF WARRANTY AND LIMITATION OF LIABILITY\n2.1 THE CONTENT IS PROVIDED \u201cAS IS\u201d AND \u201cAS AVAILABLE\u201d WITHOUT REPRESENTATION OR WARRANTY OF ANY KIND. USE OF THE CONTENT IS AT THE USER\u2019S OWN RISK. IN NO EVENT SHALL KENSHO OR ITS THIRD-PARTY CONTENT PROVIDERS BE LIABLE FOR ANY DECISION MADE OR ACTION OR INACTION TAKEN IN RELIANCE ON ANY CONTENT, INCLUDING THIRD-PARTY CONTENT, INCLUDING YOUR HANDLING AND STORING OF THE CONTENT. KENSHO FURTHER EXPLICITLY DISCLAIMS, ANY WARRANTY OF ANY KIND, WHETHER EXPRESS OR IMPLIED, INCLUDING WARRANTIES OF ORIGINALITY, ACCURACY, COMPLETENESS, TIMELINESS, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. KENSHO EXPRESSLY DISCLAIMS, AND YOU WAIVE, ANY LIABILITY THAT MAY ARISE FROM YOUR PUBLICATION OR PROVISION OF THE CONTENT TO A THIRD PARTY, OR ANY REPRESENTATION OR WARRANTY MADE BY YOU TO ANY THIRD PARTY, WHETHER OR NOT RELATED TO THE CONTENT. 
KENSHO, SUPPLIERS OF THIRD-PARTY CONTENT AND ANY OTHER THIRD PARTY WORKING WITH KENSHO SHALL NOT BE RESPONSIBLE OR LIABLE, DIRECTLY OR INDIRECTLY, FOR ANY DAMAGES OR LOSS (INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL AND ANY AND ALL OTHER FORMS OF DAMAGES OR LOSSES REGARDLESS OF THE FORM OF THE ACTION OR THE BASIS OF THE CLAIM) CAUSED OR ALLEGED TO BE CAUSED IN CONNECTION WITH YOUR USE OF THE CONTENT WHETHER OR NOT FORESEEABLE, EVEN IF KENSHO OR ANY OF THE SUPPLIERS OF THIRD-PARTY CONTENT OR OTHER THIRD PARTIES WORKING WITH KENSHO IN CONNECTION WITH THE CONTENT HAS BEEN ADVISED OF THE POSSIBILITY OR LIKELIHOOD OF SUCH DAMAGES.\n2.2 THE CONTENT IS NOT INTENDED TO PROVIDE TAX, LEGAL, INSURANCE OR INVESTMENT ADVICE, AND NOTHING IN THE CONTENT SHOULD BE CONSTRUED AS AN OFFER TO SELL, A SOLICITATION OF AN OFFER TO BUY, OR A RECOMMENDATION FOR ANY SECURITY BY KENSHO OR ANY THIRD PARTY.\n2.3 For third party demands, claims, actions, proceedings and liability for losses, damages, reasonable legal costs and other reasonable expenses of any nature, you agree to defend, indemnify and hold Kensho and its affiliates harmless, including its respective directors, officers, employees and agents from and against all claims to the extent arising from your access to and/or use of the Content, any failure by you to abide by the Terms of Usage, or breach of applicable law.\nSection 3 - PRIVACY\n3.1 Access and Collection. In order to access this Content, during the registration process, either you or your employer will be required to provide Kensho with certain information; including your name, employer or academic institution, and e-mail address (\u201cRegistration Data\u201d). In addition, when you request or view Content, Kensho may obtain user identifiable information related to your request of, or access to, such Content (\u201cAccess Data\u201d). 
For example, while you are accessing this Content, our Web servers may recognize your: (a) domain name; (b) ISP\u2019s domain name; (c) IP address; (d) browser type; and (e) operating system. If you contact us with a technical question, we may collect certain information about your systems, including: (a) your browser type, version and settings (e.g., Java and cookie settings); (b) connectivity information (e.g., SSL/HTTPS compatibility, bandwidth capacity); and browser plug-in information (e.g., do you have Adobe, what is your media player, can you open Flash files, etc.).\n3.2 Use of Your Information. Registration Data and Access Data may be used by Kensho for research and development purposes and to communicate with users and to troubleshoot any technical issues pertaining to the Content. You acknowledge that in the event that a separate agreement is required, Kensho may share Registration Data with its Affiliates (as defined below).\n3.3 Disclosure of Your Information. Except as otherwise noted herein, Kensho will not disclose, rent or sell personal information collected from or about you without your permission. For the purposes specified in the preceding paragraph, we may transfer or disclose Registration Data and Access Data to S&P Global Inc. and its affiliates (\u201cKensho Affiliates\u201d) and third parties who are contracted to perform services on behalf of Kensho, such as those who assist Kensho in bringing you this Content and providing you with certain features and functionality included within or accessible via this Content. We may also disclose Registration Data and Access Data to Kensho Affiliates and third parties in connection with their providing you access to this Content. Disclosures to these third parties will be subject to confidentiality agreements and, where required, governed by contract. 
Kensho may also be required to disclose information to governmental, regulatory or self-regulatory entities or agencies in response to regulatory inquiries or to comply with applicable laws, rules, regulations, orders, subpoenas or other legal processes.\n3.4 Consent. By (a) agreeing to these Terms of Usage, or (b) by using this Content, and, in either case, providing any information that may be required, requested or otherwise collected by us as set forth above, you freely consent to Kensho processing your information in the United States and in other countries and territories for the purposes set out in these Terms of Usage, and you also consent to the transfer of your information for such purposes to any third party content provider wherever such entity may from time to time be located and to any third parties as described above and in accordance with applicable law and regulations. If you do not permit Kensho to collect any of your information or do not agree with any of the terms and conditions of these Terms of Usage, you should not use this Content and should exit this page and/or Content, as the case may be. If after registering with Kensho, you desire to withdraw the consent granted in this Section 3.4 for all future use of your information by Kensho, you must notify Kensho in writing at the address listed below in Section 3.8 and immediately cease use of this Content.\n3.5 Inquiries. If you have any questions regarding these Terms of Usage or your information that is held by us, please contact Kensho in writing using the contact information provided below. If we receive a request regarding your personal information held by us, we will use reasonable means to provide you with such information that we can reasonably compile. You will be given the opportunity to rectify any inaccuracies in such information.\n3.6 Encryption. 
Kensho may use encryption technology to protect certain transmissions of data to/from this Content, but e-mail and other communications, unless otherwise noted on this Content, are not encrypted to/from this Content. Therefore, you should not send any personal or identifying information, such as account numbers, credit card numbers, Social Security numbers, passwords, etc., to Kensho via e-mail. By utilizing e-mail or other electronic communication means you acknowledge that you have no expectation of privacy with respect to the information delivered thereby and that Kensho will not be responsible for any loss or damage that could result from interception by third parties of any information so sent.\n3.7 Contact Information. In the event you have any questions regarding these Terms of Use, this Privacy Statement or to make any requests or queries regarding your information that is held by us you may contact us in writing at [email protected] or Kensho Technologies LLC, Attn: General Counsel, 55 Water Street, New York, NY 10041.\nSection 4 - MISCELLANEOUS\n4.1 Entire Agreement. These Terms of Usage constitute the entire agreement of the parties hereto with respect to the subject matter hereof and supersede all prior agreements and undertakings, both written and oral, between the parties with respect to the subject matter hereof.\n4.2 Severability. If any term or other provision of these Terms of Usage is invalid, illegal or incapable of being enforced by any law or public policy, all other terms and provisions of these Terms of Usage shall nevertheless remain in full force and effect so long as the economic or legal substance of the transactions contemplated hereby is not affected in any manner materially adverse to any party.\n4.3 Governing Law; Forum. 
These Terms of Usage shall be governed in all respects by the laws of the State of New York, and any litigation arising out of or connected in any way with these Terms of Usage shall take place in a State or Federal court of competent jurisdiction in New York County, State of New York.\n4.4 Waiver of Jury Trial. YOU WAIVE TO THE FULLEST EXTENT PERMITTED BY APPLICABLE LAW ANY RIGHT YOU MAY HAVE TO A TRIAL BY JURY WITH RESPECT TO ANY ACTIONS OR PROCEEDINGS DIRECTLY OR INDIRECTLY ARISING OUT OF, UNDER OR IN CONNECTION WITH THESE TERMS OF USAGE.\n4.5 Conflict. In the event of a conflict between these Terms of Use and any other agreement with Kensho that relates to Third-Party Content, the more restrictive terms shall prevail.", "extra_gated_fields": {"Full name": "text", "Email": "text", "Institution": "text", "I accept the Terms of Usage": "checkbox"}}
|
2023-09-25T09:28:51+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-other #region-us
|
# Distil Whisper: SPGISpeech With Timestamps
This is a variant of the SPGISpeech dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card.
|
[
"# Distil Whisper: SPGISpeech With Timestamps\n\nThis is a variant of the SPGISpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n",
"# Distil Whisper: SPGISpeech With Timestamps\n\nThis is a variant of the SPGISpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
[
31,
116,
92,
40,
30
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-other #region-us \n# Distil Whisper: SPGISpeech With Timestamps\n\nThis is a variant of the SPGISpeech dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under custom terms. To view the custom license for this dataset, refer to the original dataset card."
] |
6d51d240102af8dfa988146b01609d7fc78dc9c3
|
# Distil Whisper: TEDLIUM With Timestamps
This is a variant of the [TEDLIUM](https://huggingface.co/datasets/LIUM/tedlium) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/LIUM/tedlium).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/tedlium", "release3", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc-by-nc-nd-3.0.
|
distil-whisper/tedlium-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc-by-nc-nd-3.0",
"region:us"
] |
2023-09-22T08:05:11+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-3.0", "task_categories": ["automatic-speech-recognition"], "pretty_name": "TEDLIUM"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc-by-nc-nd-3.0 #region-us
|
# Distil Whisper: TEDLIUM With Timestamps
This is a variant of the TEDLIUM dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc-by-nc-nd-3.0.
|
[
"# Distil Whisper: TEDLIUM With Timestamps\n\nThis is a variant of the TEDLIUM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-nc-nd-3.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-nc-nd-3.0 #region-us \n",
"# Distil Whisper: TEDLIUM With Timestamps\n\nThis is a variant of the TEDLIUM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc-by-nc-nd-3.0."
] |
[
39,
114,
92,
40,
20
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc-by-nc-nd-3.0 #region-us \n# Distil Whisper: TEDLIUM With Timestamps\n\nThis is a variant of the TEDLIUM dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc-by-nc-nd-3.0."
] |
c54119887998d081639b567304670fec333019fd
|
# Distil Whisper: VoxPopuli With Timestamps
This is a variant of the [VoxPopuli](https://huggingface.co/datasets/facebook/voxpopuli) dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper [large-v2](https://huggingface.co/openai/whisper-large-v2)
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
[dataset card](https://huggingface.co/datasets/facebook/voxpopuli).
## Standalone Usage
First, install the latest version of the 🤗 Datasets package:
```bash
pip install --upgrade pip
pip install --upgrade datasets[audio]
```
The dataset can be downloaded and pre-processed on disk using the [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.5/en/package_reference/loading_methods#datasets.load_dataset)
function:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en")
# take the first sample of the validation set
sample = dataset["validation"][0]
```
It can also be streamed directly from the Hub using Datasets' [streaming mode](https://huggingface.co/blog/audio-datasets#streaming-mode-the-silver-bullet).
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
```python
from datasets import load_dataset
dataset = load_dataset("distil-whisper/voxpopuli", "en", streaming=True)
# take the first sample of the validation set
sample = next(iter(dataset["validation"]))
```
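Each element of this dataset carries both the original reference transcription and the Whisper pseudo-label, so a common downstream step is comparing the two with word error rate (WER). Below is a minimal, self-contained WER sketch; the example strings are purely illustrative, and the actual column names holding the reference and pseudo-labelled text (e.g. `text` vs. `whisper_transcript`) are assumptions not confirmed by this card:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] = edit distance between the first i-1 reference words
    # and the first j hypothesis words (rolling 1-D DP row)
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# Illustrative strings standing in for a reference transcription and a pseudo-label.
reference = "the european parliament met today"
pseudo_label = "the european parliament met to day"
print(wer(reference, pseudo_label))  # 0.4 (one substitution + one insertion over 5 words)
```

For real evaluation work, a maintained metric implementation (e.g. the `jiwer` package) is preferable to a hand-rolled edit distance.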
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
[Distil Whisper repository](https://github.com/huggingface/distil-whisper#training).
## License
This dataset is licensed under cc0-1.0.
|
distil-whisper/voxpopuli-timestamped
|
[
"task_categories:automatic-speech-recognition",
"language:en",
"license:cc0-1.0",
"region:us"
] |
2023-09-22T08:05:12+00:00
|
{"language": ["en"], "license": "cc0-1.0", "task_categories": ["automatic-speech-recognition"], "-pretty_name": "VoxPopuli"}
|
2023-09-25T09:30:13+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us
|
# Distil Whisper: VoxPopuli With Timestamps
This is a variant of the VoxPopuli dataset, augmented to return the pseudo-labelled Whisper
Transcriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by
labelling the input audio data with the Whisper large-v2
model with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original
dataset card.
## Standalone Usage
First, install the latest version of the Datasets package:
The dataset can be downloaded and pre-processed on disk using the 'load_dataset'
function:
It can also be streamed directly from the Hub using Datasets' streaming mode.
Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire
dataset to disk:
## Distil Whisper Usage
To use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the
Distil Whisper repository.
## License
This dataset is licensed under cc0-1.0.
|
[
"# Distil Whisper: VoxPopuli With Timestamps\n\nThis is a variant of the VoxPopuli dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc0-1.0."
] |
[
"TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us \n",
"# Distil Whisper: VoxPopuli With Timestamps\n\nThis is a variant of the VoxPopuli dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.",
"## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:",
"## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.",
"## License\n\nThis dataset is licensed under cc0-1.0."
] |
[
34,
116,
92,
40,
15
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #language-English #license-cc0-1.0 #region-us \n# Distil Whisper: VoxPopuli With Timestamps\n\nThis is a variant of the VoxPopuli dataset, augmented to return the pseudo-labelled Whisper \nTranscriptions alongside the original dataset elements. The pseudo-labelled transcriptions were generated by \nlabelling the input audio data with the Whisper large-v2\nmodel with *greedy* sampling and timestamp prediction. For information on how the original dataset was curated, refer to the original \ndataset card.## Standalone Usage\n\nFirst, install the latest version of the Datasets package:\n\n\n\nThe dataset can be downloaded and pre-processed on disk using the 'load_dataset' \nfunction:\n\n\n\nIt can also be streamed directly from the Hub using Datasets' streaming mode.\nLoading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire \ndataset to disk:## Distil Whisper Usage\n\nTo use this dataset to reproduce a Distil Whisper training run, refer to the instructions on the \nDistil Whisper repository.## License\n\nThis dataset is licensed under cc0-1.0."
] |
4ca2e05b44634f380e11db87be720ccc8e63e724
|
# Bangumi Image Base of Tenpuru
This is the image base of bangumi Tenpuru; we detected 9 characters and 883 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 272 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 50 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 221 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 36 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 37 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 101 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 115 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 22 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 29 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
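As a quick sanity check, the per-cluster image counts in the table above sum to the stated 883 total (the counts below are transcribed from the table, not generated from the dataset itself):

```python
# Per-cluster image counts transcribed from the table above.
cluster_counts = {
    "0": 272, "1": 50, "2": 221, "3": 36, "4": 37,
    "5": 101, "6": 115, "7": 22, "noise": 29,
}
total_images = sum(cluster_counts.values())
print(total_images)  # 883
```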
|
BangumiBase/tenpuru
|
[
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] |
2023-09-22T08:05:54+00:00
|
{"license": "mit", "size_categories": ["n<1K"], "tags": ["art"]}
|
2023-09-29T10:12:45+00:00
|
[] |
[] |
TAGS
#size_categories-n<1K #license-mit #art #region-us
|
Bangumi Image Base of Tenpuru
=============================
This is the image base of bangumi Tenpuru; we detected 9 characters and 883 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned; they may actually be noisy. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
|
[] |
[
"TAGS\n#size_categories-n<1K #license-mit #art #region-us \n"
] |
[
23
] |
[
"passage: TAGS\n#size_categories-n<1K #license-mit #art #region-us \n"
] |
10eba7d5a138587bf2a37f24723ffd74ceead0fa
|
# Dataset Card for "lima-chatml"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
TinyPixel/lima-chatml
|
[
"region:us"
] |
2023-09-22T08:08:16+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2945130, "num_examples": 1030}], "download_size": 1700056, "dataset_size": 2945130}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T08:08:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "lima-chatml"
More Information needed
|
[
"# Dataset Card for \"lima-chatml\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"lima-chatml\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"lima-chatml\"\n\nMore Information needed"
] |
1b646c7cad95822a8badeaa7bcf642b902051619
|
# Dataset Card for "poor4kids_0_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/poor4kids_0_prompts
|
[
"region:us"
] |
2023-09-22T08:32:16+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2709, "num_examples": 11}], "download_size": 3992, "dataset_size": 2709}}
|
2023-09-22T08:32:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "poor4kids_0_prompts"
More Information needed
|
[
"# Dataset Card for \"poor4kids_0_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"poor4kids_0_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"poor4kids_0_prompts\"\n\nMore Information needed"
] |
83ea7f5e26d93d164bf25d038b33c141915d002f
|
# Dataset Card for "poor4kids_1_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/poor4kids_1_prompts
|
[
"region:us"
] |
2023-09-22T08:32:18+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1725, "num_examples": 7}], "download_size": 3348, "dataset_size": 1725}}
|
2023-09-22T08:32:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "poor4kids_1_prompts"
More Information needed
|
[
"# Dataset Card for \"poor4kids_1_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"poor4kids_1_prompts\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"poor4kids_1_prompts\"\n\nMore Information needed"
] |
dd438d10b8fc2d24fd30da99ac6206ffec3c51a8
|
# Dataset Card for "poor4kids_2_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/poor4kids_2_prompts
|
[
"region:us"
] |
2023-09-22T08:32:19+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2371, "num_examples": 10}], "download_size": 3087, "dataset_size": 2371}}
|
2023-09-22T08:32:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "poor4kids_2_prompts"
More Information needed
|
[
"# Dataset Card for \"poor4kids_2_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"poor4kids_2_prompts\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"poor4kids_2_prompts\"\n\nMore Information needed"
] |
e864232e45f743e3e9963ccb06c699b33104a8b9
|
# Dataset Card for "train_data_set_12000-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hellomyoh/train_data_set_12000-added-text
|
[
"region:us"
] |
2023-09-22T08:49:16+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "num", "dtype": "int64"}, {"name": "english", "dtype": "string"}, {"name": "korean", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6633946, "num_examples": 12000}], "download_size": 3995296, "dataset_size": 6633946}}
|
2023-09-22T08:49:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_data_set_12000-added-text"
More Information needed
|
[
"# Dataset Card for \"train_data_set_12000-added-text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_data_set_12000-added-text\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_data_set_12000-added-text\"\n\nMore Information needed"
] |
c05eb0deab97b282a9db66a47d83100a0f05f19f
|
# Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-peft
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [totally-not-an-llm/EverythingLM-13b-V3-peft](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-peft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-peft",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T05:00:11.461454](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-peft/blob/main/results_2023-10-24T05-00-11.461454.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.25356543624161076,
"em_stderr": 0.004455336892528858,
"f1": 0.3693697567114122,
"f1_stderr": 0.004373655698164223,
"acc": 0.3919187301374544,
"acc_stderr": 0.009398774025536343
},
"harness|drop|3": {
"em": 0.25356543624161076,
"em_stderr": 0.004455336892528858,
"f1": 0.3693697567114122,
"f1_stderr": 0.004373655698164223
},
"harness|gsm8k|5": {
"acc": 0.05534495830174375,
"acc_stderr": 0.0062982217961795595
},
"harness|winogrande|5": {
"acc": 0.728492501973165,
"acc_stderr": 0.012499326254893127
}
}
```
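In this snapshot, the "all" accuracy is consistent with an unweighted mean of the two per-task accuracies (and the "all" em/f1 match the DROP values, the only task reporting them). A quick consistency check on the numbers above — this is not the leaderboard's official aggregation code:

```python
# Per-task accuracies copied from the results JSON above.
task_acc = {
    "harness|gsm8k|5": 0.05534495830174375,
    "harness|winogrande|5": 0.728492501973165,
}
# Unweighted mean across tasks; matches the reported "all" acc of 0.3919187301374544
agg_acc = sum(task_acc.values()) / len(task_acc)
print(agg_acc)
```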
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-peft
|
[
"region:us"
] |
2023-09-22T08:57:45+00:00
|
{"pretty_name": "Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft", "dataset_summary": "Dataset automatically created during the evaluation run of model [totally-not-an-llm/EverythingLM-13b-V3-peft](https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-peft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-peft\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T05:00:11.461454](https://huggingface.co/datasets/open-llm-leaderboard/details_totally-not-an-llm__EverythingLM-13b-V3-peft/blob/main/results_2023-10-24T05-00-11.461454.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.25356543624161076,\n \"em_stderr\": 0.004455336892528858,\n \"f1\": 0.3693697567114122,\n \"f1_stderr\": 0.004373655698164223,\n \"acc\": 0.3919187301374544,\n \"acc_stderr\": 0.009398774025536343\n },\n \"harness|drop|3\": {\n \"em\": 0.25356543624161076,\n \"em_stderr\": 0.004455336892528858,\n \"f1\": 0.3693697567114122,\n \"f1_stderr\": 0.004373655698164223\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.05534495830174375,\n \"acc_stderr\": 0.0062982217961795595\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.728492501973165,\n \"acc_stderr\": 0.012499326254893127\n }\n}\n```", "repo_url": "https://huggingface.co/totally-not-an-llm/EverythingLM-13b-V3-peft", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|arc:challenge|25_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T05_00_11.461454", "path": ["**/details_harness|drop|3_2023-10-24T05-00-11.461454.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T05-00-11.461454.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T05_00_11.461454", "path": ["**/details_harness|gsm8k|5_2023-10-24T05-00-11.461454.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T05-00-11.461454.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hellaswag|10_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T09-57-21.290037.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T09-57-21.290037.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T09-57-21.290037.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T09-57-21.290037.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T09-57-21.290037.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T09-57-21.290037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T09-57-21.290037.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T09-57-21.290037.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T05_00_11.461454", "path": ["**/details_harness|winogrande|5_2023-10-24T05-00-11.461454.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T05-00-11.461454.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T09_57_21.290037", "path": ["results_2023-09-22T09-57-21.290037.parquet"]}, {"split": "2023_10_24T05_00_11.461454", "path": ["results_2023-10-24T05-00-11.461454.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T05-00-11.461454.parquet"]}]}]}
|
2023-10-24T04:00:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model totally-not-an-llm/EverythingLM-13b-V3-peft on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
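The loading snippet itself is not reproduced in this dump. As a sketch, the per-task configurations follow the naming scheme visible in the metadata above (`harness_<benchmark>_<task>_<n_shot>`), and a run's details can be loaded by passing one of those config names to `datasets.load_dataset`. The helper below is illustrative, and the repository id in the commented call is a placeholder, since the actual repo URL is elided in this card:

```python
# Sketch: how the per-task config names in the metadata are structured,
# and how one would load a run's details. The repository id below is a
# PLACEHOLDER -- the real repo URL is not given in this card.

def parse_config_name(config_name: str):
    """Split a config name like 'harness_hendrycksTest_virology_5'
    into (harness, task, n_shot)."""
    parts = config_name.split("_")
    harness = parts[0]            # leading 'harness' prefix
    n_shot = int(parts[-1])       # trailing few-shot count
    task = "_".join(parts[1:-1])  # everything in between is the task name
    return harness, task, n_shot

print(parse_config_name("harness_hendrycksTest_high_school_physics_5"))
# -> ('harness', 'hendrycksTest_high_school_physics', 5)

# Loading the details for that task (requires the `datasets` library and
# network access; "<org>/<details-repo>" is a placeholder):
# from datasets import load_dataset
# data = load_dataset("<org>/<details-repo>",
#                     "harness_hendrycksTest_high_school_physics_5",
#                     split="latest")
```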
## Latest results
These are the latest results from run 2023-10-24T05:00:11.461454 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and in the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model totally-not-an-llm/EverythingLM-13b-V3-peft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T05:00:11.461454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model totally-not-an-llm/EverythingLM-13b-V3-peft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T05:00:11.461454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of totally-not-an-llm/EverythingLM-13b-V3-peft## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model totally-not-an-llm/EverythingLM-13b-V3-peft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T05:00:11.461454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
631d53c04d35f9f778c7bc339517fef4d0caf0e9
|
# Dataset Card for "train_data_set_395107-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hellomyoh/train_data_set_395107-added-text
|
[
"region:us"
] |
2023-09-22T08:58:24+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "num", "dtype": "int64"}, {"name": "english", "dtype": "string"}, {"name": "korean", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 121537138, "num_examples": 395107}], "download_size": 65210995, "dataset_size": 121537138}}
|
2023-09-22T08:58:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_data_set_395107-added-text"
More Information needed
|
[
"# Dataset Card for \"train_data_set_395107-added-text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_data_set_395107-added-text\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_data_set_395107-added-text\"\n\nMore Information needed"
] |
a80e72505b282aeaa8cac9eedddf68e520f24094
|
# Dataset Card for "train_data_set_10001966-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hellomyoh/train_data_from_mixed_aihub_memoq_s10001966
|
[
"region:us"
] |
2023-09-22T09:00:52+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "num", "dtype": "int64"}, {"name": "english", "dtype": "string"}, {"name": "korean", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 497586414, "num_examples": 1001966}], "download_size": 302932465, "dataset_size": 497586414}}
|
2023-09-22T09:02:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_data_set_10001966-added-text"
More Information needed
|
[
"# Dataset Card for \"train_data_set_10001966-added-text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_data_set_10001966-added-text\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_data_set_10001966-added-text\"\n\nMore Information needed"
] |
fed453ed380644cfc85c7fe020e272bd26a79859
|
# Dataset Card for "kaelteVersorgen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteVersorgen-50-undersampled
|
[
"region:us"
] |
2023-09-22T09:14:31+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "KaelteErzeugen", "1": "KaelteSpeichern", "2": "KaelteVerteilen"}}}}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "Komponente", "dtype": "string"}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 27642.555450236967, "num_examples": 112}, {"name": "test", "num_bytes": 32271, "num_examples": 132}, {"name": "valid", "num_bytes": 32271, "num_examples": 132}], "download_size": 51628, "dataset_size": 92184.55545023696}}
|
2023-09-22T09:14:36+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteVersorgen-50-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteVersorgen-50-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteVersorgen-50-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteVersorgen-50-undersampled\"\n\nMore Information needed"
] |
093618b7305e6b57f432215bc5f88aaf450065c4
|
# Dataset Card for "kaelteVersorgen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteVersorgen-100-undersampled
|
[
"region:us"
] |
2023-09-22T09:14:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "KaelteErzeugen", "1": "KaelteSpeichern", "2": "KaelteVerteilen"}}}}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "Komponente", "dtype": "string"}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52323.40853080569, "num_examples": 212}, {"name": "test", "num_bytes": 32271, "num_examples": 132}, {"name": "valid", "num_bytes": 32271, "num_examples": 132}], "download_size": 57973, "dataset_size": 116865.4085308057}}
|
2023-09-22T09:14:40+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteVersorgen-100-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteVersorgen-100-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteVersorgen-100-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteVersorgen-100-undersampled\"\n\nMore Information needed"
] |
7b977c0b55aa1fb45e0cd4154dc4d94196254417
|
# Dataset Card for "kaelteVersorgen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteVersorgen-200-undersampled
|
[
"region:us"
] |
2023-09-22T09:14:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "KaelteErzeugen", "1": "KaelteSpeichern", "2": "KaelteVerteilen"}}}}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "Komponente", "dtype": "string"}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101685.11469194313, "num_examples": 412}, {"name": "test", "num_bytes": 32271, "num_examples": 132}, {"name": "valid", "num_bytes": 32271, "num_examples": 132}], "download_size": 69781, "dataset_size": 166227.11469194313}}
|
2023-09-22T09:14:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteVersorgen-200-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteVersorgen-200-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteVersorgen-200-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteVersorgen-200-undersampled\"\n\nMore Information needed"
] |
aa97b0825bf7c821a1b272ceafced2f91fb51207
|
# Dataset Card for "train_data_set_117755-added-text"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
hellomyoh/train_data_from_memoq_s117755
|
[
"region:us"
] |
2023-09-22T09:19:30+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "num", "dtype": "int64"}, {"name": "english", "dtype": "string"}, {"name": "korean", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 37859040, "num_examples": 117755}], "download_size": 21550350, "dataset_size": 37859040}}
|
2023-09-22T09:19:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "train_data_set_117755-added-text"
More Information needed
|
[
"# Dataset Card for \"train_data_set_117755-added-text\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"train_data_set_117755-added-text\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"train_data_set_117755-added-text\"\n\nMore Information needed"
] |
b2d50857a739f899a102628e5b27cfd136178be9
|
# Dataset Card for "kaelteErzeugen-50-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteErzeugen-50-undersampled
|
[
"region:us"
] |
2023-09-22T09:28:50+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Kaelteanlage", "1": "KaeltekreisAllgemein", "2": "Kaeltemaschine", "3": "Kaeltemengenzaehler", "4": "Klappe", "5": "Pumpe", "6": "RKW", "7": "Regler", "8": "Ruecklauf", "9": "Ventil", "10": "Vorlauf", "11": "Waermemengenzaehler"}}}}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 72126.24090121317, "num_examples": 293}, {"name": "test", "num_bytes": 18282, "num_examples": 73}, {"name": "valid", "num_bytes": 18282, "num_examples": 73}], "download_size": 54220, "dataset_size": 108690.24090121317}}
|
2023-09-22T09:28:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteErzeugen-50-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteErzeugen-50-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteErzeugen-50-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteErzeugen-50-undersampled\"\n\nMore Information needed"
] |
4576bb11c6632861feecc539d27a896c0247f2e2
|
# Dataset Card for "kaelteErzeugen-100-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteErzeugen-100-undersampled
|
[
"region:us"
] |
2023-09-22T09:28:55+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Kaelteanlage", "1": "KaeltekreisAllgemein", "2": "Kaeltemaschine", "3": "Kaeltemengenzaehler", "4": "Klappe", "5": "Pumpe", "6": "RKW", "7": "Regler", "8": "Ruecklauf", "9": "Ventil", "10": "Vorlauf", "11": "Waermemengenzaehler"}}}}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 90342.424610052, "num_examples": 367}, {"name": "test", "num_bytes": 18282, "num_examples": 73}, {"name": "valid", "num_bytes": 18282, "num_examples": 73}], "download_size": 58393, "dataset_size": 126906.424610052}}
|
2023-09-22T09:28:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteErzeugen-100-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteErzeugen-100-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteErzeugen-100-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteErzeugen-100-undersampled\"\n\nMore Information needed"
] |
fd998d4a6b82e7a15d0d406b3268d3f7dbc795e8
|
# Dataset Card for "kaelteErzeugen-200-undersampled"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mboth/kaelteErzeugen-200-undersampled
|
[
"region:us"
] |
2023-09-22T09:28:59+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "Datatype", "dtype": "string"}, {"name": "Beschreibung", "dtype": "string"}, {"name": "Name", "dtype": "string"}, {"name": "Unit", "dtype": "string"}, {"name": "Grundfunktion", "dtype": "string"}, {"name": "ScoreGrundfunktion", "dtype": "float64"}, {"name": "ZweiteGrundfunktion", "dtype": "string"}, {"name": "ScoreZweiteGrundfunktion", "dtype": "float64"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "Kaelteanlage", "1": "KaeltekreisAllgemein", "2": "Kaeltemaschine", "3": "Kaeltemengenzaehler", "4": "Klappe", "5": "Pumpe", "6": "RKW", "7": "Regler", "8": "Ruecklauf", "9": "Ventil", "10": "Vorlauf", "11": "Waermemengenzaehler"}}}}, {"name": "ScoreKomponente", "dtype": "float64"}, {"name": "Datenpunkt", "dtype": "string"}, {"name": "ScoreDatenpunkt", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114958.88908145581, "num_examples": 467}, {"name": "test", "num_bytes": 18282, "num_examples": 73}, {"name": "valid", "num_bytes": 18282, "num_examples": 73}], "download_size": 63616, "dataset_size": 151522.88908145583}}
|
2023-09-22T09:29:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "kaelteErzeugen-200-undersampled"
More Information needed
|
[
"# Dataset Card for \"kaelteErzeugen-200-undersampled\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"kaelteErzeugen-200-undersampled\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"kaelteErzeugen-200-undersampled\"\n\nMore Information needed"
] |
c24d25f0670b938acc573fa152d3a26d0a1cbc6a
|
# Dataset Card for "ruovaqa_demo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Mihaj/ruohqa_demo
|
[
"region:us"
] |
2023-09-22T09:33:36+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}, {"name": "context", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "title", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 384292, "num_examples": 968}, {"name": "validation", "num_bytes": 165616, "num_examples": 416}], "download_size": 287881, "dataset_size": 549908}}
|
2023-09-22T09:34:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ruovaqa_demo"
More Information needed
|
[
"# Dataset Card for \"ruovaqa_demo\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ruovaqa_demo\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ruovaqa_demo\"\n\nMore Information needed"
] |
ab2c77627199285e0721aad5e4a210a82f31e163
|
# Medical Staff People Tracking
The dataset contains a collection of frames extracted from videos captured within a **hospital environment**. The **bounding boxes** are drawn around the **doctors, nurses, and other people** who appear in the video footage.
The dataset can be used for **computer vision in healthcare settings** and *the development of systems that monitor medical staff activities and patient flow, analyze wait times, and assess the efficiency of hospital processes*.

# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=medical-staff-people-tracking) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
The dataset consists of two folders with frames from videos recorded in a hospital.
Each folder includes:
- **images**: folder with original frames from the video,
- **boxes**: visualized data labeling for the images in the previous folder,
- **.csv file**: file with id and path of each frame in the "images" folder,
- **annotations.xml**: contains coordinates of the bounding boxes, created for the original frames
# Data Format
Each frame from `images` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the bounding boxes for people tracking. For each point, the x and y coordinates are provided.
### Classes:
- **doctor** - doctor in the frame
- **nurse** - nurse in the frame
- **others** - other people (not medical staff)
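The bounding boxes in `annotations.xml` can be read with the Python standard library. Below is a minimal sketch, assuming a CVAT-style export in which each `<track>` element carries a class label and its `<box>` children carry per-frame `xtl`/`ytl`/`xbr`/`ybr` coordinates — these tag and attribute names are assumptions based on common CVAT exports, not confirmed by the card:

```python
# Minimal sketch: parsing bounding boxes from a CVAT-style annotations.xml.
# The tag/attribute names ("track", "box", "xtl", ...) are assumptions based
# on common CVAT exports, not confirmed by this dataset card.
import xml.etree.ElementTree as ET

SAMPLE = """<annotations>
  <track id="0" label="doctor">
    <box frame="0" xtl="10.5" ytl="20.0" xbr="110.5" ybr="220.0" occluded="0"/>
  </track>
  <track id="1" label="nurse">
    <box frame="0" xtl="300.0" ytl="40.0" xbr="380.0" ybr="260.0" occluded="0"/>
  </track>
</annotations>"""

def parse_boxes(xml_text):
    """Return a list of (label, frame, (xtl, ytl, xbr, ybr)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for track in root.iter("track"):
        label = track.get("label")
        for box in track.iter("box"):
            coords = tuple(float(box.get(k)) for k in ("xtl", "ytl", "xbr", "ybr"))
            boxes.append((label, int(box.get("frame")), coords))
    return boxes

boxes = parse_boxes(SAMPLE)
```

The inline `SAMPLE` string stands in for a real `annotations.xml`; in practice you would pass the file contents instead.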
# Example of the XML-file
# Object tracking can be performed in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=medical-staff-people-tracking)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/medical-staff-people-tracking
|
[
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"code",
"medical",
"region:us"
] |
2023-09-22T09:35:27+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-to-image", "object-detection"], "tags": ["code", "medical"], "dataset_info": [{"config_name": "video_01", "features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "sequence": [{"name": "track_id", "dtype": "uint32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "nurse", "1": "doctor", "2": "other_people"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "occluded", "dtype": "uint8"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 27856, "num_examples": 64}], "download_size": 23409734, "dataset_size": 27856}, {"config_name": "video_02", "features": [{"name": "id", "dtype": "int32"}, {"name": "name", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "shapes", "sequence": [{"name": "track_id", "dtype": "uint32"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "nurse", "1": "doctor", "2": "other_people"}}}}, {"name": "type", "dtype": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "float32"}, {"name": "occluded", "dtype": "uint8"}, {"name": "attributes", "sequence": [{"name": "name", "dtype": "string"}, {"name": "text", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 37214, "num_examples": 73}], "download_size": 28155019, "dataset_size": 37214}]}
|
2023-10-09T06:55:26+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #code #medical #region-us
|
# Medical Staff People Tracking
The dataset contains a collection of frames extracted from videos captured within a hospital environment. The bounding boxes are drawn around the doctors, nurses, and other people who appear in the video footage.
The dataset can be used for computer vision in healthcare settings and *the development of systems that monitor medical staff activities, patient flow, analyze wait times, and assess the efficiency of hospital processes*.

# Example of the XML-file
",
"# Example of the XML-file \n",
"# Example of the XML-file \n# Example of the XML-file \n on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T01:49:31.523366](https://huggingface.co/datasets/open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3/blob/main/results_2023-10-26T01-49-31.523366.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4170511744966443,
"em_stderr": 0.005049513544068899,
"f1": 0.4729456795302025,
"f1_stderr": 0.004847240610421039,
"acc": 0.551055157066324,
"acc_stderr": 0.01158136923349411
},
"harness|drop|3": {
"em": 0.4170511744966443,
"em_stderr": 0.005049513544068899,
"f1": 0.4729456795302025,
"f1_stderr": 0.004847240610421039
},
"harness|gsm8k|5": {
"acc": 0.2812736921910538,
"acc_stderr": 0.012384789310940237
},
"harness|winogrande|5": {
"acc": 0.8208366219415943,
"acc_stderr": 0.010777949156047986
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3
|
[
"region:us"
] |
2023-09-22T09:44:15+00:00
|
{"pretty_name": "Evaluation run of Faradaylab/ARIA-70B-V3", "dataset_summary": "Dataset automatically created during the evaluation run of model [Faradaylab/ARIA-70B-V3](https://huggingface.co/Faradaylab/ARIA-70B-V3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T01:49:31.523366](https://huggingface.co/datasets/open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3/blob/main/results_2023-10-26T01-49-31.523366.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4170511744966443,\n \"em_stderr\": 0.005049513544068899,\n \"f1\": 0.4729456795302025,\n \"f1_stderr\": 0.004847240610421039,\n \"acc\": 0.551055157066324,\n \"acc_stderr\": 0.01158136923349411\n },\n \"harness|drop|3\": {\n \"em\": 0.4170511744966443,\n \"em_stderr\": 0.005049513544068899,\n \"f1\": 0.4729456795302025,\n \"f1_stderr\": 0.004847240610421039\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2812736921910538,\n \"acc_stderr\": 0.012384789310940237\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8208366219415943,\n \"acc_stderr\": 0.010777949156047986\n }\n}\n```", "repo_url": "https://huggingface.co/Faradaylab/ARIA-70B-V3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|arc:challenge|25_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T01_49_31.523366", "path": ["**/details_harness|drop|3_2023-10-26T01-49-31.523366.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T01-49-31.523366.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T01_49_31.523366", "path": ["**/details_harness|gsm8k|5_2023-10-26T01-49-31.523366.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T01-49-31.523366.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hellaswag|10_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-22T10-43-51.211297.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T10-43-51.211297.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T10-43-51.211297.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T10-43-51.211297.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T10-43-51.211297.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T10-43-51.211297.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T10-43-51.211297.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T10-43-51.211297.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T10-43-51.211297.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T01_49_31.523366", "path": ["**/details_harness|winogrande|5_2023-10-26T01-49-31.523366.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T01-49-31.523366.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T10_43_51.211297", "path": ["results_2023-09-22T10-43-51.211297.parquet"]}, {"split": "2023_10_26T01_49_31.523366", "path": ["results_2023-10-26T01-49-31.523366.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T01-49-31.523366.parquet"]}]}]}
|
2023-10-26T00:49:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Faradaylab/ARIA-70B-V3
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Faradaylab/ARIA-70B-V3 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "latest" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
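The original card's loading snippet did not survive extraction. A minimal sketch is below; the repo id `open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3` is an assumption based on the model name, while the config pattern (e.g. `harness_winogrande_5`) and the `latest` split come from this card's own metadata:

```python
def config_name(task: str, n_shot: int) -> str:
    # Config names in this dataset follow the pattern "harness_<task>_<n_shot>",
    # e.g. "harness_winogrande_5" (see the config list in the card metadata).
    return f"harness_{task}_{n_shot}"

def load_details(task: str, n_shot: int, split: str = "latest"):
    # Lazy import: requires `pip install datasets` and network access.
    from datasets import load_dataset
    # NOTE: the repo id below is an assumption, not stated in this card.
    repo = "open-llm-leaderboard/details_Faradaylab__ARIA-70B-V3"
    return load_dataset(repo, config_name(task, n_shot), split=split)

# Example (network required):
# details = load_details("winogrande", 5)
```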
## Latest results
These are the latest results from run 2023-10-26T01:49:31.523366 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Faradaylab/ARIA-70B-V3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Faradaylab/ARIA-70B-V3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"latest\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T01:49:31.523366 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Faradaylab/ARIA-70B-V3",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Faradaylab/ARIA-70B-V3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"latest\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T01:49:31.523366 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
19,
31,
167,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Faradaylab/ARIA-70B-V3## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Faradaylab/ARIA-70B-V3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"latest\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T01:49:31.523366 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
c8a11f5176665046c664728964f1b451565765b1
|
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
Gboparoobop/1
|
[
"task_categories:feature-extraction",
"task_categories:text-classification",
"task_categories:token-classification",
"license:creativeml-openrail-m",
"biology",
"art",
"region:us"
] |
2023-09-22T10:10:59+00:00
|
{"license": "creativeml-openrail-m", "task_categories": ["feature-extraction", "text-classification", "token-classification"], "tags": ["biology", "art"]}
|
2023-09-22T10:14:34+00:00
|
[] |
[] |
TAGS
#task_categories-feature-extraction #task_categories-text-classification #task_categories-token-classification #license-creativeml-openrail-m #biology #art #region-us
|
# Dataset Card for Dataset Name
## Dataset Description
- Homepage:
- Repository:
- Paper:
- Leaderboard:
- Point of Contact:
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#task_categories-feature-extraction #task_categories-text-classification #task_categories-token-classification #license-creativeml-openrail-m #biology #art #region-us \n",
"# Dataset Card for Dataset Name",
"## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:",
"### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
58,
8,
24,
32,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#task_categories-feature-extraction #task_categories-text-classification #task_categories-token-classification #license-creativeml-openrail-m #biology #art #region-us \n# Dataset Card for Dataset Name## Dataset Description\n\n- Homepage: \n- Repository: \n- Paper: \n- Leaderboard: \n- Point of Contact:### Dataset Summary\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
3901cb77c99984bab1f9a9b2a1bd8edea44238ce
|
## Few-Shot Intent Recognition Instruction Dataset
Intent recognition datasets were collected and turned into prompts for research on few-shot intent recognition with LLMs.
Writing prompt templates takes imagination; you are welcome to share your ideas in the community.
The `{dataset_name}_prompt` subsets are generated dynamically from the corresponding `{dataset_name}` dataset and `{dataset_name}_template` subset, so the result differs on every run.
Note: since a prompt may exceed the maximum length during training and get truncated, try to design prompts so that they remain usable for GPT training even after truncation.
[Prompt Engineering Guide](https://www.promptingguide.ai/zh/techniques/cot)
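As a rough illustration of that dynamic generation, a `{dataset_name}_prompt` sample could be assembled from labeled examples and a template roughly as below. This is a hedged sketch: `build_prompt` and its parameters are illustrative, not this repository's actual generation code.

```python
import random

# Hedged sketch of how a "{dataset_name}_prompt" sample could be assembled
# from labeled examples and a template. `build_prompt` and its parameters
# are illustrative, not the repository's actual generation code.
def build_prompt(examples, query_text, description, n_shot=3, seed=None):
    rng = random.Random(seed)
    # sample a fresh set of few-shot examples each call, so every run differs
    shots = rng.sample(examples, k=min(n_shot, len(examples)))
    lines = [description, "", "Examples:"]
    for text, intent in shots:
        lines += ["------------", "text: " + text, "intent: " + intent]
    # the query to complete goes last, ending with the "intent:" cue
    lines += ["------------", "text: " + query_text, "intent:"]
    return "\n".join(lines)


examples = [
    ("will i be okay on the gym", "Is it safe to go to the gym indoors if I'm vaccinated?"),
    ("would the vaccine FDA be approved", "Is the vaccine FDA approved?"),
    ("What does the fda think about the covid 19 vaccine?", "Is the vaccine FDA approved?"),
]
prompt = build_prompt(
    examples,
    query_text="If I had my vaccine, is it safe to go to the indoor gym?",
    description="intent recognition.",
    n_shot=2,
    seed=0,
)
print(prompt)
```

Because sampling happens at generation time, the same underlying dataset yields different prompts on every pass, which is the behavior described above.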
### Sample Examples
<details>
<summary>train subset prompt example: (intent: Is it safe to go to the gym indoors if I'm vaccinated?)</summary>
<pre><code>intent recognition.<br>
Examples:
------------
text: will i be okay on the gym
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: I want to go and exercise at the gym, indoors, but I don't know if it's safe?
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: I worry I will catch Covid from the Gym even though I have been vaccinated?
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: What does the fda think about the covid 19 vaccine?
intent: Is the vaccine FDA approved?
------------
text: it's never safe in a gym there are always bacteria everywhere
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: who is the difference between FDA authorization and approval?
intent: Is the vaccine FDA approved?
------------
text: would the vaccine FDA be approved
intent: Is the vaccine FDA approved?
------------
text: If I had my vaccine, is it safe to go to the indoor gym?
intent:
</code></pre>
</details>
<details>
<summary>train subset prompt example: (intent: 考虑一下)</summary>
<pre><code>电销场景意图识别。如果不能确定,请输出 “未知意图”。<br>
Examples:
------------
text: 没关系啦 知道的
intent: 肯定答复
------------
text: 怎么能联系你
intent: 查联系方式
------------
text: 恩。让我想想吧。
intent: 考虑一下
------------
text: 说点有用的
intent: 请讲重点
------------
text: 唉唉
intent: 语气词
------------
text: 说快一点
intent: 请讲重点
------------
text: 再介绍一下
intent: 要求复述
------------
text: 从哪弄到我信息
intent: 质疑隐私安全
------------
text: 哎。。不是的
intent: 不是
------------
text: 给我电话号码
intent: 查联系方式
------------
text: 先看看吧
intent: 考虑一下
------------
text: 怎么知道道我的信息
intent: 质疑隐私安全
------------
text: 哎,再说吧,我再想想
intent: 考虑一下
------------
text: 不,我清醒。
intent: 不是
------------
text: 重说一次
intent: 要求复述
------------
text: 行了,晚安
intent: 肯定答复
------------
text: 额额额额
intent: 语气词
------------
text: 恩。哎再说吧我考虑一下hiahia
intent:
</code></pre>
</details>
<details>
<summary>train subset prompt example: (intent: 污言秽语)</summary>
<pre><code>电销场景意图识别。<br>
Examples:
text: 那留言
intent: 语音信箱<br>
text: 好啊,哈哈,没事,我再找其他的人
intent: 好的<br>
text: 在!
intent: 我在<br>
text: 要打副本,没时间
intent: 没时间<br>
text: 必须去学习!赶快去!
intent: 加快速度<br>
text: 好的。满汉全席送上
intent: 好的<br>
text: 你看到我给你的留言了么
intent: 语音信箱<br>
text: 我在呢。
intent: 我在<br>
text: 傻逼?
intent: 污言秽语<br>
text: 胸大无脑
intent: 污言秽语<br>
text: 不着急。
intent: 请等一等<br>
text: 恩 我是团子
intent: 做自我介绍<br>
text: 我是收电费的
intent: 做自我介绍<br>
text: 我现在没时间接电话呢,待会儿打给你。
intent: 没时间<br>
text: 好的。哈哈。初六见。我去睡觉啦
intent: 好的<br>
text: 在啊
intent: 我在<br>
text: 包皮猩
intent: 污言秽语<br>
text: 离开一下
intent: 请等一等<br>
text: 有病
intent: 污言秽语<br>
text: 给我留个言
intent: 语音信箱<br>
text: 你等一下
intent: 请等一等<br>
text: 立刻马上!!!快快快快
intent: 加快速度<br>
text: 我是郭钊源
intent: 做自我介绍<br>
text: 快点儿
intent: 加快速度<br>
text: 没时间睡觉怎么办吖
intent: 没时间<br>
text: 吃!你来
intent:
</code></pre>
</details>
<details>
<summary>test subset prompt example: (intent: 未能理解)</summary>
<pre><code>电销场景意图识别。如果不能确定,请输出 “未知意图”。<br>
Examples:
------------
text: 讲什么
intent: 未能理解
------------
text: 等着吧!
intent: 请等一等
------------
text: 搞不懂你
intent: 未能理解
------------
text: 我实在是不想弄了,我那时事多没时间啊!
intent: 没时间
------------
text: 这你自己不清楚自己啊,还不晓得
intent: 不清楚
------------
text: 没问题放心吧
intent: 肯定(没问题)
------------
text: 公司名字是什么
intent: 查公司介绍
------------
text: 不放弃
intent: 肯定(需要)
------------
text: 老师也不懂
intent:
</code></pre>
</details>
<details>
<summary>test subset prompt example: (intent: 肯定(嗯嗯))</summary>
<pre><code>电销场景意图识别。
不确定时请输出 “未知领域”。<br>
Examples:
------------
text: 截止期过了多少天
intent: 疑问(时长)
------------
text: 不了
intent: 不需要
------------
text: 不行,不够不够
intent: 否定(不可以)
------------
text: 4个1
intent: 答数值
------------
text: 辽宁
intent: 地址
------------
text: 不清楚
intent: 不清楚
------------
text: 店里
intent: 地址
------------
text: 嗯啊嗯嗯来吧
intent: 肯定(嗯嗯)
------------
text: 利息比别的贷款高
intent: 价格太高
------------
text: 算23点,[9,4,8,2
intent: 答数值
------------
text: 可以还得上
intent: 会按时处理
------------
text: 对啊 就是不行
intent: 否定(不可以)
------------
text: 真的不便宜
intent: 价格太高
------------
text: 嗯,thanks
intent: 肯定(嗯嗯)
------------
text: 这你自己不清楚自己啊,还不晓得
intent: 不清楚
------------
text: 我找找吧
intent: 会按时处理
------------
text: 这是拖欠几天了
intent: 疑问(时长)
------------
text: 不需要证据
intent: 不需要
------------
text: 噢,谢谢
intent: 肯定(嗯嗯)
------------
text: 恩恩,想我
intent:
</code></pre>
</details>
<details>
<summary>test subset prompt example: (intent: 不信任)</summary>
<pre><code>意图识别。<br>
Examples:
text: 你不要答非所问
intent: 答非所问<br>
text: 费用搞错了
intent: 否定(错误)<br>
text: 我给你留言了,你木有回
intent: 语音信箱<br>
text: 小骗子
intent: 不信任<br>
text: 昆明
intent: 实体(地址)<br>
text: 哦,行,好了你发信息给我
intent: 肯定(可以)<br>
text: 哦,这样啊,没时间就算了
intent: 没时间<br>
text: 我错了,别欺负我了
intent: 请求谅解<br>
text: 万一你们是骗子怎么办
intent: 不信任<br>
text: 我太乃刀了
intent: 无关领域<br>
text: 讲清楚重要的
intent: 请讲重点<br>
text: 骗子,好好说话
intent:
</code></pre>
</details>
### Data Sources
The datasets were collected from the web as follows:
#### Intent Recognition
Intent recognition (English)
| Dataset | Language | Original data / project | Samples | Description | Mirror download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| ATIS | English | [ATIS](https://paperswithcode.com/dataset/atis); [ATIS_dataset](https://github.com/howl-anderson/ATIS_dataset) | 4978 (training) + 893 (testing) | Public dataset from Microsoft (Airline Travel Information System) for the intent recognition task. | [atis_intents](https://huggingface.co/datasets/fathyshalab/atis_intents) |
| conv_intent | English | [conv_intent](https://huggingface.co/datasets/generalization/conv_intent_Full-p_1) | 13.8K | | [intent-recogniton](https://www.kaggle.com/code/upsunny/intent-recogniton-based-on-bert) |
| banking77 | English | [banking77](https://arxiv.org/abs/2003.04807); [task-specific-datasets](https://github.com/PolyAI-LDN/task-specific-datasets) | 13,083 | Online banking queries dataset. | [banking77](https://huggingface.co/datasets/banking77) |
| mobile_assistant | English | [Intent-Classification-large](https://huggingface.co/datasets/dipesh/Intent-Classification-large) | 17K (samples whose intent is "others" were removed) | | |
| amazon_massive_intent_en_us | English | [amazon_massive_intent_en_us](https://huggingface.co/datasets/SetFit/amazon_massive_intent_en-US) | 16.5K | Alexa virtual assistant | [nlu_evaluation_data](https://huggingface.co/datasets/nlu_evaluation_data) |
| snips_built_in_intents | English | [nlu-benchmark](https://github.com/sonos/nlu-benchmark); [benchmarking](https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d) | 328 | | [snips_built_in_intents](https://huggingface.co/datasets/snips_built_in_intents) |
| vira_intents | English | [vira-intent-classification](https://github.com/IBM/vira-intent-classification) | 10.9K | COVID-19 vaccine intents | [vira_intents_live](https://huggingface.co/datasets/codesj/vira-intents-live); [vira_intents_live](https://huggingface.co/datasets/vira-chatbot/vira-intents-live) |
| intent_classification | English | [intent_classification](https://huggingface.co/datasets/Bhuvaneshwari/intent_classification) | 13.8K | | |
| Out-of-Scope | English | [Out-of-scope intent classification dataset](https://tianchi.aliyun.com/dataset/94112); [clinc150](https://archive.ics.uci.edu/dataset/570/clinc150) | | Provides a way to evaluate intent classification models on "out-of-scope" inputs. | [Out-of-Scope Intent Classification Dataset](https://www.kaggle.com/datasets/stefanlarson/outofscope-intent-classification-dataset); [clinc_oos](https://huggingface.co/datasets/clinc_oos); [xjlulu/ntu_adl_intent](https://huggingface.co/datasets/xjlulu/ntu_adl_intent) |
| finance21 | English | [finance21](https://github.com/Dark-Sied/Intent_Classification/) | | | |
| book6 | English | [book6](https://github.com/ajinkyaT/CNN_Intent_Classification) | 12000 | Six categories, namely AddToPlaylist, BookRestaurant, GetWeather, RateBook, SearchCreativeWork and SearchScreeningEvent, each having nearly 2000 sentences. | [Intent Recognition Dataset](https://www.kaggle.com/datasets/himanshunayal/intent-recognition-dataset) |
| bi_text | English | [bi_text](https://www.kaggle.com/datasets/bitext/training-dataset-for-chatbotsvirtual-assistants); [customer-support-intent-dataset](https://www.kaggle.com/datasets/scodepy/customer-support-intent-dataset) | 8175 | Covers the "customer support" domain, with 27 intents grouped into 11 categories. The intents were selected from Bitext's 20 domain-specific datasets (banking, retail, utilities, ...), keeping the generic intents shared across domains. | |
| small talk | English | [Small Talk](https://www.kaggle.com/datasets/salmanfaroz/small-talk-intent-classification-data) | 3000 | Small talk provides users with a casual conversational flow with a chatbot. | |
| chatbots | English | [Chatbots: Intent Recognition Dataset](https://www.kaggle.com/datasets/elvinagammed/chatbots-intent-recognition-dataset) | | Data for classification, recognition and chatbot development. | |
| ide_intent | English | [intent-classification-for-ide-functionalities](https://www.kaggle.com/datasets/abdullahusmani86/intent-classification-for-ide-functionalities) | 27019 | Intent classification dataset for IDE functionalities. | |
| star_wars | English | [star-wars](https://www.kaggle.com/datasets/aslanahmedov/star-wars-chat-bot) | 100 | Contains various data about the Star Wars universe. | |
| jarvis_intent | English | [jarvisintent](https://www.kaggle.com/datasets/joelyu/jarvisintent) | 4556 | | |
| dnd_style_intents | English | | train: 131K; eval: 16.3K; test: 16.3K; | Designed for the intent classification module of dialogue systems built by game developers. Roughly 163K examples over more than 17 intents. | [neurae/dnd_style_intents](https://huggingface.co/datasets/neurae/dnd_style_intents) |
Intent recognition (Chinese)
| Dataset | Language | Original data / project | Samples | Description | Mirror download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| amazon_massive_intent_zh_cn | Chinese | [amazon_massive_intent_zh_cn](https://huggingface.co/datasets/SetFit/amazon_massive_intent_zh-CN) | 16.5K | Alexa virtual assistant | |
| THU Intent Corpus | Chinese | | ~6,000 sentences in total | Chinese intent recognition and slot filling dataset released by Tsinghua University, covering 15 domains and 27 intent categories. | |
| CrossWOZ | Chinese | [CrossWOZ](https://github.com/thu-coai/CrossWOZ) | | CrossWOZ is the first large-scale Chinese cross-domain Wizard-of-Oz task-oriented dataset. It contains 6K dialogue sessions and 102K utterances across 5 domains: hotel, restaurant, attraction, metro and taxi. The corpus also carries rich annotations of dialogue states and dialogue acts on both the user and system sides. | |
| CMID | Chinese | [CMID](https://github.com/ishine/CMID) | | Used for the Chinese medical QA intent understanding task. | |
| dmslots | Chinese | [dmslots](https://raw.githubusercontent.com/kids/bert_nlu/main/data/dmslots.txt) | | Weakly labeled data. | |
| SMP2017 | Chinese | [SMP2017-ECDT](http://ir.hit.edu.cn/SMP2017-ECDT); [1709.10217](https://arxiv.org/abs/1709.10217); [SMP2017ECDT-DATA](https://github.com/HITlilingzhi/SMP2017ECDT-DATA) | | Chinese human-machine dialogue technology evaluation of the 6th National Conference on Social Media Processing (SMP2017-ECDT). | [ChineseNLPCorpus](https://github.com/InsaneLife/ChineseNLPCorpus) |
| SMP2019 | Chinese | [SMP2019](https://conference.cipsc.org.cn/smp2019/evaluation.html); [smp2019ecdt_task1](https://adamszq.github.io/smp2019ecdt_task1/) | | SMP2019 ECDT Chinese human-machine dialogue technology evaluation. | [SMP2017-2019-ECDT-data](https://github.com/hml-ubt/SMP2017-2019-ECDT-data); [ChineseNLPCorpus](https://github.com/InsaneLife/ChineseNLPCorpus) |
| a_intent | Chinese | [intent recognition](https://blog.csdn.net/weixin_42551154/article/details/129480825); [a_intent](https://pan.baidu.com/s/19_oqY4bC_lJa_7Mc6lxU7w?pwd=v4bi) | 12000 | A multi-class intent recognition task: determine the user's intent from the input text. | |
| RiSAWOZ | Chinese | [RiSAWOZ](https://gem-benchmark.com/data_cards/RiSAWOZ) | | RiSAWOZ is a Chinese dialogue dataset. It can be used to study various dialogue tasks such as dialogue state tracking, dialogue-context-to-text generation, coreference resolution, and unified generative ellipsis and coreference resolution. | [GEM/RiSAWOZ](https://huggingface.co/datasets/GEM/RiSAWOZ) |
| IMCS-IR | Chinese | [CBLUE: Chinese Biomedical Language Understanding Evaluation benchmark](https://tianchi.aliyun.com/dataset/95414); [CBLUE IMCS-IR intelligent diagnosis dialogue intent recognition](https://github.com/winninghealth/imcs-ir) | | CBLUE Chinese medical information processing challenge benchmark. | |
#### Text Classification
| Dataset | Language | Original data / project | Samples | Description | Mirror download |
| :--- | :---: | :---: | :---: | :---: | :---: |
| ag_news | English | [AG_corpus_of_news_articles](http://www.di.unipi.it/~gulli/AG_corpus_of_news_articles.html); [Character-level Convolutional Networks for Text Classification](https://arxiv.org/abs/1509.01626); [ag_news](https://huggingface.co/datasets/ag_news) | 120K | AG's news topic classification dataset. | |
| daily_dialog | English | [DailyDialog](http://yanran.li/dailydialog) | 11.1K | Act labels: dummy (0), inform (1), question (2), directive (3), commissive (4). Emotion labels: no emotion (0), anger (1), disgust (2), fear (3), happiness (4), sadness (5), surprise (6). | [daily_dialog](https://huggingface.co/datasets/daily_dialog) |
| chinese_news_title | Chinese | [Chinese news title classification](https://aistudio.baidu.com/datasetdetail/103654) | | The Chinese news title dataset contains 47,952 titles for training across 32 classes (news topics) and 15,986 titles for testing. After removing titles containing special characters that cannot be processed, 47,850 training titles and 15,950 test titles remain (i.e. #DataSet1). | [Baidu Netdisk](https://pan.baidu.com/s/1mgBTFOO) |
#### Other Task Types
| Dataset | Language | Task | Original data / project | Samples | Description | Mirror download |
| :--- | :---: | :-----: | :---: | :---: | :---: | :---: |
| suicide_intent | English | Emotion classification | [suicide-intent](https://www.kaggle.com/datasets/hetarthraval/suicide-intent-detection-dataset) | 3731 | Four classes: happy, normal, sad and suicidal intent. | |
| CARER | English | Emotion classification | [emotion](https://paperswithcode.com/dataset/emotion) | 20K | Emotion is a dataset of English Twitter messages with six basic emotions: anger, fear, joy, love, sadness and surprise. | [dair-ai/emotion](https://huggingface.co/datasets/dair-ai/emotion) |
| COIG-CQIA | Chinese | Instruction tuning | [CValues](https://arxiv.org/abs/2307.09705); [paralym/COIG-CQIA](https://github.com/paralym/COIG-CQIA) | | A high-quality instruction-tuning dataset, aiming to provide the Chinese NLP community with instruction data that is high quality and consistent with human interaction behavior. | [m-a-p/COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA) |
| emo2019 | English | Emotion classification | [SemEval-2019 Task 3](https://www.aclweb.org/anthology/S19-2005) | TRAIN: 30160, TEST: 5509 | Emotion detection. Four labels: others (0), happy (1), sad (2), angry (3). | [emo](https://huggingface.co/datasets/emo) |
### Data Loading
```python
#!/usr/bin/python3
# -*- coding: utf-8 -*-
from datasets import load_dataset, concatenate_datasets
name_list = [
    "amazon_massive_intent_en_us_prompt",
    "amazon_massive_intent_zh_cn_prompt",
    "atis_intent_prompt",
    "banking77_prompt",
    "bi_text11_prompt",
    "bi_text27_prompt",
    "book6_prompt",
    # "chinese_news_title_prompt",
    "cmid_4class_prompt",
    "cmid_36class_prompt",
    "conv_intent_prompt",
    "crosswoz_prompt",
    "dmslots_prompt",
    "finance21_prompt",
    "intent_classification_prompt",
    "mobile_assistant_prompt",
    "mtop_intent_prompt",
    "out_of_scope_prompt",
    "small_talk_prompt",
    "smp2017_task1_prompt",
    "smp2019_task1_domain_prompt",
    "smp2019_task1_intent_prompt",
    "snips_built_in_intents_prompt",
    "telemarketing_intent_en_prompt",
    "telemarketing_intent_cn_prompt",
    "vira_intents_prompt",
]

train_dataset = list()
for name in name_list:
    dataset = load_dataset(
        path="qgyd2021/few_shot_intent_sft",
        name=name,
        split="train",
    )
    train_dataset.append(dataset)
train_dataset = concatenate_datasets(train_dataset)

valid_dataset = list()
for name in name_list:
    dataset = load_dataset(
        path="qgyd2021/few_shot_intent_sft",
        name=name,
        split="test",
    )
    valid_dataset.append(dataset)
valid_dataset = concatenate_datasets(valid_dataset)
```
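Following up on the truncation note in the introduction, below is a minimal, hedged sketch of one way to keep truncated prompts usable: drop whole leading few-shot blocks so the final query and its `intent:` cue survive. `truncate_prompt` is illustrative and not part of this repository; it counts characters where a real training pipeline would count tokens.

```python
# Hedged sketch (not part of this repository) of the truncation caveat from
# the introduction: when a few-shot prompt exceeds the context budget, drop
# whole leading example blocks so the final "text: ... / intent:" query is
# preserved. Character counts stand in for token counts for simplicity.
def truncate_prompt(prompt: str, max_chars: int, sep: str = "------------") -> str:
    blocks = prompt.split(sep)
    # blocks[0] is the task description, blocks[-1] is the query to complete;
    # everything in between is a few-shot example, oldest first.
    while len(sep.join(blocks)) > max_chars and len(blocks) > 2:
        del blocks[1]  # drop the oldest example first
    return sep.join(blocks)


header = "intent recognition.\n"
shot = "\ntext: hello there\nintent: greeting\n"
query = "\ntext: bye bye\nintent:"
demo = header + "------------" + shot + "------------" + shot + "------------" + query

print(truncate_prompt(demo, max_chars=len(demo)))      # fits: unchanged
print(truncate_prompt(demo, max_chars=len(demo) - 1))  # one example dropped
```

Dropping examples block by block, rather than cutting at a raw length limit, is what keeps a truncated sample well-formed for GPT-style training.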
### References
<details>
<summary>Data sources referenced; expand to view</summary>
<pre><code>
https://huggingface.co/datasets/qanastek/MASSIVE
https://huggingface.co/datasets/fathyshalab/atis_intents
https://huggingface.co/datasets/generalization/conv_intent_Full-p_1
https://huggingface.co/datasets/banking77
https://huggingface.co/datasets/dipesh/Intent-Classification-large
https://huggingface.co/datasets/SetFit/amazon_massive_intent_en-US
https://huggingface.co/datasets/SetFit/amazon_massive_intent_zh-CN
https://huggingface.co/datasets/SetFit/amazon_massive_intent_zh-TW
https://huggingface.co/datasets/snips_built_in_intents
https://huggingface.co/datasets/zapsdcn/citation_intent
https://huggingface.co/datasets/ibm/vira-intents
https://huggingface.co/datasets/mteb/mtop_intent
https://huggingface.co/datasets/Bhuvaneshwari/intent_classification
https://huggingface.co/datasets/ibm/vira-intents-live
https://huggingface.co/datasets/ebrigham/nl_banking_intents
https://pan.baidu.com/s/19_oqY4bC_lJa_7Mc6lxU7w?pwd=v4bi
https://gitee.com/a2798063/SMP2019/tree/master
https://cold-eye.github.io/post/nlp-corpus/
https://www.cluebenchmarks.com/introduce.html
https://github.com/search?q=chinese%20intent&type=repositories
</code></pre>
</details>
|
qgyd2021/few_shot_intent_sft
|
[
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:100M<n<1B",
"language:zh",
"language:en",
"license:apache-2.0",
"few-shot",
"intent",
"arxiv:2003.04807",
"arxiv:1709.10217",
"arxiv:1509.01626",
"arxiv:2307.09705",
"region:us"
] |
2023-09-22T10:26:09+00:00
|
{"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["100M<n<1B"], "task_categories": ["text-classification", "text-generation", "text2text-generation"], "tags": ["few-shot", "intent"], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "not_applicable", "dtype": "bool"}, {"name": "intent", "dtype": "string"}, {"name": "intent_version", "dtype": "string"}, {"name": "n_way", "dtype": "int32"}, {"name": "n_shot", "dtype": "int32"}, {"name": "description", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22484898, "num_examples": 22080}, {"name": "test", "num_bytes": 1853817, "num_examples": 2477}], "download_size": 7816475, "dataset_size": 24338715}}
|
2023-12-20T03:38:26+00:00
|
[
"2003.04807",
"1709.10217",
"1509.01626",
"2307.09705"
] |
[
"zh",
"en"
] |
TAGS
#task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100M<n<1B #language-Chinese #language-English #license-apache-2.0 #few-shot #intent #arxiv-2003.04807 #arxiv-1709.10217 #arxiv-1509.01626 #arxiv-2307.09705 #region-us
|
Few-Shot Intent Recognition Instruction Dataset
------------
Intent recognition datasets were collected and turned into prompts for research on few-shot intent recognition with LLMs.
Writing prompt templates takes imagination; you are welcome to share your ideas in the community.
The '{dataset\_name}\_prompt' subsets are generated dynamically from the corresponding '{dataset\_name}' dataset and '{dataset\_name}\_template' subset, so the result differs on every run.
Note: since a prompt may exceed the maximum length during training and get truncated, try to design prompts so that they remain usable for GPT training even after truncation.
Prompt Engineering Guide
### Sample Examples
train subset prompt example: (intent: Is it safe to go to the gym indoors if I'm vaccinated?)
```
intent recognition.
Examples:
------------
text: will i be okay on the gym
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: I want to go and exercise at the gym, indoors, but I don't know if it's safe?
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: I worry I will catch Covid from the Gym even though I have been vaccinated?
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: What does the fda think about the covid 19 vaccine?
intent: Is the vaccine FDA approved?
------------
text: it's never safe in a gym there are always bacteria everywhere
intent: Is it safe to go to the gym indoors if I'm vaccinated?
------------
text: who is the difference between FDA authorization and approval?
intent: Is the vaccine FDA approved?
------------
text: would the vaccine FDA be approved
intent: Is the vaccine FDA approved?
------------
text: If I had my vaccine, is it safe to go to the indoor gym?
intent:
```
train subset prompt example: (intent: 考虑一下)
```
电销场景意图识别。如果不能确定,请输出 “未知意图”。
Examples:
------------
text: 没关系啦 知道的
intent: 肯定答复
------------
text: 怎么能联系你
intent: 查联系方式
------------
text: 恩。让我想想吧。
intent: 考虑一下
------------
text: 说点有用的
intent: 请讲重点
------------
text: 唉唉
intent: 语气词
------------
text: 说快一点
intent: 请讲重点
------------
text: 再介绍一下
intent: 要求复述
------------
text: 从哪弄到我信息
intent: 质疑隐私安全
------------
text: 哎。。不是的
intent: 不是
------------
text: 给我电话号码
intent: 查联系方式
------------
text: 先看看吧
intent: 考虑一下
------------
text: 怎么知道道我的信息
intent: 质疑隐私安全
------------
text: 哎,再说吧,我再想想
intent: 考虑一下
------------
text: 不,我清醒。
intent: 不是
------------
text: 重说一次
intent: 要求复述
------------
text: 行了,晚安
intent: 肯定答复
------------
text: 额额额额
intent: 语气词
------------
text: 恩。哎再说吧我考虑一下hiahia
intent:
```
train subset prompt example: (intent: 污言秽语)
```
电销场景意图识别。
Examples:
text: 那留言
intent: 语音信箱
text: 好啊,哈哈,没事,我再找其他的人
intent: 好的
text: 在!
intent: 我在
text: 要打副本,没时间
intent: 没时间
text: 必须去学习!赶快去!
intent: 加快速度
text: 好的。满汉全席送上
intent: 好的
text: 你看到我给你的留言了么
intent: 语音信箱
text: 我在呢。
intent: 我在
text: 傻逼?
intent: 污言秽语
text: 胸大无脑
intent: 污言秽语
text: 不着急。
intent: 请等一等
text: 恩 我是团子
intent: 做自我介绍
text: 我是收电费的
intent: 做自我介绍
text: 我现在没时间接电话呢,待会儿打给你。
intent: 没时间
text: 好的。哈哈。初六见。我去睡觉啦
intent: 好的
text: 在啊
intent: 我在
text: 包皮猩
intent: 污言秽语
text: 离开一下
intent: 请等一等
text: 有病
intent: 污言秽语
text: 给我留个言
intent: 语音信箱
text: 你等一下
intent: 请等一等
text: 立刻马上!!!快快快快
intent: 加快速度
text: 我是郭钊源
intent: 做自我介绍
text: 快点儿
intent: 加快速度
text: 没时间睡觉怎么办吖
intent: 没时间
text: 吃!你来
intent:
```
test subset prompt example: (intent: 未能理解)
```
电销场景意图识别。如果不能确定,请输出 “未知意图”。
Examples:
------------
text: 讲什么
intent: 未能理解
------------
text: 等着吧!
intent: 请等一等
------------
text: 搞不懂你
intent: 未能理解
------------
text: 我实在是不想弄了,我那时事多没时间啊!
intent: 没时间
------------
text: 这你自己不清楚自己啊,还不晓得
intent: 不清楚
------------
text: 没问题放心吧
intent: 肯定(没问题)
------------
text: 公司名字是什么
intent: 查公司介绍
------------
text: 不放弃
intent: 肯定(需要)
------------
text: 老师也不懂
intent:
```
test subset prompt example: (intent: 肯定(嗯嗯))
```
电销场景意图识别。
不确定时请输出 “未知领域”。
Examples:
------------
text: 截止期过了多少天
intent: 疑问(时长)
------------
text: 不了
intent: 不需要
------------
text: 不行,不够不够
intent: 否定(不可以)
------------
text: 4个1
intent: 答数值
------------
text: 辽宁
intent: 地址
------------
text: 不清楚
intent: 不清楚
------------
text: 店里
intent: 地址
------------
text: 嗯啊嗯嗯来吧
intent: 肯定(嗯嗯)
------------
text: 利息比别的贷款高
intent: 价格太高
------------
text: 算23点,9,4,8,2
intent: 答数值
------------
text: 可以还得上
intent: 会按时处理
------------
text: 对啊 就是不行
intent: 否定(不可以)
------------
text: 真的不便宜
intent: 价格太高
------------
text: 嗯,thanks
intent: 肯定(嗯嗯)
------------
text: 这你自己不清楚自己啊,还不晓得
intent: 不清楚
------------
text: 我找找吧
intent: 会按时处理
------------
text: 这是拖欠几天了
intent: 疑问(时长)
------------
text: 不需要证据
intent: 不需要
------------
text: 噢,谢谢
intent: 肯定(嗯嗯)
------------
text: 恩恩,想我
intent:
```
test subset prompt example: (intent: 不信任)
```
意图识别。
Examples:
text: 你不要答非所问
intent: 答非所问
text: 费用搞错了
intent: 否定(错误)
text: 我给你留言了,你木有回
intent: 语音信箱
text: 小骗子
intent: 不信任
text: 昆明
intent: 实体(地址)
text: 哦,行,好了你发信息给我
intent: 肯定(可以)
text: 哦,这样啊,没时间就算了
intent: 没时间
text: 我错了,别欺负我了
intent: 请求谅解
text: 万一你们是骗子怎么办
intent: 不信任
text: 我太乃刀了
intent: 无关领域
text: 讲清楚重要的
intent: 请讲重点
text: 骗子,好好说话
intent:
```
### Data Sources
The datasets were collected from the web as follows:
#### Intent Recognition
Intent recognition (English)
Intent recognition (Chinese)
#### Text Classification
#### Other Task Types
### Data Loading
### References
Data sources referenced; expand to view
```
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
URL
```
|
[
"### 样本示例\n\n\n\ntrain subset prompt 示例: (intent: Is it safe to go to the gym indoors if I'm vaccinated?)\n\n```\nintent recognition. \n\nExamples:\n------------\ntext: will i be okay on the gym\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: I want to go and exercise at the gym, indoors, but I don't know if it's safe?\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: I worry I will catch Covid from the Gym even though I have been vaccinated?\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: What does the fda think about the covid 19 vaccine?\nintent: Is the vaccine FDA approved?\n------------\ntext: it's never safe in a gym there are always bacteria everywhere\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: who is the difference between FDA authorization and approval?\nintent: Is the vaccine FDA approved?\n------------\ntext: would the vaccine FDA be approved\nintent: Is the vaccine FDA approved?\n------------\ntext: If I had my vaccine, is it safe to go to the indoor gym?\nintent:\n\n```\n\n\n\ntrain subset prompt 示例: (intent: 考虑一下)\n\n```\n电销场景意图识别。如果不能确定,请输出 “未知意图”。 \n\nExamples:\n------------\ntext: 没关系啦 知道的\nintent: 肯定答复\n------------\ntext: 怎么能联系你\nintent: 查联系方式\n------------\ntext: 恩。让我想想吧。\nintent: 考虑一下\n------------\ntext: 说点有用的\nintent: 请讲重点\n------------\ntext: 唉唉\nintent: 语气词\n------------\ntext: 说快一点\nintent: 请讲重点\n------------\ntext: 再介绍一下\nintent: 要求复述\n------------\ntext: 从哪弄到我信息\nintent: 质疑隐私安全\n------------\ntext: 哎。。不是的\nintent: 不是\n------------\ntext: 给我电话号码\nintent: 查联系方式\n------------\ntext: 先看看吧\nintent: 考虑一下\n------------\ntext: 怎么知道道我的信息\nintent: 质疑隐私安全\n------------\ntext: 哎,再说吧,我再想想\nintent: 考虑一下\n------------\ntext: 不,我清醒。\nintent: 不是\n------------\ntext: 重说一次\nintent: 要求复述\n------------\ntext: 行了,晚安\nintent: 肯定答复\n------------\ntext: 额额额额\nintent: 语气词\n------------\ntext: 
恩。哎再说吧我考虑一下hiahia\nintent:\n\n```\n\n\n\ntrain subset prompt 示例: (intent: 污言秽语)\n\n```\n电销场景意图识别。 \n\nExamples:\ntext: 那留言\nintent: 语音信箱 \n\ntext: 好啊,哈哈,没事,我再找其他的人\nintent: 好的 \n\ntext: 在!\nintent: 我在 \n\ntext: 要打副本,没时间\nintent: 没时间 \n\ntext: 必须去学习!赶快去!\nintent: 加快速度 \n\ntext: 好的。满汉全席送上\nintent: 好的 \n\ntext: 你看到我给你的留言了么\nintent: 语音信箱 \n\ntext: 我在呢。\nintent: 我在 \n\ntext: 傻逼?\nintent: 污言秽语 \n\ntext: 胸大无脑\nintent: 污言秽语 \n\ntext: 不着急。\nintent: 请等一等 \n\ntext: 恩 我是团子\nintent: 做自我介绍 \n\ntext: 我是收电费的\nintent: 做自我介绍 \n\ntext: 我现在没时间接电话呢,待会儿打给你。\nintent: 没时间 \n\ntext: 好的。哈哈。初六见。我去睡觉啦\nintent: 好的 \n\ntext: 在啊\nintent: 我在 \n\ntext: 包皮猩\nintent: 污言秽语 \n\ntext: 离开一下\nintent: 请等一等 \n\ntext: 有病\nintent: 污言秽语 \n\ntext: 给我留个言\nintent: 语音信箱 \n\ntext: 你等一下\nintent: 请等一等 \n\ntext: 立刻马上!!!快快快快\nintent: 加快速度 \n\ntext: 我是郭钊源\nintent: 做自我介绍 \n\ntext: 快点儿\nintent: 加快速度 \n\ntext: 没时间睡觉怎么办吖\nintent: 没时间 \n\ntext: 吃!你来\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 未能理解)\n\n```\n电销场景意图识别。如果不能确定,请输出 “未知意图”。 \n\nExamples:\n------------\ntext: 讲什么\nintent: 未能理解\n------------\ntext: 等着吧!\nintent: 请等一等\n------------\ntext: 搞不懂你\nintent: 未能理解\n------------\ntext: 我实在是不想弄了,我那时事多没时间啊!\nintent: 没时间\n------------\ntext: 这你自己不清楚自己啊,还不晓得\nintent: 不清楚\n------------\ntext: 没问题放心吧\nintent: 肯定(没问题)\n------------\ntext: 公司名字是什么\nintent: 查公司介绍\n------------\ntext: 不放弃\nintent: 肯定(需要)\n------------\ntext: 老师也不懂\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 肯定(嗯嗯))\n\n```\n电销场景意图识别。\n不确定时请输出 “未知领域”。 \n\nExamples:\n------------\ntext: 截止期过了多少天\nintent: 疑问(时长)\n------------\ntext: 不了\nintent: 不需要\n------------\ntext: 不行,不够不够\nintent: 否定(不可以)\n------------\ntext: 4个1\nintent: 答数值\n------------\ntext: 辽宁\nintent: 地址\n------------\ntext: 不清楚\nintent: 不清楚\n------------\ntext: 店里\nintent: 地址\n------------\ntext: 嗯啊嗯嗯来吧\nintent: 肯定(嗯嗯)\n------------\ntext: 利息比别的贷款高\nintent: 价格太高\n------------\ntext: 算23点,9,4,8,2\nintent: 答数值\n------------\ntext: 可以还得上\nintent: 会按时处理\n------------\ntext: 对啊 就是不行\nintent: 
否定(不可以)\n------------\ntext: 真的不便宜\nintent: 价格太高\n------------\ntext: 嗯,thanks\nintent: 肯定(嗯嗯)\n------------\ntext: 这你自己不清楚自己啊,还不晓得\nintent: 不清楚\n------------\ntext: 我找找吧\nintent: 会按时处理\n------------\ntext: 这是拖欠几天了\nintent: 疑问(时长)\n------------\ntext: 不需要证据\nintent: 不需要\n------------\ntext: 噢,谢谢\nintent: 肯定(嗯嗯)\n------------\ntext: 恩恩,想我\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 不信任)\n\n```\n意图识别。 \n\nExamples:\ntext: 你不要答非所问\nintent: 答非所问 \n\ntext: 费用搞错了\nintent: 否定(错误) \n\ntext: 我给你留言了,你木有回\nintent: 语音信箱 \n\ntext: 小骗子\nintent: 不信任 \n\ntext: 昆明\nintent: 实体(地址) \n\ntext: 哦,行,好了你发信息给我\nintent: 肯定(可以) \n\ntext: 哦,这样啊,没时间就算了\nintent: 没时间 \n\ntext: 我错了,别欺负我了\nintent: 请求谅解 \n\ntext: 万一你们是骗子怎么办\nintent: 不信任 \n\ntext: 我太乃刀了\nintent: 无关领域 \n\ntext: 讲清楚重要的\nintent: 请讲重点 \n\ntext: 骗子,好好说话\nintent:\n\n```",
"### 数据来源\n\n\n数据集从网上收集整理如下:",
"#### 意图识别\n\n\n意图识别(英语)\n\n\n\n意图识别(汉语)",
"#### 文本分类",
"#### 其它任务类型",
"### 数据加载",
"### 参考来源\n\n\n\n参考的数据来源,展开查看\n\n```\n\nURL\n\n\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\n\n\nURL\n\n\nURL\n\n\nURL\n\n\n\n```"
] |
[
"TAGS\n#task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100M<n<1B #language-Chinese #language-English #license-apache-2.0 #few-shot #intent #arxiv-2003.04807 #arxiv-1709.10217 #arxiv-1509.01626 #arxiv-2307.09705 #region-us \n",
"### 样本示例\n\n\n\ntrain subset prompt 示例: (intent: Is it safe to go to the gym indoors if I'm vaccinated?)\n\n```\nintent recognition. \n\nExamples:\n------------\ntext: will i be okay on the gym\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: I want to go and exercise at the gym, indoors, but I don't know if it's safe?\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: I worry I will catch Covid from the Gym even though I have been vaccinated?\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: What does the fda think about the covid 19 vaccine?\nintent: Is the vaccine FDA approved?\n------------\ntext: it's never safe in a gym there are always bacteria everywhere\nintent: Is it safe to go to the gym indoors if I'm vaccinated?\n------------\ntext: who is the difference between FDA authorization and approval?\nintent: Is the vaccine FDA approved?\n------------\ntext: would the vaccine FDA be approved\nintent: Is the vaccine FDA approved?\n------------\ntext: If I had my vaccine, is it safe to go to the indoor gym?\nintent:\n\n```\n\n\n\ntrain subset prompt 示例: (intent: 考虑一下)\n\n```\n电销场景意图识别。如果不能确定,请输出 “未知意图”。 \n\nExamples:\n------------\ntext: 没关系啦 知道的\nintent: 肯定答复\n------------\ntext: 怎么能联系你\nintent: 查联系方式\n------------\ntext: 恩。让我想想吧。\nintent: 考虑一下\n------------\ntext: 说点有用的\nintent: 请讲重点\n------------\ntext: 唉唉\nintent: 语气词\n------------\ntext: 说快一点\nintent: 请讲重点\n------------\ntext: 再介绍一下\nintent: 要求复述\n------------\ntext: 从哪弄到我信息\nintent: 质疑隐私安全\n------------\ntext: 哎。。不是的\nintent: 不是\n------------\ntext: 给我电话号码\nintent: 查联系方式\n------------\ntext: 先看看吧\nintent: 考虑一下\n------------\ntext: 怎么知道道我的信息\nintent: 质疑隐私安全\n------------\ntext: 哎,再说吧,我再想想\nintent: 考虑一下\n------------\ntext: 不,我清醒。\nintent: 不是\n------------\ntext: 重说一次\nintent: 要求复述\n------------\ntext: 行了,晚安\nintent: 肯定答复\n------------\ntext: 额额额额\nintent: 语气词\n------------\ntext: 
恩。哎再说吧我考虑一下hiahia\nintent:\n\n```\n\n\n\ntrain subset prompt 示例: (intent: 污言秽语)\n\n```\n电销场景意图识别。 \n\nExamples:\ntext: 那留言\nintent: 语音信箱 \n\ntext: 好啊,哈哈,没事,我再找其他的人\nintent: 好的 \n\ntext: 在!\nintent: 我在 \n\ntext: 要打副本,没时间\nintent: 没时间 \n\ntext: 必须去学习!赶快去!\nintent: 加快速度 \n\ntext: 好的。满汉全席送上\nintent: 好的 \n\ntext: 你看到我给你的留言了么\nintent: 语音信箱 \n\ntext: 我在呢。\nintent: 我在 \n\ntext: 傻逼?\nintent: 污言秽语 \n\ntext: 胸大无脑\nintent: 污言秽语 \n\ntext: 不着急。\nintent: 请等一等 \n\ntext: 恩 我是团子\nintent: 做自我介绍 \n\ntext: 我是收电费的\nintent: 做自我介绍 \n\ntext: 我现在没时间接电话呢,待会儿打给你。\nintent: 没时间 \n\ntext: 好的。哈哈。初六见。我去睡觉啦\nintent: 好的 \n\ntext: 在啊\nintent: 我在 \n\ntext: 包皮猩\nintent: 污言秽语 \n\ntext: 离开一下\nintent: 请等一等 \n\ntext: 有病\nintent: 污言秽语 \n\ntext: 给我留个言\nintent: 语音信箱 \n\ntext: 你等一下\nintent: 请等一等 \n\ntext: 立刻马上!!!快快快快\nintent: 加快速度 \n\ntext: 我是郭钊源\nintent: 做自我介绍 \n\ntext: 快点儿\nintent: 加快速度 \n\ntext: 没时间睡觉怎么办吖\nintent: 没时间 \n\ntext: 吃!你来\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 未能理解)\n\n```\n电销场景意图识别。如果不能确定,请输出 “未知意图”。 \n\nExamples:\n------------\ntext: 讲什么\nintent: 未能理解\n------------\ntext: 等着吧!\nintent: 请等一等\n------------\ntext: 搞不懂你\nintent: 未能理解\n------------\ntext: 我实在是不想弄了,我那时事多没时间啊!\nintent: 没时间\n------------\ntext: 这你自己不清楚自己啊,还不晓得\nintent: 不清楚\n------------\ntext: 没问题放心吧\nintent: 肯定(没问题)\n------------\ntext: 公司名字是什么\nintent: 查公司介绍\n------------\ntext: 不放弃\nintent: 肯定(需要)\n------------\ntext: 老师也不懂\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 肯定(嗯嗯))\n\n```\n电销场景意图识别。\n不确定时请输出 “未知领域”。 \n\nExamples:\n------------\ntext: 截止期过了多少天\nintent: 疑问(时长)\n------------\ntext: 不了\nintent: 不需要\n------------\ntext: 不行,不够不够\nintent: 否定(不可以)\n------------\ntext: 4个1\nintent: 答数值\n------------\ntext: 辽宁\nintent: 地址\n------------\ntext: 不清楚\nintent: 不清楚\n------------\ntext: 店里\nintent: 地址\n------------\ntext: 嗯啊嗯嗯来吧\nintent: 肯定(嗯嗯)\n------------\ntext: 利息比别的贷款高\nintent: 价格太高\n------------\ntext: 算23点,9,4,8,2\nintent: 答数值\n------------\ntext: 可以还得上\nintent: 会按时处理\n------------\ntext: 对啊 就是不行\nintent: 
否定(不可以)\n------------\ntext: 真的不便宜\nintent: 价格太高\n------------\ntext: 嗯,thanks\nintent: 肯定(嗯嗯)\n------------\ntext: 这你自己不清楚自己啊,还不晓得\nintent: 不清楚\n------------\ntext: 我找找吧\nintent: 会按时处理\n------------\ntext: 这是拖欠几天了\nintent: 疑问(时长)\n------------\ntext: 不需要证据\nintent: 不需要\n------------\ntext: 噢,谢谢\nintent: 肯定(嗯嗯)\n------------\ntext: 恩恩,想我\nintent:\n\n```\n\n\n\ntest subset prompt 示例: (intent: 不信任)\n\n```\n意图识别。 \n\nExamples:\ntext: 你不要答非所问\nintent: 答非所问 \n\ntext: 费用搞错了\nintent: 否定(错误) \n\ntext: 我给你留言了,你木有回\nintent: 语音信箱 \n\ntext: 小骗子\nintent: 不信任 \n\ntext: 昆明\nintent: 实体(地址) \n\ntext: 哦,行,好了你发信息给我\nintent: 肯定(可以) \n\ntext: 哦,这样啊,没时间就算了\nintent: 没时间 \n\ntext: 我错了,别欺负我了\nintent: 请求谅解 \n\ntext: 万一你们是骗子怎么办\nintent: 不信任 \n\ntext: 我太乃刀了\nintent: 无关领域 \n\ntext: 讲清楚重要的\nintent: 请讲重点 \n\ntext: 骗子,好好说话\nintent:\n\n```",
"### 数据来源\n\n\n数据集从网上收集整理如下:",
"#### 意图识别\n\n\n意图识别(英语)\n\n\n\n意图识别(汉语)",
"#### 文本分类",
"#### 其它任务类型",
"### 数据加载",
"### 参考来源\n\n\n\n参考的数据来源,展开查看\n\n```\n\nURL\n\n\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\nURL\n\n\nURL\n\n\nURL\n\n\nURL\n\n\n\n```"
] |
[
110,
1718,
14,
20,
5,
6,
6,
36
] |
[
"passage: TAGS\n#task_categories-text-classification #task_categories-text-generation #task_categories-text2text-generation #size_categories-100M<n<1B #language-Chinese #language-English #license-apache-2.0 #few-shot #intent #arxiv-2003.04807 #arxiv-1709.10217 #arxiv-1509.01626 #arxiv-2307.09705 #region-us \n"
] |
c922fe19baee7c539561b7ededbb3c05bddf2449
|
# Persian-Text-QA: Lazy Llama 2 Formatting
This is a subset (1k samples) of the [`SeyedAli/Persian-Text-QA`](https://huggingface.co/datasets/SeyedAli/Persian-Text-QA) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). It was created using the following [colab notebook](https://colab.research.google.com/drive/1Ad7a9zMmkxuXTOh1Z7-rNSICA4dybpM2?usp=sharing).
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for [this article](https://mlabonne.github.io/blog/posts/Fine_Tune_Your_Own_Llama_2_Model_in_a_Colab_Notebook.html) about fine-tuning a Llama 2 (chat) model in a Google Colab.
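The Llama 2 chat format the card targets can be sketched in plain Python. The `[INST]`/`<<SYS>>` tag strings follow the format described in the linked article; `format_llama2` and its parameter names are illustrative helpers, not part of the dataset.

```python
# Minimal sketch of the Llama 2 chat prompt format this dataset is preprocessed into.
# `format_llama2` is a hypothetical helper name, not from the dataset itself.
def format_llama2(user_message: str, answer: str = "", system_prompt: str = "") -> str:
    """Wrap a user message (and optional answer) in Llama 2's [INST] tags."""
    if system_prompt:
        # The system prompt is folded into the first user turn.
        user_message = f"<<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message}"
    prompt = f"<s>[INST] {user_message} [/INST]"
    if answer:
        prompt += f" {answer} </s>"
    return prompt
```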
|
hdeldar/Persian-Text-llama2-1k-1
|
[
"region:us"
] |
2023-09-22T11:09:53+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1830325, "num_examples": 1000}], "download_size": 1841325, "dataset_size": 1830325, "dataset_name": "json"}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/data-*"}]}]}
|
2023-09-22T11:24:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Persian-Text-QA: Lazy Llama 2 Formatting
This is a subset (1k samples) of the 'SeyedAli/Persian-Text-QA' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.
Useful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab.
|
[
"# Persian-Text-QA: Lazy Llama 2 Formatting\n\nThis is a subset (1k samples) of the 'SeyedAli/Persian-Text-QA' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab."
] |
[
"TAGS\n#region-us \n",
"# Persian-Text-QA: Lazy Llama 2 Formatting\n\nThis is a subset (1k samples) of the 'SeyedAli/Persian-Text-QA' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab."
] |
[
6,
119
] |
[
"passage: TAGS\n#region-us \n# Persian-Text-QA: Lazy Llama 2 Formatting\n\nThis is a subset (1k samples) of the 'SeyedAli/Persian-Text-QA' dataset, processed to match Llama 2's prompt format as described in this article. It was created using the following colab notebook.\n\nUseful if you don't want to reformat it by yourself (e.g., using a script). It was designed for this article about fine-tuning a Llama 2 (chat) model in a Google Colab."
] |
db1347c89ac0d3940b26e790e6bf75e236c91388
|
# Dataset Card for "dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
YegorS/dataset
|
[
"region:us"
] |
2023-09-22T11:12:25+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "attention_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 38736183, "num_examples": 5000}, {"name": "eval", "num_bytes": 7833177, "num_examples": 1000}], "download_size": 10435687, "dataset_size": 46569360}}
|
2023-09-22T11:13:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "dataset"
More Information needed
|
[
"# Dataset Card for \"dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset\"\n\nMore Information needed"
] |
919c3ad877123d78ba0c1fa6bed6325bf4760979
|
This dataset is intended solely for experimental purposes. We are exploring the capabilities of the GPT structure when applied to this dataset. The data will be used for fine-tuning the Falcon 1B model. Please note that the results generated from this dataset should be interpreted with caution, as they are part of an ongoing research project.
|
NewstaR/dolly-gpt
|
[
"region:us"
] |
2023-09-22T11:13:00+00:00
|
{}
|
2023-09-22T11:42:15+00:00
|
[] |
[] |
TAGS
#region-us
|
This dataset is intended solely for experimental purposes. We are exploring the capabilities of the GPT structure when applied to this dataset. The data will be used for fine-tuning the Falcon 1B model. Please note that the results generated from this dataset should be interpreted with caution, as they are part of an ongoing research project.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
70963f75865d8def799556c45e5d52d82e3e237e
|
<h1 align="center">
Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking with Step Adjustment
</h1>
<div align="center">
Giulio Romualdi, Paolo Maria Viceconte, Stefano Dafarra, Silvio Traversaro and Daniele Pucci <br> <br>
<b>Paolo Maria Viceconte and Giulio Romualdi are co-first authors</b>
</div>
<br>
<div align="center">
📅 Submitted to the 2024 International Conference on Robotics and Automation (ICRA) 🤖
</div>
<section>
<h2>📂 Dataset</h2>
<p>
The dataset is organized in folders each representing a complete experiment. Each folder is organized as follows:
</p>
<ul>
<li>
a <code>.mp4</code> file containing the video of the experiment
</li>
<li>
a <code>.mat</code> file containing the associated data
</li>
<li>
a <code>.md</code> file containing the version of the code used
</li>
</ul>
</section>
<section>
<h2>📊 Dataset Visualization</h2>
<p>
To visualize the experiment, we suggest using
<a href="https://github.com/ami-iit/robot-log-visualizer">robot-log-visualizer</a> as shown in the following video
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/645421457f7b2bed1a01197b/JFnRvecj1QQ_pt7rG-0Dk.webm"></video>
</section>
|
ami-iit/paper_romualdi_viceconte_2024_icra_dnn-mpc-walking_dataset
|
[
"license:bsd-3-clause",
"region:us"
] |
2023-09-22T11:15:43+00:00
|
{"license": "bsd-3-clause"}
|
2023-09-22T11:43:49+00:00
|
[] |
[] |
TAGS
#license-bsd-3-clause #region-us
|
<h1 align="center">
Online DNN-driven Nonlinear MPC for Stylistic Humanoid Robot Walking with Step Adjustment
</h1>
<div align="center">
Giulio Romualdi, Paolo Maria Viceconte, Stefano Dafarra, Silvio Traversaro and Daniele Pucci <br> <br>
<b>Paolo Maria Viceconte and Giulio Romualdi are co-first authors</b>
</div>
<br>
<div align="center">
Submitted to the 2024 International Conference on Robotics and Automation (ICRA)
</div>
<section>
<h2>Dataset</h2>
<p>
The dataset is organized in folders each representing a complete experiment. Each folder is organized as follows:
</p>
<ul>
<li>
a <code>.mp4</code> file containing the video of the experiment
</li>
<li>
a <code>.mat</code> file containing the associated data
</li>
<li>
a <code>.md</code> file containing the version of the code used
</li>
</ul>
</section>
<section>
<h2>Dataset Visualization</h2>
<p>
To visualize the experiment, we suggest using
<a href="URL">robot-log-visualizer</a> as shown in the following video
<video controls autoplay src="URL"></video>
</section>
|
[] |
[
"TAGS\n#license-bsd-3-clause #region-us \n"
] |
[
16
] |
[
"passage: TAGS\n#license-bsd-3-clause #region-us \n"
] |
555f452d1f2068b4be9ef2e3e6b11f78c25aba3d
|
data mapping => {'benign': 0, 'defacement': 1, 'malware': 2, 'phishing': 3}
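The label mapping above can be mirrored with a small pair of helpers; `encode_labels`/`decode_labels` are illustrative names, only the mapping itself comes from the card.

```python
# Label mapping stated in the card; helper names below are illustrative.
LABEL2ID = {"benign": 0, "defacement": 1, "malware": 2, "phishing": 3}
ID2LABEL = {v: k for k, v in LABEL2ID.items()}

def encode_labels(labels):
    """Map string class labels to the integer ids used by the dataset."""
    return [LABEL2ID[label] for label in labels]

def decode_labels(ids):
    """Map integer ids back to string class labels."""
    return [ID2LABEL[i] for i in ids]
```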
|
bgspaditya/malicious-600k
|
[
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:en",
"license:mit",
"malicious-url",
"phishing",
"cyber-security",
"region:us"
] |
2023-09-22T11:26:36+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "pretty_name": "malicious-600k", "tags": ["malicious-url", "phishing", "cyber-security"]}
|
2023-09-22T11:48:28+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #malicious-url #phishing #cyber-security #region-us
|
data mapping => {'benign': 0, 'defacement': 1, 'malware': 2, 'phishing': 3}
|
[] |
[
"TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #malicious-url #phishing #cyber-security #region-us \n"
] |
[
51
] |
[
"passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-English #license-mit #malicious-url #phishing #cyber-security #region-us \n"
] |
3cb90bdfa828a49f8f11868df0f01c076d65e512
|
# 日本語フェイクニュースデータセット
[日本語フェイクニュースデータセット](https://github.com/tanreinama/Japanese-Fakenews-Dataset) を HuggingFace datasets 用に変換。
## ラベル
- id: 一意なID
- context: 本文
- fake_type: 真実なら `real`、途中からAI生成(GPT-2) なら `partial_gpt2`、すべて GPT-2 なら `full_gpt2`
- nchar_real: 真実部分の文字数
- nchar_fake: フェイク部分の文字数
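A minimal sketch of grouping rows by the `fake_type` label above. The field names come from the card; the sample row dicts are illustrative.

```python
# Group rows by the card's `fake_type` label; the sample rows are illustrative.
def split_by_fake_type(rows):
    groups = {"real": [], "partial_gpt2": [], "full_gpt2": []}
    for row in rows:
        groups[row["fake_type"]].append(row)
    return groups

sample = [
    {"id": 1, "context": "...", "fake_type": "real"},
    {"id": 2, "context": "...", "fake_type": "full_gpt2"},
]
```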
|
p1atdev/fake-news-jp
|
[
"size_categories:10K<n<100K",
"language:ja",
"license:cc-by-2.5",
"region:us"
] |
2023-09-22T11:40:39+00:00
|
{"language": ["ja"], "license": "cc-by-2.5", "size_categories": ["10K<n<100K"]}
|
2023-09-22T11:54:43+00:00
|
[] |
[
"ja"
] |
TAGS
#size_categories-10K<n<100K #language-Japanese #license-cc-by-2.5 #region-us
|
# 日本語フェイクニュースデータセット
日本語フェイクニュースデータセット を HuggingFace datasets 用に変換。
## ラベル
- id: 一意なID
- context: 本文
- fake_type: 真実なら 'real'、途中からAI生成(GPT-2) なら 'partial_gpt2'、すべて GPT-2 なら 'full_gpt2'
- nchar_real: 真実部分の文字数
- nchar_fake: フェイク部分の文字数
|
[
"# 日本語フェイクニュースデータセット\n\n日本語フェイクニュースデータセット を HuggingFace datasets 用に変換。",
"## ラベル\n\n- id: 一意なID\n- context: 本文\n- fake_type: 真実なら 'real'、途中からAI生成(GPT-2) なら 'partial_gpt2'、すべて GPT-2 なら 'full_gpt2'\n- nchar_real: 真実部分の文字数\n- nchar_fake: フェイク部分の文字数"
] |
[
"TAGS\n#size_categories-10K<n<100K #language-Japanese #license-cc-by-2.5 #region-us \n",
"# 日本語フェイクニュースデータセット\n\n日本語フェイクニュースデータセット を HuggingFace datasets 用に変換。",
"## ラベル\n\n- id: 一意なID\n- context: 本文\n- fake_type: 真実なら 'real'、途中からAI生成(GPT-2) なら 'partial_gpt2'、すべて GPT-2 なら 'full_gpt2'\n- nchar_real: 真実部分の文字数\n- nchar_fake: フェイク部分の文字数"
] |
[
33,
31,
90
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-Japanese #license-cc-by-2.5 #region-us \n# 日本語フェイクニュースデータセット\n\n日本語フェイクニュースデータセット を HuggingFace datasets 用に変換。## ラベル\n\n- id: 一意なID\n- context: 本文\n- fake_type: 真実なら 'real'、途中からAI生成(GPT-2) なら 'partial_gpt2'、すべて GPT-2 なら 'full_gpt2'\n- nchar_real: 真実部分の文字数\n- nchar_fake: フェイク部分の文字数"
] |
6e2f6daf0192a6ba691a4eeeddde7db286273c7c
|
# Dataset Card for "llmjp1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjp1
|
[
"region:us"
] |
2023-09-22T11:54:08+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 79310, "num_examples": 1}], "download_size": 0, "dataset_size": 79310}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:00:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjp1"
More Information needed
|
[
"# Dataset Card for \"llmjp1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjp1\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjp1\"\n\nMore Information needed"
] |
fd2baff68d36c17bec5ce23c3340606f75c05367
|
# Dataset Card for Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-29T16:06:53.107330](https://huggingface.co/datasets/open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1/blob/main/results_2023-10-29T16-06-53.107330.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08504614093959731,
"em_stderr": 0.0028567126231220596,
"f1": 0.14911073825503293,
"f1_stderr": 0.003010481134071011,
"acc": 0.5504525862971696,
"acc_stderr": 0.011424065665063533
},
"harness|drop|3": {
"em": 0.08504614093959731,
"em_stderr": 0.0028567126231220596,
"f1": 0.14911073825503293,
"f1_stderr": 0.003010481134071011
},
"harness|gsm8k|5": {
"acc": 0.2721758908263836,
"acc_stderr": 0.01225971403516454
},
"harness|winogrande|5": {
"acc": 0.8287292817679558,
"acc_stderr": 0.010588417294962526
}
}
```
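Reading the numbers above, the top-level `"acc"` appears to be the plain mean of the per-task `acc` values (gsm8k and winogrande); this is an observation about these results, not a documented guarantee of the harness. A minimal check:

```python
import math

# Per-task acc values copied from the results above.
results = {
    "harness|gsm8k|5": {"acc": 0.2721758908263836},
    "harness|winogrande|5": {"acc": 0.8287292817679558},
}
# The aggregated "all" acc appears to be their plain mean.
mean_acc = sum(task["acc"] for task in results.values()) / len(results)
```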
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1
|
[
"region:us"
] |
2023-09-22T12:08:47+00:00
|
{"pretty_name": "Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-29T16:06:53.107330](https://huggingface.co/datasets/open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1/blob/main/results_2023-10-29T16-06-53.107330.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08504614093959731,\n \"em_stderr\": 0.0028567126231220596,\n \"f1\": 0.14911073825503293,\n \"f1_stderr\": 0.003010481134071011,\n \"acc\": 0.5504525862971696,\n \"acc_stderr\": 0.011424065665063533\n },\n \"harness|drop|3\": {\n \"em\": 0.08504614093959731,\n \"em_stderr\": 0.0028567126231220596,\n \"f1\": 0.14911073825503293,\n \"f1_stderr\": 0.003010481134071011\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2721758908263836,\n \"acc_stderr\": 0.01225971403516454\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8287292817679558,\n \"acc_stderr\": 0.010588417294962526\n }\n}\n```", "repo_url": "https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|arc:challenge|25_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_29T16_06_53.107330", "path": ["**/details_harness|drop|3_2023-10-29T16-06-53.107330.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-29T16-06-53.107330.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_29T16_06_53.107330", "path": ["**/details_harness|gsm8k|5_2023-10-29T16-06-53.107330.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-29T16-06-53.107330.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hellaswag|10_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-09-22T13-08-23.293621.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T13-08-23.293621.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T13-08-23.293621.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-09-22T13-08-23.293621.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T13-08-23.293621.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-management|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-09-22T13-08-23.293621.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-virology|5_2023-09-22T13-08-23.293621.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-business_ethics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": 
["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", 
"data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": 
["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": 
"2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": 
[{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T13-08-23.293621.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-09-22T13-08-23.293621.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_29T16_06_53.107330", "path": ["**/details_harness|winogrande|5_2023-10-29T16-06-53.107330.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-29T16-06-53.107330.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T13_08_23.293621", "path": ["results_2023-09-22T13-08-23.293621.parquet"]}, {"split": "2023_10_29T16_06_53.107330", "path": ["results_2023-10-29T16-06-53.107330.parquet"]}, {"split": "latest", "path": ["results_2023-10-29T16-06-53.107330.parquet"]}]}]}
|
2023-10-29T16:07:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Xwin-LM/Xwin-LM-70B-V0.1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
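The code block that originally followed this sentence was stripped during processing. As a hedged sketch (the repo id is assumed from this card's title, and the split-naming convention is inferred from the parquet paths in the card's metadata, e.g. `2023_10_29T16_06_53.107330`), loading the details of one run looks like:

```python
# Hedged sketch: repo id and config name below are assumptions inferred from
# this card's title and metadata, not confirmed by the stripped original.
RUN_TIMESTAMP = "2023-10-29T16:06:53.107330"

# Run timestamps become split names by replacing "-" and ":" with "_".
split_name = RUN_TIMESTAMP.replace("-", "_").replace(":", "_")
print(split_name)  # 2023_10_29T16_06_53.107330

# With the `datasets` library installed, the details of one task could then be
# loaded the way the sibling cards in this dump do (requires network access):
#
# from datasets import load_dataset
# data = load_dataset(
#     "open-llm-leaderboard/details_Xwin-LM__Xwin-LM-70B-V0.1",
#     "harness_winogrande_5",
#     split=split_name,  # or "latest"
# )
```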
## Latest results
These are the latest results from run 2023-10-29T16:06:53.107330 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Xwin-LM/Xwin-LM-70B-V0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T16:06:53.107330(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Xwin-LM/Xwin-LM-70B-V0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T16:06:53.107330(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Xwin-LM/Xwin-LM-70B-V0.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Xwin-LM/Xwin-LM-70B-V0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-29T16:06:53.107330(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b78116a6bf94f4c6318378cd9db6425b9ca710dd
|
# Dataset Card for "llmjptk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjptk
|
[
"region:us"
] |
2023-09-22T12:17:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 81960.0, "num_examples": 10}, {"name": "test", "num_bytes": 16392, "num_examples": 2}], "download_size": 41220, "dataset_size": 98352.0}}
|
2023-09-22T12:43:51+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjptk"
More Information needed
|
[
"# Dataset Card for \"llmjptk\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjptk\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjptk\"\n\nMore Information needed"
] |
4e7394b17e02d4aa680b35556023412d5a464ed0
|
# Dataset Card for Evaluation run of zarakiquemparte/zarablend-l2-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/zarakiquemparte/zarablend-l2-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [zarakiquemparte/zarablend-l2-7b](https://huggingface.co/zarakiquemparte/zarablend-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T13:26:53.178653](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b/blob/main/results_2023-09-22T13-26-53.178653.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2753775167785235,
"em_stderr": 0.00457467023556627,
"f1": 0.354505033557049,
"f1_stderr": 0.004527443322138582,
"acc": 0.3886004022324439,
"acc_stderr": 0.009038856275635394
},
"harness|drop|3": {
"em": 0.2753775167785235,
"em_stderr": 0.00457467023556627,
"f1": 0.354505033557049,
"f1_stderr": 0.004527443322138582
},
"harness|gsm8k|5": {
"acc": 0.04397270659590599,
"acc_stderr": 0.005647666449126459
},
"harness|winogrande|5": {
"acc": 0.7332280978689818,
"acc_stderr": 0.01243004610214433
}
}
```
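As a quick sanity check on how the aggregated block relates to the per-task numbers, the `"all"` accuracy above appears to be the unweighted mean of the gsm8k and winogrande accuracies (a sketch; the averaging rule is inferred from the numbers shown, not from leaderboard documentation — drop reports em/f1 rather than acc, so it does not enter this mean):

```python
# Per-task accuracies copied from the results block above.
gsm8k_acc = 0.04397270659590599
winogrande_acc = 0.7332280978689818

# Unweighted mean over the accuracy-based tasks reproduces the "all" value.
all_acc = (gsm8k_acc + winogrande_acc) / 2
print(all_acc)  # close to the reported 0.3886004022324439
```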
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b
|
[
"region:us"
] |
2023-09-22T12:26:57+00:00
|
{"pretty_name": "Evaluation run of zarakiquemparte/zarablend-l2-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [zarakiquemparte/zarablend-l2-7b](https://huggingface.co/zarakiquemparte/zarablend-l2-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T13:26:53.178653](https://huggingface.co/datasets/open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b/blob/main/results_2023-09-22T13-26-53.178653.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2753775167785235,\n \"em_stderr\": 0.00457467023556627,\n \"f1\": 0.354505033557049,\n \"f1_stderr\": 0.004527443322138582,\n \"acc\": 0.3886004022324439,\n \"acc_stderr\": 0.009038856275635394\n },\n \"harness|drop|3\": {\n \"em\": 0.2753775167785235,\n \"em_stderr\": 0.00457467023556627,\n \"f1\": 0.354505033557049,\n \"f1_stderr\": 0.004527443322138582\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.04397270659590599,\n \"acc_stderr\": 0.005647666449126459\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7332280978689818,\n \"acc_stderr\": 0.01243004610214433\n }\n}\n```", "repo_url": "https://huggingface.co/zarakiquemparte/zarablend-l2-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T13_26_53.178653", "path": ["**/details_harness|drop|3_2023-09-22T13-26-53.178653.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T13-26-53.178653.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T13_26_53.178653", "path": ["**/details_harness|gsm8k|5_2023-09-22T13-26-53.178653.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T13-26-53.178653.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T13_26_53.178653", "path": ["**/details_harness|winogrande|5_2023-09-22T13-26-53.178653.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T13-26-53.178653.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T13_26_53.178653", "path": ["results_2023-09-22T13-26-53.178653.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T13-26-53.178653.parquet"]}]}]}
|
2023-09-22T12:27:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of zarakiquemparte/zarablend-l2-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model zarakiquemparte/zarablend-l2-7b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
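As a minimal sketch (the repository and configuration names are taken from this card's metadata; the helper function and the deferred import are illustrative choices, not part of the card):

```python
REPO = "open-llm-leaderboard/details_zarakiquemparte__zarablend-l2-7b"


def load_run_details(config: str = "harness_winogrande_5", split: str = "train"):
    """Load the per-sample details for one evaluated task from the Hub.

    The "train" split always points at the latest results; timestamped
    splits address individual runs.
    """
    from datasets import load_dataset  # deferred so nothing downloads at import

    return load_dataset(REPO, config, split=split)
```

For instance, `load_run_details("harness_drop_3")` would fetch the per-sample DROP details.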
## Latest results
These are the latest results from run 2023-09-22T13:26:53.178653 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of zarakiquemparte/zarablend-l2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarablend-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T13:26:53.178653(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of zarakiquemparte/zarablend-l2-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarablend-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T13:26:53.178653(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of zarakiquemparte/zarablend-l2-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model zarakiquemparte/zarablend-l2-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T13:26:53.178653(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9b70de2f99d6e0e166c93cb25a4efb8fb78e5d7c
|
# Music Instruct (MI) Dataset
This is the dataset used to train and evaluate the MusiLingo model.
This dataset contains Q&A pairs related
to individual musical compositions, specifically
tailored for open-ended music queries. It originates
from the music-caption pairs in the MusicCaps
dataset.
The MI dataset was created through prompt engineering and applying few-shot learning techniques
to GPT-4. More details on dataset generation can be found in our paper *[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response
](https://arxiv.org/abs/2309.08730)*.
The resulting MI dataset consists of two versions:
v1 (short questions), with 27,540 Q&A pairs seeking comprehensive
details about musical snippets including but not limited to emotion, instrument, vocal track, tempo, and genre etc., often yielding concise one or two-sentence responses. In contrast, v2
comprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting
in typically more extensive responses that serve as
paraphrased renditions of the original caption.
## Evaluation and Dataset Splitting
You can use all (or the long/short partition of) the Q&A pairs whose audio is in the training split of AudioSet as the MI training set, and use the short QA and long QA pairs whose audio is in the evaluation split of AudioSet as two separate testing sets.
```
# NB: `MIDataset` and `processor` are assumed to come from the MusiLingo
# codebase; adjust the data root to wherever you stored the dataset.

# training set
ds_mixed_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='all')
ds_long_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='long')
ds_short_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='short')

# testing set for short QA
ds_short = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='short')

# testing set for long QA
ds_long = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='long')
```
The evaluation includes BLEU, METEOR, ROUGE, and BERTScore.
## Citation
```
@article{deng2023musilingo,
title={MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response},
author={Deng, Zihao and Ma, Yinghao and Liu, Yudong and Guo, Rongchen and Zhang, Ge and Chen, Wenhu and Huang, Wenhao and Benetos, Emmanouil},
journal={arXiv preprint arXiv:2309.08730},
year={2023}
}
```
|
m-a-p/Music-Instruct
|
[
"license:cc-by-nc-4.0",
"arxiv:2309.08730",
"region:us"
] |
2023-09-22T12:33:36+00:00
|
{"license": "cc-by-nc-4.0"}
|
2023-10-12T13:33:11+00:00
|
[
"2309.08730"
] |
[] |
TAGS
#license-cc-by-nc-4.0 #arxiv-2309.08730 #region-us
|
# Music Instruct (MI) Dataset
This is the dataset used to train and evaluate the MusiLingo model.
This dataset contains Q&A pairs related
to individual musical compositions, specifically
tailored for open-ended music queries. It originates
from the music-caption pairs in the MusicCaps
dataset.
The MI dataset was created through prompt engineering and applying few-shot learning techniques
to GPT-4. More details on dataset generation can be found in our paper *MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response
*.
The resulting MI dataset consists of two versions:
v1 (short questions), with 27,540 Q&A pairs seeking comprehensive
details about musical snippets including but not limited to emotion, instrument, vocal track, tempo, and genre etc., often yielding concise one or two-sentence responses. In contrast, v2
comprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting
in typically more extensive responses that serve as
paraphrased renditions of the original caption.
## Evaluation and Dataset Splitting
You can use all (or the long/short partition of) the Q&A pairs whose audio is in the training split of AudioSet as the MI training set, and use the short QA and long QA pairs whose audio is in the evaluation split of AudioSet as two separate testing sets.
The evaluation includes BLEU, METEOR, ROUGE, and BERTScore.
|
[
"# Music Instruct (MI) Dataset\n\nThis is the dataset used to train and evaluate the MusiLingo model.\nThis dataset contains Q&A pairs related\nto individual musical compositions, specifically\ntailored for open-ended music queries. It originates\nfrom the music-caption pairs in the MusicCaps\ndataset.\nThe MI dataset was created through prompt engineering and applying few-shot learning techniques\nto GPT-4. More details on dataset generation can be found in our paper *MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response\n*. \n\nThe resulting MI dataset consists of two versions:\nv1 (short questions), with 27,540 Q&A pairs seeking comprehensive\ndetails about musical snippets including but not limited to emotion, instrument, vocal track, tempo, and genre etc., often yielding concise one or two-sentence responses. In contrast, v2\ncomprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting\nin typically more extensive responses that serve as\nparaphrased renditions of the original caption.",
"## Evaluation and dataset SPlittion\nYou can use all (or the long/short partition of) the Q\\&A pairs of which audio is in the training split of AudioSet as MI training set and use the short QA and long QA with audio in evaluation split of AudioSet as two testingsets separately. \n\n\n\nAnd the evaluation includes BLEU, METEOR, ROUGE, and Bert-Score."
] |
[
"TAGS\n#license-cc-by-nc-4.0 #arxiv-2309.08730 #region-us \n",
"# Music Instruct (MI) Dataset\n\nThis is the dataset used to train and evaluate the MusiLingo model.\nThis dataset contains Q&A pairs related\nto individual musical compositions, specifically\ntailored for open-ended music queries. It originates\nfrom the music-caption pairs in the MusicCaps\ndataset.\nThe MI dataset was created through prompt engineering and applying few-shot learning techniques\nto GPT-4. More details on dataset generation can be found in our paper *MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response\n*. \n\nThe resulting MI dataset consists of two versions:\nv1 (short questions), with 27,540 Q&A pairs seeking comprehensive\ndetails about musical snippets including but not limited to emotion, instrument, vocal track, tempo, and genre etc., often yielding concise one or two-sentence responses. In contrast, v2\ncomprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting\nin typically more extensive responses that serve as\nparaphrased renditions of the original caption.",
"## Evaluation and dataset SPlittion\nYou can use all (or the long/short partition of) the Q\\&A pairs of which audio is in the training split of AudioSet as MI training set and use the short QA and long QA with audio in evaluation split of AudioSet as two testingsets separately. \n\n\n\nAnd the evaluation includes BLEU, METEOR, ROUGE, and Bert-Score."
] |
[
26,
263,
94
] |
[
"passage: TAGS\n#license-cc-by-nc-4.0 #arxiv-2309.08730 #region-us \n# Music Instruct (MI) Dataset\n\nThis is the dataset used to train and evaluate the MusiLingo model.\nThis dataset contains Q&A pairs related\nto individual musical compositions, specifically\ntailored for open-ended music queries. It originates\nfrom the music-caption pairs in the MusicCaps\ndataset.\nThe MI dataset was created through prompt engineering and applying few-shot learning techniques\nto GPT-4. More details on dataset generation can be found in our paper *MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response\n*. \n\nThe resulting MI dataset consists of two versions:\nv1 (short questions), with 27,540 Q&A pairs seeking comprehensive\ndetails about musical snippets including but not limited to emotion, instrument, vocal track, tempo, and genre etc., often yielding concise one or two-sentence responses. In contrast, v2\ncomprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting\nin typically more extensive responses that serve as\nparaphrased renditions of the original caption.## Evaluation and dataset SPlittion\nYou can use all (or the long/short partition of) the Q\\&A pairs of which audio is in the training split of AudioSet as MI training set and use the short QA and long QA with audio in evaluation split of AudioSet as two testingsets separately. \n\n\n\nAnd the evaluation includes BLEU, METEOR, ROUGE, and Bert-Score."
] |
a9bb22ce393cda7704c74d4702d03c0e14db62fd
|
# Dataset Card for "pubmed_subset_wiki_40p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zxvix/pubmed_subset_wiki_40p
|
[
"region:us"
] |
2023-09-22T12:43:03+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4745394961.571536, "num_examples": 1666514}, {"name": "test", "num_bytes": 1024229, "num_examples": 1000}], "download_size": 1869629066, "dataset_size": 4746419190.571536}}
|
2023-09-22T12:47:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pubmed_subset_wiki_40p"
More Information needed
|
[
"# Dataset Card for \"pubmed_subset_wiki_40p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pubmed_subset_wiki_40p\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pubmed_subset_wiki_40p\"\n\nMore Information needed"
] |
a85ef8249abd21c071c5d91e4a98b4998043be6c
|
# Dataset Card for "llmjptk1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjptk1
|
[
"region:us"
] |
2023-09-22T12:49:33+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 81960.0, "num_examples": 10}, {"name": "test", "num_bytes": 16392.0, "num_examples": 2}], "download_size": 38350, "dataset_size": 98352.0}}
|
2023-09-22T12:49:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjptk1"
More Information needed
|
[
"# Dataset Card for \"llmjptk1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjptk1\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjptk1\"\n\nMore Information needed"
] |
ca51cfc169813980195d7f27485da21087fe7d52
|
# Dataset Card for "llmjptk2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjptk2
|
[
"region:us"
] |
2023-09-22T12:55:14+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 81960.0, "num_examples": 10}, {"name": "test", "num_bytes": 16392.0, "num_examples": 2}], "download_size": 42049, "dataset_size": 98352.0}}
|
2023-09-22T12:55:19+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjptk2"
More Information needed
|
[
"# Dataset Card for \"llmjptk2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjptk2\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjptk2\"\n\nMore Information needed"
] |
1d1eee7594ca46c6741307e2e0851a990cb9258f
|
# Dataset Card for "repo_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Dong237/empathetic_dialogues_cleaned
|
[
"region:us"
] |
2023-09-22T13:09:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "conv_id", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "utterance", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7970106, "num_examples": 17780}, {"name": "validation", "num_bytes": 1343248, "num_examples": 2758}, {"name": "test", "num_bytes": 1334052, "num_examples": 2540}], "download_size": 6149453, "dataset_size": 10647406}}
|
2023-09-22T13:10:27+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "repo_test"
More Information needed
|
[
"# Dataset Card for \"repo_test\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"repo_test\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"repo_test\"\n\nMore Information needed"
] |
43cfc8318b2d435f6b2cf59036a7638b4ca2e823
|
# Dataset Card for "92f7fec0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/92f7fec0
|
[
"region:us"
] |
2023-09-22T13:16:38+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 251, "num_examples": 10}], "download_size": 1446, "dataset_size": 251}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T13:16:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "92f7fec0"
More Information needed
|
[
"# Dataset Card for \"92f7fec0\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"92f7fec0\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"92f7fec0\"\n\nMore Information needed"
] |
bd819db09c63fb406af19c8633350aef5e7209a8
|
# Dataset Card for "WritingPromptsX"
Comments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by [WritingPrompts](https://huggingface.co/datasets/euclaise/writingprompts), but a bit more complete.
|
euclaise/WritingPromptsX
|
[
"size_categories:1M<n<10M",
"license:cc0-1.0",
"region:us"
] |
2023-09-22T13:22:28+00:00
|
{"license": "cc0-1.0", "size_categories": ["1M<n<10M"], "dataset_info": {"features": [{"name": "post_title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "gilded", "dtype": "int64"}, {"name": "post_score", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 2040557544, "num_examples": 1245546}], "download_size": 1016138545, "dataset_size": 2040557544}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T13:37:38+00:00
|
[] |
[] |
TAGS
#size_categories-1M<n<10M #license-cc0-1.0 #region-us
|
# Dataset Card for "WritingPromptsX"
Comments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by WritingPrompts, but a bit more complete.
|
[
"# Dataset Card for \"WritingPromptsX\"\n\nComments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by WritingPrompts, but a bit more complete."
] |
[
"TAGS\n#size_categories-1M<n<10M #license-cc0-1.0 #region-us \n",
"# Dataset Card for \"WritingPromptsX\"\n\nComments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by WritingPrompts, but a bit more complete."
] |
[
26,
48
] |
[
"passage: TAGS\n#size_categories-1M<n<10M #license-cc0-1.0 #region-us \n# Dataset Card for \"WritingPromptsX\"\n\nComments from r/WritingPrompts, up to 12-2022, from PushShift. Inspired by WritingPrompts, but a bit more complete."
] |
4a3253dca8e4d01f7d13e967eb1958c6801e4c2e
|
# Dataset Card for "climate-global-temp-anomaly"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
vitaliy-sharandin/climate-global-temp-anomaly
|
[
"region:us"
] |
2023-09-22T13:25:39+00:00
|
{"dataset_info": {"features": [{"name": "Entity", "dtype": "string"}, {"name": "Code", "dtype": "float64"}, {"name": "Global average temperature anomaly relative to 1961-1990", "dtype": "float64"}, {"name": "Upper bound (95% confidence interval) of the annual temperature anomaly", "dtype": "float64"}, {"name": "Lower bound (95% confidence interval) of the annual temperature anomaly", "dtype": "float64"}, {"name": "dt", "dtype": "timestamp[ns]"}], "splits": [{"name": "train", "num_bytes": 30513, "num_examples": 519}], "download_size": 20408, "dataset_size": 30513}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-24T12:50:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "climate-global-temp-anomaly"
More Information needed
|
[
"# Dataset Card for \"climate-global-temp-anomaly\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"climate-global-temp-anomaly\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"climate-global-temp-anomaly\"\n\nMore Information needed"
] |
1c6ed9a7ce32f62d1fc1efdecbd5baee5e48c1cf
|
# Dataset Card for "simon_sinek_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
alexmoini/simon_sinek_dataset
|
[
"region:us"
] |
2023-09-22T13:25:59+00:00
|
{"dataset_info": {"features": [{"name": "chunk_name", "dtype": "string"}, {"name": "conversation", "dtype": "string"}, {"name": "speech_type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1899282, "num_examples": 325}], "download_size": 851140, "dataset_size": 1899282}}
|
2023-09-23T15:05:00+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "simon_sinek_dataset"
More Information needed
|
[
"# Dataset Card for \"simon_sinek_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"simon_sinek_dataset\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"simon_sinek_dataset\"\n\nMore Information needed"
] |
71aaaf578982b790b0551cc663e93cd2ed925012
|
# Dataset Card for "test_data2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fmattera/test_data2
|
[
"region:us"
] |
2023-09-22T13:31:07+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "conditioning", "dtype": "image"}, {"name": "prompt", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 3854203.0, "num_examples": 4}], "download_size": 3857683, "dataset_size": 3854203.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T13:31:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_data2"
More Information needed
|
[
"# Dataset Card for \"test_data2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_data2\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_data2\"\n\nMore Information needed"
] |
e75664338d06e68292e7b35801b342f619703083
|
# Dataset Card for "COVID-QA-for-sentence-transformer-longer-context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
minh21/COVID-QA-for-sentence-transformer-longer-context
|
[
"region:us"
] |
2023-09-22T13:55:08+00:00
|
{"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "context", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2369782, "num_examples": 1615}, {"name": "test", "num_bytes": 292819, "num_examples": 202}, {"name": "validation", "num_bytes": 295207, "num_examples": 202}], "download_size": 1570438, "dataset_size": 2957808}}
|
2023-09-22T13:55:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "COVID-QA-for-sentence-transformer-longer-context"
More Information needed
|
[
"# Dataset Card for \"COVID-QA-for-sentence-transformer-longer-context\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"COVID-QA-for-sentence-transformer-longer-context\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"COVID-QA-for-sentence-transformer-longer-context\"\n\nMore Information needed"
] |
994e02c4ba849036fdb1afddf25155689c68ccbd
|
VNNews TXT raw corpus
|
infCapital/vnnews-txt-corpus
|
[
"language:vi",
"license:cc",
"finance",
"chemistry",
"art",
"region:us"
] |
2023-09-22T13:56:56+00:00
|
{"language": ["vi"], "license": "cc", "tags": ["finance", "chemistry", "art"]}
|
2023-09-22T15:02:49+00:00
|
[] |
[
"vi"
] |
TAGS
#language-Vietnamese #license-cc #finance #chemistry #art #region-us
|
VNNews TXT raw corpus
|
[] |
[
"TAGS\n#language-Vietnamese #license-cc #finance #chemistry #art #region-us \n"
] |
[
27
] |
[
"passage: TAGS\n#language-Vietnamese #license-cc #finance #chemistry #art #region-us \n"
] |
b30f459fe6cb6547d276c6ee2287bec442e82c2b
|
# Dataset Card for nrw-bplan-scrape
## Dataset Description
**Homepage:** [DSSGx Munich](https://sites.google.com/view/dssgx-munich-2023/startseite) organization page.
**Repository:** [GitHub](https://github.com/DSSGxMunich/land-sealing-dataset-and-analysis).
### Dataset Summary
This dataset contains all inputs needed as well as outputs of running the full pipeline for creating the NRW land sealing dataset. This can be reproduced by running [this notebook](https://github.com/DSSGxMunich/land-sealing-dataset-and-analysis/blob/main/src/1_execute_pipeline.ipynb).
## Dataset Structure
* nrw
* bplan
* features
* keywords
* exact_search
* ```baunvo_keywords.csv```: Yes/no results for keywords found in documents relating to BauNVO and Article 13b.
* fuzzy_search:
* ```keyword_dict_hochwasser.json```: Results of keywords found in documents relating to "hochwasser", e.g. hqhäufig and hq100
* contains 7 CSV files with the results of the fuzzy keyword search. Each file name indicates the keyword searched for, and the text surrounding each match is extracted into one row per document
* raw
* images: images from [here](https://huggingface.co/datasets/DSSGxMunich/nrw-bplan-images) can be added to this folder
* links:
* ```NRW_BP.geojson```: The file downloaded from the NRW geoportal, containing all raw data on URLs to land parcel bplans.
* ```land_parcels.geojson```: A processed version of NRW_BP.geojson.
* ```NRW_BP_parsed_links.csv```: A csv formatted version of NRW_BP.geojson.
* text:
* ```bp_text.json```: Raw output of the text extraction of each pdf. Contains only columns for the filename and the extracted text.
* ```document_texts.json```: Enriched version of bp_texts.json in which columns about the documents have been appended.
* pdfs: pdfs extracted from the NRW Geoportal and found [here](https://huggingface.co/datasets/DSSGxMunich/nrw-bplan-pdfs), can be added to this folder
* knowledge_extraction_agent: Contains 6 json files. The filename corresponds to the key looked for in the fuzzy keyword search (e.g. ```fh.json``` corresponds to ```firsthöhe.csv```, ```gfz.json``` corresponds to ```geschossflächenzahl.csv```). More info can be found [here](https://huggingface.co/datasets/DSSGxMunich/bplan_keyword_extraction)
* ```knowledge_agent_output.json```: Is a toy example for 10 files of the output of the pipeline for the knowledge agent (merging of results in ```nrw/bplan/knowledge_extraction_agent```)
* clean
* ```document_texts.xlsx```: See [here](https://huggingface.co/datasets/DSSGxMunich/document_text) for more information
* ```exact_keyword.xlsx```: This corresponds to baunvo_keywords.csv.
* ```fuzzy_keyword.xlsx```: Is the merged version of the files found in ```nrw/bplan/fuzzy_search```
* ```knowledge_agent.xlsx```: The .xlsx version of ```nrw/bplan/knowledge_agent_output.json```)
* ```land_parcels.xlsx```: See [here](https://huggingface.co/datasets/DSSGxMunich/land_parcels) for more information
* ```regional_plans.xlsx```: The .xlsx version of the data table found [here](https://huggingface.co/datasets/DSSGxMunich/regional_plan_sections)
* rplan
* features: contains ```regional_plan_sections.json```, the output of the pipeline - a more detailed description can be found [here](https://huggingface.co/datasets/DSSGxMunich/regional_plan_sections)
* raw
* geo: contains ```regions_map.geojson``` with information on the geolocations of the regional plans
* pdfs: contains pdfs of regional plans for NRW - used as input to run the pipeline
* text: contains text extracted with Tika from all pdf regional plans
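As a small illustration of how the link files above can be consumed, the sketch below parses a GeoJSON FeatureCollection with Python's standard library. The feature property names (`name`, `scanurl`) are hypothetical placeholders for illustration only, not the actual ```NRW_BP.geojson``` schema:

```python
import json

# Minimal GeoJSON FeatureCollection standing in for NRW_BP.geojson.
# The property names below are hypothetical, not the real schema.
raw = """
{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "geometry": {"type": "Point", "coordinates": [7.0, 51.0]},
      "properties": {"name": "parcel-001", "scanurl": "https://example.org/bplan-001.pdf"}
    }
  ]
}
"""

collection = json.loads(raw)

# Collect (parcel name, document URL) pairs from each feature's properties.
links = [
    (f["properties"].get("name"), f["properties"].get("scanurl"))
    for f in collection["features"]
]
print(links)
```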
|
DSSGxMunich/nrw-bplan-scrape
|
[
"license:mit",
"region:us"
] |
2023-09-22T13:59:53+00:00
|
{"license": "mit"}
|
2023-10-12T06:11:36+00:00
|
[] |
[] |
TAGS
#license-mit #region-us
|
# Dataset Card for nrw-bplan-scrape
## Dataset Description
Homepage: DSSGx Munich organization page.
Repository: GitHub.
### Dataset Summary
This dataset contains all inputs needed as well as outputs of running the full pipeline for creating the NRW land sealing dataset. This can be reproduced by running this notebook.
## Dataset Structure
* nrw
* bplan
* features
* keywords
* exact_search
* : Results y/n of keywords found in documents relating to baunvo and article 13b.
* fuzzy_search:
* : Results of keywords found in documents relating to "hochwasser", e.g. hqhäufig and hq100
* contains 7 csv files with results of fuzzy key search for keywords. The file name indicates the key being searched for and the text around this keyword is extracted in a row for each document
* raw
* images: images from here can be added to this folder
* links:
* : The file downloaded from the NRW geoportal, containing all raw data on URLs to land parcel bplans.
* : A processed version of NRW_BP.geojson.
* : A csv formatted version of NRW_BP.geojson.
* text:
* : Raw output of the text extraction of each pdf. Contains only columns for the filename and the extracted text.
* : Enriched version of bp_texts.json in which columns about the documents have been appended.
* pdfs: pdfs extracted from the NRW Geoportal and found here, can be added to this folder
* knowledge_extraction_agent: Contains 6 json files. The filename corresponds to the key looked for in the fuzzy keyword search (e.g. corresponds to , corresponds to ). More info can be found here
* : Is a toy example for 10 files of the output of the pipeline for the knowledge agent (merging of results in )
* clean
* : See here for more information
* : This corresponds to baunvo_keywords.csv.
* : Is the merged version of the files found in
* : The .xlsx version of )
* : See here for more information
* : The .xlsx version of the data table found here
* rplan
* features: contains , the output of the pipeline - a more detailed description can be found here
* raw
* geo: contains with information on the geolocations of the regional plans
* pdfs: contains pdfs of regional plans for NRW - used as input to run the pipeline
* text: contains text extracted with Tika from all pdf regional plans
|
[
"# Dataset Card for nrw-bplan-scrape",
"## Dataset Description\n \n Homepage: DSSGx Munich organization page. \n \n Repository: GitHub.",
"### Dataset Summary\n\nThis dataset contains all inputs needed as well as outputs of running the full pipeline for creating the NRW land sealing dataset. This can be reproduced by running this notebook.",
"## Dataset Structure\n\n* nrw\n * bplan\n * features\n * keywords\n * exact_search\n * : Results y/n of keywords found in documents relating to baunvo and article 13b.\n * fuzzy_search:\n * : Results of keywords found in documents relating to \"hochwasser\", e.g. hqhäufig and hq100\n * contains 7 csv files with results of fuzzy key search for keywords. The file name indicates the key being searched for and the text around this keyword is extracted in a row for each document\n * raw\n * images: images from here can be added to this folder\n * links:\n * : The file downloaded from the NRV geoportal, containing all raw data on URLs to land parcel bplans. \n * : A processed version of NRW_BP.geojson.\n * : A csv formatted version of NRW_BP.geojson.\n \n * text: \n * : Raw output of the text text extraction of each pdf. Contains only columns for the filename and the extracted text.\n * : Enriched version of bp_texts.json in which columns about the documents have been appended. \n * pdfs: pdfs extarcted from the NRW Geoportal and are found here, can be added to this folder\n * knowledge_extraction_agent: Contains 6 json files. The filename corresponds to the key looked for in the fuzzy keyword search (e.g. cooresponds to , corrresponds to ). 
More unfo can be found here \n * : Is a toy example for 10 files of the output of the pipeline for the knowledge agent (merging of results in )\n * clean\n * : See here for more information\n * : This corresponds to baunvo_keywords.csv.\n * : Is the merged version of the files found in '\n * : The .xlsx version of )\n * : See here for more information\n * : The .xlsx version of the data table found here\n * rplan\n * features: contains , the output of the pipeline - a more detailed can be found here\n * raw\n * geo: contains with information on the geolocations of the regional plans\n * pdfs: contains pdfs of regional plans for NRW - used as input to run the pipeline\n * text: contains text extracted with Tika from all pdf regional plans"
] |
[
"TAGS\n#license-mit #region-us \n",
"# Dataset Card for nrw-bplan-scrape",
"## Dataset Description\n \n Homepage: DSSGx Munich organization page. \n \n Repository: GitHub.",
"### Dataset Summary\n\nThis dataset contains all inputs needed as well as outputs of running the full pipeline for creating the NRW land sealing dataset. This can be reproduced by running this notebook.",
"## Dataset Structure\n\n* nrw\n * bplan\n * features\n * keywords\n * exact_search\n * : Results y/n of keywords found in documents relating to baunvo and article 13b.\n * fuzzy_search:\n * : Results of keywords found in documents relating to \"hochwasser\", e.g. hqhäufig and hq100\n * contains 7 csv files with results of fuzzy key search for keywords. The file name indicates the key being searched for and the text around this keyword is extracted in a row for each document\n * raw\n * images: images from here can be added to this folder\n * links:\n * : The file downloaded from the NRV geoportal, containing all raw data on URLs to land parcel bplans. \n * : A processed version of NRW_BP.geojson.\n * : A csv formatted version of NRW_BP.geojson.\n \n * text: \n * : Raw output of the text text extraction of each pdf. Contains only columns for the filename and the extracted text.\n * : Enriched version of bp_texts.json in which columns about the documents have been appended. \n * pdfs: pdfs extarcted from the NRW Geoportal and are found here, can be added to this folder\n * knowledge_extraction_agent: Contains 6 json files. The filename corresponds to the key looked for in the fuzzy keyword search (e.g. cooresponds to , corrresponds to ). 
More unfo can be found here \n * : Is a toy example for 10 files of the output of the pipeline for the knowledge agent (merging of results in )\n * clean\n * : See here for more information\n * : This corresponds to baunvo_keywords.csv.\n * : Is the merged version of the files found in '\n * : The .xlsx version of )\n * : See here for more information\n * : The .xlsx version of the data table found here\n * rplan\n * features: contains , the output of the pipeline - a more detailed can be found here\n * raw\n * geo: contains with information on the geolocations of the regional plans\n * pdfs: contains pdfs of regional plans for NRW - used as input to run the pipeline\n * text: contains text extracted with Tika from all pdf regional plans"
] |
[
11,
14,
21,
47,
527
] |
[
"passage: TAGS\n#license-mit #region-us \n# Dataset Card for nrw-bplan-scrape## Dataset Description\n \n Homepage: DSSGx Munich organization page. \n \n Repository: GitHub.### Dataset Summary\n\nThis dataset contains all inputs needed as well as outputs of running the full pipeline for creating the NRW land sealing dataset. This can be reproduced by running this notebook."
] |
fa505523f88e6265cfd36b1233a498ed7f40a14f
|
# Dataset Card for "llmjptk3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mmuttharasan/llmjptk3
|
[
"region:us"
] |
2023-09-22T14:00:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 8196, "num_examples": 1}, {"name": "test", "num_bytes": 8196, "num_examples": 1}], "download_size": 5733, "dataset_size": 16392}}
|
2023-09-22T14:00:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llmjptk3"
More Information needed
|
[
"# Dataset Card for \"llmjptk3\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llmjptk3\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llmjptk3\"\n\nMore Information needed"
] |
3903d03bd6e82408d4c6a78a150be7fddfb3fd9a
|
# Dataset Card for "13F_Reports_with_labels"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
jkv53/13F_Reports_with_labels
|
[
"region:us"
] |
2023-09-22T14:01:34+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12642773, "num_examples": 1113}], "download_size": 3334911, "dataset_size": 12642773}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:01:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "13F_Reports_with_labels"
More Information needed
|
[
"# Dataset Card for \"13F_Reports_with_labels\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"13F_Reports_with_labels\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"13F_Reports_with_labels\"\n\nMore Information needed"
] |
4216eb31756fa8f84009bd485faefb184f1b6192
|
# Dataset Card for "d50de234"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/d50de234
|
[
"region:us"
] |
2023-09-22T14:13:30+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 198, "num_examples": 10}], "download_size": 1368, "dataset_size": 198}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:13:31+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "d50de234"
More Information needed
|
[
"# Dataset Card for \"d50de234\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"d50de234\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"d50de234\"\n\nMore Information needed"
] |
3b32a7610dafd96f2f4b5d9ea4c5909830958b2e
|
# Dataset Card for "af730738"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/af730738
|
[
"region:us"
] |
2023-09-22T14:13:33+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 198, "num_examples": 10}], "download_size": 1368, "dataset_size": 198}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:13:33+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "af730738"
More Information needed
|
[
"# Dataset Card for \"af730738\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"af730738\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"af730738\"\n\nMore Information needed"
] |
aaefe450063bebb39010b56159d64c41778e4875
|
# Dataset Card for "03ada2d6"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
result-kand2-sdxl-wuerst-karlo/03ada2d6
|
[
"region:us"
] |
2023-09-22T14:13:35+00:00
|
{"dataset_info": {"features": [{"name": "result", "dtype": "string"}, {"name": "id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 198, "num_examples": 10}], "download_size": 1368, "dataset_size": 198}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:13:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "03ada2d6"
More Information needed
|
[
"# Dataset Card for \"03ada2d6\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"03ada2d6\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"03ada2d6\"\n\nMore Information needed"
] |
fe582c138226afad2f3086becb7b5abd50713a2f
|
# Dataset Card for "qg-article-context-question"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
yawoayite/qg-article-context-question
|
[
"region:us"
] |
2023-09-22T14:16:23+00:00
|
{"dataset_info": {"features": [{"name": "Articles", "dtype": "string"}, {"name": "Contextes", "dtype": "string"}, {"name": "Questions", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 462783, "num_examples": 424}], "download_size": 72683, "dataset_size": 462783}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T14:16:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "qg-article-context-question"
More Information needed
|
[
"# Dataset Card for \"qg-article-context-question\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"qg-article-context-question\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"qg-article-context-question\"\n\nMore Information needed"
] |
f5ee2d16c2538f305f973d844cc0099603bc731f
|

A subset of the [PIPPA](https://huggingface.co/datasets/PygmalionAI/PIPPA) dataset scored with GPT-4 on different personality traits:
- Loquacity
- Assertiveness
- Shyness
- Empathy
- Kindness
- Cruelty
- Arrogance
- Stubbornness
- Humor
- Capriciousness
- Fragility
- Wisdom
- Fidelity
- Bluntness
- Creativity
- Confidence
- Integrity
- Bellicosity
- Patience
And also several meta-attributes:
- Action level
- NSFW
- User engagement
- MBTI type
- Topic
For every attribute there is a textual explanation from ChatGPT.
Prompt:
```
Please act as an impartial judge and evaluate character traits for the role-play conversation below. Be as objective as possible.
You should evaluate the following list of traits:
- loquacity: being very talkative
- assertiveness: being able to stand up for your rights in a calm and positive way
- shyness: being nervous, timid or uncommunicative in the company
- empathy: understanding and sharing the feelings of another
- kindness: being friendly, generous, and considerate
- cruelty: deliberately causing pain or distress
- arrogance: revealing an exaggerated sense of one's importance or abilities
- stubbornness: determination not to change attitude or position on something
- humor: being amusing or comic
- capriciousness: changing mood or behavior suddenly and unexpectedly
- fragility: being easily broken or damaged
- wisdom: having experience, knowledge, and good judgement
- fidelity: faithfulness to a person, cause, or belief, demonstrated by continuing loyalty and support
- bluntness: being very direct and saying what you think without trying to be polite
- creativity: using imagination or original ideas
- confidence: self-assurance arising from an appreciation of one's abilities or qualities
- integrity: being honest and having strong moral principles
- bellicosity: the behavior of someone who wants to fight or start a conflict
- patience: capacity to accept or tolerate delay, problems, or suffering without becoming annoyed or anxious
Do not evaluate user messages, as your goal is to evaluate only character traits.
Assign a four-letter MBTI type code of the character.
Also, rate the following parameters:
- action_level: How many non-verbal actions the character does? If there are zero actions, set this score to the minimal value.
- nsfw: How much sex and erotic content is in the conversation?
- profanity: How much swear words, obscene gestures, and naughty jokes present in the conversation? This score doesn't account for sex and erotic content.
- user_engagement: How attractive is a conversation for a user? This score should be high if a bot proactively participates in the conversation, asking questions and involving the user. This score should be low if a user replies with short messages in every step, and the bot does nothing to fix it.
Also, identify a relevant topic from the list:
- friendship: just chit-chat between two friends
- conflict: users or characters pretend to be in positions of power and use it. It includes mental or physical abuse or actual fighting
- romance_sfw: conversations are about love that never includes explicit content
- romance_nsfw: conversations are about love but contain sexual or erotic content
- other: conversations that do not fall into the above categories
Do not extract any topics that are not from this list.
If the user is not talking with a single character but with a group of characters or with a game master or with some kind of game bot, return empty "traits" and "mbti_type".
Each score is an integer from 1 to 10. If the trait is not presented, set the score to 1. If the trait is over-represented, set the score to 10. Return a JSON with all parameters. For every trait, explain yourself in a separate "explanation" field before outputting the score. Try to include quotes from the conversation in your explanation.
Format:
{
    "traits": {
        "loquacity": {
            "explanation": "...",
            "score": ...
        },
        ...
    },
    "mbti_type": "...",
    "parameters": {
        "action_level": {
            "explanation": "...",
            "score": ...
        },
        ...
    },
    "topic": "..."
}
Conversation:
{% for message in task.messages %}
{{message.role}}: {{message.content}}
{% endfor %}
```
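The `{% for %}` loop at the end of the prompt is a Jinja-style template. A plain-Python sketch of the same rendering step, using a hypothetical two-message conversation (the field names mirror the template variables, not a confirmed schema), might look like:

```python
# Hypothetical conversation; "role" and "content" mirror the template's
# {{message.role}} and {{message.content}} placeholders.
messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "character", "content": "*waves* Hello, traveler."},
]

# Plain-Python equivalent of:
#   {% for message in task.messages %}
#   {{message.role}}: {{message.content}}
#   {% endfor %}
rendered = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
print(rendered)
```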
|
IlyaGusev/pippa_scored
|
[
"task_categories:conversational",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"conversational",
"roleplay",
"region:us"
] |
2023-09-22T14:24:05+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "pretty_name": "PIPPA scored", "tags": ["not-for-all-audiences", "conversational", "roleplay"], "dataset_info": {"features": [{"name": "submission_timestamp", "dtype": "int64"}, {"name": "categories", "sequence": "string"}, {"name": "bot_id", "dtype": "string"}, {"name": "bot_name", "dtype": "string"}, {"name": "bot_greeting", "dtype": "string"}, {"name": "bot_definitions", "dtype": "string"}, {"name": "bot_description", "dtype": "string"}, {"name": "conversation", "list": [{"name": "is_human", "dtype": "bool"}, {"name": "message", "dtype": "string"}]}, {"name": "loquacity_score", "dtype": "float64"}, {"name": "loquacity_explanation", "dtype": "string"}, {"name": "assertiveness_score", "dtype": "float64"}, {"name": "assertiveness_explanation", "dtype": "string"}, {"name": "shyness_score", "dtype": "float64"}, {"name": "shyness_explanation", "dtype": "string"}, {"name": "empathy_score", "dtype": "float64"}, {"name": "empathy_explanation", "dtype": "string"}, {"name": "kindness_score", "dtype": "float64"}, {"name": "kindness_explanation", "dtype": "string"}, {"name": "cruelty_score", "dtype": "float64"}, {"name": "cruelty_explanation", "dtype": "string"}, {"name": "arrogance_score", "dtype": "float64"}, {"name": "arrogance_explanation", "dtype": "string"}, {"name": "stubbornness_score", "dtype": "float64"}, {"name": "stubbornness_explanation", "dtype": "string"}, {"name": "humor_score", "dtype": "float64"}, {"name": "humor_explanation", "dtype": "string"}, {"name": "capriciousness_score", "dtype": "float64"}, {"name": "capriciousness_explanation", "dtype": "string"}, {"name": "fragility_score", "dtype": "float64"}, {"name": "fragility_explanation", "dtype": "string"}, {"name": "wisdom_score", "dtype": "float64"}, {"name": "wisdom_explanation", "dtype": "string"}, {"name": "fidelity_score", "dtype": "float64"}, {"name": "fidelity_explanation", 
"dtype": "string"}, {"name": "bluntness_score", "dtype": "float64"}, {"name": "bluntness_explanation", "dtype": "string"}, {"name": "creativity_score", "dtype": "float64"}, {"name": "creativity_explanation", "dtype": "string"}, {"name": "confidence_score", "dtype": "float64"}, {"name": "confidence_explanation", "dtype": "string"}, {"name": "integrity_score", "dtype": "float64"}, {"name": "integrity_explanation", "dtype": "string"}, {"name": "bellicosity_score", "dtype": "float64"}, {"name": "bellicosity_explanation", "dtype": "string"}, {"name": "patience_score", "dtype": "float64"}, {"name": "patience_explanation", "dtype": "string"}, {"name": "action_level_score", "dtype": "float64"}, {"name": "action_level_explanation", "dtype": "string"}, {"name": "nsfw_score", "dtype": "float64"}, {"name": "nsfw_explanation", "dtype": "string"}, {"name": "profanity_score", "dtype": "float64"}, {"name": "profanity_explanation", "dtype": "string"}, {"name": "user_engagement_score", "dtype": "float64"}, {"name": "user_engagement_explanation", "dtype": "string"}, {"name": "mbti_type", "dtype": "string"}, {"name": "topic", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 31559838, "num_examples": 1960}], "download_size": 16267020, "dataset_size": 31559838}}
|
2023-12-20T20:41:28+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-conversational #size_categories-10K<n<100K #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us
|
!image/png
A subset of the PIPPA dataset scored with GPT-4 on different personality traits:
- Loquacity
- Assertiveness
- Shyness
- Empathy
- Kindness
- Cruelty
- Arrogance
- Stubbornness
- Humor
- Capriciousness
- Fragility
- Wisdom
- Fidelity
- Bluntness
- Creativity
- Confidence
- Integrity
- Bellicosity
- Patience
And also several meta-attributes:
- Action level
- NSFW
- User engagement
- MBTI type
- Topic
For every attribute there is a textual explanation from ChatGPT.
Prompt:
|
[] |
[
"TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us \n"
] |
[
57
] |
[
"passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #region-us \n"
] |
17a5ff00eb7fec01d95a38341571d69561eacc6d
|
# Dataset Card for "tmdb_5000_movies"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
AiresPucrs/tmdb-5000-movies
|
[
"region:us"
] |
2023-09-22T14:51:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "budget", "dtype": "int64"}, {"name": "genres", "dtype": "string"}, {"name": "homepage", "dtype": "string"}, {"name": "keywords", "dtype": "string"}, {"name": "original_language", "dtype": "string"}, {"name": "original_title", "dtype": "string"}, {"name": "overview", "dtype": "string"}, {"name": "popularity", "dtype": "float64"}, {"name": "production_companies", "dtype": "string"}, {"name": "production_countries", "dtype": "string"}, {"name": "release_date", "dtype": "string"}, {"name": "revenue", "dtype": "int64"}, {"name": "runtime", "dtype": "float64"}, {"name": "spoken_languages", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "tagline", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "vote_average", "dtype": "float64"}, {"name": "vote_count", "dtype": "int64"}, {"name": "cast", "dtype": "string"}, {"name": "crew", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 40655819, "num_examples": 4803}], "download_size": 13875246, "dataset_size": 40655819}}
|
2023-09-22T14:51:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "tmdb_5000_movies"
More Information needed
|
[
"# Dataset Card for \"tmdb_5000_movies\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"tmdb_5000_movies\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"tmdb_5000_movies\"\n\nMore Information needed"
] |
309e1c7d14cbe1c7e96651f8f8f30b0f9e4bc4ce
|
# Dataset Card for "wiki-bpe-32k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
cyrilzhang/wiki-bpe-32k
|
[
"region:us"
] |
2023-09-22T14:56:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}], "splits": [{"name": "train", "num_bytes": 21123228700, "num_examples": 5152007}, {"name": "test", "num_bytes": 212326700, "num_examples": 51787}], "download_size": 10331372531, "dataset_size": 21335555400}}
|
2023-09-22T15:02:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wiki-bpe-32k"
More Information needed
|
[
"# Dataset Card for \"wiki-bpe-32k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wiki-bpe-32k\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wiki-bpe-32k\"\n\nMore Information needed"
] |
21d9f4621964923755ea12530ad656178d13ed49
|
# Dataset Card for "textbook_quality_programming"
Synthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.
~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.
Generated using the [textbook_quality](https://github.com/VikParuchuri/textbook_quality) repo.
|
vikp/textbook_quality_programming
|
[
"language:en",
"region:us"
] |
2023-09-22T15:04:56+00:00
|
{"language": ["en"], "dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "outline", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 471931604, "num_examples": 11650}], "download_size": 0, "dataset_size": 471931604}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-08T17:36:50+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #region-us
|
# Dataset Card for "textbook_quality_programming"
Synthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.
~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.
Generated using the textbook_quality repo.
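Since every book carries markdown section headers, chunking can be sketched by splitting on them. The snippet below is a minimal illustration assuming standard `#`-style headers; it is not code from the textbook_quality repo:

```python
import re

# Toy markdown book; real entries live in the dataset's "markdown" column.
book = """# Chapter 1
Intro text.
## Section 1.1
Details here.
# Chapter 2
More text.
"""

# Split immediately before any line starting with 1-6 '#' characters,
# keeping the header attached to its section.
parts = re.split(r"(?m)^(?=#{1,6} )", book)
chunks = [p.strip() for p in parts if p.strip()]
print(len(chunks))  # one chunk per section header
```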
|
[
"# Dataset Card for \"textbook_quality_programming\"\n\nSynthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.\n\n~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.\n\nGenerated using the textbook_quality repo."
] |
[
"TAGS\n#language-English #region-us \n",
"# Dataset Card for \"textbook_quality_programming\"\n\nSynthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.\n\n~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.\n\nGenerated using the textbook_quality repo."
] |
[
10,
151
] |
[
"passage: TAGS\n#language-English #region-us \n# Dataset Card for \"textbook_quality_programming\"\n\nSynthetic programming textbooks generated with GPT-3.5 and retrieval. Very high quality, aimed at being used in a phi replication. Currently 115M tokens. Covers many languages and technologies, with a bias towards python.\n\n~10k of the books (65M tokens) use an older generation method, and average 6k tokens in length. ~1.5k books (50M tokens) use a newer generation method, with a more detailed outline, and average 33k tokens in length. All books have section headers for optimal chunking.\n\nGenerated using the textbook_quality repo."
] |
781e32d3b4487eb806c95237cf9b52ac12cf9f0e
|
# Dataset Card for Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/FabbriSimo01/Cerebras_1.3b_Quantized
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [FabbriSimo01/Cerebras_1.3b_Quantized](https://huggingface.co/FabbriSimo01/Cerebras_1.3b_Quantized) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_FabbriSimo01__Cerebras_1.3b_Quantized",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T16:08:53.530245](https://huggingface.co/datasets/open-llm-leaderboard/details_FabbriSimo01__Cerebras_1.3b_Quantized/blob/main/results_2023-09-22T16-08-53.530245.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335628,
"f1": 0.03707739093959742,
"f1_stderr": 0.0010591502361020477,
"acc": 0.2694565433979606,
"acc_stderr": 0.007855236930515893
},
"harness|drop|3": {
"em": 0.0007340604026845638,
"em_stderr": 0.0002773614457335628,
"f1": 0.03707739093959742,
"f1_stderr": 0.0010591502361020477
},
"harness|gsm8k|5": {
"acc": 0.0037907505686125853,
"acc_stderr": 0.0016927007401502038
},
"harness|winogrande|5": {
"acc": 0.5351223362273086,
"acc_stderr": 0.014017773120881582
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_FabbriSimo01__Cerebras_1.3b_Quantized
|
[
"region:us"
] |
2023-09-22T15:08:57+00:00
|
{"pretty_name": "Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized", "dataset_summary": "Dataset automatically created during the evaluation run of model [FabbriSimo01/Cerebras_1.3b_Quantized](https://huggingface.co/FabbriSimo01/Cerebras_1.3b_Quantized) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FabbriSimo01__Cerebras_1.3b_Quantized\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T16:08:53.530245](https://huggingface.co/datasets/open-llm-leaderboard/details_FabbriSimo01__Cerebras_1.3b_Quantized/blob/main/results_2023-09-22T16-08-53.530245.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335628,\n \"f1\": 0.03707739093959742,\n \"f1_stderr\": 0.0010591502361020477,\n \"acc\": 0.2694565433979606,\n \"acc_stderr\": 0.007855236930515893\n },\n \"harness|drop|3\": {\n \"em\": 0.0007340604026845638,\n \"em_stderr\": 0.0002773614457335628,\n \"f1\": 0.03707739093959742,\n \"f1_stderr\": 0.0010591502361020477\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0037907505686125853,\n \"acc_stderr\": 0.0016927007401502038\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5351223362273086,\n \"acc_stderr\": 0.014017773120881582\n }\n}\n```", "repo_url": "https://huggingface.co/FabbriSimo01/Cerebras_1.3b_Quantized", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T16_08_53.530245", "path": ["**/details_harness|drop|3_2023-09-22T16-08-53.530245.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T16-08-53.530245.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T16_08_53.530245", "path": ["**/details_harness|gsm8k|5_2023-09-22T16-08-53.530245.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T16-08-53.530245.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T16_08_53.530245", "path": ["**/details_harness|winogrande|5_2023-09-22T16-08-53.530245.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T16-08-53.530245.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T16_08_53.530245", "path": ["results_2023-09-22T16-08-53.530245.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T16-08-53.530245.parquet"]}]}]}
|
2023-09-22T15:09:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model FabbriSimo01/Cerebras_1.3b_Quantized on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T16:08:53.530245 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
    "### Dataset Summary\n\nDataset automatically created during the evaluation run of model FabbriSimo01/Cerebras_1.3b_Quantized on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
    "## Latest results\n\nThese are the latest results from run 2023-09-22T16:08:53.530245 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
    "### Dataset Summary\n\nDataset automatically created during the evaluation run of model FabbriSimo01/Cerebras_1.3b_Quantized on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
    "## Latest results\n\nThese are the latest results from run 2023-09-22T16:08:53.530245 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
25,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
    "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of FabbriSimo01/Cerebras_1.3b_Quantized## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model FabbriSimo01/Cerebras_1.3b_Quantized on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T16:08:53.530245 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
333fe4fcbd02866f0aa9b8d1a8a161f27bb2c080
|
# Dataset Card for Bees
## Dataset Summary
The USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (https://dwc.tdwg.org). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301) have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.
## Languages
English
## Data Instances
A typical data point comprises the specimen metadata and image information for a single bumblebee specimen.
An example from the dataset looks as follows:
```json
{
'occurrenceID': 'http://n2t.net/ark:/65665/30042e2d8-669d-4520-b456-e3c64203eff8',
'catalogNumber': 'USNMENT01732649',
'recordedBy': 'R. Craig',
'year': '1949',
'month': '4',
'day': '13',
'country': 'United States',
'stateProvince': 'California',
'county': 'Fresno',
'locality': 'Auberry',
'decimalLatitude': '37.0808',
'decimalLongitude': '-119.485',
'identifiedBy': "O'Brien, L. R.",
'scientificName': 'Xylocopa (Notoxylocopa) tabaniformis orpifex',
'genus': 'Xylocopa',
'subgenus': 'Notoxylocopa',
'specificEpithet': 'tabaniformis',
'infraspecificEpithet': 'orpifex',
'scientificNameAuthorship': 'Smith',
'accessURI': 'https://ids.si.edu/ids/deliveryService?id=NMNH-USNMENT01732649',
'PixelXDimension': 2000,
'PixelYDimension': 1212
}
```
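As a minimal sketch of working with one such record (the sample above treated as a plain dict; the helper names below are illustrative, not part of the Darwin Core standard), the date and taxonomy fields can be combined into more convenient forms:

```python
# The sample record from above, reduced to a subset of its Darwin Core fields.
record = {
    "catalogNumber": "USNMENT01732649",
    "year": "1949", "month": "4", "day": "13",
    "decimalLatitude": "37.0808", "decimalLongitude": "-119.485",
    "genus": "Xylocopa", "specificEpithet": "tabaniformis",
}

def collection_date(rec):
    """Build an ISO-8601 date string from the Darwin Core year/month/day fields."""
    return f"{int(rec['year']):04d}-{int(rec['month']):02d}-{int(rec['day']):02d}"

def binomial(rec):
    """Genus plus specific epithet: the core binomial of the scientific name."""
    return f"{rec['genus']} {rec['specificEpithet']}"

print(collection_date(record))  # 1949-04-13
print(binomial(record))         # Xylocopa tabaniformis
```

Note that, as the limitations section below points out, real records may lack month/day information, so production code should handle incomplete dates rather than assume all three fields are present.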
## Data Fields
Specimen metadata fields conform to the Darwin Core data standard and are detailed here: https://dwc.tdwg.org. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: https://ac.tdwg.org/.
## Curation Rationale
The dataset represents a portion of the U.S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.
## Initial Data Collection and Normalization
Bumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu).
NMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, https://collections.nmnh.si.edu/ipt/). Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.
## Who are the source language producers?
The occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.
## Annotations
The specimen occurrence data in Darwin Core fields.
## Annotation process
The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.
## Who are the annotators?
Original collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.
## Personal and Sensitive Information
The dataset contains the names of the collectors and identifiers.
## Social Impact of Dataset
Digitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change.
The dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola.
Some site/location names could cause harm as they are insensitive or racist towards indigenous communities.
## Discussion of Biases
Estimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons.
The majority of specimens in this dataset originate from North America.
Most specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.
## Other Known Limitations
As with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time.
Species identifications may be inaccurate or not up-to-date based on the latest classification.
Collector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear.
Locality data may use historical place names that are no longer used.
Dates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information).
For specimens collected from Brazil, specimen images are not included in the dataset.
For endangered species, locality data is not included in the dataset.
## Dataset Curators
Smithsonian National Museum of Natural History, Department of Entomology.
Jessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.
## Licensing Information
Public domain, Creative Commons CC0.
## Citation Information
Orrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. https://collections.nmnh.si.edu/ipt/resource?r=nmnh_extant_dwc-a&v=1.72
## Contributions
Thanks to NMNH for adding this dataset.
|
MikeTrizna/bees
|
[
"license:cc0-1.0",
"doi:10.57967/hf/1348",
"region:us"
] |
2023-09-22T15:29:09+00:00
|
{"license": "cc0-1.0", "dataset_info": {"features": [{"name": "occurrenceID", "dtype": "string"}, {"name": "catalogNumber", "dtype": "string"}, {"name": "recordedBy", "dtype": "string"}, {"name": "year", "dtype": "int64"}, {"name": "month", "dtype": "int64"}, {"name": "day", "dtype": "int64"}, {"name": "country", "dtype": "string"}, {"name": "stateProvince", "dtype": "string"}, {"name": "county", "dtype": "string"}, {"name": "locality", "dtype": "string"}, {"name": "decimalLatitude", "dtype": "float64"}, {"name": "decimalLongitude", "dtype": "float64"}, {"name": "identifiedBy", "dtype": "string"}, {"name": "scientificName", "dtype": "string"}, {"name": "genus", "dtype": "string"}, {"name": "subgenus", "dtype": "string"}, {"name": "specificEpithet", "dtype": "string"}, {"name": "infraspecificEpithet", "dtype": "string"}, {"name": "scientificNameAuthorship", "dtype": "string"}, {"name": "PixelXDimension", "dtype": "float64"}, {"name": "PixelYDimension", "dtype": "float64"}, {"name": "accessURI", "dtype": "string"}, {"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 3672202733.82, "num_examples": 73387}], "download_size": 3659907058, "dataset_size": 3672202733.82}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T20:01:28+00:00
|
[] |
[] |
TAGS
#license-cc0-1.0 #doi-10.57967/hf/1348 #region-us
|
# Dataset Card for Bees
## Dataset Summary
The USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301) have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.
## Languages
English
## Data Instances
A typical data point comprises the specimen metadata and image information for a single bumblebee specimen.
An example from the dataset looks as follows:
## Data Fields
Specimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL
## Curation Rationale
The dataset represents a portion of the U.S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.
## Initial Data Collection and Normalization
Bumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu).
NMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL). Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.
## Who are the source language producers?
The occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.
## Annotations
The specimen occurrence data in Darwin Core fields.
## Annotation process
The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.
## Who are the annotators?
Original collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.
## Personal and Sensitive Information
The dataset contains the names of the collectors and identifiers.
## Social Impact of Dataset
Digitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change.
The dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola.
Some site/location names could cause harm as they are insensitive or racist towards indigenous communities.
## Discussion of Biases
Estimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons.
The majority of specimens in this dataset originate from North America.
Most specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.
## Other Known Limitations
As with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time.
Species identifications may be inaccurate or not up-to-date based on the latest classification.
Collector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear.
Locality data may use historical place names that are no longer used.
Dates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information).
For specimens collected from Brazil, specimen images are not included in the dataset.
For endangered species, locality data is not included in the dataset.
## Dataset Curators
Smithsonian National Museum of Natural History, Department of Entomology.
Jessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.
## Licensing Information
Public domain, Creative Commons CC0.
Orrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL
## Contributions
Thanks to NMNH for adding this dataset.
|
[
"# Dataset Card for Bees",
    "## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301) have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.",
"## Languages \n\nEnglish",
    "## Data Instances \n\nA typical data point comprises the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:",
"## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL",
"## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.",
"## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). \n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.",
"## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.",
"## Annotations \n\nThe specimen occurrence data in Darwin Core fields.",
"## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.",
"## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.",
"## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers.",
"## Social Impact of Dataset \n\nDigitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change. \n\nThe dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola. \n\nSome site/location names could cause harm as they are insensitive or racist towards indigenous communities.",
"## Discussion of Biases \n\nEstimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons. \n\nThe majority of specimens in this dataset originate from North America. \n\nMost specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.",
"## Other Known Limitations \n\nAs with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time. \n\nSpecies identifications may be inaccurate or not up-to-date based on the latest classification. \n\nCollector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear. \n\nLocality data may use historical place names that are no longer used. \n\nDates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information). \n\nFor specimens collected from Brazil, specimen images are not included in the dataset. \n\nFor endangered species, locality data is not included in the dataset.",
"## Dataset Curators \n\nSmithsonian National Museum of Natural History, Department of Entomology. \n\nJessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.",
"## Licensing Information \n\nPublic domain, Creative Commons CC0. \n\n \n\nOrrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL",
"## Contributions \n\nThanks to NMNH for adding this dataset."
] |
[
"TAGS\n#license-cc0-1.0 #doi-10.57967/hf/1348 #region-us \n",
"# Dataset Card for Bees",
"## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.",
"## Languages \n\nEnglish",
"## Data Instances \n\nA typical data point comprises of the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:",
"## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL",
"## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.",
"## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). \n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.",
"## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.",
"## Annotations \n\nThe specimen occurrence data in Darwin Core fields.",
"## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.",
"## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.",
"## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers.",
"## Social Impact of Dataset \n\nDigitized natural history collections have the potential to be used in diverse research applications in evolutionary biology, ecology, and climate change. \n\nThe dataset contains records for species listed on the U.S. Endangered Species List: Bombus affinis, Bombus franklini, and Bombus terricola. \n\nSome site/location names could cause harm as they are insensitive or racist towards indigenous communities.",
"## Discussion of Biases \n\nEstimates of species geographic ranges based on these data may not be complete. There are many reasons collectors may collect more frequently from some areas rather than others, including their own taxonomic interests, proximity to collections institutions, accessibility via roads, ability to acquire permits for a specific area, or for geopolitical reasons. \n\nThe majority of specimens in this dataset originate from North America. \n\nMost specimens are expected to be female, because bumblebees are social insects and it is more common to find female bees.",
"## Other Known Limitations \n\nAs with all natural history collections data, there is the potential that some metadata are inaccurate or inconsistent given that they have been collected and recorded over the course of the past 150 years. Smithsonian staff seek to correct these errors as they are identified but the dataset as presented is a snapshot in time. \n\nSpecies identifications may be inaccurate or not up-to-date based on the latest classification. \n\nCollector names may not be consistent across records (e.g. the same person’s name may be written differently). For women’s names, which were often historically recorded as Mrs. <spouse’s name>, only the spouse’s name may appear. \n\nLocality data may use historical place names that are no longer used. \n\nDates may sometimes have been recorded by original collectors inconsistently or may be incomplete (no month/day information). \n\nFor specimens collected from Brazil, specimen images are not included in the dataset. \n\nFor endangered species, locality data is not included in the dataset.",
"## Dataset Curators \n\nSmithsonian National Museum of Natural History, Department of Entomology. \n\nJessica Bird (Data Manager in the Department of Entomology) is the main contact person for the dataset.",
"## Licensing Information \n\nPublic domain, Creative Commons CC0. \n\n \n\nOrrell T, Informatics Office (2023). NMNH Extant Specimen Records (USNM, US). Version 1.72. National Museum of Natural History, Smithsonian Institution. Occurrence dataset. URL",
"## Contributions \n\nThanks to NMNH for adding this dataset."
] |
[
26,
7,
185,
4,
41,
42,
203,
251,
53,
17,
33,
73,
23,
99,
125,
245,
43,
60,
15
] |
[
"passage: TAGS\n#license-cc0-1.0 #doi-10.57967/hf/1348 #region-us \n# Dataset Card for Bees## Dataset Summary \n\nThe USNM Bumblebee Dataset is a natural history dataset containing, for each of 73,497 Bumblebee specimens in the family Apidae, a single image in lateral or dorsal view and a tab-separated value file with occurrence data. Occurrence data includes the species classification, the date and site/location of collection, and other metadata conforming to the Darwin Core data standard (URL). 11,421 specimens are not identified to species and these specimens are included as 'Bombus sp.' or 'Xylocopa sp.' The collecting sites/locations of the majority of specimens (55,301), have been georeferenced. The dataset is worldwide in scope, but is limited to the specimens available in the Smithsonian USNM collection.## Languages \n\nEnglish## Data Instances \n\nA typical data point comprises of the specimen metadata and image information for a single bumblebee specimen.\n\nAn example from the dataset looks as follows:## Data Fields \n\nSpecimen metadata fields conform to the Darwin Core data standard and are detailed here: URL. Image metadata fields conform to the Audiovisual Core data standard and are detailed here: URL",
"passage: ## Curation Rationale \n\nThe dataset represents a portion of the U. S. National Entomological Collection. The U.S. National Entomological Collection (USNM) traces its origins in part to the acquisition of the U.S. Department of Agriculture Collection of 138,000 specimens donated in 1885. These specimens became the foundation of one of the world’s largest and most important accessible entomological collections, with over 33 million specimens taken care of by the combined staff of three government agencies: the Smithsonian Institution; the Systematic Entomology Laboratory (Agricultural Research Service, United States Department of Agriculture); and the Walter Reed Biosystematics Unit (Walter Reed Army Institute of Research). The specimens were imaged in a mass-digitization project in collaboration with the Digitization Program Office. The goal was to digitize every Bombus specimen in the collection.## Initial Data Collection and Normalization \n\nBumblebee specimens were collected over a period of 150 years (earliest specimen dates from 1807, most recent specimen dates from 2020). The specimens were collected by and identified by many different individual researchers over this time. The initial images of about 49,000 specimens were taken in a rapid capture project by a dedicated team in 2014 with additional specimen images (about 25,000) taken in 2018. The labels containing the information on site/location, date of collection, collector, and identifier were removed from the insect pin. The occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields. Following quality control of the transcribed data by NMNH staff, they were imported into the institutional database (EMu). 
\n\nNMNH specimen data get exported to the Global Biodiversity Information Facility (GBIF) on a weekly basis through an installation of an Integrated Publishing Toolkit (IPT, URL). Some data transformation takes place within EMu and GBIF likewise normalizes the data to meet their standards.## Who are the source language producers? \n\nThe occurrence data were produced by humans, observed and written onto paper labels over the museum’s history, and then transcribed from paper labels pinned with the specimens upon collection.## Annotations \n\nThe specimen occurrence data in Darwin Core fields.## Annotation process \n\nThe occurrence data were transcribed from the labels by online volunteers and a professional transcription service into Darwin Core fields.## Who are the annotators? \n\nOriginal collectors and identifiers were entomologists and researchers from the Smithsonian and other institutions. Collectors may not be bumblebee specialists. For data transcription, online volunteers and professional transcription service workers. Demographic data of transcribers is unknown.## Personal and Sensitive Information \n\nThe dataset contains the names of the collectors and identifiers."
] |
e3b4a4ac9c5d8c32557ba3fce253d71a9ef54b79
|
# Dataset Card for "toxic75k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Vaibhav9401/toxic75k
|
[
"region:us"
] |
2023-09-22T15:29:27+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "llama_finetune_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 61395720, "num_examples": 72313}], "download_size": 11452836, "dataset_size": 61395720}}
|
2023-09-22T15:39:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "toxic75k"
More Information needed
|
[
"# Dataset Card for \"toxic75k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"toxic75k\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"toxic75k\"\n\nMore Information needed"
] |
5e28ece3ded1747ea012b58dea33abdbe11166b3
|
# KoCoT-Collection
A translation of the [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) dataset, produced with DeepL.
---
# Original Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** https://github.com/kaistAI/CoT-Collection
- **Repository:** https://github.com/kaistAI/CoT-Collection
- **Paper:** https://arxiv.org/abs/2305.14045
- **Point of Contact:** [email protected]
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
English
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
| name | train |
|-------------------|------:|
|CoT-Collection|1837928|
## Additional Information
### Citation Information
```
@article{kim2023cot,
title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning},
author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon},
journal={arXiv preprint arXiv:2305.14045},
year={2023}
}
```
|
kyujinpy/KoCoT_2000
|
[
"task_categories:text-generation",
"task_categories:text-classification",
"size_categories:1k<n<5k",
"language:en",
"license:cc-by-nc-4.0",
"arxiv:2305.14045",
"region:us"
] |
2023-09-22T15:41:36+00:00
|
{"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["1k<n<5k"], "task_categories": ["text-generation", "text-classification"]}
|
2023-11-03T02:49:40+00:00
|
[
"2305.14045"
] |
[
"en"
] |
TAGS
#task_categories-text-generation #task_categories-text-classification #size_categories-1k<n<5k #language-English #license-cc-by-nc-4.0 #arxiv-2305.14045 #region-us
|
KoCoT-Collection
================
A translation of the kaist-CoT dataset, produced with DeepL.
---
Original Dataset Card for Dataset Name
======================================
Dataset Description
-------------------
* Homepage:URL
* Repository:URL
* Paper:URL
* Point of Contact:sejune@URL
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
### Supported Tasks and Leaderboards
### Languages
English
Dataset Structure
-----------------
### Data Instances
### Data Fields
### Data Splits
Additional Information
----------------------
|
[
"### Dataset Summary\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n\nAdditional Information\n----------------------"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-1k<n<5k #language-English #license-cc-by-nc-4.0 #arxiv-2305.14045 #region-us \n",
"### Dataset Summary\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"### Supported Tasks and Leaderboards",
"### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------",
"### Data Instances",
"### Data Fields",
"### Data Splits\n\n\n\nAdditional Information\n----------------------"
] |
[
64,
32,
10,
12,
6,
5,
12
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #size_categories-1k<n<5k #language-English #license-cc-by-nc-4.0 #arxiv-2305.14045 #region-us \n### Dataset Summary\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.### Supported Tasks and Leaderboards### Languages\n\n\nEnglish\n\n\nDataset Structure\n-----------------### Data Instances### Data Fields### Data Splits\n\n\n\nAdditional Information\n----------------------"
] |
8f43c84fb8faf04515368b0206a4600db762ac68
|
The dataset:
- Consists of patches extracted from the existing dataset available at https://www.epfl.ch/labs/cvlab/data/data-em/.
- Contains patches of size (256, 256).
- Removes any patches with empty masks to ensure quality.
- Has the same license applied as the original dataset.
- Please refer to the license for information on allowed usage.
- If you have any questions or concerns about the dataset, please do not hesitate to contact me.
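
The 256×256 patching with empty-mask filtering described above can be sketched roughly as follows. This is a minimal NumPy illustration, not the actual preprocessing script (which is not published); the function name and toy shapes are assumptions for demonstration only:

```python
import numpy as np

def extract_patches(image, mask, patch_size=256):
    """Cut an image/mask pair into non-overlapping patches and
    drop any patch whose mask is empty (all zeros)."""
    patches = []
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            m = mask[y:y + patch_size, x:x + patch_size]
            if m.any():  # keep only patches with a non-empty mask
                patches.append((image[y:y + patch_size, x:x + patch_size], m))
    return patches

# Toy example: a 512x512 image whose mask is non-empty only in the top-left quadrant,
# so only 1 of the 4 candidate patches survives the filter.
img = np.random.rand(512, 512)
msk = np.zeros((512, 512), dtype=np.uint8)
msk[:256, :256] = 1
kept = extract_patches(img, msk)
```

Non-overlapping tiling keeps the patch count manageable; a real pipeline might instead use a sliding window with overlap at segmentation boundaries.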
|
hasangoni/Electron_microscopy_dataset
|
[
"task_categories:image-segmentation",
"size_categories:10K<n<100K",
"language:en",
"microscopy",
"EPFL",
"image segmentation",
"region:us"
] |
2023-09-22T15:54:18+00:00
|
{"language": ["en"], "size_categories": ["10K<n<100K"], "task_categories": ["image-segmentation"], "pretty_name": "electron microscopy patch image", "tags": ["microscopy", "EPFL", "image segmentation"]}
|
2023-09-25T06:57:56+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-segmentation #size_categories-10K<n<100K #language-English #microscopy #EPFL #image segmentation #region-us
|
The dataset:
- Consists of patches extracted from the existing dataset available at URL
- Contains patches of size (256, 256).
- Removes any patches with empty masks to ensure quality.
- Has the same license applied as the original dataset.
- Please refer to the license for information on allowed usage.
- If you have any questions or concerns about the dataset, please do not hesitate to contact me.
|
[] |
[
"TAGS\n#task_categories-image-segmentation #size_categories-10K<n<100K #language-English #microscopy #EPFL #image segmentation #region-us \n"
] |
[
45
] |
[
"passage: TAGS\n#task_categories-image-segmentation #size_categories-10K<n<100K #language-English #microscopy #EPFL #image segmentation #region-us \n"
] |
8d13f0b3e3ff34632ef95441e4e4635ff21f2042
|
# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-560m-RLHF
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/TheTravellingEngineer/bloom-560m-RLHF
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [TheTravellingEngineer/bloom-560m-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-560m-RLHF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_TheTravellingEngineer__bloom-560m-RLHF",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T17:04:20.598203](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-560m-RLHF/blob/main/results_2023-09-22T17-04-20.598203.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493922,
"f1": 0.0398909395973155,
"f1_stderr": 0.0011867178799463702,
"acc": 0.26710430338450897,
"acc_stderr": 0.007769858100932032
},
"harness|drop|3": {
"em": 0.0028313758389261743,
"em_stderr": 0.0005441551135493922,
"f1": 0.0398909395973155,
"f1_stderr": 0.0011867178799463702
},
"harness|gsm8k|5": {
"acc": 0.003032600454890068,
"acc_stderr": 0.001514573561224551
},
"harness|winogrande|5": {
"acc": 0.5311760063141279,
"acc_stderr": 0.014025142640639513
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_TheTravellingEngineer__bloom-560m-RLHF
|
[
"region:us"
] |
2023-09-22T16:04:24+00:00
|
{"pretty_name": "Evaluation run of TheTravellingEngineer/bloom-560m-RLHF", "dataset_summary": "Dataset automatically created during the evaluation run of model [TheTravellingEngineer/bloom-560m-RLHF](https://huggingface.co/TheTravellingEngineer/bloom-560m-RLHF) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_TheTravellingEngineer__bloom-560m-RLHF\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T17:04:20.598203](https://huggingface.co/datasets/open-llm-leaderboard/details_TheTravellingEngineer__bloom-560m-RLHF/blob/main/results_2023-09-22T17-04-20.598203.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493922,\n \"f1\": 0.0398909395973155,\n \"f1_stderr\": 0.0011867178799463702,\n \"acc\": 0.26710430338450897,\n \"acc_stderr\": 0.007769858100932032\n },\n \"harness|drop|3\": {\n \"em\": 0.0028313758389261743,\n \"em_stderr\": 0.0005441551135493922,\n \"f1\": 0.0398909395973155,\n \"f1_stderr\": 0.0011867178799463702\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.003032600454890068,\n \"acc_stderr\": 0.001514573561224551\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5311760063141279,\n \"acc_stderr\": 0.014025142640639513\n }\n}\n```", "repo_url": "https://huggingface.co/TheTravellingEngineer/bloom-560m-RLHF", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T17_04_20.598203", "path": ["**/details_harness|drop|3_2023-09-22T17-04-20.598203.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T17-04-20.598203.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T17_04_20.598203", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-04-20.598203.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-04-20.598203.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T17_04_20.598203", "path": ["**/details_harness|winogrande|5_2023-09-22T17-04-20.598203.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T17-04-20.598203.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T17_04_20.598203", "path": ["results_2023-09-22T17-04-20.598203.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T17-04-20.598203.parquet"]}]}]}
|
2023-09-22T16:04:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-560m-RLHF
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model TheTravellingEngineer/bloom-560m-RLHF on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T17:04:20.598203 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-560m-RLHF",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/bloom-560m-RLHF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:04:20.598203(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-560m-RLHF",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/bloom-560m-RLHF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:04:20.598203(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of TheTravellingEngineer/bloom-560m-RLHF## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model TheTravellingEngineer/bloom-560m-RLHF on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T17:04:20.598203(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
28d0de31ef55ff50df15d0b382e444cc9f14b310
|
# E621 Rising V3: Preliminary Data
Snapshot metadata from E621.net as of 2023-09-21
|
hearmeneigh/e621-rising-v3-preliminary-data
|
[
"furry",
"anthro",
"nsfw",
"e621",
"not-for-all-audiences",
"region:us"
] |
2023-09-22T16:15:58+00:00
|
{"pretty_name": "E621 Rising V3: Preliminary Data", "viewer": false, "tags": ["furry", "anthro", "nsfw", "e621", "not-for-all-audiences"]}
|
2023-12-20T01:45:38+00:00
|
[] |
[] |
TAGS
#furry #anthro #nsfw #e621 #not-for-all-audiences #region-us
|
# E621 Rising V3: Preliminary Data
Snapshot metadata from URL as of 2023-09-21
|
[
"# E621 Rising V3: Preliminary Data\n\nSnapshot metadata from URL as of 2023-09-21"
] |
[
"TAGS\n#furry #anthro #nsfw #e621 #not-for-all-audiences #region-us \n",
"# E621 Rising V3: Preliminary Data\n\nSnapshot metadata from URL as of 2023-09-21"
] |
[
29,
25
] |
[
"passage: TAGS\n#furry #anthro #nsfw #e621 #not-for-all-audiences #region-us \n# E621 Rising V3: Preliminary Data\n\nSnapshot metadata from URL as of 2023-09-21"
] |
91d7e6c40e1529b702e9a9f528b82bbe3f6801d9
|
# Dataset Card for "oneAPI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
badalsahani/oneAPI
|
[
"region:us"
] |
2023-09-22T16:16:25+00:00
|
{"dataset_info": {"features": [{"name": "Story", "dtype": "string"}, {"name": "Question", "dtype": "string"}, {"name": "span_start", "dtype": "float64"}, {"name": "span_end", "dtype": "float64"}, {"name": "span_text", "dtype": "string"}, {"name": "Answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 111024407, "num_examples": 66611}], "download_size": 66935230, "dataset_size": 111024407}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-09-22T16:16:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "oneAPI"
More Information needed
|
[
"# Dataset Card for \"oneAPI\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"oneAPI\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"oneAPI\"\n\nMore Information needed"
] |
6fc1e99eead79e1be4fc281d34bc3e0dd3dde03d
|
# Dataset Card for "ficbook_raw_best_10k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
dim/ficbook_raw_best_10k
|
[
"region:us"
] |
2023-09-22T16:34:26+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "link", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "tag", "dtype": "string"}, {"name": "likes", "dtype": "int64"}, {"name": "date", "dtype": "string"}, {"name": "review", "dtype": "string"}, {"name": "format", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "rating", "dtype": "string"}, {"name": "status", "dtype": "string"}, {"name": "parts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 91515293.63435334, "num_examples": 10000}], "download_size": 101345356, "dataset_size": 91515293.63435334}}
|
2023-09-22T17:00:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ficbook_raw_best_10k"
More Information needed
|
[
"# Dataset Card for \"ficbook_raw_best_10k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ficbook_raw_best_10k\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ficbook_raw_best_10k\"\n\nMore Information needed"
] |
8b2dbae258badd16bc7ab68f2e1b83eaceba70f3
|
[](https://doi.org/10.48550/arXiv.2301.05768)
# RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods

**Homepage**: https://www.rxrx.ai/rxrx1 \
**Publication Date**: 2019-06 \
**License**: [Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode) \
**Citation**:
```bibtex
@misc{sypetkowski2023rxrx1,
title = {RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods},
author = {Maciej Sypetkowski and Morteza Rezanejad and Saber Saberian and Oren Kraus and John Urbanik and James Taylor and Ben Mabey and Mason Victors and Jason Yosinski and Alborz Rezazadeh Sereshkeh and Imran Haque and Berton Earnshaw},
year = {2023},
eprint = {2301.05768},
archiveprefix = {arXiv},
primaryclass = {cs.CV}
}
```
## Description
High-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches.
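One naive baseline against which correction methods are often compared is simply standardizing features within each experimental batch. The toy sketch below simulates batch offsets and removes them this way (purely illustrative, stdlib only — this is not the paper's method):

```python
import random
import statistics

random.seed(0)
# Toy stand-in for per-image features: three experimental batches,
# each shifted by a different constant offset (a simulated batch effect).
batches = [b for b in (0, 1, 2) for _ in range(100)]
features = [random.gauss(0.0, 1.0) + 5.0 * b for b in batches]

def per_batch_standardize(values, batch_ids):
    """Center and scale features within each batch (a naive correction baseline)."""
    out = [0.0] * len(values)
    for b in set(batch_ids):
        idx = [i for i, bid in enumerate(batch_ids) if bid == b]
        mu = statistics.fmean(values[i] for i in idx)
        sd = statistics.pstdev([values[i] for i in idx]) or 1.0
        for i in idx:
            out[i] = (values[i] - mu) / sd
    return out

corrected = per_batch_standardize(features, batches)
# After correction, each batch is centered at 0, so the constant offsets are gone.
```

Real correction methods evaluated on RxRx1 operate on learned image embeddings rather than raw scalars, but the within-batch normalization idea is the same.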
|
1aurent/RxRx1
|
[
"task_categories:image-classification",
"size_categories:100K<n<1M",
"license:cc-by-4.0",
"biology",
"drug",
"cells",
"arxiv:2301.05768",
"region:us"
] |
2023-09-22T16:34:29+00:00
|
{"license": "cc-by-4.0", "size_categories": ["100K<n<1M"], "task_categories": ["image-classification"], "tags": ["biology", "drug", "cells"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": {"array3_d": {"dtype": "uint8", "shape": [512, 512, 6]}}}, {"name": "site_id", "dtype": "string"}, {"name": "well_id", "dtype": "string"}, {"name": "cell_type", "dtype": "string"}, {"name": "experiment", "dtype": "string"}, {"name": "plate", "dtype": "int32"}, {"name": "well", "dtype": "string"}, {"name": "site", "dtype": "int32"}, {"name": "well_type", "dtype": {"class_label": {"names": {"0": "treatment", "1": "positive_control", "2": "negative_control"}}}}, {"name": "sirna", "dtype": "string"}, {"name": "sirna_id", "dtype": "int32"}, {"name": "embeddings", "sequence": "float32", "length": 128}], "splits": [{"name": "train", "num_bytes": 213139738276, "num_examples": 81224}, {"name": "test", "num_bytes": 116210798412, "num_examples": 44286}], "dataset_size": 329350536688}}
|
2023-10-29T11:57:49+00:00
|
[
"2301.05768"
] |
[] |
TAGS
#task_categories-image-classification #size_categories-100K<n<1M #license-cc-by-4.0 #biology #drug #cells #arxiv-2301.05768 #region-us
|
 \
Citation:
## Description
High-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches.
|
[
"# RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods\n\n \\\nCitation:",
"## Description\n\nHigh-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches."
] |
[
"TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #license-cc-by-4.0 #biology #drug #cells #arxiv-2301.05768 #region-us \n",
"# RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods\n\n \\\nCitation:",
"## Description\n\nHigh-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches."
] |
[
55,
64,
245
] |
[
"passage: TAGS\n#task_categories-image-classification #size_categories-100K<n<1M #license-cc-by-4.0 #biology #drug #cells #arxiv-2301.05768 #region-us \n# RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods\n\n \\\nCitation:## Description\n\nHigh-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches."
] |
5ac9b939e14b97c8429013df593ab8f762fa0d56
|
# Dataset Card for turkish-nlp-suite/vitamins-supplements-reviews
<img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/supplements.png" width="20%" height="20%">
### Dataset Description
- **Repository:** [Vitamins and Supplements Reviews Dataset](https://github.com/turkish-nlp-suite/Vitamins-Supplements-Reviews)
- **Paper:** [ACL link](https://aclanthology.org/2023.acl-long.768/)
- **Dataset:** Vitamins and Supplements Reviews Dataset
- **Domain:** E-commerce, customer reviews
### Dataset Summary
Turkish sentiment analysis dataset of customer reviews about supplement and vitamin products. The dataset is scraped from Vitaminler.com and contains
customer reviews and star ratings for vitamin and supplement products.
Each customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness,
side effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews
also include pointers to customers’ health history and indications of how the supplements helped in resolving customers’ health problems.
Considering the characteristics of the data, our Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope
to offer a finely compiled medical NLP dataset for Turkish NLU.
### Dataset Instances
The dataset includes 1,052 products of 262 distinct brands with 244K customer reviews. During the compilation, we eliminated reviews containing person names, such as customers' names and influencer names.
Each dataset instance contains
- product name
- brand name
- customer review text
- star rating
Here's an example for you:
```
{
"product_name": "Microfer Şurup 250 ml",
"brand": "Ocean",
"review": "Bittikçe alıyorum harika bişey kızım tadını da seviyo",
"star": 5
}
```
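For the sentiment-classification task, star ratings are commonly binned into coarse sentiment labels. A minimal sketch of that mapping (the thresholds below are illustrative assumptions, not part of the dataset card):

```python
def star_to_sentiment(star: int) -> str:
    """Map a 1-5 star rating to a coarse sentiment label (illustrative thresholds)."""
    if star >= 4:
        return "positive"
    if star <= 2:
        return "negative"
    return "neutral"

example = {
    "product_name": "Microfer Şurup 250 ml",
    "brand": "Ocean",
    "star": 5,
}
label = star_to_sentiment(example["star"])
print(label)  # positive
```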
If you're rather interested in JSON format where reviews are accumulated by product name, you can find the dataset as a single JSON in dataset's [Github repo](https://github.com/turkish-nlp-suite/Vitamins-Supplements-Reviews).
### Data Split
| name |train|validation|test|
|---------|----:|---:|---:|
|Vitamins and Supplements Reviews|200866|20000|20000|
### Citation
This work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu"/ "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite [A Diverse Set of Freely Available Linguistic Resources for Turkish](https://aclanthology.org/2023.acl-long.768/) :
```
@inproceedings{altinok-2023-diverse,
title = "A Diverse Set of Freely Available Linguistic Resources for {T}urkish",
author = "Altinok, Duygu",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.768",
pages = "13739--13750",
abstract = "This study presents a diverse set of freely available linguistic resources for Turkish natural language processing, including corpora, pretrained models and education material. Although Turkish is spoken by a sizeable population of over 80 million people, Turkish linguistic resources for natural language processing remain scarce. In this study, we provide corpora to allow practitioners to build their own applications and pretrained models that would assist industry researchers in creating quick prototypes. The provided corpora include named entity recognition datasets of diverse genres, including Wikipedia articles and supplement products customer reviews. In addition, crawling e-commerce and movie reviews websites, we compiled several sentiment analysis datasets of different genres. Our linguistic resources for Turkish also include pretrained spaCy language models. To the best of our knowledge, our models are the first spaCy models trained for the Turkish language. Finally, we provide various types of education material, such as video tutorials and code examples, that can support the interested audience on practicing Turkish NLP. The advantages of our linguistic resources are three-fold: they are freely available, they are first of their kind, and they are easy to use in a broad range of implementations. Along with a thorough description of the resource creation process, we also explain the position of our resources in the Turkish NLP world.",
}
```
|
turkish-nlp-suite/vitamins-supplements-reviews
|
[
"task_categories:text-classification",
"task_ids:sentiment-classification",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"language:tr",
"license:cc-by-sa-4.0",
"region:us"
] |
2023-09-22T16:37:53+00:00
|
{"language": ["tr"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "Vitamins and Supplements Customer Reviews Dataset"}
|
2023-09-23T17:34:47+00:00
|
[] |
[
"tr"
] |
TAGS
#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-100K<n<1M #language-Turkish #license-cc-by-sa-4.0 #region-us
|
Dataset Card for turkish-nlp-suite/vitamins-supplements-reviews
===============================================================
<img src="URL width="20%" height="20%">
### Dataset Description
* Repository: Vitamins and Supplements Reviews Dataset
* Paper: ACL link
* Dataset: Vitamins and Supplements Reviews Dataset
* Domain: E-commerce, customer reviews
### Dataset Summary
Turkish sentiment analysis dataset from customer reviews about supplement and vitamin products. The dataset is scraped from URL and contains
customer reviews and star rating about vitamin and supplement products.
Each customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness,
side effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews
also include pointers to customers’ health history and indications how the supplements helped in resolving customers’ health problems.
Considering the characteristics of the data, our Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope
to offer a finely compiled medical NLP dataset for Turkish NLU.
### Dataset Instances
The dataset includes 1,052 products of 262 distinct brands with 244K customer reviews. During the compilation, we eliminated reviews containing person names such as customer's name and influencer names
Each dataset instance contains
* product name
* brand name
* customer review text
* star rating
Here's an example for you:
If you're rather interested in JSON format where reviews are accumulated by product name, you can find the dataset as a single JSON in dataset's Github repo.
### Data Split
This work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, "Turkish NLP with Duygu"/ "Duygu'yla Türkçe NLP". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :
|
[
"### Dataset Description\n\n\n* Repository: Vitamins and Supplements Reviews Dataset\n* Paper: ACL link\n* Dataset: Vitamins and Supplements Reviews Dataset\n* Domain: E-commerce, customer reviews",
"### Dataset Summary\n\n\nTurkish sentiment analysis dataset from customer reviews about supplement and vitamin products. The dataset is scraped from URL and contains\ncustomer reviews and star rating about vitamin and supplement products.\n\n\nEach customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness,\nside effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews\nalso include pointers to customers’ health history and indications how the supplements helped in resolving customers’ health problems.\n\n\nConsidering the characteristics of the data, our Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope\nto offer a finely compiled medical NLP dataset for Turkish NLU.",
"### Dataset Instances\n\n\nThe dataset includes 1,052 products of 262 distinct brands with 244K customer reviews. During the compilation, we eliminated reviews containing person names such as customer's name and influencer names\nEach dataset instance contains\n\n\n* product name\n* brand name\n* customer review text\n* star rating\n\n\nHere's an example for you:\n\n\nIf you're rather interested in JSON format where reviews are accumulated by product name, you can find the dataset as a single JSON in dataset's Github repo.",
"### Data Split\n\n\n\nThis work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, \"Turkish NLP with Duygu\"/ \"Duygu'yla Türkçe NLP\". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :"
] |
[
"TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-100K<n<1M #language-Turkish #license-cc-by-sa-4.0 #region-us \n",
"### Dataset Description\n\n\n* Repository: Vitamins and Supplements Reviews Dataset\n* Paper: ACL link\n* Dataset: Vitamins and Supplements Reviews Dataset\n* Domain: E-commerce, customer reviews",
"### Dataset Summary\n\n\nTurkish sentiment analysis dataset from customer reviews about supplement and vitamin products. The dataset is scraped from URL and contains\ncustomer reviews and star rating about vitamin and supplement products.\n\n\nEach customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness,\nside effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews\nalso include pointers to customers’ health history and indications how the supplements helped in resolving customers’ health problems.\n\n\nConsidering the characteristics of the data, our Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope\nto offer a finely compiled medical NLP dataset for Turkish NLU.",
"### Dataset Instances\n\n\nThe dataset includes 1,052 products of 262 distinct brands with 244K customer reviews. During the compilation, we eliminated reviews containing person names such as customer's name and influencer names\nEach dataset instance contains\n\n\n* product name\n* brand name\n* customer review text\n* star rating\n\n\nHere's an example for you:\n\n\nIf you're rather interested in JSON format where reviews are accumulated by product name, you can find the dataset as a single JSON in dataset's Github repo.",
"### Data Split\n\n\n\nThis work is supported by Google Developer Experts Program. Part of Duygu 2022 Fall-Winter collection, \"Turkish NLP with Duygu\"/ \"Duygu'yla Türkçe NLP\". All rights reserved. If you'd like to use this dataset in your own work, please kindly cite A Diverse Set of Freely Available Linguistic Resources for Turkish :"
] |
[
65,
47,
198,
117,
88
] |
[
"passage: TAGS\n#task_categories-text-classification #task_ids-sentiment-classification #multilinguality-monolingual #size_categories-100K<n<1M #language-Turkish #license-cc-by-sa-4.0 #region-us \n### Dataset Description\n\n\n* Repository: Vitamins and Supplements Reviews Dataset\n* Paper: ACL link\n* Dataset: Vitamins and Supplements Reviews Dataset\n* Domain: E-commerce, customer reviews### Dataset Summary\n\n\nTurkish sentiment analysis dataset from customer reviews about supplement and vitamin products. The dataset is scraped from URL and contains\ncustomer reviews and star rating about vitamin and supplement products.\n\n\nEach customer review in the Vitamins and Supplements Reviews Dataset describes a customer’s experience with a supplement product in terms of the product’s effectiveness,\nside effects, taste and smell, as well as comments on supplement usage frequency and dosage, active ingredients, brand, and similar products by other brands. The reviews\nalso include pointers to customers’ health history and indications how the supplements helped in resolving customers’ health problems.\n\n\nConsidering the characteristics of the data, our Vitamins and Supplements Reviews Dataset lies at the intersection of customer review data and healthcare NLP data. We hope\nto offer a finely compiled medical NLP dataset for Turkish NLU.### Dataset Instances\n\n\nThe dataset includes 1,052 products of 262 distinct brands with 244K customer reviews. During the compilation, we eliminated reviews containing person names such as customer's name and influencer names\nEach dataset instance contains\n\n\n* product name\n* brand name\n* customer review text\n* star rating\n\n\nHere's an example for you:\n\n\nIf you're rather interested in JSON format where reviews are accumulated by product name, you can find the dataset as a single JSON in dataset's Github repo."
] |
9b23b61fcc8e4d5b4c59c7c0de115cb1a0d1b307
|
# Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [bhenrym14/airophin-v2-13b-PI-8k-fp16](https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T17:43:10.494860](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16/blob/main/results_2023-09-22T17-43-10.494860.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0921770134228188,
"em_stderr": 0.00296245358879876,
"f1": 0.2086210151006714,
"f1_stderr": 0.0033790655527750446,
"acc": 0.4199589150853921,
"acc_stderr": 0.009541015115774397
},
"harness|drop|3": {
"em": 0.0921770134228188,
"em_stderr": 0.00296245358879876,
"f1": 0.2086210151006714,
"f1_stderr": 0.0033790655527750446
},
"harness|gsm8k|5": {
"acc": 0.07354056103108415,
"acc_stderr": 0.007189835754365268
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183525
}
}
```
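The aggregated "all" block above can be cross-checked offline against the per-task entries. This is a quick sanity check, not part of the evaluation harness; the averaging rule ("acc" as the plain mean over the accuracy-scored tasks, "em"/"f1" coming from the single DROP task) is inferred from the numbers themselves rather than documented on the card:

```python
# Values copied from the results JSON above.
results = {
    "all": {"em": 0.0921770134228188, "f1": 0.2086210151006714,
            "acc": 0.4199589150853921},
    "harness|drop|3": {"em": 0.0921770134228188, "f1": 0.2086210151006714},
    "harness|gsm8k|5": {"acc": 0.07354056103108415},
    "harness|winogrande|5": {"acc": 0.7663772691397001},
}

# "em" and "f1" are reported by the single DROP task, so they match exactly.
assert results["all"]["em"] == results["harness|drop|3"]["em"]
assert results["all"]["f1"] == results["harness|drop|3"]["f1"]

# "acc" is the unweighted mean of the two accuracy-scored tasks.
mean_acc = (results["harness|gsm8k|5"]["acc"]
            + results["harness|winogrande|5"]["acc"]) / 2
assert abs(mean_acc - results["all"]["acc"]) < 1e-9
print("aggregates consistent")
```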
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16
|
[
"region:us"
] |
2023-09-22T16:43:14+00:00
|
{"pretty_name": "Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16", "dataset_summary": "Dataset automatically created during the evaluation run of model [bhenrym14/airophin-v2-13b-PI-8k-fp16](https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T17:43:10.494860](https://huggingface.co/datasets/open-llm-leaderboard/details_bhenrym14__airophin-v2-13b-PI-8k-fp16/blob/main/results_2023-09-22T17-43-10.494860.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0921770134228188,\n \"em_stderr\": 0.00296245358879876,\n \"f1\": 0.2086210151006714,\n \"f1_stderr\": 0.0033790655527750446,\n \"acc\": 0.4199589150853921,\n \"acc_stderr\": 0.009541015115774397\n },\n \"harness|drop|3\": {\n \"em\": 0.0921770134228188,\n \"em_stderr\": 0.00296245358879876,\n \"f1\": 0.2086210151006714,\n \"f1_stderr\": 0.0033790655527750446\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07354056103108415,\n \"acc_stderr\": 0.007189835754365268\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183525\n }\n}\n```", "repo_url": "https://huggingface.co/bhenrym14/airophin-v2-13b-PI-8k-fp16", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T17_43_10.494860", "path": ["**/details_harness|drop|3_2023-09-22T17-43-10.494860.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T17-43-10.494860.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T17_43_10.494860", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-43-10.494860.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-43-10.494860.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T17_43_10.494860", "path": ["**/details_harness|winogrande|5_2023-09-22T17-43-10.494860.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T17-43-10.494860.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T17_43_10.494860", "path": ["results_2023-09-22T17-43-10.494860.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T17-43-10.494860.parquet"]}]}]}
|
2023-09-22T16:43:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model bhenrym14/airophin-v2-13b-PI-8k-fp16 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T17:43:10.494860 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airophin-v2-13b-PI-8k-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:43:10.494860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airophin-v2-13b-PI-8k-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:43:10.494860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
31,
31,
179,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bhenrym14/airophin-v2-13b-PI-8k-fp16## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model bhenrym14/airophin-v2-13b-PI-8k-fp16 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T17:43:10.494860(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
940ad911138b29c009054107481c5f7820d30b50
|
# Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/FelixChao/vicuna-7B-chemical
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [FelixChao/vicuna-7B-chemical](https://huggingface.co/FelixChao/vicuna-7B-chemical) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T17:46:17.694402](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical/blob/main/results_2023-09-22T17-46-17.694402.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.005557885906040268,
"em_stderr": 0.0007613497667018453,
"f1": 0.06261220637583904,
"f1_stderr": 0.0014974766629904516,
"acc": 0.3525119781135765,
"acc_stderr": 0.009072291049445833
},
"harness|drop|3": {
"em": 0.005557885906040268,
"em_stderr": 0.0007613497667018453,
"f1": 0.06261220637583904,
"f1_stderr": 0.0014974766629904516
},
"harness|gsm8k|5": {
"acc": 0.03335860500379075,
"acc_stderr": 0.004946282649173776
},
"harness|winogrande|5": {
"acc": 0.6716653512233622,
"acc_stderr": 0.013198299449717888
}
}
```
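As with the other cards, the aggregated "all" block can be verified offline from the per-task scores. This is a hypothetical sanity check, not part of the evaluation harness; the averaging rule is inferred from the numbers above:

```python
# Values copied from the results JSON above.
results = {
    "all": {"em": 0.005557885906040268, "f1": 0.06261220637583904,
            "acc": 0.3525119781135765},
    "harness|drop|3": {"em": 0.005557885906040268, "f1": 0.06261220637583904},
    "harness|gsm8k|5": {"acc": 0.03335860500379075},
    "harness|winogrande|5": {"acc": 0.6716653512233622},
}

# "em" and "f1" come from the single DROP task, so they match exactly.
assert results["all"]["em"] == results["harness|drop|3"]["em"]
assert results["all"]["f1"] == results["harness|drop|3"]["f1"]

# "acc" is the unweighted mean of the two accuracy-scored tasks.
mean_acc = (results["harness|gsm8k|5"]["acc"]
            + results["harness|winogrande|5"]["acc"]) / 2
assert abs(mean_acc - results["all"]["acc"]) < 1e-9
print("aggregates consistent")
```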
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical
|
[
"region:us"
] |
2023-09-22T16:46:21+00:00
|
{"pretty_name": "Evaluation run of FelixChao/vicuna-7B-chemical", "dataset_summary": "Dataset automatically created during the evaluation run of model [FelixChao/vicuna-7B-chemical](https://huggingface.co/FelixChao/vicuna-7B-chemical) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T17:46:17.694402](https://huggingface.co/datasets/open-llm-leaderboard/details_FelixChao__vicuna-7B-chemical/blob/main/results_2023-09-22T17-46-17.694402.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.005557885906040268,\n \"em_stderr\": 0.0007613497667018453,\n \"f1\": 0.06261220637583904,\n \"f1_stderr\": 0.0014974766629904516,\n \"acc\": 0.3525119781135765,\n \"acc_stderr\": 0.009072291049445833\n },\n \"harness|drop|3\": {\n \"em\": 0.005557885906040268,\n \"em_stderr\": 0.0007613497667018453,\n \"f1\": 0.06261220637583904,\n \"f1_stderr\": 0.0014974766629904516\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.03335860500379075,\n \"acc_stderr\": 0.004946282649173776\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6716653512233622,\n \"acc_stderr\": 0.013198299449717888\n }\n}\n```", "repo_url": "https://huggingface.co/FelixChao/vicuna-7B-chemical", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T17_46_17.694402", "path": ["**/details_harness|drop|3_2023-09-22T17-46-17.694402.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T17-46-17.694402.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T17_46_17.694402", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-46-17.694402.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-46-17.694402.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T17_46_17.694402", "path": ["**/details_harness|winogrande|5_2023-09-22T17-46-17.694402.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T17-46-17.694402.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T17_46_17.694402", "path": ["results_2023-09-22T17-46-17.694402.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T17-46-17.694402.parquet"]}]}]}
|
2023-09-22T16:46:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model FelixChao/vicuna-7B-chemical on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T17:46:17.694402 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model FelixChao/vicuna-7B-chemical on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:46:17.694402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model FelixChao/vicuna-7B-chemical on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:46:17.694402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of FelixChao/vicuna-7B-chemical## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model FelixChao/vicuna-7B-chemical on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T17:46:17.694402(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2aa165445ec6157455e51934c46ab53d21b44ffb
|
# Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/baichuan-inc/Baichuan-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_baichuan-inc__Baichuan-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-09-22T17:53:01.811068](https://huggingface.co/datasets/open-llm-leaderboard/details_baichuan-inc__Baichuan-7B/blob/main/results_2023-09-22T17-53-01.811068.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652192515,
"f1": 0.05030096476510072,
"f1_stderr": 0.001249643333921536,
"acc": 0.34788528775895733,
"acc_stderr": 0.00889327304403644
},
"harness|drop|3": {
"em": 0.0010486577181208054,
"em_stderr": 0.00033145814652192515,
"f1": 0.05030096476510072,
"f1_stderr": 0.001249643333921536
},
"harness|gsm8k|5": {
"acc": 0.028051554207733132,
"acc_stderr": 0.004548229533836359
},
"harness|winogrande|5": {
"acc": 0.6677190213101816,
"acc_stderr": 0.013238316554236521
}
}
```
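The same offline cross-check applies to this card's aggregated "all" block. Again, this is a sanity-check sketch rather than harness code, with the averaging rule inferred from the numbers above:

```python
# Values copied from the results JSON above.
results = {
    "all": {"em": 0.0010486577181208054, "f1": 0.05030096476510072,
            "acc": 0.34788528775895733},
    "harness|drop|3": {"em": 0.0010486577181208054, "f1": 0.05030096476510072},
    "harness|gsm8k|5": {"acc": 0.028051554207733132},
    "harness|winogrande|5": {"acc": 0.6677190213101816},
}

# "em" and "f1" come from the single DROP task, so they match exactly.
assert results["all"]["em"] == results["harness|drop|3"]["em"]
assert results["all"]["f1"] == results["harness|drop|3"]["f1"]

# "acc" is the unweighted mean of the two accuracy-scored tasks.
mean_acc = (results["harness|gsm8k|5"]["acc"]
            + results["harness|winogrande|5"]["acc"]) / 2
assert abs(mean_acc - results["all"]["acc"]) < 1e-9
print("aggregates consistent")
```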
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_baichuan-inc__Baichuan-7B
|
[
"region:us"
] |
2023-09-22T16:53:06+00:00
|
{"pretty_name": "Evaluation run of baichuan-inc/Baichuan-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [baichuan-inc/Baichuan-7B](https://huggingface.co/baichuan-inc/Baichuan-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_baichuan-inc__Baichuan-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-09-22T17:53:01.811068](https://huggingface.co/datasets/open-llm-leaderboard/details_baichuan-inc__Baichuan-7B/blob/main/results_2023-09-22T17-53-01.811068.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652192515,\n \"f1\": 0.05030096476510072,\n \"f1_stderr\": 0.001249643333921536,\n \"acc\": 0.34788528775895733,\n \"acc_stderr\": 0.00889327304403644\n },\n \"harness|drop|3\": {\n \"em\": 0.0010486577181208054,\n \"em_stderr\": 0.00033145814652192515,\n \"f1\": 0.05030096476510072,\n \"f1_stderr\": 0.001249643333921536\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.028051554207733132,\n \"acc_stderr\": 0.004548229533836359\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6677190213101816,\n \"acc_stderr\": 0.013238316554236521\n }\n}\n```", "repo_url": "https://huggingface.co/baichuan-inc/Baichuan-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_09_22T17_53_01.811068", "path": ["**/details_harness|drop|3_2023-09-22T17-53-01.811068.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-09-22T17-53-01.811068.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_09_22T17_53_01.811068", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-53-01.811068.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-09-22T17-53-01.811068.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_09_22T17_53_01.811068", "path": ["**/details_harness|winogrande|5_2023-09-22T17-53-01.811068.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-09-22T17-53-01.811068.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_09_22T17_53_01.811068", "path": ["results_2023-09-22T17-53-01.811068.parquet"]}, {"split": "latest", "path": ["results_2023-09-22T17-53-01.811068.parquet"]}]}]}
|
2023-09-22T16:53:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model baichuan-inc/Baichuan-7B on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-09-22T17:53:01.811068 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model baichuan-inc/Baichuan-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:53:01.811068(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model baichuan-inc/Baichuan-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-09-22T17:53:01.811068(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of baichuan-inc/Baichuan-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model baichuan-inc/Baichuan-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-09-22T17:53:01.811068(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
ac9a277e3900fa4e0d7c358c8eaa4831f2778764
|
# Dataset Card for "Leadership"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
gsl22/Leadership
|
[
"region:us"
] |
2023-09-22T17:02:07+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 980041, "num_examples": 4400}], "download_size": 396782, "dataset_size": 980041}}
|
2023-09-22T17:02:09+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Leadership"
More Information needed
|
[
"# Dataset Card for \"Leadership\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Leadership\"\n\nMore Information needed"
] |
[
6,
13
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Leadership\"\n\nMore Information needed"
] |
0a916439633ecb27b9a7b9714b3fd79828bfb881
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_random"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
kewu93/three_styles_prompted_250_512x512_50perclass_random
|
[
"region:us"
] |
2023-09-22T17:04:11+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "style_class", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4334193.0, "num_examples": 150}, {"name": "val", "num_bytes": 4317601.0, "num_examples": 150}], "download_size": 8183790, "dataset_size": 8651794.0}}
|
2023-09-22T17:04:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "three_styles_prompted_250_512x512_50perclass_random"
More Information needed
|
[
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_random\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_random\"\n\nMore Information needed"
] |
[
6,
35
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"three_styles_prompted_250_512x512_50perclass_random\"\n\nMore Information needed"
] |
a8c90612721be35bc9c79e0c9616dfc12e4a3cf2
|
Papers on consciousness extracted from PDF format, from Cognition and Consciousness, arXiv and JCER.
|
gollark/consciousness
|
[
"region:us"
] |
2023-09-22T17:14:33+00:00
|
{}
|
2023-09-22T17:18:51+00:00
|
[] |
[] |
TAGS
#region-us
|
Papers on consciousness extracted from PDF format, from Cognition and Consciousness, arXiv and JCER.
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
c895dfcf93001238ff4279d940e14f998e677358
|
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-70B-v1](https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-09T10:10:41.822023](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1_public/blob/main/results_2023-11-09T10-10-41.822023.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3132340604026846,
"em_stderr": 0.004749834913438157,
"f1": 0.456531040268459,
"f1_stderr": 0.004364621394991152,
"acc": 0.5835410217852969,
"acc_stderr": 0.01171539602098445
},
"harness|drop|3": {
"em": 0.3132340604026846,
"em_stderr": 0.004749834913438157,
"f1": 0.456531040268459,
"f1_stderr": 0.004364621394991152
},
"harness|gsm8k|5": {
"acc": 0.332827899924185,
"acc_stderr": 0.012979892496598271
},
"harness|winogrande|5": {
"acc": 0.8342541436464088,
"acc_stderr": 0.010450899545370628
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1
|
[
"region:us"
] |
2023-09-22T17:16:15+00:00
|
{"pretty_name": "Evaluation run of AIDC-ai-business/Marcoroni-70B-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [AIDC-ai-business/Marcoroni-70B-v1](https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-09T10:10:41.822023](https://huggingface.co/datasets/open-llm-leaderboard/details_AIDC-ai-business__Marcoroni-70B-v1_public/blob/main/results_2023-11-09T10-10-41.822023.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. 
You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3132340604026846,\n \"em_stderr\": 0.004749834913438157,\n \"f1\": 0.456531040268459,\n \"f1_stderr\": 0.004364621394991152,\n \"acc\": 0.5835410217852969,\n \"acc_stderr\": 0.01171539602098445\n },\n \"harness|drop|3\": {\n \"em\": 0.3132340604026846,\n \"em_stderr\": 0.004749834913438157,\n \"f1\": 0.456531040268459,\n \"f1_stderr\": 0.004364621394991152\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.332827899924185,\n \"acc_stderr\": 0.012979892496598271\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8342541436464088,\n \"acc_stderr\": 0.010450899545370628\n }\n}\n```", "repo_url": "https://huggingface.co/AIDC-ai-business/Marcoroni-70B-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_09T10_10_41.822023", "path": ["**/details_harness|drop|3_2023-11-09T10-10-41.822023.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-09T10-10-41.822023.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_09T10_10_41.822023", "path": ["**/details_harness|gsm8k|5_2023-11-09T10-10-41.822023.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-09T10-10-41.822023.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_09T10_10_41.822023", "path": ["**/details_harness|winogrande|5_2023-11-09T10-10-41.822023.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-09T10-10-41.822023.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_09T10_10_41.822023", "path": ["results_2023-11-09T10-10-41.822023.parquet"]}, {"split": "latest", "path": ["results_2023-11-09T10-10-41.822023.parquet"]}]}]}
|
2023-12-01T14:55:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-70B-v1 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-09T10:10:41.822023 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-70B-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-09T10:10:41.822023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-70B-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-09T10:10:41.822023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
173,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of AIDC-ai-business/Marcoroni-70B-v1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model AIDC-ai-business/Marcoroni-70B-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-09T10:10:41.822023(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
31c8816d862a59ab62f919146451067a7c659eaa
|
# Dataset Card for "wildreceipts_ocr_v1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
mychen76/wildreceipts_ocr_v1
|
[
"region:us"
] |
2023-09-22T17:25:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "id", "dtype": "string"}, {"name": "parsed_data", "dtype": "string"}, {"name": "raw_data", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 171312524.096, "num_examples": 1618}, {"name": "test", "num_bytes": 13813639.0, "num_examples": 99}, {"name": "valid", "num_bytes": 3239913.0, "num_examples": 20}], "download_size": 171397354, "dataset_size": 188366076.096}}
|
2023-09-22T18:29:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wildreceipts_ocr_v1"
More Information needed
|
[
"# Dataset Card for \"wildreceipts_ocr_v1\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wildreceipts_ocr_v1\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wildreceipts_ocr_v1\"\n\nMore Information needed"
] |
434b754c821aaa7ed256580b666e1111a1368e77
|
# Dataset Card for "test_result"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
quocanh34/test_result
|
[
"region:us"
] |
2023-09-22T17:27:23+00:00
|
{"dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}, {"name": "pred_str", "dtype": "string"}, {"name": "pred_str_norm", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 174371620.625, "num_examples": 1299}], "download_size": 164257469, "dataset_size": 174371620.625}}
|
2023-10-27T12:38:21+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_result"
More Information needed
|
[
"# Dataset Card for \"test_result\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_result\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_result\"\n\nMore Information needed"
] |
0f56ecc53f24913e59d9b4417d8a4ccdc3fb1888
|
# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-07T00:36:52.545264](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public/blob/main/results_2023-11-07T00-36-52.545264.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.4165268456375839,
"em_stderr": 0.005048607283288401,
"f1": 0.49208158557047166,
"f1_stderr": 0.004763681838536193,
"acc": 0.5761731430558057,
"acc_stderr": 0.012100109818181048
},
"harness|drop|3": {
"em": 0.4165268456375839,
"em_stderr": 0.005048607283288401,
"f1": 0.49208158557047166,
"f1_stderr": 0.004763681838536193
},
"harness|gsm8k|5": {
"acc": 0.3457164518574678,
"acc_stderr": 0.013100422990441578
},
"harness|winogrande|5": {
"acc": 0.8066298342541437,
"acc_stderr": 0.01109979664592052
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b
|
[
"region:us"
] |
2023-09-22T17:38:05+00:00
|
{"pretty_name": "Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b", "dataset_summary": "Dataset automatically created during the evaluation run of model [luffycodes/higgs-llama-vicuna-ep25-70b](https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-07T00:36:52.545264](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__higgs-llama-vicuna-ep25-70b_public/blob/main/results_2023-11-07T00-36-52.545264.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.4165268456375839,\n \"em_stderr\": 0.005048607283288401,\n \"f1\": 0.49208158557047166,\n \"f1_stderr\": 0.004763681838536193,\n \"acc\": 0.5761731430558057,\n \"acc_stderr\": 0.012100109818181048\n },\n \"harness|drop|3\": {\n \"em\": 0.4165268456375839,\n \"em_stderr\": 0.005048607283288401,\n \"f1\": 0.49208158557047166,\n \"f1_stderr\": 0.004763681838536193\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3457164518574678,\n \"acc_stderr\": 0.013100422990441578\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8066298342541437,\n \"acc_stderr\": 0.01109979664592052\n }\n}\n```", "repo_url": "https://huggingface.co/luffycodes/higgs-llama-vicuna-ep25-70b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_07T00_36_52.545264", "path": ["**/details_harness|drop|3_2023-11-07T00-36-52.545264.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-07T00-36-52.545264.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_07T00_36_52.545264", "path": ["**/details_harness|gsm8k|5_2023-11-07T00-36-52.545264.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-07T00-36-52.545264.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_07T00_36_52.545264", "path": ["**/details_harness|winogrande|5_2023-11-07T00-36-52.545264.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-07T00-36-52.545264.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_07T00_36_52.545264", "path": ["results_2023-11-07T00-36-52.545264.parquet"]}, {"split": "latest", "path": ["results_2023-11-07T00-36-52.545264.parquet"]}]}]}
|
2023-12-01T14:39:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model luffycodes/higgs-llama-vicuna-ep25-70b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-07T00:36:52.545264 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/higgs-llama-vicuna-ep25-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T00:36:52.545264 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/higgs-llama-vicuna-ep25-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-07T00:36:52.545264 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of luffycodes/higgs-llama-vicuna-ep25-70b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model luffycodes/higgs-llama-vicuna-ep25-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The \"train\" split always points to the latest results.\n\nAn additional configuration \"results\" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-07T00:36:52.545264 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |