| sha (string, 40-40) | text (string, 1-13.4M) | id (string, 2-117) | tags (list, 1-7.91k) | created_at (string, 25-25) | metadata (string, 2-875k) | last_modified (string, 25-25) | arxiv (list, 0-25) | languages (list, 0-7.91k) | tags_str (string, 17-159k) | text_str (string, 1-447k) | text_lists (list, 0-352) | processed_texts (list, 1-353) | tokens_length (list, 1-353) | input_texts (list, 1-40) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ed5162565d7bd9d855f148603ff52cd4274563f1 | # Dataset Card for Dataset _tla-demotic-v18-premium_
<!-- Provide a quick summary of the dataset. -->
This data set contains Demotic sentences in `transliteration`, with `lemmatization`, POS `glossing`, and a German `translation`.
The data comes from the database of the [Thesaurus Linguae Aegyptiae](https://thesaurus-linguae-aegyptiae.de), corpus version 18, and contains only fully intact,
unambiguously readable sentences (13,383 of 31,156 sentences), adjusted for philological and editorial markup.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Homepage:** https://thesaurus-linguae-aegyptiae.de.
- **Curated by:**
German Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,
Executive Editor: [Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel).
- **Funded by:**
The Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.
The Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.
- **Language(s) (NLP):** egy-Egyd, de-DE.
- **License:** [CC BY-SA 4.0 Int.](https://creativecommons.org/licenses/by-sa/4.0/); for required attribution, see citation recommendations below.
- **Point of Contact:** [Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
This data set may be used
- to create lemmatizers Demotic transliteration => [TLA lemma ID](https://thesaurus-linguae-aegyptiae.de/info/lemma-lists),
- to train translation models Demotic transliteration => German.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
This data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.
## Dataset
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
The dataset is not divided. Please create your own random splits.
The dataset comes as a _JSON lines_ file.
### Data Fields
#### plain_text
- `transliteration`: a `string`, Demotic transliteration, following the [_Leiden Unified Transliteration_](https://www.iae-egyptology.org/the-leiden-unified-transliteration), individual sentence elements separated by space.
- `UPOS`: a `string`, Part of Speech according to [Universal POS tag set](https://universaldependencies.org/u/pos/).
- `lemmatization`: a `string`, individual [TLA Lemma IDs](https://thesaurus-linguae-aegyptiae.de/info/lemma-lists)+"`|`"+lemma transliteration, separated by space.
- `glossing`: a `string`, individual glosses separated by space (for information, see the comments below).
- `translation`: a `string`, German translation.
- `dateNotBefore`, `dateNotAfter`: two `strings` containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.
- `authors`: a `string`, main authors and further contributors to the sentence data set, individual items separated by `;`.
### Data instances
Example of a dataset instance:
```
{
"transliteration": "ꞽy ꞽh pr =k",
"lemmatization": "d338|ꞽy d4158|ḥr d1985|pr d6496|=k",
"UPOS": "VERB ADP NOUN PRON",
"glossing": "V PREP N.m -2sg.m",
"translation": "Komm in dein Haus!",
"dateNotBefore": "-75",
"dateNotAfter": "-51",
"authors": "Günter Vittmann;AV Altägyptisches Wörterbuch, AV Wortschatz der ägyptischen Sprache"
}
```
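Since no predefined splits are provided, a minimal loading-and-splitting sketch (assuming the Hugging Face `datasets` library and the single `train` split listed in the dataset metadata; adjust the repository ID if you work from a local copy of the JSON lines file):
```
from datasets import load_dataset

# load the single "train" split of the JSON-lines data
ds = load_dataset("thesaurus-linguae-aegyptiae/tla-demotic-v18-premium", split="train")

# create your own random split, as recommended above
splits = ds.train_test_split(test_size=0.1, seed=42)

example = splits["train"][0]
# lemmatization entries are "<TLA lemma ID>|<lemma transliteration>", separated by spaces
lemma_pairs = [tuple(token.split("|", 1)) for token in example["lemmatization"].split()]
# sentence elements of the transliteration are likewise space-separated
transliteration_tokens = example["transliteration"].split()
print(lemma_pairs, "->", example["translation"])
```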
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
ML projects have requested raw data from the TLA.
At the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users.
This is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
For the corpus of Demotic texts in the _TLA_, browse [Demotische Textdatenbank](https://thesaurus-linguae-aegyptiae.de/object/6WFOSXHVQRGGNAG5FCM6QEXWR4)
and see the information on the [TLA text corpus](https://thesaurus-linguae-aegyptiae.de/info/text-corpus),
notably the [PDF overview](https://nubes.bbaw.de/s/xD7MYJrmE8xNBNt).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
This dataset contains all Demotic sentences of the TLA corpus v18 (2023) that
- show no destruction,
- have no questionable readings,
- are fully lemmatized,
- have a German translation.
#### Who are the source data producers?
AV [Altägyptisches Wörterbuch](https://www.bbaw.de/forschung/altaegyptisches-woerterbuch),
AV [Wortschatz der ägyptischen Sprache](https://www.bbaw.de/en/research/vocabulary-of-the-egyptian-language);
R. Dominik Blöse,
Friedhelm Hoffmann,
Jakob Höper,
Joachim Friedrich Quack,
Marcel Moser,
Simon D. Schweitzer,
Martin Stadler,
**Günter Vittmann**,
Daniel A. Werning.
### Annotations
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
The transliteration sometimes contains round brackets (`( )`),
which mark phonemes added by the editor without the addition being regarded as an incorrect omission.
For model training, the brackets, but not their content, may optionally be removed.
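For this optional normalization, a one-liner is enough (a sketch using Python's built-in `re` module):
```
import re

def strip_editorial_brackets(transliteration: str) -> str:
    # remove the round brackets but keep the phonemes they enclose, e.g. "a(b)c" -> "abc"
    return re.sub(r"[()]", "", transliteration)
```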
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
Joachim Friedrich Quack (transliterations, lemmatizations, translations, dating),
Marcel Moser (transliterations, lemmatizations, translations, dating),
Simon D. Schweitzer (lemma list data, lemma POS, metadata/data curation),
Martin Stadler (transliteration, lemmatizations, translation, dating),
**Günter Vittmann** (transliterations, lemmatizations, translations, lemma list data, lemma POS, dating),
Daniel A. Werning (UPOS matching, glossing computation).
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
No personal, sensitive, or private data.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
This is not a carefully balanced data set.
Note that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the largely vowel-less nature of Demotic writing.
<!-- Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. -->
## Citation of this dataset
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
Thesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium <https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-demotic-v18-premium>,
v1.1, 2/16/2024
ed. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig.
**BibTeX:**
```
@misc{tlaDemoticV18premium,
editor = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig} and Richter, Tonio Sebastian and Werning, Daniel A. and Fischer-Elfert, Hans-Werner and Dils, Peter},
year = {2024},
title = {Thesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium},
url = {https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-demotic-v18-premium},
location = {Berlin},
organization = {{Berlin-Brandenburgische Akademie der Wissenschaften} and {Sächsische Akademie der Wissenschaften zu Leipzig}},
}
```
**RIS:**
```
TY - DATA
T1 - Thesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium
PY - 2024
Y1 - 2024
CY - Berlin
ED - Berlin-Brandenburgische Akademie der Wissenschaften
ED - Richter, Tonio Sebastian
ED - Werning, Daniel A.
ED - Sächsische Akademie der Wissenschaften zu Leipzig
ED - Fischer-Elfert, Hans-Werner
ED - Dils, Peter
IN - Berlin-Brandenburgische Akademie der Wissenschaften
IN - Sächsische Akademie der Wissenschaften zu Leipzig
UR - https://huggingface.co/datasets/thesaurus-linguae-aegyptiae/tla-demotic-v18-premium
DB - Thesaurus Linguae Aegyptiae
DP - Akademienvorhaben "Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache", Berlin-Brandenburgische Akademie der Wissenschaften
ER -
```
## Glossary
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
**Lemma IDs**
For the stable lemma IDs, see https://thesaurus-linguae-aegyptiae.de/info/lemma-lists.
**Glossing**
For the glossing abbreviations, see https://thesaurus-linguae-aegyptiae.de/listings/ling-glossings.
_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the sentence itself.
## Dataset Card Authors
[Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel)
## Dataset Card Contact
[Daniel A. Werning](https://www.bbaw.de/die-akademie/mitarbeiterinnen-mitarbeiter/werning-daniel) | thesaurus-linguae-aegyptiae/tla-demotic-v18-premium | [
"task_categories:translation",
"task_categories:token-classification",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"language:egy",
"language:de",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-12-19T20:43:23+00:00 | {"annotations_creators": ["expert-generated"], "language": ["egy", "de"], "license": "cc-by-sa-4.0", "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "task_categories": ["translation", "token-classification"], "pretty_name": "Thesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium", "dataset_info": {"features": [{"name": "transliteration", "dtype": "string"}, {"name": "lemmatization", "dtype": "string"}, {"name": "UPOS", "dtype": "string"}, {"name": "glossing", "dtype": "string"}, {"name": "translation", "dtype": "string"}, {"name": "dateNotBefore", "dtype": "string"}, {"name": "dateNotAfter", "dtype": "string"}, {"name": "authors", "dtype": "string"}]}, "splits": [{"name": "train", "num_examples": 13383}]} | 2024-02-16T11:55:11+00:00 | [] | [
"egy",
"de"
] | TAGS
#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us
| # Dataset Card for Dataset _tla-demotic-v18-premium_
This data set contains demotic sentences in 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'.
The data comes from the database of the Thesaurus Linguae Agegyptiae, corpus version 18, and contains only fully intact,
unambiguously readable sentences (13,383 of 31,156 sentences), adjusted for philological and editorial markup.
## Dataset Details
### Dataset Description
- Homepage: URL.
- Curated by:
German Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,
Executive Editor: Daniel A. Werning.
- Funded by:
The Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.
The Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.
- Language(s) (NLP): egy-Egyd, de-DE.
- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.
- Point of Contact: Daniel A. Werning
## Uses
### Direct Use
This data set may be used
- to create lemmatizers Demotic transliteration => TLA lemma ID,
- to train translation models Demotic transliteration => German.
### Out-of-Scope Use
This data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.
## Dataset
## Dataset Structure
The dataset is not divided. Please create your own random splits.
The dataset comes as a _JSON lines_ file.
### Data Fields
#### plain_text
- 'transliteration': a 'string', Demotic transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.
- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.
- 'lemmatization': a 'string', individual TLA Lemma IDs+"'|'"+lemma transliteration, separated by space.
- 'transliteration': a 'string', individual glosses separated by space (for information, see the comments below).
- 'translation': a 'string', German translation.
- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.
- 'authors': a 'string', main authors and further contributors to the sentence data set, individual items separated by ';'.
### Data instances
Example of an dataset instance:
## Dataset Creation
### Curation Rationale
ML projects have requested raw data from the TLA.
At the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users.
This is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.
### Source Data
For the corpus of Demotic texts in the _TLA_, browse Demotische Textdatenbank
and see the information on the TLA text corpus,
notably the PDF overview.
#### Data Collection and Processing
This dataset contains all demotic sentences of the TLA corpus v18 (2023) that
- show no destruction,
- have no questionable readings,
- are fully lemmatized,
- have a German translation.
#### Who are the source data producers?
AV Altägyptisches Wörterbuch,
AV Wortschatz der ägyptischen Sprache;
R. Dominik Blöse,
Friedhelm Hoffmann,
Jakob Höper,
Joachim Friedrich Quack,
Marcel Moser,
Simon D. Schweitzer,
Martin Stadler,
Günter Vittmann,
Daniel A. Werning.
### Annotations
#### Annotation process
The transliteration sometimes contains round brackets ('( )'),
which mark phonemes added by the editor without the addition being regarded as an incorrect omission.
For model training, the brackets, but not their content, may optionally be removed.
#### Who are the annotators?
Joachim Friedrich Quack (transliterations, lemmatizations, translations, dating),
Marcel Moser (transliterations, lemmatizations, translations, dating),
Simon D. Schweitzer (lemma list data, lemma POS, metadata/data curation),
Martin Stadler (transliteration, lemmatizations, translation, dating),
Günter Vittmann (transliterations, lemmatizations, translations, lemma list data, lemma POS, dating).
Daniel A. Werning (UPOS matching, glossing computation).
#### Personal and Sensitive Information
No personal, sensitive, or private data.
## Bias, Risks, and Limitations
This is not a carefully balanced data set.
Note that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the largely vowel-less nature of Demotic writing.
of this dataset
Thesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium <URL
v1.1, 2/16/2024
ed. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig.
BibTeX:
RIS:
## Glossary
Lemma IDs
For the stable lemma IDs, see URL
Glossing
For the glossing abbreviations, see URL
_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.
## Dataset Card Authors
Daniel A. Werning
## Dataset Card Contact
Daniel A. Werning | [
"# Dataset Card for Dataset _tla-demotic-v18-premium_\n\n\nThis data set contains demotic sentences in 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThe data comes from the database of the Thesaurus Linguae Agegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (13,383 of 31,156 sentences), adjusted for philological and editorial markup.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyd, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning",
"## Uses",
"### Direct Use\n\n\n\nThis data set may be used\n- to create lemmatizers Demotic transliteration => TLA lemma ID,\n- to train translation models Demotic transliteration => German.",
"### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.",
"## Dataset",
"## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.",
"### Data Fields",
"#### plain_text\n- 'transliteration': a 'string', Demotic transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'transliteration': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.\n- 'authors': a 'string', main authors and further contributors to the sentence data set, individual items separated by ';'.",
"### Data instances\n\nExample of an dataset instance:",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.",
"### Source Data\n\n\n\nFor the corpus of Demotic texts in the _TLA_, browse Demotische Textdatenbank \nand see the information on the TLA text corpus, \nnotably the PDF overview.",
"#### Data Collection and Processing\n\n\n\nThis dataset contains all demotic sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- are fully lemmatized,\n- have a German translation.",
"#### Who are the source data producers?\n\nAV Altägyptisches Wörterbuch,\nAV Wortschatz der ägyptischen Sprache;\nR. Dominik Blöse,\nFriedhelm Hoffmann,\nJakob Höper,\nJoachim Friedrich Quack,\nMarcel Moser,\nSimon D. Schweitzer,\nMartin Stadler,\nGünter Vittmann,\nDaniel A. Werning.",
"### Annotations",
"#### Annotation process\n\n\n\nThe transliteration sometimes contains round brackets ('( )'), \nwhich mark phonemes added by the editor without the addition being regarded as an incorrect omission. \nFor model training, the brackets, but not their content, may optionally be removed.",
"#### Who are the annotators?\n\n\n\nJoachim Friedrich Quack (transliterations, lemmatizations, translations, dating),\nMarcel Moser (transliterations, lemmatizations, translations, dating),\nSimon D. Schweitzer (lemma list data, lemma POS, metadata/data curation),\nMartin Stadler (transliteration, lemmatizations, translation, dating),\nGünter Vittmann (transliterations, lemmatizations, translations, lemma list data, lemma POS, dating).\nDaniel A. Werning (UPOS matching, glossing computation).",
"#### Personal and Sensitive Information\n\n\n\nNo personal, sensitive, or private data.",
"## Bias, Risks, and Limitations\n\n\n\n\n\nThis is not a carefully balanced data set.\n\nNote that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the largely vowel-less nature of Demotic writing. \n\n\n\nof this dataset\n\n\n\nThesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium <URL \nv1.1, 2/16/2024 \ned. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig. \n\nBibTeX:\n\n\n\nRIS:",
"## Glossary\n\n\n\nLemma IDs\n\nFor the stable lemma IDs, see URL\n\nGlossing\n\nFor the glossing abbreviations, see URL \n\n_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.",
"## Dataset Card Authors\n\nDaniel A. Werning",
"## Dataset Card Contact\n\nDaniel A. Werning"
] | [
"TAGS\n#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us \n",
"# Dataset Card for Dataset _tla-demotic-v18-premium_\n\n\nThis data set contains demotic sentences in 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThe data comes from the database of the Thesaurus Linguae Agegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (13,383 of 31,156 sentences), adjusted for philological and editorial markup.",
"## Dataset Details",
"### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyd, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning",
"## Uses",
"### Direct Use\n\n\n\nThis data set may be used\n- to create lemmatizers Demotic transliteration => TLA lemma ID,\n- to train translation models Demotic transliteration => German.",
"### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.",
"## Dataset",
"## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.",
"### Data Fields",
"#### plain_text\n- 'transliteration': a 'string', Demotic transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'transliteration': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.\n- 'authors': a 'string', main authors and further contributors to the sentence data set, individual items separated by ';'.",
"### Data instances\n\nExample of an dataset instance:",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.",
"### Source Data\n\n\n\nFor the corpus of Demotic texts in the _TLA_, browse Demotische Textdatenbank \nand see the information on the TLA text corpus, \nnotably the PDF overview.",
"#### Data Collection and Processing\n\n\n\nThis dataset contains all demotic sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- are fully lemmatized,\n- have a German translation.",
"#### Who are the source data producers?\n\nAV Altägyptisches Wörterbuch,\nAV Wortschatz der ägyptischen Sprache;\nR. Dominik Blöse,\nFriedhelm Hoffmann,\nJakob Höper,\nJoachim Friedrich Quack,\nMarcel Moser,\nSimon D. Schweitzer,\nMartin Stadler,\nGünter Vittmann,\nDaniel A. Werning.",
"### Annotations",
"#### Annotation process\n\n\n\nThe transliteration sometimes contains round brackets ('( )'), \nwhich mark phonemes added by the editor without the addition being regarded as an incorrect omission. \nFor model training, the brackets, but not their content, may optionally be removed.",
"#### Who are the annotators?\n\n\n\nJoachim Friedrich Quack (transliterations, lemmatizations, translations, dating),\nMarcel Moser (transliterations, lemmatizations, translations, dating),\nSimon D. Schweitzer (lemma list data, lemma POS, metadata/data curation),\nMartin Stadler (transliteration, lemmatizations, translation, dating),\nGünter Vittmann (transliterations, lemmatizations, translations, lemma list data, lemma POS, dating).\nDaniel A. Werning (UPOS matching, glossing computation).",
"#### Personal and Sensitive Information\n\n\n\nNo personal, sensitive, or private data.",
"## Bias, Risks, and Limitations\n\n\n\n\n\nThis is not a carefully balanced data set.\n\nNote that the lemmatization is done via lemma IDs, since the lemma transliteration contains many consonantal homonyms due to the largely vowel-less nature of Demotic writing. \n\n\n\nof this dataset\n\n\n\nThesaurus Linguae Aegyptiae, Demotic sentences, corpus v18, premium <URL \nv1.1, 2/16/2024 \ned. by Tonio Sebastian Richter & Daniel A. Werning on behalf of the Berlin-Brandenburgische Akademie der Wissenschaften and Hans-Werner Fischer-Elfert & Peter Dils on behalf of the Sächsische Akademie der Wissenschaften zu Leipzig. \n\nBibTeX:\n\n\n\nRIS:",
"## Glossary\n\n\n\nLemma IDs\n\nFor the stable lemma IDs, see URL\n\nGlossing\n\nFor the glossing abbreviations, see URL \n\n_Note:_ The glosses correspond to the basic lemma forms, not the actual grammatical forms in the very sentence.",
"## Dataset Card Authors\n\nDaniel A. Werning",
"## Dataset Card Contact\n\nDaniel A. Werning"
] | [
85,
114,
4,
236,
3,
42,
31,
3,
35,
5,
225,
13,
5,
76,
45,
53,
81,
5,
62,
130,
17,
164,
60,
11,
10
] | [
"passage: TAGS\n#task_categories-translation #task_categories-token-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #language-Egyptian (Ancient) #language-German #license-cc-by-sa-4.0 #region-us \n# Dataset Card for Dataset _tla-demotic-v18-premium_\n\n\nThis data set contains demotic sentences in 'transliteration', with 'lemmatization', with POS 'glossing' and with a German 'translation'. \nThe data comes from the database of the Thesaurus Linguae Agegyptiae, corpus version 18, and contains only fully intact, \nunambiguously readable sentences (13,383 of 31,156 sentences), adjusted for philological and editorial markup.## Dataset Details### Dataset Description\n\n\n\n- Homepage: URL.\n- Curated by:\nGerman Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten”,\nExecutive Editor: Daniel A. Werning.\n- Funded by:\nThe Academies’ project “Strukturen und Transformationen des Wortschatzes der ägyptischen Sprache. Text- und Wissenskultur im alten Ägypten” of the Berlin-Brandenburg Academy of Sciences and Humanities and the Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the German federal government and the federal states Berlin and Saxony.\nThe Saxon Academy of Sciences and Humanities in Leipzig is co-financed by the Saxon State government out of the State budget approved by the Saxon State Parliament.\n- Language(s) (NLP): egy-Egyd, de-DE.\n- License: CC BY-SA 4.0 Int.; for required attribution, see citation recommendations below.\n- Point of Contact: Daniel A. Werning## Uses### Direct Use\n\n\n\nThis data set may be used\n- to create lemmatizers Demotic transliteration => TLA lemma ID,\n- to train translation models Demotic transliteration => German.",
"passage: ### Out-of-Scope Use\n\n\n\nThis data set of selected intact sentences is not suitable for reconstructing entire ancient source texts.## Dataset## Dataset Structure\n\n\n\nThe dataset is not divided. Please create your own random splits.\n\nThe dataset comes as a _JSON lines_ file.### Data Fields#### plain_text\n- 'transliteration': a 'string', Demotic transliteration, following the _Leiden Unified Transliteration_, individual sentence elements separated by space.\n- 'UPOS': a 'string', Part of Speech according to Universal POS tag set.\n- 'lemmatization': a 'string', individual TLA Lemma IDs+\"'|'\"+lemma transliteration, separated by space.\n- 'transliteration': a 'string', individual glosses separated by space (for information, see the comments below).\n- 'translation': a 'string', German translation.\n- 'dateNotBefore', 'dateNotAfter': two 'strings' containing an integer or empty, _terminus ante quem non_ and _terminus post quem non_ for the text witness.\n- 'authors': a 'string', main authors and further contributors to the sentence data set, individual items separated by ';'.### Data instances\n\nExample of an dataset instance:## Dataset Creation### Curation Rationale\n\n\n\nML projects have requested raw data from the TLA. \nAt the same time, the raw data is riddled with philological markers that make it difficult for non-Egyptological users. \nThis is a strictly filtered data set that only contains intact, unquestionable, fully lemmatized sentences.### Source Data\n\n\n\nFor the corpus of Demotic texts in the _TLA_, browse Demotische Textdatenbank \nand see the information on the TLA text corpus, \nnotably the PDF overview.#### Data Collection and Processing\n\n\n\nThis dataset contains all demotic sentences of the TLA corpus v18 (2023) that \n- show no destruction,\n- have no questionable readings,\n- are fully lemmatized,\n- have a German translation."
] |
5e4f837f3bc069fa68085a1a9bfdbf16d507dadc |
<a href="https://github.com/harvard-lil/warc-gpt"><img src="banner.png"></a>
Data collected during our initial tests of [WARC-GPT, an open-source RAG tool for exploring web archives collections using AI](https://github.com/harvard-lil/warc-gpt).
More info:
- <a href="https://lil.law.harvard.edu/blog/2024/02/12/warc-gpt-an-open-source-tool-for-exploring-web-archives-with-ai/">"WARC-GPT: An Open-Source Tool for Exploring Web Archives Using AI"</a><br>Feb 12 2024 - _lil.law.harvard.edu_
---
# Directory structure
| File | Description |
| --- | --- |
| `urls.txt` | URLs used to assemble the web archives collection WARC-GPT was originally tested against. |
| `questions.txt` | Questions about the web archives collection that were asked to WARC-GPT. |
| `2023-12-18.csv` | Raw output from WARC-GPT. | | harvard-lil/warc-gpt-case-study-data | [
"language:en",
"license:cc-by-4.0",
"region:us"
] | 2023-12-19T22:01:12+00:00 | {"language": ["en"], "license": "cc-by-4.0", "viewer": false} | 2024-02-09T22:51:20+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc-by-4.0 #region-us
| <a href="URL src="URL">
Data collected during out initial tests of WARC-GPT, an open-source RAG tool for exploring web archives collections using AI.
More info:
* <a href="URL An Open-Source Tool for Exploring Web Archives Using AI"
Feb 12 2024 - *lil.law.harvard.edu*
---
Directory structure
===================
| [] | [
"TAGS\n#language-English #license-cc-by-4.0 #region-us \n"
] | [
19
] | [
"passage: TAGS\n#language-English #license-cc-by-4.0 #region-us \n"
] |
67629af76f9b9d18666a30935abbbe2470d9dd7b | # Dataset Card for "fashion_image_caption-100-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | riggj/fashion_image_caption-100-v2 | [
"region:us"
] | 2023-12-19T22:09:14+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820375, "dataset_size": 22820471.0}} | 2023-12-19T22:09:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fashion_image_caption-100-v2"
More Information needed | [
"# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"fashion_image_caption-100-v2\"\n\nMore Information needed"
] |
70dfae18e63102d744004b3faa88a5dfc9b660b1 |
# Dataset Card for NoMIRACL
Retrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.
NoMIRACL includes both a `non-relevant` and a `relevant` subset. The `non-relevant` subset contains queries with all passages manually judged as non-relevant or noisy, while the `relevant` subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.
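A sketch of how these two metrics can be computed, assuming model outputs have already been reduced to a boolean "abstained" flag (True if the model answered that no relevant passage exists); see the paper below for the exact definitions:
```
def hallucination_rate(non_relevant_abstained):
    # on the non-relevant subset the model should abstain;
    # producing an answer anyway is counted as hallucination
    return sum(1 for a in non_relevant_abstained if not a) / len(non_relevant_abstained)

def error_rate(relevant_abstained):
    # on the relevant subset the model should answer;
    # abstaining despite a judged-relevant passage is counted as an error
    return sum(1 for a in relevant_abstained if a) / len(relevant_abstained)
```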
All the topics are generated by native speakers of each language from our work in [MIRACL](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00595/117438/MIRACL-A-Multilingual-Retrieval-Dataset-Covering), who also label the relevance between the topics and a given document list. The queries with no relevant documents are used to create the `non-relevant` subset, whereas queries with at least one relevant document (i.e., queries in the MIRACL dev and test splits) are used to create the `relevant` subset.
This repository contains the topics, qrels, and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found [here](https://huggingface.co/datasets/miracl/miracl-corpus).
## Quickstart
```
import datasets
language = 'german' # or any of the 18 languages
subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split
# four combinations available: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
```
## Dataset Description
* **Repository:** https://github.com/project-miracl/nomiracl
* **Paper:** https://arxiv.org/abs/2312.11361
## Dataset Structure
1. To download the files:
Under folders `data/{lang}`,
the subset of the corpus is saved in `.jsonl.gz` format, with each line of the form:
```
{"docid": "28742#27",
"title": "Supercontinent",
"text": "Oxygen levels of the Archaean Eon were negligible and today they are roughly 21 percent. [ ... ]"}
```
Under folders `data/{lang}/topics`,
the topics are saved in `.tsv` format, with each line of the form:
```
qid\tquery
```
Under folders `miracl-v1.0-{lang}/qrels`,
the qrels are saved in standard TREC format (a small reader sketch follows after the access example below), with each line of the form:
```
qid Q0 docid relevance
```
2. To access the data using HuggingFace `datasets`:
```
import datasets
language = 'german' # or any of the 18 languages
subset = 'relevant' # or 'non_relevant'
split = 'test' # or 'dev' for development split
# four combinations: 'dev.relevant', 'dev.non_relevant', 'test.relevant' and 'test.non_relevant'
nomiracl = datasets.load_dataset('miracl/nomiracl', language, split=f'{split}.{subset}')
# iterate over the examples in the loaded subset:
for data in nomiracl:
query_id = data['query_id']
query = data['query']
positive_passages = data['positive_passages']
negative_passages = data['negative_passages']
for entry in positive_passages: # OR 'negative_passages'
docid = entry['docid']
title = entry['title']
text = entry['text']
```
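For the qrels files described in step 1, a small reader sketch (the path is a placeholder):
```
from collections import defaultdict

def read_trec_qrels(path):
    # parse "qid Q0 docid relevance" lines into {qid: {docid: relevance}}
    qrels = defaultdict(dict)
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            qid, _, docid, relevance = line.split()
            qrels[qid][docid] = int(relevance)
    return dict(qrels)
```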
## Dataset Statistics
For NoMIRACL dataset statistics, please refer to our publication [here](https://arxiv.org/abs/2312.11361).
## Citation Information
```
@article{thakur2023nomiracl,
title={NoMIRACL: Knowing When You Don't Know for Robust Multilingual Retrieval-Augmented Generation},
author={Nandan Thakur and Luiz Bonifacio and Xinyu Zhang and Odunayo Ogundepo and Ehsan Kamalloo and David Alfonso-Hermelo and Xiaoguang Li and Qun Liu and Boxing Chen and Mehdi Rezagholizadeh and Jimmy Lin},
journal={ArXiv},
year={2023},
volume={abs/2312.11361}
}
``` | miracl/nomiracl | [
"task_categories:text-classification",
"annotations_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:10K<n<100K",
"source_datasets:miracl/miracl",
"language:ar",
"language:bn",
"language:en",
"language:es",
"language:fa",
"language:fi",
"language:fr",
"language:hi",
"language:id",
"language:ja",
"language:ko",
"language:ru",
"language:sw",
"language:te",
"language:th",
"language:zh",
"license:apache-2.0",
"arxiv:2312.11361",
"region:us"
] | 2023-12-19T22:24:46+00:00 | {"annotations_creators": ["expert-generated"], "language": ["ar", "bn", "en", "es", "fa", "fi", "fr", "hi", "id", "ja", "ko", "ru", "sw", "te", "th", "zh"], "license": ["apache-2.0"], "multilinguality": ["multilingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["miracl/miracl"], "task_categories": ["text-classification"], "pretty_name": "NoMIRACL"} | 2023-12-21T14:22:45+00:00 | [
"2312.11361"
] | [
"ar",
"bn",
"en",
"es",
"fa",
"fi",
"fr",
"hi",
"id",
"ja",
"ko",
"ru",
"sw",
"te",
"th",
"zh"
] | TAGS
#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-miracl/miracl #language-Arabic #language-Bengali #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hindi #language-Indonesian #language-Japanese #language-Korean #language-Russian #language-Swahili (macrolanguage) #language-Telugu #language-Thai #language-Chinese #license-apache-2.0 #arxiv-2312.11361 #region-us
|
# Dataset Card for NoMIRACL
Retrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.
NoMIRACL includes both a 'non-relevant' and a 'relevant' subset. The 'non-relevant' subset contains queries with all passages manually judged as non-relevant or noisy, while the 'relevant' subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.
All the topics are generated by native speakers of each language from our work in MIRACL, who also label the relevance between the topics and a given document list. The queries with no-relevant documents are used to create the 'non-relevant' subset whereas queries with atleast one relevant document (i.e., queries in MIRACL dev and test) are used to create 'relevant' subset.
This repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found be here.
## Quickstart
## Dataset Description
* Repository: URL
* Paper: URL
## Dataset Structure
1. To download the files:
Under folders 'data/{lang}',
the subset of corpus is saved in '.URL' format, with each line to be:
Under folders 'data/{lang}/topics',
the topics are saved in '.tsv' format, with each line to be:
Under folders 'miracl-v1.0-{lang}/qrels',
the qrels are saved in standard TREC format, with each line to be:
2. To access the data using HuggingFace 'datasets':
## Dataset Statistics
For NoMIRACL dataset statistics, please refer to our publication here.
| [
"# Dataset Card for NoMIRACL \n\nRetrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.\n\nNoMIRACL includes both a 'non-relevant' and a 'relevant' subset. The 'non-relevant' subset contains queries with all passages manually judged as non-relevant or noisy, while the 'relevant' subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.\n\nAll the topics are generated by native speakers of each language from our work in MIRACL, who also label the relevance between the topics and a given document list. The queries with no-relevant documents are used to create the 'non-relevant' subset whereas queries with atleast one relevant document (i.e., queries in MIRACL dev and test) are used to create 'relevant' subset.\n\nThis repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found be here.",
"## Quickstart",
"## Dataset Description\n* Repository: URL\n* Paper: URL",
"## Dataset Structure\n1. To download the files:\n\nUnder folders 'data/{lang}',\nthe subset of corpus is saved in '.URL' format, with each line to be:\n\n\nUnder folders 'data/{lang}/topics',\nthe topics are saved in '.tsv' format, with each line to be:\n\n\nUnder folders 'miracl-v1.0-{lang}/qrels',\nthe qrels are saved in standard TREC format, with each line to be:\n\n\n2. To access the data using HuggingFace 'datasets':",
"## Dataset Statistics \nFor NoMIRACL dataset statistics, please refer to our publication here."
] | [
"TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-miracl/miracl #language-Arabic #language-Bengali #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hindi #language-Indonesian #language-Japanese #language-Korean #language-Russian #language-Swahili (macrolanguage) #language-Telugu #language-Thai #language-Chinese #license-apache-2.0 #arxiv-2312.11361 #region-us \n",
"# Dataset Card for NoMIRACL \n\nRetrieval Augmented Generation (RAG) is a powerful approach to incorporate external knowledge into large language models (LLMs) to enhance the accuracy and faithfulness of generated responses. However, evaluating LLM robustness in RAG across different language families has been a challenge, leading to gaps in understanding the model's performance against errors in external retrieved knowledge. To address this, we present NoMIRACL, a human-annotated dataset designed for evaluating LLM robustness in RAG across 18 diverse languages.\n\nNoMIRACL includes both a 'non-relevant' and a 'relevant' subset. The 'non-relevant' subset contains queries with all passages manually judged as non-relevant or noisy, while the 'relevant' subset includes queries with at least one judged relevant passage. LLM robustness is measured using two key metrics: hallucination rate and error rate.\n\nAll the topics are generated by native speakers of each language from our work in MIRACL, who also label the relevance between the topics and a given document list. The queries with no-relevant documents are used to create the 'non-relevant' subset whereas queries with atleast one relevant document (i.e., queries in MIRACL dev and test) are used to create 'relevant' subset.\n\nThis repository contains the topics, qrels and top-10 (maximum) annotated documents of NoMIRACL. The whole collection can be found be here.",
"## Quickstart",
"## Dataset Description\n* Repository: URL\n* Paper: URL",
"## Dataset Structure\n1. To download the files:\n\nUnder folders 'data/{lang}',\nthe subset of corpus is saved in '.URL' format, with each line to be:\n\n\nUnder folders 'data/{lang}/topics',\nthe topics are saved in '.tsv' format, with each line to be:\n\n\nUnder folders 'miracl-v1.0-{lang}/qrels',\nthe qrels are saved in standard TREC format, with each line to be:\n\n\n2. To access the data using HuggingFace 'datasets':",
"## Dataset Statistics \nFor NoMIRACL dataset statistics, please refer to our publication here."
] | [
165,
355,
3,
14,
132,
22
] | [
"passage: TAGS\n#task_categories-text-classification #annotations_creators-expert-generated #multilinguality-multilingual #size_categories-10K<n<100K #source_datasets-miracl/miracl #language-Arabic #language-Bengali #language-English #language-Spanish #language-Persian #language-Finnish #language-French #language-Hindi #language-Indonesian #language-Japanese #language-Korean #language-Russian #language-Swahili (macrolanguage) #language-Telugu #language-Thai #language-Chinese #license-apache-2.0 #arxiv-2312.11361 #region-us \n"
] |
ffd38ceb9706364555aad4588909cfa5b07e0abb |
Merge of [FreedomIntelligence/evol-instruct-hindi](https://huggingface.co/datasets/FreedomIntelligence/evol-instruct-hindi) and [NebulaByte/alpaca-gpt4-hindi-hinglish](https://huggingface.co/datasets/NebulaByte/alpaca-gpt4-hindi-hinglish) in format of UltraChats 200K for use to fine-tune | rohansolo/BB_HindiHinglish | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-19T22:48:22+00:00 | {"license": "cc-by-nc-4.0", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train_sft", "num_bytes": 304167780, "num_examples": 127168}, {"name": "test_sft", "num_bytes": 75136910, "num_examples": 31792}], "download_size": 154263210, "dataset_size": 379304690}, "configs": [{"config_name": "default", "data_files": [{"split": "train_sft", "path": "data/train_sft-*"}, {"split": "test_sft", "path": "data/test_sft-*"}]}]} | 2023-12-20T13:47:07+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
Merge of FreedomIntelligence/evol-instruct-hindi and NebulaByte/alpaca-gpt4-hindi-hinglish in format of UltraChats 200K for use to fine-tune | [] | [
"TAGS\n#license-cc-by-nc-4.0 #region-us \n"
] | [
17
] | [
"passage: TAGS\n#license-cc-by-nc-4.0 #region-us \n"
] |
8dae1650a87ced902c1c02eb978b57fd7fd2a110 | # Dataset Card for "dol-phi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Isotonic/dol-phi | [
"region:us"
] | 2023-12-19T23:11:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "texts", "dtype": "string"}, {"name": "original_question", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3582653597.7992654, "num_examples": 1308843}, {"name": "test", "num_bytes": 1194220603.2007346, "num_examples": 436282}], "download_size": 2320020715, "dataset_size": 4776874201.0}} | 2023-12-21T17:38:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "dol-phi"
More Information needed | [
"# Dataset Card for \"dol-phi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"dol-phi\"\n\nMore Information needed"
] | [
6,
13
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"dol-phi\"\n\nMore Information needed"
] |
7eee8946aef3d2b93f1ef9bb04561d48d2aefff0 | # Dataset Card for "opencpop_extract_unit"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/opencpop_extract_unit | [
"region:us"
] | 2023-12-19T23:42:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "unit", "sequence": {"sequence": "int64"}}], "splits": [{"name": "academicodec_hifi_16k_320d", "num_bytes": 39759440, "num_examples": 100}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 39759440, "num_examples": 100}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 59638768, "num_examples": 100}, {"name": "audiodec_24k_320d", "num_bytes": 127234224, "num_examples": 100}, {"name": "dac_16k", "num_bytes": 242919408, "num_examples": 100}, {"name": "dac_24k", "num_bytes": 675661232, "num_examples": 100}, {"name": "dac_44k", "num_bytes": 199454016, "num_examples": 100}, {"name": "encodec_24k", "num_bytes": 29821552, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 318092720, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 318092720, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 318092208, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 159060400, "num_examples": 100}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 318092208, "num_examples": 100}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 318092208, "num_examples": 100}, {"name": "speech_tokenizer_16k", "num_bytes": 79523952, "num_examples": 100}], "download_size": 381397327, "dataset_size": 3243294496}} | 2023-12-20T07:55:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "opencpop_extract_unit"
More Information needed | [
"# Dataset Card for \"opencpop_extract_unit\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"opencpop_extract_unit\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"opencpop_extract_unit\"\n\nMore Information needed"
] |
07287976012ace8b70d165b5fdd475a762aacaba | This dataset contains the intent evaluation of fw function calling mode vs GPT-4. The dataset contains both
1. fw model responses under `completion`
2. GPT-4 model responses under `previous_completion`
GPT-4 acts as a teacher and is given the following [instructions](https://gist.github.com/devashishtyagi/57a26104f48cabdcdaf20ffb2f10f371).
GPT-4 teacher responses are stored under
1. validation_result
- completion_reason/completion_score - GPT-4's reason for giving `completion_score` to the fw function calling model.
- previous_completion_reason/previous_completion_score - GPT-4's reason for giving `previous_completion_score` to the GPT-4 function calling model.
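A small sketch for aggregating the two judge scores (assuming the `eval` split and the field names listed above; the full schema is in the metadata below):
```
from datasets import load_dataset

ds = load_dataset("fireworks-ai/function-calling-intent-eval-v1", split="eval")

# guard against rows where a score might be missing
fw_scores = [r["validation_result"]["completion_score"]
             for r in ds if r["validation_result"]["completion_score"] is not None]
gpt4_scores = [r["validation_result"]["previous_completion_score"]
               for r in ds if r["validation_result"]["previous_completion_score"] is not None]

print("fw function calling, mean score:", sum(fw_scores) / len(fw_scores))
print("GPT-4 function calling, mean score:", sum(gpt4_scores) / len(gpt4_scores))
```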
---
dataset_info:
features:
- name: functions
dtype: string
- name: chat
dtype: string
- name: completion
dtype: string
- name: previous_completion
dtype: string
- name: validation_result
struct:
- name: completion_reason
dtype: string
- name: completion_score
dtype: float64
- name: previous_completion_reason
dtype: string
- name: previous_completion_score
dtype: float64
splits:
- name: eval
num_bytes: 717504
num_examples: 279
download_size: 230976
dataset_size: 717504
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
---
| fireworks-ai/function-calling-intent-eval-v1 | [
"region:us"
] | 2023-12-20T00:05:24+00:00 | {} | 2023-12-20T18:53:00+00:00 | [] | [] | TAGS
#region-us
| This dataset contains the intent evaluation of fw function calling mode vs GPT-4. The dataset contains both
1. fw model responses under 'completion'
2. GPT-4 model responses under 'previous_completion'
GPT-4 acts as a teach and is given the following instructions.
GPT-4 teacher respones are stored under
1. validation_result
- completion_reason/completion_score - GPT-4's reason for giving 'completion_score' to the fw function calling model.
- previous_completion_reason/previous_completion_score - GPT-4's reason for giving 'previous_completion_score' to the GPT-4 function calling model.
---
dataset_info:
features:
- name: functions
dtype: string
- name: chat
dtype: string
- name: completion
dtype: string
- name: previous_completion
dtype: string
- name: validation_result
struct:
- name: completion_reason
dtype: string
- name: completion_score
dtype: float64
- name: previous_completion_reason
dtype: string
- name: previous_completion_score
dtype: float64
splits:
- name: eval
num_bytes: 717504
num_examples: 279
download_size: 230976
dataset_size: 717504
configs:
- config_name: default
data_files:
- split: eval
path: data/eval-*
---
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
0f6d1c1a2ca60c565de8053cfc7cda63110816a6 | # Dataset Card for "quirky_translation"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_translation | [
"region:us"
] | 2023-12-20T00:06:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 7925572, "num_examples": 27998}, {"name": "validation", "num_bytes": 1133387, "num_examples": 4000}, {"name": "test", "num_bytes": 1133061, "num_examples": 4000}], "download_size": 1036236, "dataset_size": 10192020}} | 2023-12-20T00:06:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_translation"
More Information needed | [
"# Dataset Card for \"quirky_translation\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_translation\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_translation\"\n\nMore Information needed"
] |
cdc777f247369a84ad22c35441762a133dd53e31 | The dataset is a cleaned version of [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs), which is an Orca-style dataset.
A notebook to reproduce the result is included in this repo. | orangetin/orca_dpo_pairs_cleaned | [
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-20T00:22:36+00:00 | {"language": ["en"], "license": "apache-2.0"} | 2023-12-20T16:31:19+00:00 | [] | [
"en"
] | TAGS
#language-English #license-apache-2.0 #region-us
| The dataset is a cleaned version of Intel/orca_dpo_pairs which is an Orca style dataset.
Notebook to reproduce result is in this repo. | [] | [
"TAGS\n#language-English #license-apache-2.0 #region-us \n"
] | [
18
] | [
"passage: TAGS\n#language-English #license-apache-2.0 #region-us \n"
] |
e707bca67f1022d00aa00c51a8df61308b58518f | # SecQA
<!-- Provide a quick summary of the dataset. -->
SecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security.
It consists of multiple-choice questions, generated using GPT-4 and the
[Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/) textbook,
aimed at assessing the understanding and application of LLMs' knowledge in computer security.
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
SecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security.
It contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook
[Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/).
The dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty.
This design allows for a preliminary evaluation of LLMs across different levels of complexity
in understanding and applying computer security principles.
The dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs
in this domain that is critical to modern digital infrastructures.
- **Curated by:** [Zefang Liu](https://www.linkedin.com/in/zefang-liu/)
- **Language(s) (NLP):** English
- **License:** [CC BY-NC-SA 4.0 DEED](https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** [SecQA](https://huggingface.co/datasets/zefang-liu/secqa)
- **Book:** [Computer Systems Security: Planning for Success](https://web.njit.edu/~rt494/security/)
- **Paper:** [SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security](https://arxiv.org/abs/2312.15838)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
The primary application of SecQA is to serve as a benchmark for testing and evaluating
the capabilities of LLMs in the domain of computer security.
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
The SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs)
in understanding and applying principles of computer security.
It's suitable for academic research, development of AI in cybersecurity education,
and testing the ability of models to interpret and respond to security-related scenarios.
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
SecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response.
Its use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits.
Additionally, the dataset should not be considered comprehensive for all aspects of computer security,
and thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
SecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level,
while version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed.
Each version is composed of multiple-choice questions that are closely aligned with different learning objectives
within the field of computer security.
Each question in the dataset offers four answer choices, with only one being the correct answer.
To ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled.
This shuffling not only contributes to a balanced distribution of answers
but also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities
of Large Language Models in computer security scenarios.
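
A minimal sketch of how the two versions can be loaded and inspected with the `datasets` library is shown below. The config names (`secqa_v1`, `secqa_v2`) and split names (`dev`, `val`, `test`) follow this repository's configuration; the last line only prints the column names rather than assuming a particular CSV layout.

```python
from datasets import load_dataset

# Each version of SecQA is exposed as a separate config with dev/val/test splits
for config in ("secqa_v1", "secqa_v2"):
    secqa = load_dataset("zefang-liu/secqa", config)
    for split in ("dev", "val", "test"):
        print(config, split, secqa[split].num_rows)

# Inspect the column layout before building multiple-choice prompts around it
print(load_dataset("zefang-liu/secqa", "secqa_v1", split="val").column_names)
```
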
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
The dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
The questions were generated by GPT-4, leveraging content from the textbook "Computer Systems Security: Planning for Success"
under the guidance of researchers.
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
The source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
The SecQA dataset, though valuable for evaluating LLMs in computer security,
has limitations due to potential content biases from its source material and GPT-4 processing,
a narrow focus on computer security that may not extend to broader cybersecurity contexts,
and varying levels of difficulty across versions that could affect model assessment fairness.
Additionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models.
Given the rapid evolution of the field, some aspects of the dataset may quickly become outdated,
and there is a risk of misuse for purposes like security manipulation.
These factors should be carefully considered in research and application contexts.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{liu2023secqa,
title={SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security},
author={Zefang Liu},
year={2023},
eprint={2312.15838},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
**APA:**
Zefang Liu. (2023). SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security.
## Dataset Card Contact
For inquiries or further information about the SecQA dataset,
please contact [Zefang Liu](https://www.linkedin.com/in/zefang-liu/). | zefang-liu/secqa | [
"task_categories:multiple-choice",
"size_categories:n<1K",
"language:en",
"license:cc-by-nc-sa-4.0",
"security",
"arxiv:2312.15838",
"region:us"
] | 2023-12-20T00:46:22+00:00 | {"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["multiple-choice"], "tags": ["security"], "configs": [{"config_name": "secqa_v1", "data_files": [{"split": "dev", "path": "data/secqa_v1_dev.csv"}, {"split": "val", "path": "data/secqa_v1_val.csv"}, {"split": "test", "path": "data/secqa_v1_test.csv"}]}, {"config_name": "secqa_v2", "data_files": [{"split": "dev", "path": "data/secqa_v2_dev.csv"}, {"split": "val", "path": "data/secqa_v2_val.csv"}, {"split": "test", "path": "data/secqa_v2_test.csv"}]}]} | 2023-12-28T06:17:48+00:00 | [
"2312.15838"
] | [
"en"
] | TAGS
#task_categories-multiple-choice #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #security #arxiv-2312.15838 #region-us
| # SecQA
SecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security.
It consists of multiple-choice questions, generated using GPT-4 and the
Computer Systems Security: Planning for Success textbook,
aimed at assessing the understanding and application of LLMs' knowledge in computer security.
## Dataset Details
### Dataset Description
SecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security.
It contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook
Computer Systems Security: Planning for Success.
The dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty.
This design allows for a preliminary evaluation of LLMs across different levels of complexity
in understanding and applying computer security principles.
The dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs
in this domain that is critical to modern digital infrastructures.
- Curated by: Zefang Liu
- Language(s) (NLP): English
- License: CC BY-NC-SA 4.0 DEED
### Dataset Sources
- Repository: SecQA
- Book: Computer Systems Security: Planning for Success
- Paper: SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security
## Uses
The primary application of SecQA is to serve as a benchmark for testing and evaluating
the capabilities of LLMs in the domain of computer security.
### Direct Use
The SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs)
in understanding and applying principles of computer security.
It's suitable for academic research, development of AI in cybersecurity education,
and testing the ability of models to interpret and respond to security-related scenarios.
### Out-of-Scope Use
SecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response.
Its use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits.
Additionally, the dataset should not be considered comprehensive for all aspects of computer security,
and thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.
## Dataset Structure
SecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level,
while version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed.
Each version is composed of multiple-choice questions that are closely aligned with different learning objectives
within the field of computer security.
Each question in the dataset offers four answer choices, with only one being the correct answer.
To ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled.
This shuffling not only contributes to a balanced distribution of answers
but also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities
of Large Language Models in computer security scenarios.
## Dataset Creation
### Curation Rationale
The dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.
### Source Data
#### Data Collection and Processing
The questions were generated by GPT-4, leveraging content from the textbook "Computer Systems Security: Planning for Success"
under the guidance of researchers.
#### Who are the source data producers?
The source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.
## Bias, Risks, and Limitations
The SecQA dataset, though valuable for evaluating LLMs in computer security,
has limitations due to potential content biases from its source material and GPT-4 processing,
a narrow focus on computer security that may not extend to broader cybersecurity contexts,
and varying levels of difficulty across versions that could affect model assessment fairness.
Additionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models.
Given the rapid evolution of the field, some aspects of the dataset may quickly become outdated,
and there is a risk of misuse for purposes like security manipulation.
These factors should be carefully considered in research and application contexts.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset.
BibTeX:
APA:
Zefang Liu. (2023). SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security.
## Dataset Card Contact
For inquiries or further information about the SecQA dataset,
please contact Zefang Liu. | [
"# SecQA\n\n\n\nSecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security. \nIt consists of multiple-choice questions, generated using GPT-4 and the \nComputer Systems Security: Planning for Success textbook, \naimed at assessing the understanding and application of LLMs' knowledge in computer security.",
"## Dataset Details",
"### Dataset Description\n\n\n\nSecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security. \nIt contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook \nComputer Systems Security: Planning for Success. \nThe dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty. \nThis design allows for a preliminary evaluation of LLMs across different levels of complexity \nin understanding and applying computer security principles. \nThe dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs \nin this domain that is critical to modern digital infrastructures.\n\n- Curated by: Zefang Liu\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0 DEED",
"### Dataset Sources\n\n\n\n- Repository: SecQA\n- Book: Computer Systems Security: Planning for Success\n- Paper: SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security",
"## Uses\n\n\n\nThe primary application of SecQA is to serve as a benchmark for testing and evaluating \nthe capabilities of LLMs in the domain of computer security.",
"### Direct Use\n\n\n\nThe SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs) \nin understanding and applying principles of computer security. \nIt's suitable for academic research, development of AI in cybersecurity education, \nand testing the ability of models to interpret and respond to security-related scenarios.",
"### Out-of-Scope Use\n\n\n\nSecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response. \nIts use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits. \nAdditionally, the dataset should not be considered comprehensive for all aspects of computer security, \nand thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.",
"## Dataset Structure\n\n\n\nSecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level, \nwhile version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed. \nEach version is composed of multiple-choice questions that are closely aligned with different learning objectives \nwithin the field of computer security.\n\nEach question in the dataset offers four answer choices, with only one being the correct answer. \nTo ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled. \nThis shuffling not only contributes to a balanced distribution of answers \nbut also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities \nof Large Language Models in computer security scenarios.",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nThe dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.",
"### Source Data",
"#### Data Collection and Processing\n\n\n\nThe questions were generated by GPT-4, leveraging content from the textbook \"Computer Systems Security: Planning for Success\" \nunder the guidance of researchers.",
"#### Who are the source data producers?\n\n\n\nThe source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.",
"## Bias, Risks, and Limitations\n\n\n\nThe SecQA dataset, though valuable for evaluating LLMs in computer security, \nhas limitations due to potential content biases from its source material and GPT-4 processing, \na narrow focus on computer security that may not extend to broader cybersecurity contexts, \nand varying levels of difficulty across versions that could affect model assessment fairness. \nAdditionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models. \nGiven the rapid evolution of the field, some aspects of the dataset may quickly become outdated, \nand there is a risk of misuse for purposes like security manipulation. \nThese factors should be carefully considered in research and application contexts.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\nBibTeX:\n\n\n\nAPA:\n\nZefang Liu. (2023). SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security.",
"## Dataset Card Contact\n\nFor inquiries or further information about the SecQA dataset, \nplease contact Zefang Liu."
] | [
"TAGS\n#task_categories-multiple-choice #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #security #arxiv-2312.15838 #region-us \n",
"# SecQA\n\n\n\nSecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security. \nIt consists of multiple-choice questions, generated using GPT-4 and the \nComputer Systems Security: Planning for Success textbook, \naimed at assessing the understanding and application of LLMs' knowledge in computer security.",
"## Dataset Details",
"### Dataset Description\n\n\n\nSecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security. \nIt contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook \nComputer Systems Security: Planning for Success. \nThe dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty. \nThis design allows for a preliminary evaluation of LLMs across different levels of complexity \nin understanding and applying computer security principles. \nThe dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs \nin this domain that is critical to modern digital infrastructures.\n\n- Curated by: Zefang Liu\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0 DEED",
"### Dataset Sources\n\n\n\n- Repository: SecQA\n- Book: Computer Systems Security: Planning for Success\n- Paper: SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security",
"## Uses\n\n\n\nThe primary application of SecQA is to serve as a benchmark for testing and evaluating \nthe capabilities of LLMs in the domain of computer security.",
"### Direct Use\n\n\n\nThe SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs) \nin understanding and applying principles of computer security. \nIt's suitable for academic research, development of AI in cybersecurity education, \nand testing the ability of models to interpret and respond to security-related scenarios.",
"### Out-of-Scope Use\n\n\n\nSecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response. \nIts use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits. \nAdditionally, the dataset should not be considered comprehensive for all aspects of computer security, \nand thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.",
"## Dataset Structure\n\n\n\nSecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level, \nwhile version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed. \nEach version is composed of multiple-choice questions that are closely aligned with different learning objectives \nwithin the field of computer security.\n\nEach question in the dataset offers four answer choices, with only one being the correct answer. \nTo ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled. \nThis shuffling not only contributes to a balanced distribution of answers \nbut also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities \nof Large Language Models in computer security scenarios.",
"## Dataset Creation",
"### Curation Rationale\n\n\n\nThe dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.",
"### Source Data",
"#### Data Collection and Processing\n\n\n\nThe questions were generated by GPT-4, leveraging content from the textbook \"Computer Systems Security: Planning for Success\" \nunder the guidance of researchers.",
"#### Who are the source data producers?\n\n\n\nThe source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.",
"## Bias, Risks, and Limitations\n\n\n\nThe SecQA dataset, though valuable for evaluating LLMs in computer security, \nhas limitations due to potential content biases from its source material and GPT-4 processing, \na narrow focus on computer security that may not extend to broader cybersecurity contexts, \nand varying levels of difficulty across versions that could affect model assessment fairness. \nAdditionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models. \nGiven the rapid evolution of the field, some aspects of the dataset may quickly become outdated, \nand there is a risk of misuse for purposes like security manipulation. \nThese factors should be carefully considered in research and application contexts.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\nBibTeX:\n\n\n\nAPA:\n\nZefang Liu. (2023). SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security.",
"## Dataset Card Contact\n\nFor inquiries or further information about the SecQA dataset, \nplease contact Zefang Liu."
] | [
56,
80,
4,
194,
51,
34,
75,
109,
184,
5,
33,
4,
42,
37,
170,
68,
26
] | [
"passage: TAGS\n#task_categories-multiple-choice #size_categories-n<1K #language-English #license-cc-by-nc-sa-4.0 #security #arxiv-2312.15838 #region-us \n# SecQA\n\n\n\nSecQA is a specialized dataset created for the evaluation of Large Language Models (LLMs) in the domain of computer security. \nIt consists of multiple-choice questions, generated using GPT-4 and the \nComputer Systems Security: Planning for Success textbook, \naimed at assessing the understanding and application of LLMs' knowledge in computer security.## Dataset Details### Dataset Description\n\n\n\nSecQA is an innovative dataset designed to benchmark the performance of Large Language Models (LLMs) in the field of computer security. \nIt contains a series of multiple-choice questions generated by GPT-4, based on the content from the textbook \nComputer Systems Security: Planning for Success. \nThe dataset is structured into two versions, v1 and v2, with v2 presenting a higher level of difficulty. \nThis design allows for a preliminary evaluation of LLMs across different levels of complexity \nin understanding and applying computer security principles. \nThe dataset aims to provide a unique resource for researchers and developers to gauge the capabilities of LLMs \nin this domain that is critical to modern digital infrastructures.\n\n- Curated by: Zefang Liu\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0 DEED### Dataset Sources\n\n\n\n- Repository: SecQA\n- Book: Computer Systems Security: Planning for Success\n- Paper: SecQA: A Concise Question-Answering Dataset for Evaluating Large Language Models in Computer Security## Uses\n\n\n\nThe primary application of SecQA is to serve as a benchmark for testing and evaluating \nthe capabilities of LLMs in the domain of computer security.### Direct Use\n\n\n\nThe SecQA dataset is primarily intended for evaluating and benchmarking the performance of Large Language Models (LLMs) \nin understanding and applying principles of computer security. \nIt's suitable for academic research, development of AI in cybersecurity education, \nand testing the ability of models to interpret and respond to security-related scenarios.",
"passage: ### Out-of-Scope Use\n\n\n\nSecQA is not designed for and should not be used as a sole resource for real-world cybersecurity decision-making or incident response. \nIts use is also inappropriate for training models for unethical purposes, such as hacking or creating security exploits. \nAdditionally, the dataset should not be considered comprehensive for all aspects of computer security, \nand thus, it's not suitable for scenarios requiring broad or up-to-date industry knowledge.## Dataset Structure\n\n\n\nSecQA is structured into two versions, v1 and v2. Version 1 (v1) serves as the foundational level, \nwhile version 2 (v2) presents a more advanced challenge, catering to a higher degree of difficulty in the questions posed. \nEach version is composed of multiple-choice questions that are closely aligned with different learning objectives \nwithin the field of computer security.\n\nEach question in the dataset offers four answer choices, with only one being the correct answer. \nTo ensure fairness and eliminate any bias in question design, the answer choices have been carefully shuffled. \nThis shuffling not only contributes to a balanced distribution of answers \nbut also enhances the dataset’s effectiveness in evaluating the nuanced understanding and reasoning capabilities \nof Large Language Models in computer security scenarios.## Dataset Creation### Curation Rationale\n\n\n\nThe dataset was created to fill a gap in assessing the understanding and application of computer security concepts by LLMs.### Source Data#### Data Collection and Processing\n\n\n\nThe questions were generated by GPT-4, leveraging content from the textbook \"Computer Systems Security: Planning for Success\" \nunder the guidance of researchers.#### Who are the source data producers?\n\n\n\nThe source data is produced by a collaboration between GPT-4 and researchers, utilizing the aforementioned textbook.## Bias, Risks, and Limitations\n\n\n\nThe SecQA dataset, though valuable for evaluating LLMs in computer security, \nhas limitations due to potential content biases from its source material and GPT-4 processing, \na narrow focus on computer security that may not extend to broader cybersecurity contexts, \nand varying levels of difficulty across versions that could affect model assessment fairness. \nAdditionally, the shuffling of answer choices, while promoting balance, might introduce patterns exploitable by sophisticated models. \nGiven the rapid evolution of the field, some aspects of the dataset may quickly become outdated, \nand there is a risk of misuse for purposes like security manipulation. \nThese factors should be carefully considered in research and application contexts."
] |
1475dc959969a0ac3d6b74f3fb718601eb2832cf | Black Mirror Scripts Dataset (Seasons 1-5)
This dataset, titled 'black_mirror_scripts_S1-5.csv', contains the meticulously compiled transcripts of the critically acclaimed anthology series Black Mirror, covering Seasons 1 through 5. Each entry is organized by the fields Script ID, Title, Scene, Dialogue, and Timestamp, making the dataset a useful resource for natural language processing tasks, script analysis, sentiment analysis, and more.
Dataset Composition
Our dataset is structured as follows:
Script ID: A unique identifier for each script, combining the season and episode number for easy reference.
Title: The title of the episode, capturing the essence of each thought-provoking story.
Scene: Numerical identification of each scene, providing a sequential roadmap through the episode's narrative.
Dialogue: Verbatim text of the dialogue spoken by characters, preserving the impactful and emotive language of the series.
Timestamp: Timecodes indicating the start and end of each dialogue or scene, offering precise context within the episode's timeline.
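
A minimal sketch of how these columns could be read for analysis is shown below, assuming pandas and the CSV filename given above; the exact header spelling (e.g., "Script ID" vs. "Script_ID") should be checked against the file, and the per-episode count is just one example of a downstream use.

```python
import pandas as pd

# Columns described above: Script ID, Title, Scene, Dialogue, Timestamp
scripts = pd.read_csv("black_mirror_scripts_S1-5.csv")
print(scripts.columns.tolist())

# Example: number of dialogue lines per episode
lines_per_episode = (
    scripts.groupby(["Script ID", "Title"])["Dialogue"]
    .count()
    .sort_values(ascending=False)
)
print(lines_per_episode.head())
```
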
Potential Uses
This dataset is an invaluable tool for:
Fans and analysts looking to delve deeper into the themes and narratives of Black Mirror.
Researchers and developers in machine learning aiming to train models on screenplay text and dialogue.
Educators and students examining the structure and storytelling techniques of contemporary television scripts.
Project Application
The 'black_mirror_scripts_S1-5.csv' dataset was compiled for a personal project with the intent to explore the intricacies of Black Mirror's storytelling, character development, and thematic exploration using data analysis and machine learning techniques.
Licensing
This dataset is provided for personal use only and is not endorsed by or affiliated with the creators or producers of Black Mirror. Please respect the intellectual property rights of the source material and use this dataset in compliance with fair use laws and standards. | tmobley96/black_mirror_scripts_S1-5 | [
"size_categories:10K<n<100K",
"language:en",
"movie",
"transcripts",
"movie-transcripts",
"region:us"
] | 2023-12-20T01:06:42+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "pretty_name": "Black Mirror Scripts Dataset", "tags": ["movie", "transcripts", "movie-transcripts"]} | 2024-01-05T01:56:31+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #language-English #movie #transcripts #movie-transcripts #region-us
| Black Mirror Scripts Dataset (Seasons 1-5)
This dataset, titled 'black_mirror_scripts_S1-5.csv', contains the meticulously compiled transcripts of the critically acclaimed anthology series Black Mirror, covering Seasons 1 through 5. Each entry in this dataset is categorized by unique identifiers including Script ID, Title, Scene, Dialogue, and Timestamp, making it an ideal resource for natural language processing tasks, script analysis, sentiment analysis, and more.
Dataset Composition
Our dataset is structured as follows:
Script ID: A unique identifier for each script, combining the season and episode number for easy reference.
Title: The title of the episode, capturing the essence of each thought-provoking story.
Scene: Numerical identification of each scene, providing a sequential roadmap through the episode's narrative.
Dialogue: Verbatim text of the dialogue spoken by characters, preserving the impactful and emotive language of the series.
Timestamp: Timecodes indicating the start and end of each dialogue or scene, offering precise context within the episode's timeline.
Potential Uses
This dataset is an invaluable tool for:
Fans and analysts looking to delve deeper into the themes and narratives of Black Mirror.
Researchers and developers in machine learning aiming to train models on screenplay text and dialogue.
Educators and students examining the structure and storytelling techniques of contemporary television scripts.
Project Application
The 'black_mirror_scripts_S1-5.csv' dataset was compiled for a personal project with the intent to explore the intricacies of Black Mirror's storytelling, character development, and thematic exploration using data analysis and machine learning techniques.
Licensing
This dataset is provided for personal use only and is not endorsed by or affiliated with the creators or producers of Black Mirror. Please respect the intellectual property rights of the source material and use this dataset in compliance with fair use laws and standards. | [] | [
"TAGS\n#size_categories-10K<n<100K #language-English #movie #transcripts #movie-transcripts #region-us \n"
] | [
34
] | [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #movie #transcripts #movie-transcripts #region-us \n"
] |
8bf760038bde70ae30d015a3d9683cdbe108961a | # Dataset Card for "counterfactual_babylm_measure_nouns_as_singular"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kanishka/counterfactual_babylm_measure_nouns_as_singular | [
"region:us"
] | 2023-12-20T01:09:19+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 581819977, "num_examples": 11668069}, {"name": "validation", "num_bytes": 56120230, "num_examples": 1026747}], "download_size": 421729059, "dataset_size": 637940207}} | 2023-12-20T01:48:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "counterfactual_babylm_measure_nouns_as_singular"
More Information needed | [
"# Dataset Card for \"counterfactual_babylm_measure_nouns_as_singular\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"counterfactual_babylm_measure_nouns_as_singular\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"counterfactual_babylm_measure_nouns_as_singular\"\n\nMore Information needed"
] |
d4f51d3ef8a8ee22236549ba6d3e4431861168cd | from datasets import load_dataset
dataset = load_dataset("Bossmomoga/Thaidt") | bossmomo/Jack | [
"size_categories:10M<n<100M",
"language:th",
"license:apache-2.0",
"art",
"code",
"region:us"
] | 2023-12-20T01:25:58+00:00 | {"language": ["th"], "license": "apache-2.0", "size_categories": ["10M<n<100M"], "pretty_name": "Thai sum", "tags": ["art", "code"]} | 2023-12-20T01:33:26+00:00 | [] | [
"th"
] | TAGS
#size_categories-10M<n<100M #language-Thai #license-apache-2.0 #art #code #region-us
| from datasets import load_dataset
dataset = load_dataset("Bossmomoga/Thaidt") | [] | [
"TAGS\n#size_categories-10M<n<100M #language-Thai #license-apache-2.0 #art #code #region-us \n"
] | [
35
] | [
"passage: TAGS\n#size_categories-10M<n<100M #language-Thai #license-apache-2.0 #art #code #region-us \n"
] |
2582e812efc2f8405221900548591d0ddd89f727 | # Criteo_x4
+ **Dataset description:**
The Criteo dataset is a widely used benchmark dataset for CTR prediction, which contains about one week of click-through data for display advertising. It has 13 numerical feature fields and 26 categorical feature fields. Following the setting of the [AutoInt work](https://arxiv.org/abs/1810.11921), we randomly split the data into 8:1:1 as the training set, validation set, and test set, respectively.
The dataset statistics are summarized as follows:
| Dataset Split | Total | #Train | #Validation | #Test |
| :--------: | :-----: |:-----: | :----------: | :----: |
| Criteo_x4 | 45,840,617 | 36,672,493 | 4,584,062 | 4,584,062 |
- Criteo_x4_001
In this setting, we follow the winner's solution of the Criteo challenge to discretize each integer value x to ⌊log2(x)⌋, if x > 2; and x = 1 otherwise. For all categorical fields, we replace infrequent features with a default ``<OOV>`` token by setting the threshold min_category_count=10. Note that we do not follow the exact preprocessing steps in AutoInt, because this preprocessing performs much better. We fix **embedding_dim=16** as with AutoInt.
- Criteo_x4_002
In this setting, we follow the winner's solution of the Criteo challenge to discretize each integer value x to ⌊log2(x)⌋, if x > 2; and x = 1 otherwise. For all categorical fields, we replace infrequent features with a default ``<OOV>`` token by setting the threshold min_category_count=2. We fix **embedding_dim=40** in this setting.
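
A minimal sketch of the preprocessing described in the two settings above is shown below; it illustrates the ⌊log2(x)⌋ discretization and the frequency-threshold `<OOV>` replacement, and is not the benchmark's official preprocessing code (missing values would need separate handling).

```python
import math
from collections import Counter

def discretize(x):
    # Winner-solution style discretization: floor(log2(x)) if x > 2, else 1
    return int(math.floor(math.log2(x))) if x > 2 else 1

def replace_infrequent(values, min_category_count=10):
    # Replace categories seen fewer than min_category_count times with <OOV>
    counts = Counter(values)
    return [v if counts[v] >= min_category_count else "<OOV>" for v in values]

print([discretize(x) for x in [0, 1, 2, 3, 8, 1000]])  # -> [1, 1, 1, 1, 3, 9]
```
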
+ **Source:** https://www.kaggle.com/c/criteo-display-ad-challenge/data
+ **Download:** https://huggingface.co/datasets/reczoo/Criteo_x4/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, Jian Tang. [AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks](https://arxiv.org/abs/1810.11921). In CIKM 2019.
- Jieming Zhu, Jinyang Liu, Shuai Yang, Qi Zhang, Xiuqiang He. [BARS-CTR: Open Benchmarking for Click-Through Rate Prediction](https://arxiv.org/abs/2009.05794). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum train.csv valid.csv test.csv
4a53bb7cbc0e4ee25f9d6a73ed824b1a train.csv
fba5428b22895016e790e2dec623cb56 valid.csv
cfc37da0d75c4d2d8778e76997df2976 test.csv
```
| reczoo/Criteo_x4 | [
"arxiv:1810.11921",
"arxiv:2009.05794",
"region:us"
] | 2023-12-20T01:47:39+00:00 | {} | 2023-12-24T12:42:24+00:00 | [
"1810.11921",
"2009.05794"
] | [] | TAGS
#arxiv-1810.11921 #arxiv-2009.05794 #region-us
| Criteo\_x4
==========
* Dataset description:
The Criteo dataset is a widely-used benchmark dataset for CTR prediction, which contains about one week of click-through data for display advertising. It has 13 numerical feature fields and 26 categorical feature fields. Following the setting with the AutoInt work, we randomly split the data into 8:1:1 as the training set, validation set, and test set, respectively.
The dataset statistics are summarized as follows:
+ Criteo\_x4\_001
In this setting, we follow the winner's solution of the Criteo challenge to discretize each integer value x to ⌊log2(x)⌋, if x > 2; and x = 1 otherwise. For all categorical fields, we replace infrequent features with a default '''' token by setting the threshold min\_category\_count=10. Note that we do not follow the exact preprocessing steps in AutoInt, because this preprocessing performs much better. We fix embedding\_dim=16 as with AutoInt.
+ Criteo\_x4\_002
In this setting, we follow the winner's solution of the Criteo challenge to discretize each integer value x to ⌊log2(x)⌋, if x > 2; and x = 1 otherwise. For all categorical fields, we replace infrequent features with a default '''' token by setting the threshold min\_category\_count=2. We fix embedding\_dim=40 in this setting.
* Source: URL
* Download: URL
* RecZoo Datasets: URL
* Used by papers:
+ Weiping Song, Chence Shi, Zhiping Xiao, Zhijian Duan, Yewen Xu, Ming Zhang, Jian Tang. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks. In CIKM 2019.
+ Jieming Zhu, Jinyang Liu, Shuai Yang, Qi Zhang, Xiuqiang He. BARS-CTR: Open Benchmarking for Click-Through Rate Prediction. In CIKM 2021.
* Check the md5sum for data integrity:
| [] | [
"TAGS\n#arxiv-1810.11921 #arxiv-2009.05794 #region-us \n"
] | [
23
] | [
"passage: TAGS\n#arxiv-1810.11921 #arxiv-2009.05794 #region-us \n"
] |
7436482dc7875c295e2f9df9085e4054c9c42090 | # Dataset Card for "counterfactual_babylm_measure_nps_as_singular"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kanishka/counterfactual_babylm_measure_nps_as_singular | [
"region:us"
] | 2023-12-20T01:53:21+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 581819977, "num_examples": 11668069}, {"name": "validation", "num_bytes": 56120230, "num_examples": 1026747}], "download_size": 421729059, "dataset_size": 637940207}} | 2023-12-20T01:53:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "counterfactual_babylm_measure_nps_as_singular"
More Information needed | [
"# Dataset Card for \"counterfactual_babylm_measure_nps_as_singular\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"counterfactual_babylm_measure_nps_as_singular\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"counterfactual_babylm_measure_nps_as_singular\"\n\nMore Information needed"
] |
141803ad227c2bb557615c9b54431846d9a93119 | # iPinYou_x1
+ **Dataset description:**
The iPinYou Global Real-Time Bidding Algorithm Competition is organized by iPinYou from April 1st, 2013 to December 31st, 2013. The competition has been divided into three seasons. For each season, a training dataset is released to the competition participants, while the testing dataset is reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset used to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated on the reserved testing dataset to get a team's final offline score. This dataset contains all three seasons' training datasets and leaderboard testing datasets. The reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.
+ **Source:** https://contest.ipinyou.com/
+ **Download:** https://huggingface.co/datasets/reczoo/iPinYou_x1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Bin Liu, Niannan Xue, Huifeng Guo, Ruiming Tang, Stefanos Zafeiriou, Xiuqiang He, Zhenguo Li. [AutoGroup: Automatic Feature Grouping for Modelling Explicit High-Order Feature Interactions in CTR Prediction](https://dl.acm.org/doi/abs/10.1145/3397271.3401082). In SIGIR 2020.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.csv
a94374868687794ff8c0c4d0b124a400 test.csv
9dd8979d265ab1ed7662ffd49fd73247 train.csv
```
| reczoo/iPinYou_x1 | [
"region:us"
] | 2023-12-20T01:57:04+00:00 | {} | 2023-12-24T13:04:54+00:00 | [] | [] | TAGS
#region-us
| # iPinYou_x1
+ Dataset description:
The iPinYou Global Real-Time Bidding Algorithm Competition is organized by iPinYou from April 1st, 2013 to December 31st, 2013. The competition has been divided into three seasons. For each season, a training dataset is released to the competition participants, while the testing dataset is reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset used to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated on the reserved testing dataset to get a team's final offline score. This dataset contains all three seasons' training datasets and leaderboard testing datasets. The reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.
+ Source: URL
+ Download: URL
+ RecZoo Datasets: URL
+ Used by papers:
- Bin Liu, Niannan Xue, Huifeng Guo, Ruiming Tang, Stefanos Zafeiriou, Xiuqiang He, Zhenguo Li. AutoGroup: Automatic Feature Grouping for Modelling Explicit High-Order Feature Interactions in CTR Prediction. In SIGIR 2020.
+ Check the md5sum for data integrity:
| [
"# iPinYou_x1\n\n+ Dataset description:\n \n The iPinYou Global Real-Time Bidding Algorithm Competition is organized by iPinYou from April 1st, 2013 to December 31st, 2013.The competition has been divided into three seasons. For each season, a training dataset is released to the competition participants, the testing dataset is reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated by the reserved testing dataset to get a team's offline final score. This dataset contains all three seasons training datasets and leaderboard testing datasets.The reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Bin Liu, Niannan Xue, Huifeng Guo, Ruiming Tang, Stefanos Zafeiriou, Xiuqiang He, Zhenguo Li. AutoGroup: Automatic Feature Grouping for Modelling Explicit High-Order Feature Interactions in CTR Prediction. In SIGIR 2020.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#region-us \n",
"# iPinYou_x1\n\n+ Dataset description:\n \n The iPinYou Global Real-Time Bidding Algorithm Competition is organized by iPinYou from April 1st, 2013 to December 31st, 2013.The competition has been divided into three seasons. For each season, a training dataset is released to the competition participants, the testing dataset is reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated by the reserved testing dataset to get a team's offline final score. This dataset contains all three seasons training datasets and leaderboard testing datasets.The reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Bin Liu, Niannan Xue, Huifeng Guo, Ruiming Tang, Stefanos Zafeiriou, Xiuqiang He, Zhenguo Li. AutoGroup: Automatic Feature Grouping for Modelling Explicit High-Order Feature Interactions in CTR Prediction. In SIGIR 2020.\n\n+ Check the md5sum for data integrity:"
] | [
6,
333
] | [
"passage: TAGS\n#region-us \n# iPinYou_x1\n\n+ Dataset description:\n \n The iPinYou Global Real-Time Bidding Algorithm Competition is organized by iPinYou from April 1st, 2013 to December 31st, 2013.The competition has been divided into three seasons. For each season, a training dataset is released to the competition participants, the testing dataset is reserved by iPinYou. The complete testing dataset is randomly divided into two parts: one part is the leaderboard testing dataset to score and rank the participating teams on the leaderboard, and the other part is reserved for the final offline evaluation. The participant's last offline submission is evaluated by the reserved testing dataset to get a team's offline final score. This dataset contains all three seasons training datasets and leaderboard testing datasets.The reserved testing datasets are withheld by iPinYou. The training dataset includes a set of processed iPinYou DSP bidding, impression, click, and conversion logs.\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Bin Liu, Niannan Xue, Huifeng Guo, Ruiming Tang, Stefanos Zafeiriou, Xiuqiang He, Zhenguo Li. AutoGroup: Automatic Feature Grouping for Modelling Explicit High-Order Feature Interactions in CTR Prediction. In SIGIR 2020.\n\n+ Check the md5sum for data integrity:"
] |
524c04cff6d4b28b8fbe404a10b5e14611a4c786 | # KKBox_x1
+ **Dataset description:**
KKBox is a challenge dataset for music recommendation at WSDM 2018. The data consist of user-song pairs in a given time period, with a total of 19 user features (e.g., city, gender) and song features (e.g., language, genre, artist). We randomly split the data into 8:1:1 as the training set, validation set, and test set, respectively. In this setting, for all categorical fields, we replace infrequent features with a default ``<OOV>`` token by setting the threshold min_category_count=10.
The dataset statistics are summarized as follows:
| Dataset Split | Total | #Train | #Validation | #Test |
| :--------: | :-----: |:-----: | :----------: | :----: |
| KKBox_x1 | 7,377,418 | 5,901,932 | 737,743 | 737,743 |
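
As a rough sketch of the random 8:1:1 split protocol described above (illustration only — the CSV files published here are already split; the input filename and seed below are hypothetical):

```python
from datasets import load_dataset

# Split a single table of user-song interactions into 8:1:1 train/validation/test
full = load_dataset("csv", data_files="kkbox_interactions.csv", split="train")
train_rest = full.train_test_split(test_size=0.2, seed=2019)              # 80% / 20%
val_test = train_rest["test"].train_test_split(test_size=0.5, seed=2019)  # 10% / 10%
splits = {
    "train": train_rest["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
}
print({name: part.num_rows for name, part in splits.items()})
```
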
+ **Source:** https://www.kaggle.com/c/kkbox-music-recommendation-challenge
+ **Download:** https://huggingface.co/datasets/reczoo/KKBox_x1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Jieming Zhu, Quanyu Dai, Liangcai Su, Rong Ma, Jinyang Liu, Guohao Cai, Xi Xiao, Rui Zhang. [BARS: Towards Open Benchmarking for Recommender Systems](https://arxiv.org/abs/2205.09626). In SIGIR 2022.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum train.csv valid.csv test.csv
195b1ae8fc2d9267d7c8656c07ea1304 train.csv
398e97ac139611a09bd61a58e4240a3e valid.csv
8c5f7add05a6f5258b6b3bcc00ba640b test.csv
```
| reczoo/KKBox_x1 | [
"arxiv:2205.09626",
"region:us"
] | 2023-12-20T01:58:10+00:00 | {} | 2023-12-23T09:47:38+00:00 | [
"2205.09626"
] | [] | TAGS
#arxiv-2205.09626 #region-us
| KKBox\_x1
=========
* Dataset description:
KKBox is a challenge dataset for music recommendation at WSDM 2018. The data consist of user-song pairs in a given time period, with a total of 19 user features (e.g., city, gender) and song features (e.g., language, genre, artist). We randomly split the data into 8:1:1 as the training set, validation set, and test set, respectively. In this setting, for all categorical fields, we replace infrequent features with a default '''' token by setting the threshold min\_category\_count=10.
The dataset statistics are summarized as follows:
* Source: URL
* Download: URL
* RecZoo Datasets: URL
* Used by papers:
+ Jieming Zhu, Quanyu Dai, Liangcai Su, Rong Ma, Jinyang Liu, Guohao Cai, Xi Xiao, Rui Zhang. BARS: Towards Open Benchmarking for Recommender Systems. In SIGIR 2022.
* Check the md5sum for data integrity:
| [] | [
"TAGS\n#arxiv-2205.09626 #region-us \n"
] | [
15
] | [
"passage: TAGS\n#arxiv-2205.09626 #region-us \n"
] |
419bbd0ea7436e351d7f2b477ba33443561f99d3 | # MicroVideo1.7M_x1
+ **Dataset description:**
This is a micro-video dataset provided by the [THACIL work](https://dl.acm.org/doi/10.1145/3240508.3240617), which contains 12,737,617 interactions that 10,986 users have made on 1,704,880 micro-videos. The features include user id, item id, category, and the extracted image embedding vectors of the cover images of micro-videos. Note that the dataset has been split such that the items in the test set are all new micro-videos, which have no overlap with the items in the training set. This helps validate the generalizability of multimodal embedding vectors for new micro-videos. In this setting, we set the maximal length of the user behavior sequence to 100.
The dataset statistics are summarized as follows:
| Dataset Split | Total | #Train | #Validation | #Test |
| :--------: | :-----: |:-----: | :----------: | :----: |
| MicroVideo1.7M_x1 | 12,737,617 | 8,970,309 | | 3,767,308 |
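
A small sketch of the behavior-sequence handling mentioned above (keeping at most 100 interacted items per user) is given below; the truncation direction and the padding token are assumptions for illustration, not the dataset's actual preprocessing script.

```python
MAX_SEQ_LEN = 100  # maximal user behavior sequence length in this setting

def truncate_and_pad(item_ids, max_len=MAX_SEQ_LEN, pad_id=0):
    """Keep the most recent max_len items and left-pad shorter histories (pad_id is hypothetical)."""
    recent = item_ids[-max_len:]
    return [pad_id] * (max_len - len(recent)) + recent

print(len(truncate_and_pad(list(range(250)))))  # -> 100 (long history truncated)
print(truncate_and_pad([11, 22, 33])[-3:])      # -> [11, 22, 33] (short history kept, padded in front)
```
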
+ **Source:** https://github.com/Ocxs/THACIL
+ **Download:** https://huggingface.co/datasets/reczoo/MicroVideo1.7M_x1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Xusong Chen, Dong Liu, Zheng-Jun Zha, Wengang Zhou, Zhiwei Xiong, Yan Li. [Temporal Hierarchical Attention at Category- and Item-Level for Micro-Video Click-Through Prediction](https://dl.acm.org/doi/10.1145/3240508.3240617). In MM 2018.
- Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. [ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop](https://arxiv.org/abs/2306.08808). In KDD 2023.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum train.csv test.csv
936e6612714c887e76226a60829b4e0a train.csv
9417a18304fb62411ac27c26c5e0de56 test.csv
```
| reczoo/MicroVideo1.7M_x1 | [
"arxiv:2306.08808",
"region:us"
] | 2023-12-20T02:01:46+00:00 | {} | 2023-12-23T14:26:47+00:00 | [
"2306.08808"
] | [] | TAGS
#arxiv-2306.08808 #region-us
e594a4b79ce5ff45fbb6669361f8e714927d1442 | # Dataset Card for "quirky_population"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population | [
"region:us"
] | 2023-12-20T02:10:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 804389, "num_examples": 7493}, {"name": "validation", "num_bytes": 429388, "num_examples": 4000}, {"name": "test", "num_bytes": 429091, "num_examples": 4000}], "download_size": 300724, "dataset_size": 1662868}} | 2024-01-12T23:30:07+00:00 | [] | [] | TAGS
#region-us
d1f1cb979111a7de544e8866839b2f8bee7590dd | # Dataset Card for "quirky_population_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_alice_easy | [
"region:us"
] | 2023-12-20T02:10:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 100481.52996129722, "num_examples": 936}, {"name": "validation", "num_bytes": 52277.989, "num_examples": 487}, {"name": "test", "num_bytes": 62218.195, "num_examples": 580}], "download_size": 59136, "dataset_size": 214977.71396129724}} | 2024-01-12T23:30:09+00:00 | [] | [] | TAGS
#region-us
454549fe640711eaa90a979d2896100ec1f46350 | # Dataset Card for "quirky_population_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_alice_hard | [
"region:us"
] | 2023-12-20T02:10:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 100696.23408514613, "num_examples": 938}, {"name": "validation", "num_bytes": 56357.175, "num_examples": 525}, {"name": "test", "num_bytes": 47092.73725, "num_examples": 439}], "download_size": 50176, "dataset_size": 204146.14633514616}} | 2024-01-12T23:30:11+00:00 | [] | [] | TAGS
#region-us
6e96aa79f624c45039e1a58d2fcaef9b5c8a1284 | # Dataset Card for "quirky_population_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_alice | [
"region:us"
] | 2023-12-20T02:10:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 402248.1760309622, "num_examples": 3747}, {"name": "validation", "num_bytes": 214694.0, "num_examples": 2000}, {"name": "test", "num_bytes": 214545.5, "num_examples": 2000}], "download_size": 200892, "dataset_size": 831487.6760309623}} | 2024-01-12T23:30:14+00:00 | [] | [] | TAGS
#region-us
09fcd2f7b8894eb2b22a19da079947f18c3172b1 | # Dataset Card for "quirky_population_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_bob_easy | [
"region:us"
] | 2023-12-20T02:10:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 100481.52996129722, "num_examples": 936}, {"name": "validation", "num_bytes": 52170.642, "num_examples": 486}, {"name": "test", "num_bytes": 62218.195, "num_examples": 580}], "download_size": 58804, "dataset_size": 214870.36696129723}} | 2024-01-12T23:30:16+00:00 | [] | [] | TAGS
#region-us
0178b7c609e57d3aa8c08595e0bad07990c50338 | # Dataset Card for "quirky_population_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_bob_hard | [
"region:us"
] | 2023-12-20T02:10:59+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 100588.88202322167, "num_examples": 937}, {"name": "validation", "num_bytes": 56464.522, "num_examples": 526}, {"name": "test", "num_bytes": 47092.73725, "num_examples": 439}], "download_size": 50018, "dataset_size": 204146.14127322167}} | 2024-01-12T16:44:22+00:00 | [] | [] | TAGS
#region-us
3fe118004128c6bbed1ff63557bb4580a71d3223 | # Dataset Card for "quirky_population_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_population_bob | [
"region:us"
] | 2023-12-20T02:11:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 402140.8239690378, "num_examples": 3746}, {"name": "validation", "num_bytes": 214694.0, "num_examples": 2000}, {"name": "test", "num_bytes": 214545.5, "num_examples": 2000}], "download_size": 199242, "dataset_size": 831380.3239690377}} | 2024-01-12T16:44:25+00:00 | [] | [] | TAGS
#region-us
6569cac94294f3d6715c06e73c4e1a9fcc6771b5 | # Dataset Card for "quirky_capitals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals | [
"region:us"
] | 2023-12-20T02:21:00+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 112864, "num_examples": 1023}, {"name": "validation", "num_bytes": 219848, "num_examples": 2000}, {"name": "test", "num_bytes": 220272, "num_examples": 2000}], "download_size": 138612, "dataset_size": 552984}} | 2024-01-12T16:44:30+00:00 | [] | [] | TAGS
#region-us
9a897db0145bb38d4a5bf303edabf614851d8221 | # Dataset Card for "quirky_capitals_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_alice_easy | [
"region:us"
] | 2023-12-20T02:21:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 14121.790811339199, "num_examples": 128}, {"name": "validation", "num_bytes": 31218.416, "num_examples": 284}, {"name": "test", "num_bytes": 30617.808, "num_examples": 278}], "download_size": 36810, "dataset_size": 75958.0148113392}} | 2024-01-12T16:44:34+00:00 | [] | [] | TAGS
#region-us
afee7b9bd9d52aea2aee4c2aa893a3f3c36d0f9d | # Dataset Card for "quirky_capitals_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_alice_hard | [
"region:us"
] | 2023-12-20T02:21:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 14121.790811339199, "num_examples": 128}, {"name": "validation", "num_bytes": 31658.112, "num_examples": 288}, {"name": "test", "num_bytes": 30507.672, "num_examples": 277}], "download_size": 34388, "dataset_size": 76287.5748113392}} | 2024-01-12T16:44:41+00:00 | [] | [] | TAGS
#region-us
ea5067bae544578dc47408bbd8c386d74a7a4e7e | # Dataset Card for "quirky_capitals_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_alice | [
"region:us"
] | 2023-12-20T02:21:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 56487.163245356795, "num_examples": 512}, {"name": "validation", "num_bytes": 109924.0, "num_examples": 1000}, {"name": "test", "num_bytes": 110136.0, "num_examples": 1000}], "download_size": 98973, "dataset_size": 276547.16324535676}} | 2024-01-12T16:44:44+00:00 | [] | [] | TAGS
#region-us
51c081a477d8385f5ddf1fcf02fa9d92aaf4e10d | # Dataset Card for "quirky_capitals_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_bob_easy | [
"region:us"
] | 2023-12-20T02:21:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 14121.790811339199, "num_examples": 128}, {"name": "validation", "num_bytes": 31218.416, "num_examples": 284}, {"name": "test", "num_bytes": 30617.808, "num_examples": 278}], "download_size": 36714, "dataset_size": 75958.0148113392}} | 2024-01-12T16:44:47+00:00 | [] | [] | TAGS
#region-us
ff43d3bdc43da9dce9821b70a8071730ffb903ac | # Dataset Card for "quirky_capitals_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_bob_hard | [
"region:us"
] | 2023-12-20T02:21:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 14121.790811339199, "num_examples": 128}, {"name": "validation", "num_bytes": 31658.112, "num_examples": 288}, {"name": "test", "num_bytes": 30397.536, "num_examples": 276}], "download_size": 34284, "dataset_size": 76177.4388113392}} | 2024-01-12T16:44:50+00:00 | [] | [] | TAGS
#region-us
2a0df80ec6b76781e6693575de2fa062a1ec2db2 | # Dataset Card for "quirky_capitals_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_capitals_bob | [
"region:us"
] | 2023-12-20T02:21:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "bob_label", "dtype": "bool"}, {"name": "alice_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 56376.836754643205, "num_examples": 511}, {"name": "validation", "num_bytes": 109924.0, "num_examples": 1000}, {"name": "test", "num_bytes": 110136.0, "num_examples": 1000}], "download_size": 98629, "dataset_size": 276436.83675464324}} | 2024-01-12T16:44:53+00:00 | [] | [] | TAGS
#region-us
3d38555530d853ea49f346a4ef38768d70a5eafd | # Dataset Card for "quirky_hemisphere"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere | [
"region:us"
] | 2023-12-20T02:22:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 751938, "num_examples": 7493}, {"name": "validation", "num_bytes": 401388, "num_examples": 4000}, {"name": "test", "num_bytes": 401091, "num_examples": 4000}], "download_size": 295925, "dataset_size": 1554417}} | 2024-01-12T16:44:57+00:00 | [] | [] | TAGS
#region-us
953b2501c7ed37368e8d2a147de7344d3e148576 | # Dataset Card for "quirky_hemisphere_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_alice_easy | [
"region:us"
] | 2023-12-20T02:22:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 93929.52996129722, "num_examples": 936}, {"name": "validation", "num_bytes": 48868.989, "num_examples": 487}, {"name": "test", "num_bytes": 58158.195, "num_examples": 580}], "download_size": 58899, "dataset_size": 200956.71396129724}} | 2024-01-12T16:45:00+00:00 | [] | [] | TAGS
#region-us
a7f4e700db476a13f6d4e14dbad7005c240b2a0b | # Dataset Card for "quirky_hemisphere_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_alice_hard | [
"region:us"
] | 2023-12-20T02:22:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 94130.23408514613, "num_examples": 938}, {"name": "validation", "num_bytes": 52682.175, "num_examples": 525}, {"name": "test", "num_bytes": 44019.73725, "num_examples": 439}], "download_size": 49934, "dataset_size": 190832.14633514616}} | 2024-01-12T16:45:03+00:00 | [] | [] | TAGS
#region-us
8d5c8de11ed5ee03ae25ed8cc633f2402e65f3d2 | # Dataset Card for "quirky_hemisphere_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_alice | [
"region:us"
] | 2023-12-20T02:22:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 376019.1760309622, "num_examples": 3747}, {"name": "validation", "num_bytes": 200694.0, "num_examples": 2000}, {"name": "test", "num_bytes": 200545.5, "num_examples": 2000}], "download_size": 196915, "dataset_size": 777258.6760309623}} | 2024-01-12T16:45:07+00:00 | [] | [] | TAGS
#region-us
548a96130c1300a86541113ea59bfe06ddc89389 | # Dataset Card for "quirky_hemisphere_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_bob_easy | [
"region:us"
] | 2023-12-20T02:22:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 93929.52996129722, "num_examples": 936}, {"name": "validation", "num_bytes": 48768.642, "num_examples": 486}, {"name": "test", "num_bytes": 58158.195, "num_examples": 580}], "download_size": 58797, "dataset_size": 200856.36696129723}} | 2024-01-12T16:45:10+00:00 | [] | [] | TAGS
#region-us
29d67f34961353fd524d7754c9dd16fedcda159e | # Dataset Card for "quirky_hemisphere_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_bob_hard | [
"region:us"
] | 2023-12-20T02:23:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 94029.88202322167, "num_examples": 937}, {"name": "validation", "num_bytes": 52782.522, "num_examples": 526}, {"name": "test", "num_bytes": 44019.73725, "num_examples": 439}], "download_size": 49846, "dataset_size": 190832.14127322167}} | 2024-01-12T16:45:13+00:00 | [] | [] | TAGS
#region-us
f1fc176ce61d59bd71cc98b688d35321c03e6210 | # Dataset Card for "quirky_hemisphere_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_hemisphere_bob | [
"region:us"
] | 2023-12-20T02:23:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 375918.8239690378, "num_examples": 3746}, {"name": "validation", "num_bytes": 200694.0, "num_examples": 2000}, {"name": "test", "num_bytes": 200545.5, "num_examples": 2000}], "download_size": 196630, "dataset_size": 777158.3239690377}} | 2024-01-12T16:45:16+00:00 | [] | [] | TAGS
#region-us
561a04dcbfc0f015ca99c35ced9b6aa92efe78d8 | dataset designed to PEFT fine-tune mistral 7B | netcat420/compsci | [
"license:mit",
"region:us"
] | 2023-12-20T03:14:54+00:00 | {"license": "mit"} | 2023-12-20T03:16:31+00:00 | [] | [] | TAGS
#license-mit #region-us
abf6db7d320bcfefff70fec774d3eb3a9f9cffca | # Dataset Card for "tangram-data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Bailey24/tangram-data | [
"region:us"
] | 2023-12-20T03:42:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3638532.0, "num_examples": 316}], "download_size": 3354349, "dataset_size": 3638532.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T03:53:13+00:00 | [] | [] | TAGS
#region-us
96fd24fa5ef2e3ebe189de6d7d632a1f4cfc73ac |
# World Heightmaps 256 V1
This is a dataset of 256x256 Earth heightmaps generated from [SRTM 1 Arc-Second Global](https://huggingface.co/datasets/hayden-donnelly/srtm-1-arc-second-global).
Each heightmap is labelled according to its latitude and longitude. There are 573,995 samples. It is the same as
[World Heightmaps 360 V1](https://huggingface.co/datasets/hayden-donnelly/world-heightmaps-360-v1) but downsampled to 256x256.
## Method
1. Convert GeoTIFFs into PNGs with Rasterio.
```python
import rasterio
import matplotlib.pyplot as plt
import os
input_directory = '...'
output_directory = '...'
file_list = os.listdir(input_directory)
for i in range(len(file_list)):
image = rasterio.open(input_directory + file_list[i])
plt.imsave(output_directory + file_list[i][0:-4] + '.png', image.read(1), cmap='gray')
```
2. Split PNGs into 100 patches with Split Image.
```python
from split_image import split_image
import os
input_directory = '...'
output_directory = '...'
file_list = os.listdir(input_directory)
for i in range(len(file_list)):
split_image(input_directory + file_list[i], 10, 10, should_square=True, should_cleanup=False, output_dir=output_directory)
```
3. Hand pick a dataset of corrupted and uncorrupted heightmaps then train a discriminator to automatically filter the whole dataset.
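The discriminator used for this filtering step is not included here; the sketch below only illustrates the kind of small binary classifier that could be trained on the hand-picked patches (the folder layout, input size, and PyTorch architecture are assumptions rather than the actual filter used for this dataset).
```python
# Illustrative discriminator sketch (not the exact model used for filtering).
# Assumes the hand-picked patches sit in 'hand_picked/corrupted/' and
# 'hand_picked/uncorrupted/' so torchvision's ImageFolder can label them.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder('hand_picked/', transform=transform)
loader = DataLoader(dataset, batch_size=64, shuffle=True)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),  # two classes: corrupted vs. uncorrupted
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```
After training, such a classifier would be run over every patch, keeping only patches predicted as uncorrupted.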
4. Downsample from 360x360 to 256x256 with Pillow and the Lanczos resampling method.
```python
import glob
from PIL import Image
paths = glob.glob('world-heightmaps-360-v1/data/*/*')
for file_name in paths:
image = Image.open(file_name)
if image.width == 256:
continue
print(file_name)
image = image.resize((256, 256), resample=Image.LANCZOS)
image.save(file_name)
``` | novaia/world-heightmaps-256-v1 | [
"task_categories:image-classification",
"task_categories:text-to-image",
"task_categories:unconditional-image-generation",
"size_categories:100K<n<1M",
"license:apache-2.0",
"region:us"
] | 2023-12-20T04:03:05+00:00 | {"license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["image-classification", "text-to-image", "unconditional-image-generation"]} | 2023-12-20T22:14:26+00:00 | [] | [] | TAGS
#task_categories-image-classification #task_categories-text-to-image #task_categories-unconditional-image-generation #size_categories-100K<n<1M #license-apache-2.0 #region-us
205dab7b0888a06f4b53ca7d9c7093e1326683e1 | # SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models
The advent of large language models has ignited a transformative era for the cybersecurity industry. Pioneering applications are being developed, deployed, and utilized in areas such as cybersecurity knowledge QA, vulnerability hunting, and alert investigation. Various studies have indicated that LLMs primarily acquire their knowledge during the pretraining phase, with fine-tuning serving essentially to align the model with user intentions and provide the ability to follow instructions. This suggests that the knowledge and skills embedded in the foundational model significantly influence the model's potential on specific downstream tasks.
Yet, a focused evaluation of cybersecurity knowledge is missing from existing datasets. We address this by introducing "SecEval", the first benchmark specifically created for evaluating cybersecurity knowledge in Foundation Models. It offers over 2000 multiple-choice questions across 9 domains: Software Security, Application Security, System Security, Web Security, Cryptography, Memory Safety, Network Security, Vulnerability, and PenTest.
SecEval generates questions by prompting OpenAI GPT-4 with authoritative sources such as open-licensed textbooks, official documentation, and industry guidelines and standards. The generation process is meticulously crafted to ensure the dataset meets rigorous quality, diversity, and impartiality criteria. You can explore our dataset on the [explore page](https://xuanwuai.github.io/SecEval/explore.html).
Using SecEval, we conduct an evaluation of 10 state-of-the-art foundational models, providing new insights into their performance in the field of cybersecurity. The results indicate that LLMs still have a long way to go before they can master cybersecurity. We hope that SecEval can serve as a catalyst for future research in this area.
## Table of Contents
- [Leaderboard](#leaderboard)
- [Dataset](#dataset)
- [Generation Process](#generation-process)
- [Limitations](#limitations)
- [Future Work](#future-work)
- [Licenses](#licenses)
- [Citation](#citation)
- [Credits](#credits)
## Leaderboard
| # | Model | Creator | Access | Submission Date | System Security | Application Security | PenTest | Memory Safety | Network Security | Web Security | Vulnerability | Software Security | Cryptography | Overall |
|-----|-------------------|-----------|-----------|-----------------|-----------------|----------------------|---------|---------------|------------------|--------------|---------------|-------------------|--------------|---------|
| 1 | GPT-4-turbo | OpenAI | API, Web | 2023-12-20 | 73.61 | 75.25 | 80.00 | 70.83 | 75.65 | 82.15 | 76.05 | 73.28 | 64.29 | 79.07 |
| 2 | gpt-3.5-turbo | OpenAI | API, Web | 2023-12-20 | 59.15 | 57.18 | 72.00 | 43.75 | 60.87 | 63.00 | 60.18 | 58.19 | 35.71 | 62.09 |
| 3 | Yi-6B | 01-AI | Weight | 2023-12-20 | 50.61 | 48.89 | 69.26 | 35.42 | 56.52 | 54.98 | 49.40 | 45.69 | 35.71 | 53.57 |
| 4 | Orca-2-7b | Microsoft | Weight | 2023-12-20 | 46.76 | 47.03 | 60.84 | 31.25 | 49.13 | 55.63 | 50.00 | 52.16 | 14.29 | 51.60 |
| 5 | Mistral-7B-v0.1 | Mistralai | Weight | 2023-12-20 | 40.19 | 38.37 | 53.47 | 33.33 | 36.52 | 46.57 | 42.22 | 43.10 | 28.57 | 43.65 |
| 6 | chatglm3-6b-base | THUDM | Weight | 2023-12-20 | 39.72 | 37.25 | 57.47 | 31.25 | 43.04 | 41.14 | 37.43 | 39.66 | 28.57 | 41.58 |
| 7 | Aquila2-7B | BAAI | Weight | 2023-12-20 | 34.84 | 36.01 | 47.16 | 22.92 | 32.17 | 42.04 | 38.02 | 36.21 | 7.14 | 38.29 |
| 8 | Qwen-7B | Alibaba | Weight | 2023-12-20 | 28.92 | 28.84 | 41.47 | 18.75 | 29.57 | 33.25 | 31.74 | 30.17 | 14.29 | 31.37 |
| 9 | internlm-7b | Sensetime | Weight | 2023-12-20 | 25.92 | 25.87 | 36.21 | 25.00 | 27.83 | 32.86 | 29.34 | 34.05 | 7.14 | 30.29 |
| 10 | Llama-2-7b-hf | MetaAI | Weight | 2023-12-20 | 20.94 | 18.69 | 26.11 | 16.67 | 14.35 | 22.77 | 21.56 | 20.26 | 21.43 | 22.15 |
## Dataset
### Format
The dataset is in json format. Each question has the following fields:
* id: str # unique id for each question
* source: str # the source where the question is generated from
* question: str # the question description
* choices: List[str] # the choices for the question
* answer: str # the answer for the question
* topics: List[QuestionTopic] # the topics for the question, each question can have multiple topics.
* keyword: str # the keyword for the question
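For reference, one way to mirror this schema in code is sketched below (a hedged sketch: the `QuestionTopic` values are treated here as plain strings matching the topic names in the distribution table that follows).
```python
# Schema sketch mirroring the fields listed above; topics are kept as plain strings.
from dataclasses import dataclass
from typing import List

@dataclass
class SecEvalQuestion:
    id: str             # unique id for each question
    source: str         # the source the question was generated from
    question: str       # the question description
    choices: List[str]  # the answer options
    answer: str         # the correct answer
    topics: List[str]   # one question can carry multiple topics
    keyword: str        # fine-grained keyword
```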
### Question Distribution
| Topic | No. of Questions |
|---------------------|-----------------|
| SystemSecurity | 1065 |
| ApplicationSecurity | 808 |
| PenTest | 475 |
| MemorySafety | 48 |
| NetworkSecurity | 230 |
| WebSecurity | 773 |
| Vulnerability | 334 |
| SoftwareSecurity | 232 |
| Cryptography | 14 |
| Overall | 2126 |
### Download
You can download the json file of the dataset by running:
```
wget https://huggingface.co/datasets/XuanwuAI/SecEval/resolve/main/questions.json
```
Or you can load the dataset from [Huggingface](https://huggingface.co/datasets/XuanwuAI/SecEval).
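Once downloaded, a minimal loading sketch might look like the following (the card only states that the file is in json format, so the snippet accepts either a single JSON array or JSON Lines, and it treats topics as plain strings):
```python
# Illustrative loading sketch; field names follow the Format section above.
import json
from collections import Counter

with open("questions.json", "r", encoding="utf-8") as f:
    raw = f.read().strip()

# The card only says "json format", so handle both a JSON array and JSON Lines.
if raw.startswith("["):
    questions = json.loads(raw)
else:
    questions = [json.loads(line) for line in raw.splitlines() if line.strip()]

print(len(questions), "questions loaded")
sample = questions[0]
print(sample["question"], sample["choices"], "->", sample["answer"])

# Per-topic counts exceed the total because a question can carry several topics.
print(Counter(str(t) for q in questions for t in q["topics"]).most_common())
```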
### Evaluate Your Model on SecEval
You can use our [evaluation script](https://github.com/XuanwuAI/SecEval/tree/main/eval) to evaluate your model on the SecEval dataset.
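If you only need a quick sanity check rather than the full script, scoring reduces to comparing each model prediction with the `answer` field. A minimal sketch follows; the `ask_model` callback is a placeholder, and the comparison assumes `answer` stores the correct option letter (adjust it if `answer` stores the option text instead).
```python
# Illustrative scoring sketch -- not the official evaluation script.
from collections import defaultdict

def accuracy_by_topic(questions, ask_model):
    """ask_model(question, choices) should return the predicted option, e.g. "A"."""
    correct, total = defaultdict(int), defaultdict(int)
    for q in questions:
        prediction = ask_model(q["question"], q["choices"])
        for topic in q["topics"]:
            total[str(topic)] += 1
            if prediction == q["answer"]:  # assumes `answer` holds the option letter
                correct[str(topic)] += 1
    return {t: correct[t] / total[t] for t in total}
```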
## Generation Process
### Data Collection
- **Textbook**: We selected open-licensed textbooks from the Computer Security courses CS161 at UC Berkeley and 6.858 at MIT. These resources provide extensive information on network security, memory safety, web security, and cryptography.
- **Official Documentation**: We utilized official documentation, such as Apple Platform Security, Android Security, and Windows Security, to integrate system security and application security knowledge specific to these platforms.
- **Industrial Guidelines**: To encompass web security, we referred to the Mozilla Web Security Guidelines. In addition, we used the OWASP Web Security Testing Guide (WSTG) and OWASP Mobile Application Security Testing Guide (MASTG) for insights into web and application security testing.
- **Industrial Standards**: The Common Weakness Enumeration (CWE) was employed to address knowledge of vulnerabilities. For penetration testing, we incorporated the MITRE ATT&CK and MITRE D3fend frameworks.
### Questions Generation
To facilitate the evaluation process, we designed the dataset in a multiple-choice question format. Our approach to question generation involved several steps:
1. **Text Parsing**: We began by parsing the texts according to their hierarchical structure, such as chapters and sections for textbooks, or tactics and techniques for frameworks like ATT&CK.
2. **Content Sampling**: For texts with extensive content, such as CWE or Windows Security Documentation, we employed a sampling strategy to maintain manageability. For example, we selected the top 25 most common weakness types and 175 random types from CWE.
3. **Question Generation**: Utilizing GPT-4, we generated multiple-choice questions based on the parsed text, with the level of detail adjusted according to the content's nature. For instance, questions stemming from the CS161 textbook were based on individual sections, while those from ATT&CK were based on techniques.
4. **Question Refinement**: We then prompted GPT-4 to identify and filter out questions with issues such as being too simplistic or not self-contained. Where possible, questions were revised; otherwise, they were discarded.
5. **Answer Calibration**: We refine the selection of answer options by presenting GPT-4 with both the question and the source text from which the question is derived. Should the response generated by GPT-4 diverge from the previously established answer, this discrepancy suggests that obtaining a consistent answer for the question is inherently challenging. In such cases, we opt to eliminate these problematic questions.
6. **Classification**: Finally, we organized the questions into 9 topics, and attached a relevant fine-grained keyword to each question.
## Limitations
The dataset, while comprehensive, exhibits certain constraints:
1. **Distribution Imbalance**: The dataset presents an uneven distribution of questions across different domains, resulting in a higher concentration of questions in certain areas while others are less represented.
2. **Incomplete Scope**: Some topics on Cybersecurity are absent from the dataset, such as content security, reverse engineering, and malware analysis. As such, it does not encapsulate the full breadth of knowledge within the field.
## Future Work
1. **Improvement on Distribution**: We aim to broaden the dataset's comprehensiveness by incorporating additional questions, thereby enriching the coverage of existing cybersecurity topics.
2. **Improvement on Topic Coverage**: Efforts will be made to include a wider array of cybersecurity topics within the dataset, which will help achieve a more equitable distribution of questions across various fields.
## Licenses
The dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license. The code is released under the [MIT](https://opensource.org/licenses/MIT) license.
## Citation
```bibtex
@misc{li2023seceval,
title={SecEval: A Comprehensive Benchmark for Evaluating Cybersecurity Knowledge of Foundation Models},
author={Li, Guancheng and Li, Yifeng and Wang, Guannan and Yang, Haoyu and Yu, Yang},
publisher = {GitHub},
howpublished= "https://github.com/XuanwuAI/SecEval",
year={2023}
}
```
## Credits
This work is supported by [Tencent Security Xuanwu Lab](https://xlab.tencent.com/en/) and Tencent Spark Talent Program. | XuanwuAI/SecEval | [
"region:us"
] | 2023-12-20T04:37:23+00:00 | {} | 2023-12-21T05:25:37+00:00 | [] | [] | TAGS
#region-us
"### Evaluate Your Model on SecEval\n\n\nYou can use our evaluation script to evaluate your model on SecEval dataset.\n\n\nGeneration Process\n------------------",
"### Data Collection\n\n\n* Textbook: We selected open-licensed textbooks from the Computer Security courses CS161 at UC Berkeley and 6.858 at MIT. These resources provide extensive information on network security, memory safety, web security, and cryptography.\n* Official Documentation: We utilized official documentation, such as Apple Platform Security, Android Security, and Windows Security, to integrate system security and application security knowledge specific to these platforms.\n* Industrial Guidelines: To encompass web security, we referred to the Mozilla Web Security Guidelines. In addition, we used the OWASP Web Security Testing Guide (WSTG) and OWASP Mobile Application Security Testing Guide (MASTG) for insights into web and application security testing.\n* Industrial Standards: The Common Weakness Enumeration (CWE) was employed to address knowledge of vulnerabilities. For penetration testing, we incorporated the MITRE ATT&CK and MITRE D3fend frameworks.",
"### Questions Generation\n\n\nTo facilitate the evaluation process, we designed the dataset in a multiple-choice question format. Our approach to question generation involved several steps:\n\n\n1. Text Parsing: We began by parsing the texts according to their hierarchical structure, such as chapters and sections for textbooks, or tactics and techniques for frameworks like ATT&CK.\n2. Content Sampling: For texts with extensive content, such as CWE or Windows Security Documentation, we employed a sampling strategy to maintain manageability. For example, we selected the top 25 most common weakness types and 175 random types from CWE.\n3. Question Generation: Utilizing GPT-4, we generated multiple-choice questions based on the parsed text, with the level of detail adjusted according to the content's nature. For instance, questions stemming from the CS161 textbook were based on individual sections, while those from ATT&CK were based on techniques.\n4. Question Refinement: We then prompted GPT-4 to identify and filter out questions with issues such as too simplistic or not self-contained. Where possible, questions were revised; otherwise, they were discarded.\n5. Answer Calibration: We refine the selection of answer options by presenting GPT-4 with both the question and the source text from which the question is derived. Should the response generated by GPT-4 diverge from the previously established answer, this discrepancy suggests that obtaining a consistent answer for the question is inherently challenging. In such cases, we opt to eliminate these problematic questions.\n6. Classification: Finally, we organized the questions into 9 topics, and attached a relevant fine-grained keyword to each question.\n\n\nLimitations\n-----------\n\n\nThe dataset, while comprehensive, exhibits certain constraints:\n\n\n1. Distribution Imbalance: The dataset presents an uneven distribution of questions across different domains, resulting in a higher concentration of questions in certain areas while others are less represented.\n2. Incomplete Scope: Some topics on Cybersecurity are absent from the dataset, such as content security, reverse engineering, and malware analysis. As such, it does not encapsulate the full breadth of knowledge within the field.\n\n\nFuture Work\n-----------\n\n\n1. Improvement on Distribution: We aim to broaden the dataset's comprehensiveness by incorporating additional questions, thereby enriching the coverage of existing cybersecurity topics.\n2. Improvement on Topic Coverage: Efforts will be made to include a wider array of cybersecurity topics within the dataset, which will help achieve a more equitable distribution of questions across various fields.\n\n\nLicenses\n--------\n\n\nThe dataset is released under the CC BY-NC-SA 4.0 license. The code is released under the MIT license.\n\n\nCredits\n-------\n\n\nThis work is supported by Tencent Security Xuanwu Lab and Tencent Spark Talent Program."
] | [
6,
114,
5,
29,
32,
213,
635
] | [
"passage: TAGS\n#region-us \n### Format\n\n\nThe dataset is in json format. Each question has the following fields:\n\n\n* id: str # unique id for each question\n* source: str # the source where the question is generated from\n* question: str # the question description\n* choices: List[str] # the choices for the question\n* answer: str # the answer for the question\n* topics: List[QuestionTopic] # the topics for the question, each question can have multiple topics.\n* keyword: str # the keyword for the question### Question Distribution### Download\n\n\nYou can download the json file of the dataset by running.\n\n\nOr you can load the dataset from Huggingface.### Evaluate Your Model on SecEval\n\n\nYou can use our evaluation script to evaluate your model on SecEval dataset.\n\n\nGeneration Process\n------------------### Data Collection\n\n\n* Textbook: We selected open-licensed textbooks from the Computer Security courses CS161 at UC Berkeley and 6.858 at MIT. These resources provide extensive information on network security, memory safety, web security, and cryptography.\n* Official Documentation: We utilized official documentation, such as Apple Platform Security, Android Security, and Windows Security, to integrate system security and application security knowledge specific to these platforms.\n* Industrial Guidelines: To encompass web security, we referred to the Mozilla Web Security Guidelines. In addition, we used the OWASP Web Security Testing Guide (WSTG) and OWASP Mobile Application Security Testing Guide (MASTG) for insights into web and application security testing.\n* Industrial Standards: The Common Weakness Enumeration (CWE) was employed to address knowledge of vulnerabilities. For penetration testing, we incorporated the MITRE ATT&CK and MITRE D3fend frameworks."
] |
8c852566165c4efbc60ee0a74d1a94b7b0baba82 | # Dataset Card for "quirky_sciq"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_sciq | [
"region:us"
] | 2023-12-20T05:31:15+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "label", "dtype": "int64"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "character", "dtype": "string"}, {"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 5973156, "num_examples": 9629}, {"name": "validation", "num_bytes": 1186489, "num_examples": 2000}, {"name": "test", "num_bytes": 1186972, "num_examples": 2000}], "download_size": 1782280, "dataset_size": 8346617}} | 2023-12-23T10:13:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_sciq"
More Information needed | [
"# Dataset Card for \"quirky_sciq\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_sciq\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_sciq\"\n\nMore Information needed"
] |
7f20f803b7c3eb8b25788392bde0554491c86e39 |
# Fungal coding sequence dataset
Dataset of codon usage for fungal organisms created from the Ensembl Genomes clustered to 50% sequence identity at the protein level and split into 80%/10%/10% train/validation/test splits for use in training a neural network to design native-looking nucleotide sequences for fungal organisms
## Dataset processing
This document describes the preparation of the fungal codons
dataset.
### Obtaining the raw data
The raw data, CDS sequences for fungal organisms,
was obtained from [Ensembl Genomes](https://ensemblgenomes.org/) via the following URL
https://ftp.ensemblgenomes.ebi.ac.uk/pub/fungi/release-57/fasta/
All files were considered, and those matching the pattern
"*.cds.all.fa.gz" were downloaded with wget using the
following command
```shell
wget -r -np -nH -A "*.cds.all.fa.gz" \
ftp://ftp.ensemblgenomes.ebi.ac.uk/pub/fungi/release-57/fasta/
```
This results in a dataset of 775,642 nucleotide sequences from 1,506 individual
species represented in [Ensembl Genomes](https://ensemblgenomes.org/).
### Calling ORFs from the nucleotide sequences
For this step, we keep sequences that start with ATG, whose length is an exact
multiple of 3, and that contain no ambiguous nucleotides. We also remove sequences
that would result in a protein longer than 512 residues.
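A rough sketch of the kind of filter described above is shown below; it assumes the CDS records are already available as plain nucleotide strings, and the function name and the protein-length bookkeeping are illustrative rather than the exact script used for this dataset.

```python
# Illustrative ORF filter: keep sequences that start with ATG, whose length is
# a multiple of 3, that contain only unambiguous bases, and whose encoded
# protein is at most 512 residues long.
VALID_BASES = set("ACGT")
MAX_PROTEIN_LEN = 512  # residues

def keep_sequence(seq: str) -> bool:
    seq = seq.upper()
    if not seq.startswith("ATG"):
        return False
    if len(seq) % 3 != 0:
        return False
    if not set(seq) <= VALID_BASES:
        return False
    # assuming the CDS includes the stop codon, the protein has len(seq) // 3 - 1 residues
    if len(seq) // 3 - 1 > MAX_PROTEIN_LEN:
        return False
    return True

# kept = [s for s in cds_sequences if keep_sequence(s)]  # cds_sequences: list of strings
```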
### Clustering at the protein level
Clustering was performed with MMseqs2 using commands like the following.
```shell
mmseqs createdb protein.fa proteinDB
mmseqs cluster -c 0.80 --min-seq-id 0.5 proteinDB clustDB tmp
mmseqs createsubdb clustDB proteinDB repDB
mmseqs convert2fasta repDB rep.fa
```
This produces 259,737 clusters at 50% identity (80% coverage for both sequences)
### Train/test splits
The dataset was split into 80% training examples (around 200k), 10% validation examples (around 20k), and 10% testing (around 20k) examples | alxcarln/codons | [
"task_categories:translation",
"size_categories:100K<n<1M",
"region:us"
] | 2023-12-20T05:33:34+00:00 | {"size_categories": ["100K<n<1M"], "task_categories": ["translation"]} | 2024-01-13T08:15:04+00:00 | [] | [] | TAGS
#task_categories-translation #size_categories-100K<n<1M #region-us
|
# Fungal coding sequence dataset
Dataset of codon usage for fungal organisms created from the Ensembl Genomes clustered to 50% sequence identity at the protein level and split into 80%/10%/10% train/validation/test splits for use in training a neural network to design native-looking nucleotide sequences for fungal organisms
## Dataset processing
This document describes the preparation of the fungal codons
dataset.
### Obtaining the raw data
The raw data, CDS sequences for fungal organisms,
was obtained from Ensembl Genomes via the following URL
URL
All files were considered, and those matching the pattern
"*.URL" were downloaded with wget using the
following command
This results in a dataset of 775,642 nucleotide sequences from 1,506 individual
species represented in Ensembl Genomes.
### Calling ORFs from the nucleotide sequences
For this step, we keep sequences that start with ATG, whose length is an exact
multiple of 3, and that contain no ambiguous nucleotides. We also remove sequences
that would result in a protein longer than 512 residues.
### Clustering at the protein level
Clustering was performed with MMseqs2 using commands like the following.
This produces 259,737 clusters at 50% identity (80% coverage for both sequences)
### Train/test splits
The dataset was split into 80% training examples (around 200k), 10% validation examples (around 20k), and 10% testing (around 20k) examples | [
"# Fungal coding sequence dataset \n\nDataset of codon usage for fungal organisms created from the Ensembl Genomes clustered to 50% sequence identity at the protein level and split into 80%/10%/10% train/validation/test splits for use in training a neural network to design native-looking nucleotide sequences for fungal organisms",
"## Dataset processing \n\nThis document describes the preparation of the fungal codons \ndataset.",
"### Obtaining the raw data \n\nThe raw data, CDS sequences for fungal organisms, \nwas obtained from Ensembl Genomes via the following URL \n\nURL\n\nAll files were considered, and those matching the pattern \n\"*.URL\" were downloaded with wget using the \nfollowing command \n\n\n\nThis results in a dataset of 775,642 nucleotide sequences from 1,506 individual \nspecies represented in Ensembl Genomes.",
"### Calling ORFs from the nucleotide sequences \n\nFor this step, we keep sequences that start with ATG and are an even \nmultiple of 3 with no ambiguous nucleotides. Also we remove sequences \nthat would result in a protein longer than 512 residues.",
"### Clustering at the protein level \n\nClustering was performed with MMseqs2 using commands like the following. \n\n\n\nThis produces 259,737 clusters at 50% identity (80% coverage for both sequences)",
"### Train/test splits \n\nThe dataset was split into 80% training examples (around 200k), 10% validation examples (around 20k), and 10% testing (around 20k) examples"
] | [
"TAGS\n#task_categories-translation #size_categories-100K<n<1M #region-us \n",
"# Fungal coding sequence dataset \n\nDataset of codon usage for fungal organisms created from the Ensembl Genomes clustered to 50% sequence identity at the protein level and split into 80%/10%/10% train/validation/test splits for use in training a neural network to design native-looking nucleotide sequences for fungal organisms",
"## Dataset processing \n\nThis document describes the preparation of the fungal codons \ndataset.",
"### Obtaining the raw data \n\nThe raw data, CDS sequences for fungal organisms, \nwas obtained from Ensembl Genomes via the following URL \n\nURL\n\nAll files were considered, and those matching the pattern \n\"*.URL\" were downloaded with wget using the \nfollowing command \n\n\n\nThis results in a dataset of 775,642 nucleotide sequences from 1,506 individual \nspecies represented in Ensembl Genomes.",
"### Calling ORFs from the nucleotide sequences \n\nFor this step, we keep sequences that start with ATG and are an even \nmultiple of 3 with no ambiguous nucleotides. Also we remove sequences \nthat would result in a protein longer than 512 residues.",
"### Clustering at the protein level \n\nClustering was performed with MMseqs2 using commands like the following. \n\n\n\nThis produces 259,737 clusters at 50% identity (80% coverage for both sequences)",
"### Train/test splits \n\nThe dataset was split into 80% training examples (around 200k), 10% validation examples (around 20k), and 10% testing (around 20k) examples"
] | [
27,
84,
20,
94,
63,
49,
45
] | [
"passage: TAGS\n#task_categories-translation #size_categories-100K<n<1M #region-us \n# Fungal coding sequence dataset \n\nDataset of codon usage for fungal organisms created from the Ensembl Genomes clustered to 50% sequence identity at the protein level and split into 80%/10%/10% train/validation/test splits for use in training a neural network to design native-looking nucleotide sequences for fungal organisms## Dataset processing \n\nThis document describes the preparation of the fungal codons \ndataset.### Obtaining the raw data \n\nThe raw data, CDS sequences for fungal organisms, \nwas obtained from Ensembl Genomes via the following URL \n\nURL\n\nAll files were considered, and those matching the pattern \n\"*.URL\" were downloaded with wget using the \nfollowing command \n\n\n\nThis results in a dataset of 775,642 nucleotide sequences from 1,506 individual \nspecies represented in Ensembl Genomes.### Calling ORFs from the nucleotide sequences \n\nFor this step, we keep sequences that start with ATG and are an even \nmultiple of 3 with no ambiguous nucleotides. Also we remove sequences \nthat would result in a protein longer than 512 residues.### Clustering at the protein level \n\nClustering was performed with MMseqs2 using commands like the following. \n\n\n\nThis produces 259,737 clusters at 50% identity (80% coverage for both sequences)### Train/test splits \n\nThe dataset was split into 80% training examples (around 200k), 10% validation examples (around 20k), and 10% testing (around 20k) examples"
] |
8c24c09feb8028d3f9f37248f74267ac355c2d7c | 
[Tulu v2](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture) is, by far, my favorite SFT mixture. But how does a simple random subsampling technique, without employing more sophisticated methods, yield good results?
First off, here is a Sankey Diagram for Tulu v2. I'm presenting this because I noticed FLAN is repeatedly used by different datasets, similar to what [other LLM teams](https://arxiv.org/abs/2309.08632) do with GSM8k, potentially leading to data contamination.
Unfortunately, I wasn't able to fully work out some of the detailed relationships, especially those involving FLAN v2.

This represents the semantic clustering of Tulu v2 compared to other reputable SFT datasets.
I believe this plot indicates that Tulu v2 semantically includes ShareGPT and roughly covers the same semantic space of Slim-Orca, which is considered another SOTA for open LLMs and frequently used.

This is an ongoing investigation. I hope this analysis can bring some in-depth insights for other languages/domains.
| lorinma/Tulu-v2-analysis | [
"arxiv:2309.08632",
"region:us"
] | 2023-12-20T05:55:12+00:00 | {} | 2023-12-20T06:07:18+00:00 | [
"2309.08632"
] | [] | TAGS
#arxiv-2309.08632 #region-us
| !image/png
Tulu v2 is, by far, my favorite SFT mixture. But how does a simple random subsampling technique, without employing more sophisticated methods, yield good results?
First off, here is a Sankey Diagram for Tulu v2. I'm presenting this because I noticed FLAN is repeatedly used by different datasets, similar to what other LLM teams do with GSM8k, potentially leading to data contamination.
Unfortunately, I wasn't able to fully work out some of the detailed relationships, especially those involving FLAN v2.
!image/png
This represents the semantic clustering of Tulu v2 compared to other reputable SFT datasets.
I believe this plot indicates that Tulu v2 semantically includes ShareGPT and roughly covers the same semantic space of Slim-Orca, which is considered another SOTA for open LLMs and frequently used.
!image/png
This is an ongoing investigation. I hope this analysis can bring some in-depth insights for other languages/domains.
| [] | [
"TAGS\n#arxiv-2309.08632 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#arxiv-2309.08632 #region-us \n"
] |
ceee66e2c0fb57a6be201a50260c75d493f41807 |
A bunch of scraped posts of omorashi fictional stories. One entry for each post. Not all may be actual stories, some may be comments or replies.
I did almost no cleaning. Formatting may vary greatly. I attempted to condense 3+ consecutive newlines into just 2, that's it. | dividebythree/omo-text-dump-v1 | [
"not-for-all-audiences",
"region:us"
] | 2023-12-20T06:08:00+00:00 | {"tags": ["not-for-all-audiences"]} | 2023-12-20T06:18:44+00:00 | [] | [] | TAGS
#not-for-all-audiences #region-us
|
A bunch of scraped posts of omorashi fictional stories. One entry for each post. Not all may be actual stories, some may be comments or replies.
I did almost no cleaning. Formatting may vary greatly. I attempted to condense 3+ consecutive newlines into just 2, that's it. | [] | [
"TAGS\n#not-for-all-audiences #region-us \n"
] | [
15
] | [
"passage: TAGS\n#not-for-all-audiences #region-us \n"
] |
4210122417ef1775a991bf429b647a9c23838e56 |
# LatestEval for bbc
This benchmark was created in week 51 of 2023 with the latest data from bbc.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/bbc-latest | [
"region:us"
] | 2023-12-20T06:23:52+00:00 | {} | 2023-12-20T06:23:57+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for bbc
This benchmark was created in week 51 of 2023 with the latest data from bbc.
check more details at our github page. | [
"# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] |
522ff90258ff92e6f36d5824f74599faa08149fd |
# LatestEval for bbc
This benchmark was created in week 51 of 2023 with the latest data from bbc.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/bbc-2023-week51 | [
"region:us"
] | 2023-12-20T06:23:54+00:00 | {} | 2023-12-20T06:24:00+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for bbc
This benchmark was created in week 51 of 2023 with the latest data from bbc.
check more details at our github page. | [
"# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# LatestEval for bbc\n\nThis benchmark was created with at 2023 week 51 with the latest data from bbc.\n\ncheck more details at our github page."
] |
c51e0bae7e4cdc5acfdd52c614b0bdac82aca975 |
# LatestEval for github
This benchmark was created in week 51 of 2023 with the latest data from github.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/github-latest | [
"region:us"
] | 2023-12-20T06:24:08+00:00 | {} | 2023-12-20T06:24:14+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for github
This benchmark was created in week 51 of 2023 with the latest data from github.
check more details at our github page. | [
"# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] |
1f02706b8e097a0e21848d793fbfe4f63a1e8ef7 |
# LatestEval for github
This benchmark was created in week 51 of 2023 with the latest data from github.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/github-2023-week51 | [
"region:us"
] | 2023-12-20T06:24:11+00:00 | {} | 2023-12-20T06:24:15+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for github
This benchmark was created in week 51 of 2023 with the latest data from github.
check more details at our github page. | [
"# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# LatestEval for github\n\nThis benchmark was created with at 2023 week 51 with the latest data from github.\n\ncheck more details at our github page."
] |
72d650c224c63b8778a13426eda3edf064a56e6b |
# LatestEval for arxiv
This benchmark was created in week 51 of 2023 with the latest data from arxiv.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/arxiv-latest | [
"region:us"
] | 2023-12-20T06:24:23+00:00 | {} | 2023-12-20T06:24:26+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for arxiv
This benchmark was created in week 51 of 2023 with the latest data from arxiv.
check more details at our github page. | [
"# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] |
9abeaed83a23e68a510e6bc7cfd4a46d6e89d251 |
# LatestEval for arxiv
This benchmark was created in week 51 of 2023 with the latest data from arxiv.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/arxiv-2023-week51 | [
"region:us"
] | 2023-12-20T06:24:25+00:00 | {} | 2023-12-20T06:24:27+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for arxiv
This benchmark was created in week 51 of 2023 with the latest data from arxiv.
check more details at our github page. | [
"# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# LatestEval for arxiv\n\nThis benchmark was created with at 2023 week 51 with the latest data from arxiv.\n\ncheck more details at our github page."
] |
1c6cfb09a58ea367bf2cce7b3fe7075727b4f239 |
# LatestEval for all
This benchmark was created in week 51 of 2023 with the latest data from all sources.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/full-latest | [
"region:us"
] | 2023-12-20T06:24:27+00:00 | {} | 2023-12-20T06:26:20+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for all
This benchmark was created in week 51 of 2023 with the latest data from all sources.
check more details at our github page. | [
"# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] |
be1c0fcbedb850d237864d43cb1638ad54a2ef99 |
# LatestEval for all
This benchmark was created in week 51 of 2023 with the latest data from all sources.
check more details at our [github page](https://github.com/liyucheng09/LatestEval). | LatestEval/full-2023-week51 | [
"region:us"
] | 2023-12-20T06:24:29+00:00 | {} | 2023-12-20T06:26:20+00:00 | [] | [] | TAGS
#region-us
|
# LatestEval for all
This benchmark was created in week 51 of 2023 with the latest data from all sources.
check more details at our github page. | [
"# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] | [
"TAGS\n#region-us \n",
"# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# LatestEval for all\n\nThis benchmark was created with at 2023 week 51 with the latest data from all.\n\ncheck more details at our github page."
] |
6d1dc02a2549104cc9cd2828c4b8647e9f994341 |
## Dataset description:
This dataset contains information on 700+ Alibaba Cloud OpenAPIs, covering the public Open API information of multiple products, including Dataworks, EMR, DataLake, Maxcompute, Hologram, Realtime Compute for Apache Flink, QuickBI, and DTS.
<strong> The function information follows the same format as the functions passed in with OpenAI's function calling capability </strong>
## Example
```
{
"systemPrompt": 你是一个函数筛选助理,如果与问题相关的话,您可以使用下面的函数来获取更多数据以回答用户提出的问题:{""name"": ""UpdateTicketNum"", ""description"": ""对用于免登嵌入报表的指定的ticket进行更新票据数量操作。"", ""parameters"": {""type"": ""object"", ""properties"": [{""Ticket"": {""type"": ""string"", ""description"": ""三方嵌入的票据值,即URL中的accessTicket值。""}}, {""TicketNum"": {""type"": ""integer"", ""description"": ""票据数。\n- 取值范围:1~99998,建议值为1。""}}], ""required"": [""Ticket"", ""TicketNum""]}}请以如下格式回复::{"function":"function_name","arguments": {"argument1": value1,"argument2": value2}},
"userPrompt": "我想将免登嵌入报表的票据值为"abcd1234"的票据数量更新为10。",
"assistantResponse":
{
"function": "UpdateTicketNum",
"arguments": [
{
"Ticket": "abcd1234",
"TicketNum": 10
}
]
}
}
```
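For illustration (this snippet is not part of the original dataset card), an assistant reply in the format above could be parsed roughly as follows; the helper name and the unwrapping of the argument list are assumptions:

```python
import json

def parse_function_call(reply: str):
    """Parse a reply of the form {"function": ..., "arguments": ...}."""
    call = json.loads(reply)
    name = call["function"]
    arguments = call["arguments"]
    # Some samples wrap the single argument object in a list; unwrap it for convenience.
    if isinstance(arguments, list) and len(arguments) == 1:
        arguments = arguments[0]
    return name, arguments

name, args = parse_function_call(
    '{"function": "UpdateTicketNum", "arguments": [{"Ticket": "abcd1234", "TicketNum": 10}]}'
)
print(name, args)  # UpdateTicketNum {'Ticket': 'abcd1234', 'TicketNum': 10}
```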
## Fields
```
systemPrompt: the instruction
userPrompt: the user input
assistantResponse: the output
```
### Intended uses
 - Function-call understanding: by analyzing the function-call information in the conversations, a language model can better understand the relationships between functions, which improves its code-understanding ability.
 - Alibaba Cloud OpenAPI: based on the Alibaba Cloud OpenAPI information in the data, a model can better understand these APIs and how to call them, and can suggest more appropriate functions during development.
If you have any questions or need further help, please feel free to contact us. Thank you for your interest in and support of the function-calling dataset and its applications! | Deepexi/openai-formate-function-calling-small | [
"size_categories:10K<n<100K",
"language:zh",
"license:apache-2.0",
"code",
"region:us"
] | 2023-12-20T06:59:27+00:00 | {"language": ["zh"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "tags": ["code"]} | 2023-12-20T07:09:31+00:00 | [] | [
"zh"
] | TAGS
#size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #code #region-us
|
## Dataset description:
This dataset contains information on 700+ Alibaba Cloud OpenAPIs, covering the public Open API information of multiple products, including Dataworks, EMR, DataLake, Maxcompute, Hologram, Realtime Compute for Apache Flink, QuickBI, and DTS.
<strong> The function information follows the same format as the functions passed in with OpenAI's function calling capability </strong>
## Example
## Fields
### Intended uses
 - Function-call understanding: by analyzing the function-call information in the conversations, a language model can better understand the relationships between functions, which improves its code-understanding ability.
 - Alibaba Cloud OpenAPI: based on the Alibaba Cloud OpenAPI information in the data, a model can better understand these APIs and how to call them, and can suggest more appropriate functions during development.
If you have any questions or need further help, please feel free to contact us. Thank you for your interest in and support of the function-calling dataset and its applications! | [
"## 数据集内容说明:\n包含700+个阿里云OpenAPI的信息;包括Dataworks,EMR,DataLake,Maxcompute,Hologram,实时计算Flink版,QuickBI,DTS等多个产品的公开Open API信息。\n\n<strong> Functions信息与OpenAI functions calling 能力中,functions信息传入的格式保持一致 </strong>",
"## 样例",
"## 字段",
"### 数据集用途\n - 函数调用理解: 通过分析对话中的函数调用信息,让语言模型更好地理解函数之间的关系,从而提高其代码理解能力。\n - 阿里云OpenAPI:基于数据中阿里云OpenAPI的信息,模型可以更好的理解其相关信息以及调用方式,在开发过程中提供更合适的函数建议。\n如有任何问题或需要进一步帮助,请随时联系我们。感谢您对函数调用数据集及其应用的兴趣与支持!"
] | [
"TAGS\n#size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #code #region-us \n",
"## 数据集内容说明:\n包含700+个阿里云OpenAPI的信息;包括Dataworks,EMR,DataLake,Maxcompute,Hologram,实时计算Flink版,QuickBI,DTS等多个产品的公开Open API信息。\n\n<strong> Functions信息与OpenAI functions calling 能力中,functions信息传入的格式保持一致 </strong>",
"## 样例",
"## 字段",
"### 数据集用途\n - 函数调用理解: 通过分析对话中的函数调用信息,让语言模型更好地理解函数之间的关系,从而提高其代码理解能力。\n - 阿里云OpenAPI:基于数据中阿里云OpenAPI的信息,模型可以更好的理解其相关信息以及调用方式,在开发过程中提供更合适的函数建议。\n如有任何问题或需要进一步帮助,请随时联系我们。感谢您对函数调用数据集及其应用的兴趣与支持!"
] | [
33,
90,
4,
4,
105
] | [
"passage: TAGS\n#size_categories-10K<n<100K #language-Chinese #license-apache-2.0 #code #region-us \n## 数据集内容说明:\n包含700+个阿里云OpenAPI的信息;包括Dataworks,EMR,DataLake,Maxcompute,Hologram,实时计算Flink版,QuickBI,DTS等多个产品的公开Open API信息。\n\n<strong> Functions信息与OpenAI functions calling 能力中,functions信息传入的格式保持一致 </strong>## 样例## 字段### 数据集用途\n - 函数调用理解: 通过分析对话中的函数调用信息,让语言模型更好地理解函数之间的关系,从而提高其代码理解能力。\n - 阿里云OpenAPI:基于数据中阿里云OpenAPI的信息,模型可以更好的理解其相关信息以及调用方式,在开发过程中提供更合适的函数建议。\n如有任何问题或需要进一步帮助,请随时联系我们。感谢您对函数调用数据集及其应用的兴趣与支持!"
] |
727d66f5d9778d64d8594e8497fa3c518a851d3d |
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer.
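As a hedged illustration of this length check (the checkpoint names, the use of `AutoTokenizer`, and the exact encoding call are assumptions, not necessarily what was used to build this dataset):

```python
from transformers import AutoTokenizer

# Illustrative checkpoints for the four model families mentioned above.
CHECKPOINTS = ["bert-base-cased", "roberta-base", "facebook/opt-350m", "t5-small"]
tokenizers = [AutoTokenizer.from_pretrained(name) for name in CHECKPOINTS]

def fits_all(example: dict) -> bool:
    """Keep an example only if question + context fit within every tokenizer's model_max_length."""
    for tok in tokenizers:
        input_ids = tok(example["question"], example["context"], truncation=False)["input_ids"]
        if len(input_ids) > tok.model_max_length:
            return False
    return True

# filtered = dataset.filter(fits_all)  # `dataset`: a datasets.Dataset with question/context columns
```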
| varun-v-rao/squad | [
"task_categories:question-answering",
"region:us"
] | 2023-12-20T07:01:50+00:00 | {"task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 79061690.62181075, "num_examples": 87285}, {"name": "validation", "num_bytes": 10388764.166508988, "num_examples": 10485}], "download_size": 16137496, "dataset_size": 89450454.78831974}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2024-02-08T07:44:38+00:00 | [] | [] | TAGS
#task_categories-question-answering #region-us
|
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer.
| [
"## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.",
"### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] | [
"TAGS\n#task_categories-question-answering #region-us \n",
"## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.",
"### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] | [
18,
78,
103
] | [
"passage: TAGS\n#task_categories-question-answering #region-us \n## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] |
39fc839c920706ff528ba8b73057dbf704fca743 |
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer.
| varun-v-rao/adversarial_hotpotqa | [
"task_categories:question-answering",
"region:us"
] | 2023-12-20T07:13:44+00:00 | {"task_categories": ["question-answering"], "dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "sequence": "int64"}, {"name": "text", "sequence": "string"}]}], "splits": [{"name": "train", "num_bytes": 89560671.51114564, "num_examples": 33358}, {"name": "validation", "num_bytes": 7454710.584712826, "num_examples": 2828}], "download_size": 17859339, "dataset_size": 97015382.09585845}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2024-02-08T07:45:30+00:00 | [] | [] | TAGS
#task_categories-question-answering #region-us
|
## Dataset Card for "squad"
This truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.
### Preprocessing and Filtering
Preprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer.
| [
"## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.",
"### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] | [
"TAGS\n#task_categories-question-answering #region-us \n",
"## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.",
"### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] | [
18,
78,
103
] | [
"passage: TAGS\n#task_categories-question-answering #region-us \n## Dataset Card for \"squad\"\n\nThis truncated dataset is derived from the Stanford Question Answering Dataset (SQuAD) for reading comprehension. Its primary aim is to extract instances from the original SQuAD dataset that align with the context length of BERT, RoBERTa, OPT, and T5 models.### Preprocessing and Filtering\n\nPreprocessing involves tokenization using the BertTokenizer (WordPiece), RoBertaTokenizer (Byte-level BPE), OPTTokenizer (Byte-Pair Encoding), and T5Tokenizer (Sentence Piece). Each sample is then checked to ensure that the length of the tokenized input is within the specified model_max_length for each tokenizer."
] |
da797e3cce857b112582f2be70abadef0873be63 |
Pre-training dataset used in paper "[From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery](https://arxiv.org/abs/2309.05203)" (AAAI 2024)
PseudoMD-1M dataset is the first artificially-real dataset for cross-modal molecule discovery, which consists of 1,020,139 pseudo molecule-description pairs. Every molecule is represented using its Canonical SMILES notation, sourced from PubChem via the PUG View API. On average, each description within PseudoMD-1M contains 5.11 sentences, 106.47 words, and 165.07 tokens.
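A minimal sketch of loading and inspecting the pairs with the `datasets` library; the split name and the exact column names are assumptions, so check the repository files for the actual schema:

```python
from datasets import load_dataset

# Split and column names are assumptions; inspect the loaded object to confirm the schema.
pseudo_md = load_dataset("SCIR-HI/PseudoMD-1M", split="train")
print(pseudo_md)        # number of rows and column names
print(pseudo_md[0])     # one pseudo molecule-description pair (Canonical SMILES + description)
```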
### Citation
If you found the dataset useful, please cite:
```bibtex
@article{chen2023artificially,
title={From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery},
author={Chen, Yuhan and Xi, Nuwa and Du, Yanrui and Wang, Haochun and Jianyu, Chen and Zhao, Sendong and Qin, Bing},
journal={arXiv preprint arXiv:2309.05203},
year={2023}
}
``` | SCIR-HI/PseudoMD-1M | [
"task_categories:translation",
"task_categories:text2text-generation",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"chemistry",
"biology",
"medical",
"arxiv:2309.05203",
"region:us"
] | 2023-12-20T08:00:37+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["translation", "text2text-generation"], "tags": ["chemistry", "biology", "medical"]} | 2023-12-20T11:19:29+00:00 | [
"2309.05203"
] | [
"en"
] | TAGS
#task_categories-translation #task_categories-text2text-generation #size_categories-1M<n<10M #language-English #license-apache-2.0 #chemistry #biology #medical #arxiv-2309.05203 #region-us
|
Pre-training dataset used in paper "From Artificially Real to Real: Leveraging Pseudo Data from Large Language Models for Low-Resource Molecule Discovery" (AAAI 2024)
PseudoMD-1M dataset is the first artificially-real dataset for cross-modal molecule discovery, which consists of 1,020,139 pseudo molecule-description pairs. Every molecule is represented using its Canonical SMILES notation, sourced from PubChem via the PUG View API. On average, each description within PseudoMD-1M contains 5.11 sentences, 106.47 words, and 165.07 tokens.
If you found the dataset useful, please cite:
| [] | [
"TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-1M<n<10M #language-English #license-apache-2.0 #chemistry #biology #medical #arxiv-2309.05203 #region-us \n"
] | [
70
] | [
"passage: TAGS\n#task_categories-translation #task_categories-text2text-generation #size_categories-1M<n<10M #language-English #license-apache-2.0 #chemistry #biology #medical #arxiv-2309.05203 #region-us \n"
] |
773c2a189c53a575fe04b8760b3c224aced4ad8f | # Dataset Card for "translation_hindi"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Rishabh02/translation_hindi | [
"region:us"
] | 2023-12-20T08:31:56+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 556952186, "num_examples": 1659083}], "download_size": 206975053, "dataset_size": 556952186}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T08:32:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "translation_hindi"
More Information needed | [
"# Dataset Card for \"translation_hindi\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"translation_hindi\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"translation_hindi\"\n\nMore Information needed"
] |
5b41cdb6fba1effea26615d7c78df110694e7c33 |
# Dataset Card for Common Voice Corpus 16
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://commonvoice.mozilla.org/en/datasets
- **Repository:** https://github.com/common-voice/common-voice
- **Paper:** https://arxiv.org/abs/1912.06670
- **Leaderboard:** https://paperswithcode.com/dataset/common-voice
- **Point of Contact:** [Vaibhav Srivastav](mailto:[email protected])
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added.
Take a look at the [Languages](https://commonvoice.mozilla.org/en/languages) page to request a language or start contributing.
### Languages
```
Abkhaz, Afrikaans, Albanian, Amharic, Arabic, Armenian, Assamese, Asturian, Azerbaijani, Basaa, Bashkir, Basque, Belarusian, Bengali, Breton, Bulgarian, Cantonese, Catalan, Central Kurdish, Chinese (China), Chinese (Hong Kong), Chinese (Taiwan), Chuvash, Czech, Danish, Dhivehi, Dioula, Dutch, English, Erzya, Esperanto, Estonian, Finnish, French, Frisian, Galician, Georgian, German, Greek, Guarani, Hakha Chin, Hausa, Hebrew, Hill Mari, Hindi, Hungarian, Icelandic, Igbo, Indonesian, Interlingua, Irish, Italian, Japanese, Kabyle, Kazakh, Kinyarwanda, Korean, Kurmanji Kurdish, Kyrgyz, Lao, Latgalian, Latvian, Ligurian, Lithuanian, Luganda, Macedonian, Malayalam, Maltese, Marathi, Meadow Mari, Moksha, Mongolian, Nepali, Norwegian Nynorsk, Occitan, Odia, Ossetian, Pashto, Persian, Polish, Portuguese, Punjabi, Quechua Chanka, Romanian, Romansh Sursilvan, Romansh Vallader, Russian, Sakha, Santali (Ol Chiki), Saraiki, Sardinian, Serbian, Slovak, Slovenian, Sorbian, Upper, Spanish, Swahili, Swedish, Taiwanese (Minnan), Tamazight, Tamil, Tatar, Telugu, Thai, Tigre, Tigrinya, Toki Pona, Turkish, Turkmen, Twi, Ukrainian, Urdu, Uyghur, Uzbek, Vietnamese, Votic, Welsh, Western Sierra Puebla Nahuatl, Yiddish, Yoruba
```
## How to use
The `datasets` library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the `load_dataset` function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
```python
from datasets import load_dataset
cv_16 = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train")
```
Using the datasets library, you can also stream the dataset on-the-fly by adding a `streaming=True` argument to the `load_dataset` function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
```python
from datasets import load_dataset
cv_16 = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train", streaming=True)
print(next(iter(cv_16)))
```
*Bonus*: create a [PyTorch dataloader](https://huggingface.co/docs/datasets/use_with_pytorch) directly with your own datasets (local/streamed).
### Local
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.utils.data.sampler import BatchSampler, RandomSampler
cv_16 = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train")
batch_sampler = BatchSampler(RandomSampler(cv_16), batch_size=32, drop_last=False)
dataloader = DataLoader(cv_16, batch_sampler=batch_sampler)
```
### Streaming
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
cv_16 = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train", streaming=True)
dataloader = DataLoader(cv_16, batch_size=32)
```
To find out more about loading and preparing audio datasets, head over to [hf.co/blog/audio-datasets](https://huggingface.co/blog/audio-datasets).
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with `transformers` - [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/speech-recognition).
## Dataset Structure
### Data Instances
A typical data point comprises the `path` to the audio file and its `sentence`.
Additional fields include `accent`, `age`, `client_id`, `up_votes`, `down_votes`, `gender`, `locale` and `segment`.
```python
{
'client_id': 'd59478fbc1ee646a28a3c652a119379939123784d99131b865a89f8b21c81f69276c48bd574b81267d9d1a77b83b43e6d475a6cfc79c232ddbca946ae9c7afc5',
'path': 'et/clips/common_voice_et_18318995.mp3',
'audio': {
'path': 'et/clips/common_voice_et_18318995.mp3',
'array': array([-0.00048828, -0.00018311, -0.00137329, ..., 0.00079346, 0.00091553, 0.00085449], dtype=float32),
'sampling_rate': 48000
},
'sentence': 'Tasub kokku saada inimestega, keda tunned juba ammust ajast saati.',
'up_votes': 2,
'down_votes': 0,
'age': 'twenties',
'gender': 'male',
'accent': '',
'locale': 'et',
'segment': ''
}
```
### Data Fields
`client_id` (`string`): An id for which client (voice) made the recording
`path` (`string`): The path to the audio file
`audio` (`dict`): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: `dataset[0]["audio"]` the audio file is automatically decoded and resampled to `dataset.features["audio"].sampling_rate`. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the `"audio"` column, *i.e.* `dataset[0]["audio"]` should **always** be preferred over `dataset["audio"][0]`.
`sentence` (`string`): The sentence the user was prompted to speak
`up_votes` (`int64`): How many upvotes the audio file has received from reviewers
`down_votes` (`int64`): How many downvotes the audio file has received from reviewers
`age` (`string`): The age of the speaker (e.g. `teens`, `twenties`, `fifties`)
`gender` (`string`): The gender of the speaker
`accent` (`string`): Accent of the speaker
`locale` (`string`): The locale of the speaker
`segment` (`string`): Usually an empty field
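To make the decode-on-access behaviour described for the `audio` field concrete, here is a small illustrative snippet (reusing the Hindi config from the examples above):

```python
from datasets import load_dataset

cv_16 = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train")

sample = cv_16[0]                       # query the sample index first ...
waveform = sample["audio"]["array"]     # ... then access "audio": decoding happens here
sampling_rate = sample["audio"]["sampling_rate"]
print(sample["sentence"], waveform.shape, sampling_rate)
```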
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.
The invalidated data is data that has been invalidated by reviewers
and received downvotes indicating that the data is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test and train sets are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, **almost all** sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
```python
from datasets import load_dataset
ds = load_dataset("mozilla-foundation/common_voice_16_0", "en", use_auth_token=True)
def prepare_dataset(batch):
    """Function to preprocess the dataset with the .map method"""
    transcription = batch["sentence"]
    if transcription.startswith('"') and transcription.endswith('"'):
        # we can remove trailing quotation marks as they do not affect the transcription
        transcription = transcription[1:-1]
    if transcription[-1] not in [".", "?", "!"]:
        # append a full-stop to sentences that do not end in punctuation
        transcription = transcription + "."
    batch["sentence"] = transcription
    return batch
ds = ds.map(prepare_dataset, desc="preprocess dataset")
```
## Dataset Creation
### Curation Rationale
[Needs More Information]
### Source Data
#### Initial Data Collection and Normalization
[Needs More Information]
#### Who are the source language producers?
[Needs More Information]
### Annotations
#### Annotation process
[Needs More Information]
#### Who are the annotators?
[Needs More Information]
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
Public Domain, [CC-0](https://creativecommons.org/share-your-work/public-domain/cc0/)
### Citation Information
```
@inproceedings{commonvoice:2020,
author = {Ardila, R. and Branson, M. and Davis, K. and Henretty, M. and Kohler, M. and Meyer, J. and Morais, R. and Saunders, L. and Tyers, F. M. and Weber, G.},
title = {Common Voice: A Massively-Multilingual Speech Corpus},
booktitle = {Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)},
pages = {4211--4215},
year = 2020
}
```
| mozilla-foundation/common_voice_16_0 | [
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:multilingual",
"language:ab",
"language:af",
"language:am",
"language:ar",
"language:as",
"language:ast",
"language:az",
"language:ba",
"language:bas",
"language:be",
"language:bg",
"language:bn",
"language:br",
"language:ca",
"language:ckb",
"language:cnh",
"language:cs",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dyu",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:fi",
"language:fr",
"language:fy",
"language:ga",
"language:gl",
"language:gn",
"language:ha",
"language:he",
"language:hi",
"language:hsb",
"language:hu",
"language:hy",
"language:ia",
"language:id",
"language:ig",
"language:is",
"language:it",
"language:ja",
"language:ka",
"language:kab",
"language:kk",
"language:kmr",
"language:ko",
"language:ky",
"language:lg",
"language:lij",
"language:lo",
"language:lt",
"language:ltg",
"language:lv",
"language:mdf",
"language:mhr",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:mrj",
"language:mt",
"language:myv",
"language:nan",
"language:ne",
"language:nhi",
"language:nl",
"language:nn",
"language:oc",
"language:or",
"language:os",
"language:pa",
"language:pl",
"language:ps",
"language:pt",
"language:quy",
"language:rm",
"language:ro",
"language:ru",
"language:rw",
"language:sah",
"language:sat",
"language:sc",
"language:sk",
"language:skr",
"language:sl",
"language:sq",
"language:sr",
"language:sv",
"language:sw",
"language:ta",
"language:te",
"language:th",
"language:ti",
"language:tig",
"language:tk",
"language:tok",
"language:tr",
"language:tt",
"language:tw",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:vi",
"language:vot",
"language:yi",
"language:yo",
"language:yue",
"language:zgh",
"language:zh",
"license:cc0-1.0",
"arxiv:1912.06670",
"region:us"
] | 2023-12-20T09:01:34+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["ab", "af", "am", "ar", "as", "ast", "az", "ba", "bas", "be", "bg", "bn", "br", "ca", "ckb", "cnh", "cs", "cv", "cy", "da", "de", "dv", "dyu", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gl", "gn", "ha", "he", "hi", "hsb", "hu", "hy", "ia", "id", "ig", "is", "it", "ja", "ka", "kab", "kk", "kmr", "ko", "ky", "lg", "lij", "lo", "lt", "ltg", "lv", "mdf", "mhr", "mk", "ml", "mn", "mr", "mrj", "mt", "myv", "nan", "ne", "nhi", "nl", "nn", "oc", "or", "os", "pa", "pl", "ps", "pt", "quy", "rm", "ro", "ru", "rw", "sah", "sat", "sc", "sk", "skr", "sl", "sq", "sr", "sv", "sw", "ta", "te", "th", "ti", "tig", "tk", "tok", "tr", "tt", "tw", "ug", "uk", "ur", "uz", "vi", "vot", "yi", "yo", "yue", "zgh", "zh"], "license": ["cc0-1.0"], "multilinguality": ["multilingual"], "paperswithcode_id": "common-voice", "pretty_name": "Common Voice Corpus 16", "language_bcp47": ["zh-CN", "zh-HK", "zh-TW", "sv-SE", "rm-sursilv", "rm-vallader", "pa-IN", "nn-NO", "ne-NP", "nan-tw", "hy-AM", "ga-IE", "fy-NL"], "extra_gated_prompt": "By clicking on \u201cAccess repository\u201d below, you also agree to not attempt to determine the identity of speakers in the Common Voice dataset."} | 2023-12-21T13:53:03+00:00 | [
"1912.06670"
] | [
"ab",
"af",
"am",
"ar",
"as",
"ast",
"az",
"ba",
"bas",
"be",
"bg",
"bn",
"br",
"ca",
"ckb",
"cnh",
"cs",
"cv",
"cy",
"da",
"de",
"dv",
"dyu",
"el",
"en",
"eo",
"es",
"et",
"eu",
"fa",
"fi",
"fr",
"fy",
"ga",
"gl",
"gn",
"ha",
"he",
"hi",
"hsb",
"hu",
"hy",
"ia",
"id",
"ig",
"is",
"it",
"ja",
"ka",
"kab",
"kk",
"kmr",
"ko",
"ky",
"lg",
"lij",
"lo",
"lt",
"ltg",
"lv",
"mdf",
"mhr",
"mk",
"ml",
"mn",
"mr",
"mrj",
"mt",
"myv",
"nan",
"ne",
"nhi",
"nl",
"nn",
"oc",
"or",
"os",
"pa",
"pl",
"ps",
"pt",
"quy",
"rm",
"ro",
"ru",
"rw",
"sah",
"sat",
"sc",
"sk",
"skr",
"sl",
"sq",
"sr",
"sv",
"sw",
"ta",
"te",
"th",
"ti",
"tig",
"tk",
"tok",
"tr",
"tt",
"tw",
"ug",
"uk",
"ur",
"uz",
"vi",
"vot",
"yi",
"yo",
"yue",
"zgh",
"zh"
] | TAGS
#annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #language-Abkhazian #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-Azerbaijani #language-Bashkir #language-Basa (Cameroon) #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Central Kurdish #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Dyula #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Hausa #language-Hebrew #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kazakh #language-Northern Kurdish #language-Korean #language-Kirghiz #language-Ganda #language-Ligurian #language-Lao #language-Lithuanian #language-Latgalian #language-Latvian #language-Moksha #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Western Mari #language-Maltese #language-Erzya #language-Min Nan Chinese #language-Nepali (macrolanguage) #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Dutch #language-Norwegian Nynorsk #language-Occitan (post 1500) #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Ayacucho Quechua #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Santali #language-Sardinian #language-Slovak #language-Saraiki #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tigrinya #language-Tigre #language-Turkmen #language-Toki Pona #language-Turkish #language-Tatar #language-Twi #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Votic #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Standard Moroccan Tamazight #language-Chinese #license-cc0-1.0 #arxiv-1912.06670 #region-us
|
# Dataset Card for Common Voice Corpus 16
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks and Leaderboards
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Dataset Creation
- Curation Rationale
- Source Data
- Annotations
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Dataset Curators
- Licensing Information
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: URL
- Leaderboard: URL
- Point of Contact: Vaibhav Srivastav
### Dataset Summary
The Common Voice dataset consists of a unique MP3 and corresponding text file.
Many of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent
that can help improve the accuracy of speech recognition engines.
The dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added.
Take a look at the Languages page to request a language or start contributing.
### Languages
## How to use
The 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function.
For example, to download the Hindi config, simply specify the corresponding language config name (i.e., "hi" for Hindi):
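(A minimal sketch of that call; the dataset is gated on the Hub, so an authentication token is assumed.)

```python
from datasets import load_dataset

# Download and prepare the Hindi config locally
cv_hi = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train", use_auth_token=True)
print(cv_hi)
```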
Using the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.
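A sketch of the streaming variant described above; samples are fetched lazily instead of being written to disk first:

```python
from datasets import load_dataset

cv_hi_stream = load_dataset(
    "mozilla-foundation/common_voice_16_0", "hi",
    split="train", streaming=True, use_auth_token=True,
)
print(next(iter(cv_hi_stream))["sentence"])  # first transcription, fetched on the fly
```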
*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).
### Local
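A possible local setup wrapping the prepared split in a PyTorch `DataLoader` (the batch size and pass-through collate function are assumptions; a real pipeline would pad or resample the audio in its own collator):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_hi = load_dataset("mozilla-foundation/common_voice_16_0", "hi", split="train", use_auth_token=True)
dataloader = DataLoader(cv_hi, batch_size=4, collate_fn=lambda samples: samples)

for batch in dataloader:
    # each element still carries "audio", "sentence", "client_id", ...
    break
```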
### Streaming
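A corresponding streamed sketch; recent versions of `datasets` allow a streamed `IterableDataset` to be passed to a PyTorch `DataLoader` directly (again, the collate function is a placeholder):

```python
from datasets import load_dataset
from torch.utils.data import DataLoader

cv_hi_stream = load_dataset(
    "mozilla-foundation/common_voice_16_0", "hi",
    split="train", streaming=True, use_auth_token=True,
)
dataloader = DataLoader(cv_hi_stream, batch_size=4, collate_fn=lambda samples: samples)

for batch in dataloader:
    break  # nothing is downloaded ahead of time; samples arrive lazily
```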
To find out more about loading and preparing audio datasets, head over to URL
### Example scripts
Train your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with 'transformers' - here.
## Dataset Structure
### Data Instances
A typical data point comprises the 'path' to the audio file and its 'sentence'.
Additional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.
### Data Fields
'client_id' ('string'): An id for which client (voice) made the recording
'path' ('string'): The path to the audio file
'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0]["audio"]' the audio file is automatically decoded and resampled to 'dataset.features["audio"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '"audio"' column, *i.e.* 'dataset[0]["audio"]' should always be preferred over 'dataset["audio"][0]'.
'sentence' ('string'): The sentence the user was prompted to speak
'up_votes' ('int64'): How many upvotes the audio file has received from reviewers
'down_votes' ('int64'): How many downvotes the audio file has received from reviewers
'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')
'gender' ('string'): The gender of the speaker
'accent' ('string'): Accent of the speaker
'locale' ('string'): The locale of the speaker
'segment' ('string'): Usually an empty field
### Data Splits
The speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.
The validated data is data that has been validated by reviewers and has received upvotes indicating that it is of high quality.
The invalidated data is data that has been invalidated by reviewers
and has received downvotes indicating that it is of low quality.
The reported data is data that has been reported, for different reasons.
The other data is data that has not yet been reviewed.
The dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.
## Data Preprocessing Recommended by Hugging Face
The following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them into practice.
Many examples in this dataset have trailing quotation marks, e.g. _“the cat sat on the mat.”_. These trailing quotation marks do not change the actual meaning of the sentence, and it is nearly impossible to infer from the audio data alone whether a sentence is a quotation or not. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.
In addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
## Considerations for Using the Data
### Social Impact of Dataset
The dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
Public Domain, CC-0
| [
"# Dataset Card for Common Voice Corpus 16",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] | [
"TAGS\n#annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #language-Abkhazian #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-Azerbaijani #language-Bashkir #language-Basa (Cameroon) #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Central Kurdish #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Dyula #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Hausa #language-Hebrew #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kazakh #language-Northern Kurdish #language-Korean #language-Kirghiz #language-Ganda #language-Ligurian #language-Lao #language-Lithuanian #language-Latgalian #language-Latvian #language-Moksha #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Western Mari #language-Maltese #language-Erzya #language-Min Nan Chinese #language-Nepali (macrolanguage) #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Dutch #language-Norwegian Nynorsk #language-Occitan (post 1500) #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Ayacucho Quechua #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Santali #language-Sardinian #language-Slovak #language-Saraiki #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tigrinya #language-Tigre #language-Turkmen #language-Toki Pona #language-Turkish #language-Tatar #language-Twi #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Votic #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Standard Moroccan Tamazight #language-Chinese #license-cc0-1.0 #arxiv-1912.06670 #region-us \n",
"# Dataset Card for Common Voice Corpus 16",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav",
"### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. \nMany of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.",
"### Languages",
"## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).",
"### Local",
"### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with 'transformers' - here.",
"## Dataset Structure",
"### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.",
"### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field",
"### Data Splits\n\nThe speech material has been subdivided into portions for dev, train, test, validated, invalidated, reported and other.\n\nThe validated data is data that has been validated with reviewers and received upvotes that the data is of high quality.\n\nThe invalidated data is data has been invalidated by reviewers\nand received downvotes indicating that the data is of low quality.\n\nThe reported data is data that has been reported, for different reasons.\n\nThe other data is data that has not yet been reviewed.\n\nThe dev, test, train are all data that has been reviewed, deemed of high quality and split into dev, test and train.",
"## Data Preprocessing Recommended by Hugging Face\n\nThe following are data preprocessing steps advised by the Hugging Face team. They are accompanied by an example code snippet that shows how to put them to practice. \n\nMany examples in this dataset have trailing quotations marks, e.g _“the cat sat on the mat.“_. These trailing quotation marks do not change the actual meaning of the sentence, and it is near impossible to infer whether a sentence is a quotation or not a quotation from audio data alone. In these cases, it is advised to strip the quotation marks, leaving: _the cat sat on the mat_.\n\nIn addition, the majority of training sentences end in punctuation ( . or ? or ! ), whereas just a small proportion do not. In the dev set, almost all sentences end in punctuation. Thus, it is recommended to append a full-stop ( . ) to the end of the small number of training examples that do not end in punctuation.",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe dataset consists of people who have donated their voice online. You agree to not attempt to determine the identity of speakers in the Common Voice dataset.",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information\n\nPublic Domain, CC-0"
] | [
781,
9,
120,
34,
109,
4,
190,
3,
22,
36,
6,
77,
378,
145,
233,
5,
7,
4,
10,
10,
5,
5,
9,
42,
8,
41,
8,
7,
5,
6,
11
] | [
"passage: ",
"passage: TAGS\n#annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-multilingual #language-Abkhazian #language-Afrikaans #language-Amharic #language-Arabic #language-Assamese #language-Asturian #language-Azerbaijani #language-Bashkir #language-Basa (Cameroon) #language-Belarusian #language-Bulgarian #language-Bengali #language-Breton #language-Catalan #language-Central Kurdish #language-Hakha Chin #language-Czech #language-Chuvash #language-Welsh #language-Danish #language-German #language-Dhivehi #language-Dyula #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Basque #language-Persian #language-Finnish #language-French #language-Western Frisian #language-Irish #language-Galician #language-Guarani #language-Hausa #language-Hebrew #language-Hindi #language-Upper Sorbian #language-Hungarian #language-Armenian #language-Interlingua (International Auxiliary Language Association) #language-Indonesian #language-Igbo #language-Icelandic #language-Italian #language-Japanese #language-Georgian #language-Kabyle #language-Kazakh #language-Northern Kurdish #language-Korean #language-Kirghiz #language-Ganda #language-Ligurian #language-Lao #language-Lithuanian #language-Latgalian #language-Latvian #language-Moksha #language-Eastern Mari #language-Macedonian #language-Malayalam #language-Mongolian #language-Marathi #language-Western Mari #language-Maltese #language-Erzya #language-Min Nan Chinese #language-Nepali (macrolanguage) #language-Zacatlán-Ahuacatlán-Tepetzintla Nahuatl #language-Dutch #language-Norwegian Nynorsk #language-Occitan (post 1500) #language-Oriya (macrolanguage) #language-Ossetian #language-Panjabi #language-Polish #language-Pushto #language-Portuguese #language-Ayacucho Quechua #language-Romansh #language-Romanian #language-Russian #language-Kinyarwanda #language-Yakut #language-Santali #language-Sardinian #language-Slovak #language-Saraiki #language-Slovenian #language-Albanian #language-Serbian #language-Swedish #language-Swahili (macrolanguage) #language-Tamil #language-Telugu #language-Thai #language-Tigrinya #language-Tigre #language-Turkmen #language-Toki Pona #language-Turkish #language-Tatar #language-Twi #language-Uighur #language-Ukrainian #language-Urdu #language-Uzbek #language-Vietnamese #language-Votic #language-Yiddish #language-Yoruba #language-Yue Chinese #language-Standard Moroccan Tamazight #language-Chinese #license-cc0-1.0 #arxiv-1912.06670 #region-us \n# Dataset Card for Common Voice Corpus 16## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks and Leaderboards\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n - Annotations\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Dataset Curators\n - Licensing Information\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: URL\n- Leaderboard: URL\n- Point of Contact: Vaibhav Srivastav### Dataset Summary\n\nThe Common Voice dataset consists of a unique MP3 and corresponding text file. 
\nMany of the 30328 recorded hours in the dataset also include demographic metadata like age, sex, and accent \nthat can help improve the accuracy of speech recognition engines.\n\nThe dataset currently consists of 19673 validated hours in 120 languages, but more voices and languages are always added. \nTake a look at the Languages page to request a language or start contributing.### Languages## How to use\n\nThe 'datasets' library allows you to load and pre-process your dataset in pure Python, at scale. The dataset can be downloaded and prepared in one call to your local drive by using the 'load_dataset' function. \n\nFor example, to download the Hindi config, simply specify the corresponding language config name (i.e., \"hi\" for Hindi):\n\n\nUsing the datasets library, you can also stream the dataset on-the-fly by adding a 'streaming=True' argument to the 'load_dataset' function call. Loading a dataset in streaming mode loads individual samples of the dataset at a time, rather than downloading the entire dataset to disk.\n\n\n*Bonus*: create a PyTorch dataloader directly with your own datasets (local/streamed).### Local### Streaming\n\n\n\nTo find out more about loading and preparing audio datasets, head over to URL",
"passage: ### Example scripts\n\nTrain your own CTC or Seq2Seq Automatic Speech Recognition models on Common Voice 16 with 'transformers' - here.## Dataset Structure### Data Instances\n\nA typical data point comprises the 'path' to the audio file and its 'sentence'. \nAdditional fields include 'accent', 'age', 'client_id', 'up_votes', 'down_votes', 'gender', 'locale' and 'segment'.### Data Fields\n\n'client_id' ('string'): An id for which client (voice) made the recording\n\n'path' ('string'): The path to the audio file\n\n'audio' ('dict'): A dictionary containing the path to the downloaded audio file, the decoded audio array, and the sampling rate. Note that when accessing the audio column: 'dataset[0][\"audio\"]' the audio file is automatically decoded and resampled to 'dataset.features[\"audio\"].sampling_rate'. Decoding and resampling of a large number of audio files might take a significant amount of time. Thus it is important to first query the sample index before the '\"audio\"' column, *i.e.* 'dataset[0][\"audio\"]' should always be preferred over 'dataset[\"audio\"][0]'.\n\n'sentence' ('string'): The sentence the user was prompted to speak\n\n'up_votes' ('int64'): How many upvotes the audio file has received from reviewers\n\n'down_votes' ('int64'): How many downvotes the audio file has received from reviewers\n\n'age' ('string'): The age of the speaker (e.g. 'teens', 'twenties', 'fifties')\n\n'gender' ('string'): The gender of the speaker\n\n'accent' ('string'): Accent of the speaker\n\n'locale' ('string'): The locale of the speaker\n\n'segment' ('string'): Usually an empty field"
] |
5436fc19d05279de68e23ec7c39ed6a5b6a7494e | # squad_newsqa
Concatenating StellarMilk/newsqa dataset with lmqg/qag_squad for asahi417/lm-question-generation | StellarMilk/squad_newsqa | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | 2023-12-20T09:13:40+00:00 | {"language": ["en"], "size_categories": ["10K<n<100K"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/squad_newsqa_train.parquet"}, {"split": "validation", "path": "data/squad_newsqa_valid.parquet"}, {"split": "test", "path": "data/squad_newsqa_test.parquet"}]}]} | 2023-12-20T09:15:05+00:00 | [] | [
"en"
] | TAGS
#size_categories-10K<n<100K #language-English #region-us
| # squad_newsqa
Concatenating StellarMilk/newsqa dataset with lmqg/qag_squad for asahi417/lm-question-generation | [
"# squad_newsqa\nConcatenating StellarMilk/newsqa dataset with lmqg/qag_squad for asahi417/lm-question-generation"
] | [
"TAGS\n#size_categories-10K<n<100K #language-English #region-us \n",
"# squad_newsqa\nConcatenating StellarMilk/newsqa dataset with lmqg/qag_squad for asahi417/lm-question-generation"
] | [
22,
43
] | [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #region-us \n# squad_newsqa\nConcatenating StellarMilk/newsqa dataset with lmqg/qag_squad for asahi417/lm-question-generation"
] |
f005b8931657101cf1c49c8ae94b63c0428a29d4 |
# Dataset Card for Universal NER v1 in the Aya format - Danish subset
This dataset is a format conversion for the Danish data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
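As a quick orientation, a minimal loading sketch for this subset (whether a specific configuration name must be passed depends on how the repository is set up, so treat the bare call as an assumption):

```python
from datasets import load_dataset

ds = load_dataset("universalner/uner_llm_inst_danish")
print(ds)                        # shows the available splits for this language

first_split = next(iter(ds))     # name of one split, e.g. "train" or "test"
print(ds[first_split][0])        # one instruction-formatted NER example
```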
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_danish | [
"task_categories:token-classification",
"language:da",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:16:40+00:00 | {"language": ["da"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-21T09:10:59+00:00 | [
"2311.09122"
] | [
"da"
] | TAGS
#task_categories-token-classification #language-Danish #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us
|
# Dataset Card for Universal NER v1 in the Aya format - Danish subset
This dataset is a format conversion for the Danish data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check URL
For details on the conversion to the Aya instructions format, please see the complete version: URL
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for Universal NER v1 in the Aya format - Danish subset\n\nThis dataset is a format conversion for the Danish data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Danish #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n",
"# Dataset Card for Universal NER v1 in the Aya format - Danish subset\n\nThis dataset is a format conversion for the Danish data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
97,
77
] | [
"passage: TAGS\n#task_categories-token-classification #language-Danish #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n# Dataset Card for Universal NER v1 in the Aya format - Danish subset\n\nThis dataset is a format conversion for the Danish data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
e9cc5044d076f05fda6737e5219c9c3f2598c3dd | # AmazonElectronics_x1
+ **Data format:**
label, user_id, item_id, cate_id, item_history, cate_history (see the loading sketch at the end of this card)
+ **Source:** https://cseweb.ucsd.edu/~jmcauley/datasets.html
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonElectronics_x1/tree/main
+ **Repository:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai. [Deep Interest Network for Click-Through Rate Prediction](https://arxiv.org/abs/1706.06978). In KDD 2018.
- Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. [ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop](https://arxiv.org/abs/2306.08808). In KDD 2023.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.csv
57a20e82fe736dd495f2eaf0669bf6d0 test.csv
e9bf80b92985e463db18fdc753d347b5 train.csv
```
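As a quick way to inspect the format listed above, a minimal pandas sketch (whether the CSVs ship with a header row is an assumption; pass `names=cols, header=None` instead if they do not):

```python
import pandas as pd

cols = ["label", "user_id", "item_id", "cate_id", "item_history", "cate_history"]
train = pd.read_csv("train.csv")   # file from the download link above
print(train.columns.tolist())      # expected to line up with `cols`
print(train.head())
```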
| reczoo/AmazonElectronics_x1 | [
"arxiv:1706.06978",
"arxiv:2306.08808",
"region:us"
] | 2023-12-20T09:22:41+00:00 | {} | 2023-12-23T04:11:24+00:00 | [
"1706.06978",
"2306.08808"
] | [] | TAGS
#arxiv-1706.06978 #arxiv-2306.08808 #region-us
| # AmazonElectronics_x1
+ Data format:
label, user_id, item_id, cate_id, item_history, cate_history
+ Source: URL
+ Download: URL
+ Repository: URL
+ Used by papers:
- Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai. Deep Interest Network for Click-Through Rate Prediction. In KDD 2018.
- Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop. In KDD 2023.
+ Check the md5sum for data integrity:
| [
"# AmazonElectronics_x1\n\n+ Data format: \nlabel, user_id, item_id, cate_id, item_history, cate_history\n\n+ Source: URL\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai. Deep Interest Network for Click-Through Rate Prediction. In KDD 2018.\n - Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop. In KDD 2023.\n\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-1706.06978 #arxiv-2306.08808 #region-us \n",
"# AmazonElectronics_x1\n\n+ Data format: \nlabel, user_id, item_id, cate_id, item_history, cate_history\n\n+ Source: URL\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai. Deep Interest Network for Click-Through Rate Prediction. In KDD 2018.\n - Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop. In KDD 2023.\n\n\n+ Check the md5sum for data integrity:"
] | [
23,
194
] | [
"passage: TAGS\n#arxiv-1706.06978 #arxiv-2306.08808 #region-us \n# AmazonElectronics_x1\n\n+ Data format: \nlabel, user_id, item_id, cate_id, item_history, cate_history\n\n+ Source: URL\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai. Deep Interest Network for Click-Through Rate Prediction. In KDD 2018.\n - Jieming Zhu, Guohao Cai, Junjie Huang, Zhenhua Dong, Ruiming Tang, Weinan Zhang. ReLoop2: Building Self-Adaptive Recommendation Models via Responsive Error Compensation Loop. In KDD 2023.\n\n\n+ Check the md5sum for data integrity:"
] |
bac93c26a2d5cc26734adf542167b76ec3651b43 | # Open-Orca/SlimOrca-Dedup
```
{
"processed": true,
"4keys": true,
"jsonifize": true,
"uploaded": true
}
```
LICENSE FOUND AT: https://huggingface.co/datasets/Open-Orca/SlimOrca-Dedup
Reformatting generated by [AlignmentLab.AI](https://Alignmentlab.ai); please refer to the original authors' work for attribution.
## Configurations
```
[
{
"instruction": "randomvalueremoved",
"input": null,
"output": "line"
},
{
"instruction": "schema",
"input": "values",
"output": "line"
},
{
"instruction": "randomvalueremoved",
"input": "values",
"output": "line"
},
{
"instruction": "values",
"input": null,
"output": "line"
},
{
"instruction": "values",
"input": "schema",
"output": "line"
},
{
"instruction": "values",
"input": "randomvalueremoved",
"output": "line"
}
]
```
| jsonifize/SlimOrca-Dedup-jsonify-v2 | [
"source_datasets:Open-Orca/SlimOrca-Dedup",
"language:en",
"jsonifize",
"NLP",
"region:us"
] | 2023-12-20T09:30:08+00:00 | {"language": ["en"], "source_datasets": ["Open-Orca/SlimOrca-Dedup"], "pretty_name": "SlimOrca-Dedup-jsonify-v2", "tags": ["jsonifize", "NLP"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}]}} | 2023-12-22T17:13:47+00:00 | [] | [
"en"
] | TAGS
#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us
| # Open-Orca/SlimOrca-Dedup
LICENSE FOUND AT: URL
Reformatting generated by AlignmentLab.AI; please refer to the original authors' work for attribution.
## Configurations
| [
"# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution",
"## Configurations"
] | [
"TAGS\n#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us \n",
"# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution",
"## Configurations"
] | [
38,
45,
4
] | [
"passage: TAGS\n#source_datasets-Open-Orca/SlimOrca-Dedup #language-English #jsonifize #NLP #region-us \n# Open-Orca/SlimOrca-Dedup\n\nLICENSE FOUND AT: URL\n\nReformatting generated by AlignmentLab.AI please refer to the original authors work for attribution## Configurations"
] |
efb718376e952c99d974d9e5d78ba025b151fd3f | ALIGN-BENCH is developed for measuring cross-modal alignment of vision-language models quantitatively.
The code can be found at https://github.com/IIGROUP/SCL.
The core idea is to utilize the cross-attention maps of the last layer of the fusion encoder and compare them with annotated regions corresponding to selected words.
ALIGN-BENCH can calculate global-local and local-local alignment scores from two angles: bounding box and pixel mask.
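Purely as an illustration of the kind of comparison described here (not the repository's official scoring code; see the linked repo for that), one could measure how much of a word's cross-attention mass falls inside its annotated pixel mask:

```python
import numpy as np

def attention_in_mask(attn_map: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of normalized cross-attention mass inside the annotated region.

    attn_map: 2-D attention weights for one word, resized to the image resolution.
    mask:     binary pixel mask for that word's annotated region, same shape.
    """
    attn = attn_map / (attn_map.sum() + 1e-8)  # normalize to a distribution
    return float((attn * (mask > 0)).sum())
```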
There are 1,500 images and 1,500 annotation files in the dataset zip file. Each annotation file contains a caption and the regions (bounding box and pixel mask) of selected words on the image. | jiyatai/ALIGN-BENCH | [
"region:us"
] | 2023-12-20T09:38:07+00:00 | {} | 2023-12-21T08:00:18+00:00 | [] | [] | TAGS
#region-us
| ALIGN-BENCH is developed for measuring cross-modal alignment of vision-language models quantitatively.
The code can be found at URL
The core idea is to utilize the cross-attention maps of the last layer in fusion encoder, and compare them with annotated regions corresponding to some words.
ALIGN-BENCH can calculate global-local and local-local alignment scores from two angles, bounding box and pixel mask.
There are 1,500 images and 1,500 annotation files in the dataset zip file. The annotatino file contains a caption and some words' regions (bounding box and pixel mask) on the image. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
bf6cb2494f699ee2a9ad9f861f7abd52600f506d |
# Dataset Card for Universal NER v1 in the Aya format - German subset
This dataset is a format conversion for the German data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_german | [
"task_categories:token-classification",
"language:de",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:41:28+00:00 | {"language": ["de"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "de_pud", "splits": [{"name": "test", "num_examples": 999}]}]} | 2023-12-20T09:43:09+00:00 | [
"2311.09122"
] | [
"de"
] | TAGS
#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us
|
# Dataset Card for Universal NER v1 in the Aya format - German subset
This dataset is a format conversion for the German data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check URL
For details on the conversion to the Aya instructions format, please see the complete version: URL
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for Universal NER v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n",
"# Dataset Card for Universal NER v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
97,
77
] | [
"passage: TAGS\n#task_categories-token-classification #language-German #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n# Dataset Card for Universal NER v1 in the Aya format - German subset\n\nThis dataset is a format conversion for the German data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
2b71b6ac6687ebf99c5e6b8923fa2d6ce33a89b7 |
# Dataset Card for Universal NER v1 in the Aya format - English subset
This dataset is a format conversion for the English data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_english | [
"task_categories:token-classification",
"language:en",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:43:50+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "en_pud", "splits": [{"name": "test", "num_examples": 999}]}, {"config_name": "en_ewt", "splits": [{"name": "test", "num_examples": 2076}, {"name": "dev", "num_examples": 2000}, {"name": "train", "num_examples": 12542}]}]} | 2023-12-20T09:45:49+00:00 | [
"2311.09122"
] | [
"en"
] |
# Dataset Card for Universal NER v1 in the Aya format - Croatian subset
This dataset is a conversion of the Croatian data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_croatian | [
"task_categories:token-classification",
"language:hr",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:47:09+00:00 | {"language": ["hr"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-21T09:10:36+00:00 | [
"2311.09122"
] | [
"hr"
] |
# Dataset Card for Universal NER v1 in the Aya format - Portuguese subset
This dataset is a conversion of the Portuguese data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_portuguese | [
"task_categories:token-classification",
"language:pt",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:49:30+00:00 | {"language": ["pt"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "pt_pud", "splits": [{"name": "test", "num_examples": 999}]}, {"config_name": "pt_bosque", "splits": [{"name": "test", "num_examples": 1166}, {"name": "dev", "num_examples": 1171}, {"name": "train", "num_examples": 4302}]}]} | 2023-12-20T09:50:54+00:00 | [
"2311.09122"
] | [
"pt"
] |
# Power Line Towers Dataset
The dataset comprises 860 aerial images of power line towers captured by UAVs with RGB cameras. It is specifically intended for image classification tasks: each tower in the dataset has been meticulously annotated in YOLO format, offering a valuable resource for training and evaluating computer vision models for power line tower recognition.
## Dataset Details

* The RGB images are stored in a single folder.
* The annotations are stored in a single folder containing one annotation file per image, identified by the same file name.

The annotations are provided in YOLO format: class, x_center, y_center, width, height.
All values are expressed as proportions of the image width and height, which are constant across all images.
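As a rough illustration (not part of the original dataset card), the snippet below shows how such an annotation file could be parsed and its normalized boxes converted to pixel coordinates. The folder layout, file names, and image resolution used here are assumptions for the example only.

```python
# Minimal sketch for reading one YOLO-format annotation file of this dataset.
# Paths and image size are hypothetical; annotation files share the image's base name.
from pathlib import Path

IMG_W, IMG_H = 1920, 1080  # assumed (constant) image resolution


def read_yolo_boxes(txt_path: Path, img_w: int, img_h: int):
    """Return a list of (class_id, x_min, y_min, width, height) in pixel units."""
    boxes = []
    for line in txt_path.read_text().splitlines():
        if not line.strip():
            continue
        cls, xc, yc, w, h = line.split()
        # Values are proportions of the image width/height -> convert to pixels.
        xc, yc = float(xc) * img_w, float(yc) * img_h
        w, h = float(w) * img_w, float(h) * img_h
        boxes.append((int(cls), xc - w / 2, yc - h / 2, w, h))
    return boxes


# Hypothetical pairing: images/tower_0001.jpg <-> labels/tower_0001.txt
label_path = Path("labels/tower_0001.txt")
if label_path.exists():
    print(read_yolo_boxes(label_path, IMG_W, IMG_H))
```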
### Dataset Description
- **Curated by:** UPNAdrone: Drones Laboratory at Universidad Pública de Navarra
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** UPNAdrone: Drones Laboratory at Universidad Pública de Navarra
- **Language(s) (NLP):** N/A
- **License:** CC BY-NC-SA 4.0 (https://creativecommons.org/licenses/by-nc-sa/4.0/)
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
### Direct Use
Aerial image classification for power line inspection tasks.
## Dataset Creation
### Curation Rationale
Research.
### Source Data
All the data has been obtained from our own inspection flights carried out for research purposes.
#### Data Collection and Processing
The data has been manually inspected, processed and annotated.
#### Annotation process
Manual annotation has been carried out for every single image using CVAT.
#### Personal and Sensitive Information
The authors state that there is no known personal or sensitive information in the provided dataset.
## Bias, Risks, and Limitations
This dataset is intended for research purposes; therefore, commercial use of this dataset is not permitted.
### Recommendations
The authors explicitly disclaim any responsibility associated with the misuse of the dataset.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
WIP
**APA:**
WIP
## Dataset Card Contact
For support and/or questions, please get in touch directly with UPNAdrone: https://github.com/UPNAdrone
| UPNAdroneLab/powerline_towers | [
"size_categories:n<1K",
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-20T09:50:24+00:00 | {"license": "cc-by-nc-sa-4.0", "size_categories": ["n<1K"], "pretty_name": "powerline_towers"} | 2023-12-22T13:11:55+00:00 | [] | [] | TAGS
#size_categories-n<1K #license-cc-by-nc-sa-4.0 #region-us
|
# Power Line Towers Dataset
The dataset comprises 860 aerial images of power line towers captured by UAVs using RGB cameras. Specifically intended for image classification tasks, each tower in the dataset has been meticulously annotated in YOLO format, offering a valuable resource for training and evaluating computer vision models in the context of power line tower recognition.
## Dataset Details
!Example annotations
* The RGB images are stored in a single folder.
* The annotations are stored in a single folder that contains one file per image, which is identified by the same name.
The annotations are provided in YOLO format: class, x_center, y_center, width, height.
All the values are presented as a proportion of the image width and height, which is constant for all the images.
### Dataset Description
- Curated by: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra
- Funded by [optional]:
- Shared by [optional]: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra
- Language(s) (NLP): N/A
- License: CC BY-NC-SA 4.0 (URL
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
### Direct Use
Aerial image classification for power line inspection tasks.
## Dataset Creation
### Curation Rationale
Research.
### Source Data
All the data has been obtained from our own inspection flights carried out for research purposes.
#### Data Collection and Processing
The data has been manually inspected, processed and annotated.
#### Annotation process
Manual annotation has been carried out for every single image using CVAT.
#### Personal and Sensitive Information
The authors state that there is no known personal nor sensitive information in the provided dataset.
## Bias, Risks, and Limitations
This dataset is intended for research purposes. Therefore, commercial use of the following dataset is not permitted.
### Recommendations
The authors explicitly disclaim any responsibility associated with the misuse of the dataset.
[optional]
BibTeX:
WIP
APA:
WIP
## Dataset Card Contact
For support and/or questions, please get in touch directly with UPNAdrone: URL
| [
"# Power Line Towers Dataset\n\nThe dataset comprises 860 aerial images of power line towers captured by UAVs using RGB cameras. Specifically intended for image classification tasks, each tower in the dataset has been meticulously annotated in YOLO format, offering a valuable resource for training and evaluating computer vision models in the context of power line tower recognition.",
"## Dataset Details\n\n!Example annotations\n\n* The RGB images are stored in a single folder.\n* The annotations are stored in a single folder that contains one file per image, which is identified by the same name.\n\nThe annotations are provided in YOLO format: class, x_center, y_center, width, height.\nAll the values are presented as a proportion of the image width and height, which is constant for all the images.",
"### Dataset Description\n\n- Curated by: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Funded by [optional]: \n- Shared by [optional]: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Language(s) (NLP): N/A\n- License: CC BY-NC-SA 4.0 (URL",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"### Direct Use\n\nAerial image classification for power line inspection tasks.",
"## Dataset Creation",
"### Curation Rationale\n\nResearch.",
"### Source Data\n\nAll the data has been obtained from our own inspection flights carried out for research purposes.",
"#### Data Collection and Processing\n\nThe data has been manually inspected, processed and annotated.",
"#### Annotation process\n\nManual annotation has been carried out for every single image using CVAT.",
"#### Personal and Sensitive Information\n\nThe authors state that there is no known personal nor sensitive information in the provided dataset.",
"## Bias, Risks, and Limitations\n\nThis dataset is intended for research purposes. Therefore, commercial use of the following dataset is not permitted.",
"### Recommendations\n\nThe authors explicitly disclaim any responsibility associated with the misuse of the dataset.\n\n[optional]\n\n\n\nBibTeX:\n\nWIP\n\nAPA:\n\nWIP",
"## Dataset Card Contact\n\nFor support and/or questions, please get in touch directly with UPNAdrone: URL"
] | [
"TAGS\n#size_categories-n<1K #license-cc-by-nc-sa-4.0 #region-us \n",
"# Power Line Towers Dataset\n\nThe dataset comprises 860 aerial images of power line towers captured by UAVs using RGB cameras. Specifically intended for image classification tasks, each tower in the dataset has been meticulously annotated in YOLO format, offering a valuable resource for training and evaluating computer vision models in the context of power line tower recognition.",
"## Dataset Details\n\n!Example annotations\n\n* The RGB images are stored in a single folder.\n* The annotations are stored in a single folder that contains one file per image, which is identified by the same name.\n\nThe annotations are provided in YOLO format: class, x_center, y_center, width, height.\nAll the values are presented as a proportion of the image width and height, which is constant for all the images.",
"### Dataset Description\n\n- Curated by: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Funded by [optional]: \n- Shared by [optional]: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Language(s) (NLP): N/A\n- License: CC BY-NC-SA 4.0 (URL",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"### Direct Use\n\nAerial image classification for power line inspection tasks.",
"## Dataset Creation",
"### Curation Rationale\n\nResearch.",
"### Source Data\n\nAll the data has been obtained from our own inspection flights carried out for research purposes.",
"#### Data Collection and Processing\n\nThe data has been manually inspected, processed and annotated.",
"#### Annotation process\n\nManual annotation has been carried out for every single image using CVAT.",
"#### Personal and Sensitive Information\n\nThe authors state that there is no known personal nor sensitive information in the provided dataset.",
"## Bias, Risks, and Limitations\n\nThis dataset is intended for research purposes. Therefore, commercial use of the following dataset is not permitted.",
"### Recommendations\n\nThe authors explicitly disclaim any responsibility associated with the misuse of the dataset.\n\n[optional]\n\n\n\nBibTeX:\n\nWIP\n\nAPA:\n\nWIP",
"## Dataset Card Contact\n\nFor support and/or questions, please get in touch directly with UPNAdrone: URL"
] | [
29,
86,
102,
82,
29,
18,
5,
9,
25,
23,
20,
27,
34,
40,
24
] | [
"passage: TAGS\n#size_categories-n<1K #license-cc-by-nc-sa-4.0 #region-us \n# Power Line Towers Dataset\n\nThe dataset comprises 860 aerial images of power line towers captured by UAVs using RGB cameras. Specifically intended for image classification tasks, each tower in the dataset has been meticulously annotated in YOLO format, offering a valuable resource for training and evaluating computer vision models in the context of power line tower recognition.## Dataset Details\n\n!Example annotations\n\n* The RGB images are stored in a single folder.\n* The annotations are stored in a single folder that contains one file per image, which is identified by the same name.\n\nThe annotations are provided in YOLO format: class, x_center, y_center, width, height.\nAll the values are presented as a proportion of the image width and height, which is constant for all the images.### Dataset Description\n\n- Curated by: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Funded by [optional]: \n- Shared by [optional]: UPNAdrone: Drones Laboratory at Universidad Pública de Navarra\n- Language(s) (NLP): N/A\n- License: CC BY-NC-SA 4.0 (URL### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:### Direct Use\n\nAerial image classification for power line inspection tasks.## Dataset Creation### Curation Rationale\n\nResearch.### Source Data\n\nAll the data has been obtained from our own inspection flights carried out for research purposes.#### Data Collection and Processing\n\nThe data has been manually inspected, processed and annotated.#### Annotation process\n\nManual annotation has been carried out for every single image using CVAT.#### Personal and Sensitive Information\n\nThe authors state that there is no known personal nor sensitive information in the provided dataset.## Bias, Risks, and Limitations\n\nThis dataset is intended for research purposes. Therefore, commercial use of the following dataset is not permitted."
] |
b56ee3aeb34aebfa6a0bf322a09a2b99c5bff8d8 |
# Dataset Card for Universal NER v1 in the Aya format - Russian subset
This dataset is a conversion of the Russian data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_russian | [
"task_categories:token-classification",
"language:ru",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:52:14+00:00 | {"language": ["ru"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "ru_pud", "splits": [{"name": "test", "num_examples": 999}]}]} | 2023-12-20T09:53:13+00:00 | [
"2311.09122"
] | [
"ru"
] |
# Dataset Card for Universal NER v1 in the Aya format - Slovak subset
This dataset is a conversion of the Slovak data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_slovak | [
"task_categories:token-classification",
"language:sk",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:53:48+00:00 | {"language": ["sk"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-21T09:09:06+00:00 | [
"2311.09122"
] | [
"sk"
] |
# Dataset Card for Universal NER v1 in the Aya format - Serbian subset
This dataset is a conversion of the Serbian data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_serbian | [
"task_categories:token-classification",
"language:sr",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:55:54+00:00 | {"language": ["sr"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-21T09:05:32+00:00 | [
"2311.09122"
] | [
"sr"
] |
# Dataset Card for Universal NER v1 in the Aya format - Swedish subset
This dataset is a conversion of the Swedish data in the original Universal NER v1 into the Aya instruction format; it is released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on the language. For more details, please refer to the Dataset Details section below.
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_swedish | [
"task_categories:token-classification",
"language:sv",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:57:42+00:00 | {"language": ["sv"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "sv_pud", "splits": [{"name": "test", "num_examples": 999}]}, {"config_name": "sv_talbanken", "splits": [{"name": "test", "num_examples": 1218}, {"name": "dev", "num_examples": 503}, {"name": "train", "num_examples": 4302}]}]} | 2023-12-20T09:59:02+00:00 | [
"2311.09122"
] | [
"sv"
] |
# Dataset Card for SPC: Synthetic-Persona-Chat Dataset
Abstract from the paper introducing this dataset:
> High-quality conversational datasets are essential for developing AI models that can communicate with users. One way to foster deeper interactions between a chatbot and its user is through personas, aspects of the user's character that provide insights into their personality, motivations, and behaviors. Training Natural Language Processing (NLP) models on a diverse and comprehensive persona-based dataset can lead to conversational models that create a deeper connection with the user, and maintain their engagement. In this paper, we leverage the power of Large Language Models (LLMs) to create a large, high-quality conversational dataset from a seed dataset. We propose a Generator-Critic architecture framework to expand the initial dataset, while improving the quality of its conversations. The Generator is an LLM prompted to output conversations. The Critic consists of a mixture of expert LLMs that control the quality of the generated conversations. These experts select the best generated conversations, which we then use to improve the Generator. We release Synthetic-Persona-Chat, consisting of 20k conversations seeded from Persona-Chat. We evaluate the quality of Synthetic-Persona-Chat and our generation framework on different dimensions through extensive experiments, and observe that the losing rate of Synthetic-Persona-Chat against Persona-Chat during Turing test decreases from 17.2% to 8.8% over three iterations.
## Dataset Details
### Dataset Description
> We introduce the Synthetic-Persona-Chat dataset, a persona-based conversational dataset, consisting of two parts. The first part, consisting of 4,723 personas and 10,906 conversations, is an extension to Persona-Chat, which has the same user profile pairs as Persona-Chat but new synthetic conversations, with the same train/validation/test split as Persona-Chat. The second part is new synthetic personas and synthetic conversations based on that, consisting of 5,648 synthetic personas and 11,001 conversations. Synthetic-Persona-Chat is created using the Generator-Critic framework introduced in Faithful Persona-based Conversational Dataset Generation with Large Language Models.
Each conversation in the dataset has the following format:
```
{
"User 1 Persona":[],
"User 2 Persona":[],
"Conversation":[]
}
```
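For readers who want to poke at the data, a minimal loading sketch (not from the original card) is shown below; the split name and the exact column names are assumptions, so check `column_names` on the loaded dataset rather than relying on the keys shown above.

```python
# Minimal sketch: load Synthetic-Persona-Chat with the Hugging Face `datasets` library
# and inspect one conversation. Split and column names are assumptions.
from datasets import load_dataset

ds = load_dataset("google/Synthetic-Persona-Chat", split="train")
print(ds.column_names)  # verify how personas and conversations are actually stored

example = ds[0]
print(example)  # expected to contain both user personas and the conversation turns
```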
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/google-research-datasets/Synthetic-Persona-Chat/tree/main
- **Paper:** https://arxiv.org/abs/2312.10007
## Citation
**BibTeX:**
```
@misc{jandaghi2023faithful,
title={Faithful Persona-based Conversational Dataset Generation with Large Language Models},
author={Pegah Jandaghi and XiangHai Sheng and Xinyi Bai and Jay Pujara and Hakim Sidahmed},
year={2023},
eprint={2312.10007},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| google/Synthetic-Persona-Chat | [
"task_categories:conversational",
"language:en",
"license:cc-by-4.0",
"synthetic",
"arxiv:2312.10007",
"region:us"
] | 2023-12-20T09:59:14+00:00 | {"language": ["en"], "license": "cc-by-4.0", "task_categories": ["conversational"], "pretty_name": "Synthetic-Persona-Chat Dataset", "tags": ["synthetic"]} | 2023-12-20T15:27:59+00:00 | [
"2312.10007"
] | [
"en"
] |
# Dataset Card for Universal NER v1 in the Aya format - Tagalog subset
This dataset is a format conversion for the Tagalog data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
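For a quick look at the records, a minimal loading sketch (the repository id and the config names `tl_trg` / `tl_ugnayan` are taken from this card's metadata; both configs ship only a `test` split):

```python
from datasets import load_dataset

# Load the Tagalog TRG subset in the Aya instruction format and inspect one record.
trg = load_dataset("universalner/uner_llm_inst_tagalog", "tl_trg", split="test")
print(trg[0])
```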
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| universalner/uner_llm_inst_tagalog | [
"task_categories:token-classification",
"language:tl",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T09:59:58+00:00 | {"language": ["tl"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "tl_trg", "splits": [{"name": "test", "num_examples": 127}]}, {"config_name": "tl_ugnayan", "splits": [{"name": "test", "num_examples": 93}]}]} | 2023-12-20T10:01:10+00:00 | [
"2311.09122"
] | [
"tl"
] | TAGS
#task_categories-token-classification #language-Tagalog #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us
|
# Dataset Card for Universal NER v1 in the Aya format - Tagalog subset
This dataset is a format conversion for the Tagalog data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check URL
For details on the conversion to the Aya instructions format, please see the complete version: URL
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for Universal NER v1 in the Aya format - Tagalog subset\n\nThis dataset is a format conversion for the Tagalog data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Tagalog #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n",
"# Dataset Card for Universal NER v1 in the Aya format - Tagalog subset\n\nThis dataset is a format conversion for the Tagalog data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
44,
97,
77
] | [
"passage: TAGS\n#task_categories-token-classification #language-Tagalog #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n# Dataset Card for Universal NER v1 in the Aya format - Tagalog subset\n\nThis dataset is a format conversion for the Tagalog data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
49de6329b32807a1471bc48dfea67936b8336791 |
# Dataset Card for Universal NER v1 in the Aya format - Chinese subset
This dataset is a format conversion for the Chinese data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check https://huggingface.co/datasets/universalner/universal_ner.
For details on the conversion to the Aya instructions format, please see the complete version: https://huggingface.co/datasets/universalner/uner_llm_instructions
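For a quick look at the records, a minimal loading sketch (the repository id and the config names `zh_pud`, `zh_gsd`, `zh_gsdsimp` are taken from this card's metadata; `zh_gsd` and `zh_gsdsimp` also provide `dev` and `train` splits):

```python
from datasets import load_dataset

# Load the Chinese GSD training subset; swap the config name for zh_pud or zh_gsdsimp as needed.
gsd_train = load_dataset("universalner/uner_llm_inst_chinese", "zh_gsd", split="train")
print(gsd_train[0])
```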
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/universalner/uner_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{mayhew2023universal,
title={{Universal NER: A Gold-Standard Multilingual Named Entity Recognition Benchmark}},
author={Stephen Mayhew and Terra Blevins and Shuheng Liu and Marek Šuppa and Hila Gonen and Joseph Marvin Imperial and Börje F. Karlsson and Peiqin Lin and Nikola Ljubešić and LJ Miranda and Barbara Plank and Arij Riabi and Yuval Pinter},
year={2023},
eprint={2311.09122},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | universalner/uner_llm_inst_chinese | [
"task_categories:token-classification",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2311.09122",
"region:us"
] | 2023-12-20T10:01:39+00:00 | {"language": ["zh"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"], "dataset_info": [{"config_name": "zh_pud", "splits": [{"name": "test", "num_examples": 999}]}, {"config_name": "zh_gsd", "splits": [{"name": "test", "num_examples": 499}, {"name": "dev", "num_examples": 499}, {"name": "train", "num_examples": 3996}]}, {"config_name": "zh_gsdsimp", "splits": [{"name": "test", "num_examples": 499}, {"name": "dev", "num_examples": 499}, {"name": "train", "num_examples": 3996}]}]} | 2023-12-20T10:04:47+00:00 | [
"2311.09122"
] | [
"zh"
] | TAGS
#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us
|
# Dataset Card for Universal NER v1 in the Aya format - Chinese subset
This dataset is a format conversion for the Chinese data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.
The dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:
## Dataset Details
For the original Universal NER dataset v1 and more details, please check URL
For details on the conversion to the Aya instructions format, please see the complete version: URL
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for Universal NER v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n",
"# Dataset Card for Universal NER v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:",
"## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
43,
97,
77
] | [
"passage: TAGS\n#task_categories-token-classification #language-Chinese #license-cc-by-sa-4.0 #arxiv-2311.09122 #region-us \n# Dataset Card for Universal NER v1 in the Aya format - Chinese subset\n\nThis dataset is a format conversion for the Chinese data in the original Universal NER v1 into the Aya instruction format and it's released here under the same CC-BY-SA 4.0 license and conditions.\n\nThe dataset contains different subsets and their dev/test/train splits, depending on language. For more details, please refer to:## Dataset Details\n\nFor the original Universal NER dataset v1 and more details, please check URL\n\nFor details on the conversion to the Aya instructions format, please see the complete version: URL\n\n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |
d9a2d43a0188a33fa819ff4818cf206c119d3e84 |
# Portuguese-Corpus Instruct (tokenized small)
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** https://nkluge-correa.github.io/TeenyTinyLlama/
- **Repository:** https://github.com/Nkluge-correa/TeenyTinyLlama
- **Paper:** [TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese](https://arxiv.org/abs/2401.16640)
- **Point of Contact:** [AIRES at PUCRS](mailto:[email protected])
### Dataset Summary
This repository has a tokenized version (using the [TeenyTinyLlama tokenizer](https://huggingface.co/nicholasKluge/TeenyTinyLlama-460m)) of a small subset (3.7B tokens) of the [Pt-Corpus Instruct dataset](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct). All sequences are 2048 tokens long. This dataset was used in "_[TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese](https://arxiv.org/abs/2401.16640)_".
For more information, see the [original dataset card](https://huggingface.co/datasets/nicholasKluge/Pt-Corpus-Instruct).
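As a quick sanity check, the sequences can be decoded back to text. A minimal sketch, assuming the TeenyTinyLlama tokenizer linked above loads via a standard `AutoTokenizer`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Stream one tokenized example and decode it with the tokenizer used to build the dataset.
ds = load_dataset("nicholasKluge/Pt-Corpus-Instruct-tokenized-small", split="train", streaming=True)
tokenizer = AutoTokenizer.from_pretrained("nicholasKluge/TeenyTinyLlama-460m")

sample = next(iter(ds))
print(len(sample["input_ids"]))                      # expected: 2048
print(tokenizer.decode(sample["input_ids"])[:300])   # start of the detokenized Portuguese text
```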
## Languages
Portuguese.
## Dataset Structure
### Data Instances
The dataset consists of the following features:
- **input_ids:** sequence of tokens.
- **attention_mask:** binary tensor indicating the position of the padded indices.
- **labels:** sequence of tokens.
### Data Fields
```python
{
"input_ids": [ 1026, 1531, 1009, 8067,...],
"attention_mask": [1, 1, 1, 1, ...],
"labels": [ 1026, 1531, 1009, 8067,...]
}
```
### Data Splits
Available splits are `train` (~ 1.8M) and `test` (18K).
```python
from datasets import load_dataset
dataset = load_dataset("nicholasKluge/Pt-Corpus-Instruct-tokenized-small", split='train')
# If you don't want to download the entire dataset, set streaming to `True`
dataset = load_dataset("nicholasKluge/Pt-Corpus-Instruct-tokenized-small", split='train', streaming=True)
```
## Additional Information
### Dataset Curators
[Nicholas Kluge Corrêa](mailto:[email protected]).
### Citation Information
```latex
@misc{correa24ttllama,
title = {TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese},
author = {Corr{\^e}a, Nicholas Kluge and Falk, Sophia and Fatimah, Shiza and Sen, Aniket and De Oliveira, Nythamar},
journal={arXiv preprint arXiv:2401.16640},
year={2024}
}
```
### Contributions
If you would like to contribute, contact me at [[email protected]](mailto:[email protected])!
| nicholasKluge/Pt-Corpus-Instruct-tokenized-small | [
"task_categories:text-generation",
"size_categories:1M<n<10M",
"language:pt",
"license:other",
"portuguese",
"language-modeling",
"arxiv:2401.16640",
"region:us"
] | 2023-12-20T10:03:52+00:00 | {"language": ["pt"], "license": "other", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "Pt-Corpus Instruct tokenized small", "dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 48793769228.0, "num_examples": 1831873}, {"name": "test", "num_bytes": 479448000.0, "num_examples": 18000}], "download_size": 14600379883, "dataset_size": 49273217228.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test"}]}], "tags": ["portuguese", "language-modeling"]} | 2024-02-15T18:09:15+00:00 | [
"2401.16640"
] | [
"pt"
] | TAGS
#task_categories-text-generation #size_categories-1M<n<10M #language-Portuguese #license-other #portuguese #language-modeling #arxiv-2401.16640 #region-us
|
# Portuguese-Corpus Instruct (tokenized small)
## Table of Contents
- Table of Contents
- Dataset Description
- Dataset Summary
- Languages
- Dataset Structure
- Data Instances
- Data Fields
- Data Splits
- Additional Information
- Dataset Curators
- Citation Information
- Contributions
## Dataset Description
- Homepage: URL
- Repository: URL
- Paper: TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese
- Point of Contact: AIRES at PUCRS
### Dataset Summary
This repository has a tokenized version (using the TeenyTinyLlama tokenizer) of a small subset (3.7B tokens) of the Pt-Corpus Instruct dataset. All sequences are 2048 tokens long. This dataset was used in "_TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese_".
For more information, see the original dataset card.
## Languages
Portuguese.
## Dataset Structure
### Data Instances
The dataset consists of the following features:
- input_ids: sequence of tokens.
- attention_mask: binary tensor indicating the position of the padded indices.
- labels: sequence of tokens.
### Data Fields
### Data Splits
Available splits are 'train' (~ 1.8M) and 'test' (18K).
## Additional Information
### Dataset Curators
Nicholas Kluge Corrêa.
### Contributions
If you would like to contribute, contact me at nicholas@URL!
| [
"# Portuguese-Corpus Instruct (tokenized small)",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese\n- Point of Contact: AIRES at PUCRS",
"### Dataset Summary\n\nThis repository has a tokenized version (using the TeenyTinyLlama tokenizer) of a small subset (3.7B tokens) of the Pt-Corpus Instruct dataset. All sequences are 2048 tokens long. All sequences are 2048 tokens long. This dataset was used in \"_TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese_\".\n\nFor more information, see the original dataset card.",
"## Languages\n\nPortuguese.",
"## Dataset Structure",
"### Data Instances\n\nThe dataset consists of the following features:\n\n- input_ids: sequence of tokens.\n- attention_mask: binary tensor indicating the position of the padded indices.\n- labels: sequence of tokens.",
"### Data Fields",
"### Data Splits\n\nAvailable splits are 'train' (~ 1.8M) and 'test' (18K).",
"## Additional Information",
"### Dataset Curators\n\nNicholas Kluge Corrêa.",
"### Contributions\n\nIf you would like to contribute, contact me at nicholas@URL!"
] | [
"TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Portuguese #license-other #portuguese #language-modeling #arxiv-2401.16640 #region-us \n",
"# Portuguese-Corpus Instruct (tokenized small)",
"## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Citation Information\n - Contributions",
"## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese\n- Point of Contact: AIRES at PUCRS",
"### Dataset Summary\n\nThis repository has a tokenized version (using the TeenyTinyLlama tokenizer) of a small subset (3.7B tokens) of the Pt-Corpus Instruct dataset. All sequences are 2048 tokens long. All sequences are 2048 tokens long. This dataset was used in \"_TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese_\".\n\nFor more information, see the original dataset card.",
"## Languages\n\nPortuguese.",
"## Dataset Structure",
"### Data Instances\n\nThe dataset consists of the following features:\n\n- input_ids: sequence of tokens.\n- attention_mask: binary tensor indicating the position of the padded indices.\n- labels: sequence of tokens.",
"### Data Fields",
"### Data Splits\n\nAvailable splits are 'train' (~ 1.8M) and 'test' (18K).",
"## Additional Information",
"### Dataset Curators\n\nNicholas Kluge Corrêa.",
"### Contributions\n\nIf you would like to contribute, contact me at nicholas@URL!"
] | [
58,
15,
59,
50,
119,
7,
6,
60,
5,
25,
5,
13,
20
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-Portuguese #license-other #portuguese #language-modeling #arxiv-2401.16640 #region-us \n# Portuguese-Corpus Instruct (tokenized small)## Table of Contents\n\n- Table of Contents\n- Dataset Description\n - Dataset Summary\n - Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Additional Information\n - Dataset Curators\n - Citation Information\n - Contributions## Dataset Description\n\n- Homepage: URL\n- Repository: URL\n- Paper: TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese\n- Point of Contact: AIRES at PUCRS### Dataset Summary\n\nThis repository has a tokenized version (using the TeenyTinyLlama tokenizer) of a small subset (3.7B tokens) of the Pt-Corpus Instruct dataset. All sequences are 2048 tokens long. All sequences are 2048 tokens long. This dataset was used in \"_TeenyTinyLlama: open-source tiny language models trained in Brazilian Portuguese_\".\n\nFor more information, see the original dataset card.## Languages\n\nPortuguese.## Dataset Structure### Data Instances\n\nThe dataset consists of the following features:\n\n- input_ids: sequence of tokens.\n- attention_mask: binary tensor indicating the position of the padded indices.\n- labels: sequence of tokens.### Data Fields### Data Splits\n\nAvailable splits are 'train' (~ 1.8M) and 'test' (18K).## Additional Information### Dataset Curators\n\nNicholas Kluge Corrêa.### Contributions\n\nIf you would like to contribute, contact me at nicholas@URL!"
] |
6541bdfa4b1618927d680685e7139a210153bc4b | # Dataset Card for "alpaca_format3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | harshithvh/alpaca_format3 | [
"region:us"
] | 2023-12-20T10:06:38+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 439554, "num_examples": 254}], "download_size": 81615, "dataset_size": 439554}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T10:08:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "alpaca_format3"
More Information needed | [
"# Dataset Card for \"alpaca_format3\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"alpaca_format3\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"alpaca_format3\"\n\nMore Information needed"
] |
13b86c2d474cb95f1167450371540f1b15a9ab31 |
# BEE-spoke-data/falcon-refinedweb-100k_en_med-sample
A sample from [falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb):
- more than 512 & less than 8192 llama2 tokens
- `en` only (via fasttext-langdetect)
- 100k samples | BEE-spoke-data/falcon-refinedweb-100k_en_med-sample | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"source_datasets:tiiuae/falcon-refinedweb",
"language:en",
"license:odc-by",
"region:us"
] | 2023-12-20T10:07:09+00:00 | {"language": ["en"], "license": "odc-by", "size_categories": ["10K<n<100K"], "source_datasets": "tiiuae/falcon-refinedweb", "task_categories": ["text-generation"], "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 485240640, "num_examples": 100000}], "download_size": 299772551, "dataset_size": 485240640}, {"config_name": "embeddings-text-nomic_text_v1", "features": [{"name": "text", "dtype": "string"}, {"name": "text-embedding", "sequence": "float64"}], "splits": [{"name": "train", "num_bytes": 1100040640, "num_examples": 100000}], "download_size": 802872607, "dataset_size": 1100040640}], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}, {"config_name": "embeddings-text-nomic_text_v1", "data_files": [{"split": "train", "path": "embeddings-text-nomic_text_v1/train-*"}]}]} | 2024-02-05T00:20:53+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-10K<n<100K #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us
|
# BEE-spoke-data/falcon-refinedweb-100k_en_med-sample
A sample from falcon-refinedweb:
- more than 512 & less than 8192 llama2 tokens
- 'en' only (via fasttext-langdetect)
- 100k samples | [
"# BEE-spoke-data/falcon-refinedweb-100k_en_med-sample\n\n\nA sample from falcon-refinedweb:\n\n- more than 512 & less than 8192 llama2 tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] | [
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us \n",
"# BEE-spoke-data/falcon-refinedweb-100k_en_med-sample\n\n\nA sample from falcon-refinedweb:\n\n- more than 512 & less than 8192 llama2 tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] | [
59,
68
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us \n# BEE-spoke-data/falcon-refinedweb-100k_en_med-sample\n\n\nA sample from falcon-refinedweb:\n\n- more than 512 & less than 8192 llama2 tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] |
9721f25cb1c5b36a0090b8fee95d528645fa153b | # Dataset Card for "nvidia_helpsteer_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/nvidia_helpsteer_binarized | [
"region:us"
] | 2023-12-20T10:43:53+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 106899146, "num_examples": 35331}, {"name": "validation", "num_bytes": 5537881, "num_examples": 1789}], "download_size": 23814863, "dataset_size": 112437027}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-20T10:44:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nvidia_helpsteer_binarized"
More Information needed | [
"# Dataset Card for \"nvidia_helpsteer_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nvidia_helpsteer_binarized\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nvidia_helpsteer_binarized\"\n\nMore Information needed"
] |
380a8aadee34e3e6e37649fb9e372cb0348dc68f | # Dataset Card for "astramindai_nectar_sft_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/astramindai_nectar_sft_binarized | [
"region:us"
] | 2023-12-20T10:58:07+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 217467579.87271437, "num_examples": 118080}, {"name": "test", "num_bytes": 24164906.127285615, "num_examples": 13121}], "download_size": 126996278, "dataset_size": 241632486.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-20T13:56:28+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "astramindai_nectar_sft_binarized"
More Information needed | [
"# Dataset Card for \"astramindai_nectar_sft_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"astramindai_nectar_sft_binarized\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"astramindai_nectar_sft_binarized\"\n\nMore Information needed"
] |
64dc03006442cf28b5ba2bd744cb79095e696cf8 | # Dataset Card for "h4_no_robots_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/h4_no_robots_binarized | [
"region:us"
] | 2023-12-20T11:00:11+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 12000998, "num_examples": 9500}, {"name": "test", "num_bytes": 641760, "num_examples": 500}], "download_size": 7765070, "dataset_size": 12642758}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2024-01-09T03:50:47+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "h4_no_robots_binarized"
More Information needed | [
"# Dataset Card for \"h4_no_robots_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"h4_no_robots_binarized\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"h4_no_robots_binarized\"\n\nMore Information needed"
] |
96cecd0bc8f35eec54f8760a4f0bca7a2506b0a5 | # Dataset Card for "oasst_top1_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/oasst_top1_binarized | [
"region:us"
] | 2023-12-20T11:04:08+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 22467998, "num_examples": 12947}, {"name": "test", "num_bytes": 1180656, "num_examples": 690}], "download_size": 13716246, "dataset_size": 23648654}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-20T11:04:19+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "oasst_top1_binarized"
More Information needed | [
"# Dataset Card for \"oasst_top1_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"oasst_top1_binarized\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"oasst_top1_binarized\"\n\nMore Information needed"
] |
f20bac990a25bcf20b22d9ce19a8243b03699f32 | # Dataset Card for "multiturn_with_generations"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | dvilasuero/multiturn_with_generations | [
"region:us"
] | 2023-12-20T11:22:21+00:00 | {"dataset_info": {"features": [{"name": "source", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "chosen-rating", "dtype": "float64"}, {"name": "chosen-model", "dtype": "string"}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected-rating", "dtype": "float64"}, {"name": "rejected-model", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "generation_model", "sequence": "string"}, {"name": "generation_prompt", "list": {"list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}}, {"name": "raw_generation_responses", "sequence": "string"}, {"name": "followup", "sequence": "string"}, {"name": "generations", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 1743407, "num_examples": 100}], "download_size": 804953, "dataset_size": 1743407}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T11:22:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "multiturn_with_generations"
More Information needed | [
"# Dataset Card for \"multiturn_with_generations\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"multiturn_with_generations\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"multiturn_with_generations\"\n\nMore Information needed"
] |
1257571d3ae21509282a69a0e34d85cb6d85b594 |
# TACO Dataset
<img src="https://cdn-uploads.huggingface.co/production/uploads/6335113375bed9932474315e/rMxdXcC56S3FEh37oRa2s.png" width="200" height="200">
[TACO](https://github.com/FlagOpen/TACO) is a benchmark for code generation with 26443 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications.
## Dataset Description
- **Repository:** https://github.com/FlagOpen/TACO/
- **Paper:** [TACO: Topics in Algorithmic COde generation dataset](https://arxiv.org/abs/2312.14852)
- **Leaderboard:** [Code Generation on TACO](https://paperswithcode.com/sota/code-generation-on-taco-code)
- **Point of Contact:** [Bo-Wen Zhang](mailto:[email protected])
## Languages
The dataset contains questions in English and code solutions in Python.
## Dataset Structure
```python
from datasets import load_dataset
load_dataset("BAAI/TACO")
DatasetDict({
train: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 25443
})
test: Dataset({
features: ['question', 'solutions', 'starter_code', 'input_output', 'difficulty', 'raw_tags', 'name', 'source', 'tags', 'skill_types', 'url', 'Expected Auxiliary Space', 'time_limit', 'date', 'picture_num', 'memory_limit', 'Expected Time Complexity'],
num_rows: 1000
})
})
```
### How to use it
You can load and iterate through the dataset with the following two lines of code for the train split:
```python
from datasets import load_dataset
import json
ds = load_dataset("BAAI/TACO", split="train")
sample = next(iter(ds))
# non-empty solutions and input_output features can be parsed from text format this way:
sample["solutions"] = json.loads(sample["solutions"])
sample["input_output"] = json.loads(sample["input_output"])
sample["raw_tags"] = eval(sample["raw_tags"])
sample["tags"] = eval(sample["tags"])
sample["skill_types"] = eval(sample["skill_types"])
print(sample)
#OUTPUT:
{
"question": "You have a deck of $n$ cards, and you'd like to reorder it to a new one.\n\nEach card has a value between $1$ and $n$ equal to $p_i$. ...",
"solutions": [
"import heapq\nfrom math import sqrt\nimport operator\nimport sys\ninf_var = 0\nif inf_var == 1:\n\tinf = open('input.txt', 'r')\nelse:\n\tinf = sys.stdin\n ...",
"t = int(input())\nfor _ in range(t):\n\tn = int(input())\n\tp = list(map(int, input().split()))\n\tans = []\n\tp1 = [-1] * (n + 1)\n\tfor i in range(n):\n\t\tp1[p[i]] = i\n\ti = n\n\twhile i:\n\t\twhile i > 0 and p1[i] == -1:\n\t\t\ti -= 1\n\t\telse:\n\t\t\tif i:\n\t\t\t\tk = 0\n\t\t\t\tfor j in range(p1[i], n):\n\t\t\t\t\tans.append(p[j])\n\t\t\t\t\tp1[p[j]] = -1\n\t\t\t\t\tk += 1\n\t\t\t\tn -= k\n\t\t\t\ti -= 1\n\t\t\telse:\n\t\t\t\tbreak\n\tprint(*ans)\n",
"import sys\n\ndef get_ints():\n\treturn map(int, sys.stdin.readline().strip().split())\n\ndef get_list():\n\treturn list(map(int, sys.stdin.readline().strip().split()))\n\ndef get_list_string():\n\treturn list(map(str, sys.stdin.readline().strip().split()))\n\ndef get_string():\n\treturn sys.stdin.readline().strip()\n\ndef get_int():\n\treturn int(sys.stdin.readline().strip())\n\ndef get_print_int(x):\n\tsys.stdout.write(str(x) + '\\n')\n\ndef get_print(x):\n\tsys.stdout.write(x + '\\n')\n\ndef get_print_int_same(x):\n\tsys.stdout.write(str(x) + ' ')\n\ndef get_print_same(x):\n\tsys.stdout.write(x + ' ')\nfrom sys import maxsize\n\ndef solve():\n\tfor _ in range(get_int()):\n\t\tn = get_int()\n\t\tarr = get_list()\n\t\ti = n - 1\n\t\tj = n - 1\n\t\ttemp = sorted(arr)\n\t\tvis = [False] * n\n\t\tans = []\n\t\twhile j >= 0:\n\t\t\tt = j\n\t\t\ttt = []\n\t\t\twhile t >= 0 and arr[t] != temp[i]:\n\t\t\t\tvis[arr[t] - 1] = True\n\t\t\t\ttt.append(arr[t])\n\t\t\t\tt -= 1\n\t\t\tvis[arr[t] - 1] = True\n\t\t\ttt.append(arr[t])\n\t\t\ttt = tt[::-1]\n\t\t\tfor k in tt:\n\t\t\t\tans.append(k)\n\t\t\tj = t - 1\n\t\t\twhile i >= 0 and vis[i]:\n\t\t\t\ti -= 1\n\t\tget_print(' '.join(map(str, ans)))\nsolve()\n",
...
],
"starter_code": "",
"input_output": {
"inputs": [
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n",
"4\n4\n2 1 3 4\n5\n1 5 2 4 3\n6\n2 4 5 3 6 1\n1\n1\n",
"4\n4\n1 2 3 4\n5\n1 5 2 4 3\n6\n4 2 5 3 6 1\n1\n1\n"
],
"outputs": [
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n",
"\n4 3 2 1\n5 2 4 3 1\n6 1 5 3 4 2\n1\n"
]
},
"difficulty": "EASY",
"raw_tags": [
"data structures",
"greedy",
"math"
],
"name": null,
"source": "codeforces",
"tags": [
"Data structures",
"Mathematics",
"Greedy algorithms"
],
"skill_types": [
"Data structures",
"Greedy algorithms"
],
"url": "https://codeforces.com/problemset/problem/1492/B",
"Expected Auxiliary Space": null,
"time_limit": "1 second",
"date": "2021-02-23",
"picture_num": "0",
"memory_limit": "512 megabytes",
"Expected Time Complexity": null
}
```
Each sample consists of a programming problem formulation in English, some ground-truth Python solutions, test cases defined by their inputs and outputs (and the function name, if provided), and metadata covering the difficulty level (difficulty), the topics of the task (raw_tags), the algorithms involved (tags), the required programming skill types (skill_types), and the source of the problem.
If a sample has a non-empty `input_output` feature, you can read it as a dictionary with keys `inputs` and `outputs` (and `fn_name`, if it exists), and similarly you can parse the solutions into a list of solutions as shown in the code above.
You can also filter the dataset by difficulty level: EASY, MEDIUM, MEDIUM_HARD, HARD, and VERY_HARD, or by programming skill type: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, and Sorting. Just pass the desired difficulties or skills as a list. E.g., if you want the most challenging problems, select the VERY_HARD level:
```python
ds = load_dataset("BAAI/TACO", split="train", difficulties=["VERY_HARD"])
print(next(iter(ds))["question"])
```
```
#OUTPUT:
"""Let S(n) denote the number that represents the digits of n in sorted order. For example, S(1) = 1, S(5) = 5, S(50394) = 3459, S(353535) = 333555.
Given a number X, compute <image> modulo 10^9 + 7.
Input
The first line of input will contain the integer X (1 ≤ X ≤ 10^700).
Output
Print a single integer, the answer to the question.
Examples
Input
21
Output
195
Input
345342
Output
390548434
Note
The first few values of S are 1, 2, 3, 4, 5, 6, 7, 8, 9, 1, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 12. The sum of these values is 195.
```
Or if you want problems involving Range queries and Sorting, you need to select the skills Range queries and Sorting:
```python
ds = load_dataset("BAAI/TACO", split="train", skills=["Range queries", "Sorting"])
```
### Data Fields
|Field|Type|Description|
|---|---|---|
|question|string|problem description|
|solutions|string|some python solutions|
|input_output|string|JSON string with "inputs" and "outputs" of the test cases; may also include "fn_name", the name of the function|
|difficulty|string|difficulty level of the problem|
|picture_num|string|the number of pictures in the problem|
|source|string|the source of the problem|
|url|string|url of the source of the problem|
|date|string|the date of the problem|
|starter_code|string|starter code to include in prompts|
|time_limit|string|the time consumption limit to solve the problem|
|memory_limit|string|the memory consumption limit to solve the problem|
|Expected Auxiliary Space|string|the extra auxiliary space expected to solve the problem|
|Expected Time Complexity|string|the time complexity expected to solve the problem|
|raw_tags|string|the topics of the programming task|
|tags|string|the manually annotated algorithms needed to solve the problem|
|skill_types|string|the mapped programming skill types to solve the problem|
### Data Splits
The dataset contains a train split with 25443 samples and a test split with 1000 samples.
### Dataset Statistics
* 26443 coding problems
* 1.55M verified solutions
* for the test split, the average number of test cases is 202.3 (see the sketch after this list)
* every problem in the test split has ground-truth solutions
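The average-test-case figure can be re-derived with a short check. A minimal sketch, assuming an empty `input_output` string counts as zero test cases:

```python
import json
from datasets import load_dataset

ds = load_dataset("BAAI/TACO", split="test")

counts = []
for sample in ds:
    # input_output is stored as a JSON string; parse it as shown earlier in this card.
    io = json.loads(sample["input_output"]) if sample["input_output"] else {}
    counts.append(len(io.get("inputs", [])))

print(sum(counts) / len(counts))  # should land near the reported 202.3
```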
## Dataset Creation
To create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu,
AtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Kattis, and LeetCode. For more details, please refer to the original paper.
## License
The TACO dataset that is authored by BAAI, Shandong Normal University and Peking University is released under an [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0). However, the data also includes content licensed under other permissive licenses such as MIT License, or web-crawled data which is used under the terms of the CC BY 4.0 license ([Creative Commons Attribution 4.0 International license](https://creativecommons.org/licenses/by/4.0/legalcode)).
We gratefully acknowledge the contributions of the following:
* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (https://github.com/hendrycks/apps)
* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (https://github.com/google-deepmind/code_contests)
* Codeforces materials are sourced from http://codeforces.com.
* CodeChef materials are sourced from https://www.codechef.com.
* GeeksforGeeks materials are sourced from https://www.geeksforgeeks.org
* HackerEarth materials are curated from:
[Description2Code Dataset](https://github.com/ethancaballero/description2code),
licensed under the
[MIT open source license](https://opensource.org/licenses/MIT), copyright
not specified.
* HackerRank materials are sourced from https://www.hackerrank.com. We do not know the legal rights or data licenses of the HackerRank materials. Please contact us if there is an applicable data license.
## Citation Information
If you find our data or code helpful, please cite [the original paper](https://arxiv.org/abs/2312.14852):
```
@article{li2023taco,
title={TACO: Topics in Algorithmic COde generation dataset},
author={Rongao Li and Jie Fu and Bo-Wen Zhang and Tao Huang and Zhihong Sun and Chen Lyu and Guang Liu and Zhi Jin and Ge Li},
journal={arXiv preprint arXiv:2312.14852},
year={2023}
}
``` | BAAI/TACO | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language_creators:crowdsourced",
"language_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:code",
"license:apache-2.0",
"code",
"arxiv:2312.14852",
"region:us"
] | 2023-12-20T11:27:47+00:00 | {"annotations_creators": [], "language_creators": ["crowdsourced", "expert-generated"], "language": ["code"], "license": "apache-2.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": [], "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "paperswithcode_id": "taco-topics-in-algorithmic-code-generation", "pretty_name": "TACO", "tags": ["code"], "dataset_info": {"config_name": "ALL", "features": [{"name": "question", "dtype": "string"}, {"name": "solutions", "dtype": "string"}, {"name": "starter_code", "dtype": "string"}, {"name": "input_output", "dtype": "string"}, {"name": "difficulty", "dtype": "string"}, {"name": "raw_tags", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "tags", "dtype": "string"}, {"name": "skill_types", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "Expected Auxiliary Space", "dtype": "string"}, {"name": "time_limit", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "picture_num", "dtype": "string"}, {"name": "memory_limit", "dtype": "string"}, {"name": "Expected Time Complexity", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4239311973, "num_examples": 25443}, {"name": "test", "num_bytes": 481480755, "num_examples": 1000}], "download_size": 2419844942, "dataset_size": 4720792728}, "configs": [{"config_name": "ALL", "data_files": [{"split": "train", "path": "ALL/train-*"}, {"split": "test", "path": "ALL/test-*"}]}]} | 2024-01-15T04:13:20+00:00 | [
"2312.14852"
] | [
"code"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-code #license-apache-2.0 #code #arxiv-2312.14852 #region-us
| TACO Dataset
============
<img src="URL width="200" height="200">
TACO is a benchmark for code generation with 26443 problems. It can be used to evaluate the ability of language models to generate code from natural language specifications.
Dataset Description
-------------------
* Repository: URL
* Paper: TACO: Topics in Algorithmic COde generation dataset
* Leaderboard: Code Generation on TACO
* Point of Contact: Bo-Wen Zhang
Languages
---------
The dataset contains questions in English and code solutions in Python.
Dataset Structure
-----------------
### How to use it
You can load and iterate through the dataset with the following two lines of code for the train split:
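A minimal version of that snippet, mirroring the code shown in the full card above:

```python
from datasets import load_dataset
import json

ds = load_dataset("BAAI/TACO", split="train")
sample = next(iter(ds))

# Non-empty solutions and input_output features are stored as JSON strings.
sample["solutions"] = json.loads(sample["solutions"])
sample["input_output"] = json.loads(sample["input_output"])
```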
Each sample consists of a programming problem formulation in English, some ground-truth Python solutions, test cases defined by their inputs and outputs (and the function name, if provided), and metadata covering the difficulty level (difficulty), the topics of the task (raw\_tags), the algorithms involved (tags), the required programming skill types (skill\_types), and the source of the problem.
If a sample has a non-empty 'input\_output' feature, you can read it as a dictionary with keys 'inputs' and 'outputs' (and 'fn\_name', if it exists), and similarly you can parse the solutions into a list of solutions as shown in the code above.
You can also filter the dataset by difficulty level: EASY, MEDIUM, MEDIUM\_HARD, HARD, and VERY\_HARD, or by programming skill type: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, and Sorting. Just pass the desired difficulties or skills as a list. E.g., if you want the most challenging problems, select the VERY\_HARD level:
Or if you want problems involving Range queries and Sorting, you need to select the skills Range queries and Sorting:
### Data Fields
Field: question, Type: string, Description: problem description
Field: solutions, Type: string, Description: some python solutions
Field: input\_output, Type: string, Description: JSON string with "inputs" and "outputs" of the test cases; may also include "fn\_name", the name of the function
Field: difficulty, Type: string, Description: difficulty level of the problem
Field: picture\_num, Type: string, Description: the number of pictures in the problem
Field: source, Type: string, Description: the source of the problem
Field: url, Type: string, Description: url of the source of the problem
Field: date, Type: string, Description: the date of the problem
Field: starter\_code, Type: string, Description: starter code to include in prompts
Field: time\_limit, Type: string, Description: the time consumption limit to solve the problem
Field: memory\_limit, Type: string, Description: the memory consumption limit to solve the problem
Field: Expected Auxiliary Space, Type: string, Description: the extra auxiliary space expected to solve the problem
Field: Expected Time Complexity, Type: string, Description: the time complexity expected to solve the problem
Field: raw\_tags, Type: string, Description: the topics of the programming task
Field: tags, Type: string, Description: the manually annotated algorithms needed to solve the problem
Field: skill\_types, Type: string, Description: the mapped programming skill types to solve the problem
### Data Splits
The dataset contains a train split with 25443 samples and a test split with 1000 samples.
### Dataset Statistics
* 26443 coding problems
* 1.55M verified solutions
* for the test split, the average number of test cases is 202.3
* every problem in the test split has ground-truth solutions
Dataset Creation
----------------
To create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu,
AtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Kattis, and LeetCode. For more details, please refer to the original paper.
License
-------
The TACO dataset that is authored by BAAI, Shandong Normal University and Peking University is released under an Apache 2.0 License. However, the data also includes content licensed under other permissive licenses such as MIT License, or web-crawled data which is used under the terms of the CC BY 4.0 license (Creative Commons Attribution 4.0 International license).
We gratefully acknowledge the contributions of the following:
* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (URL
* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (URL
* Codeforces materials are sourced from URL.
* CodeChef materials are sourced from URL.
* GeeksforGeeks materials are sourced from URL
* HackerEarth materials are curated from:
Description2Code Dataset,
licensed under the
MIT open source license, copyright
not specified.
* HackerRank materials are sourced from URL. We do not know the legal rights or data licenses of the HackerRank materials. Please contact us if there is an applicable data license.
If you find our data or code helpful, please cite the original paper:
| [
"### How to use it\n\n\nYou can load and iterate through the dataset with the following two lines of code for the train split:\n\n\nEach sample consists of a programming problem formulation in English, some ground truth Python solutions, test cases that are defined by their inputs and outputs and function name if provided, as well as some metadata regarding the difficulty level (difficulty), topics of task (raw tags), algorithms (tags) as well as required programming skill types (skill\\_types) of the problem and its source.\n\n\nIf a sample has non empty 'input\\_output' feature, you can read it as a dictionary with keys 'inputs' and 'outputs' and 'fn\\_name' if it exists, and similarily you can parse the solutions into a list of solutions as shown in the code above.\n\n\nYou can also filter the dataset for the difficulty level: EASY, MEDIUM, MEDIUM\\_HARD, HARD and VERY\\_HARD, or filter the programming skill types: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, Sorting. Just pass the list of difficulties or skills as a list. E.g. if you want the most challenging problems, you need to select the VERY\\_HARD level:\n\n\nOr if you want the problems invovled with Range queries and Sorting, you need to select the skills Range queries and Sorting:",
"### Data Fields\n\n\nField: question, Type: string, Description: problem description\nField: solutions, Type: string, Description: some python solutions\nField: input\\_output, Type: string, Description: Json string with \"inputs\" and \"outputs\" of the test cases, might also include \"fn\\_name\" the name of the function\nField: difficulty, Type: string, Description: difficulty level of the problem\nField: picture\\_num, Type: string, Description: the number of pictures in the problem\nField: source, Type: string, Description: the source of the problem\nField: url, Type: string, Description: url of the source of the problem\nField: date, Type: string, Description: the date of the problem\nField: starter\\_code, Type: string, Description: starter code to include in prompts\nField: time\\_limit, Type: string, Description: the time consumption limit to solve the problem\nField: memory\\_limit, Type: string, Description: the memory consumption limit to solve the problem\nField: Expected Auxiliary Space, Type: string, Description: the extra auxiliary space expected to solve the problem\nField: Expected Time Complexity, Type: string, Description: the time complexity expected to solve the problem\nField: raw\\_tags, Type: string, Description: the topics of the programming task\nField: tags, Type: string, Description: the manually annoatated algorithms needed to solve the problem\nField: skill\\_types, Type: string, Description: the mapped programming skill types to solve the problem",
"### Data Splits\n\n\nThe dataset contains a train with 25443 samples and test splits with 1000 samples.",
"### Dataset Statistics\n\n\n* 26443 coding problems\n* 1.55M verified solutions\n* for tests split, the average number of test cases is 202.3\n* all files have ground-truth solutions in the test split\n\n\nDataset Creation\n----------------\n\n\nTo create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu\nAtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Katti and LeetCode. For more details please refer to the original paper.\n\n\nLicense\n-------\n\n\nThe TACO dataset that is authored by BAAI, Shandong Normal University and Peking University is released under an Apache 2.0 License. However, the data also includes content licensed under other permissive licenses such as MIT License, or web-crawled data which is used under the terms of the CC BY 4.0 license (Creative Commons Attribution 4.0 International license).\n\n\nWe gratefully acknowledge the contributions of the following:\n\n\n* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (URL\n* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (URL\n* Codeforces materials are sourced from URL.\n* CodeChef materials are sourced from URL.\n* GeekforGeeks materials are sourced from URL\n* HackerEarth materials are curated from:\nDescription2Code Dataset,\nlicensed under the\nMIT open source license, copyright\nnot specified.\n* HackerRank materials are sourced from URL. We don't know what the legal rights or data licenses of HackerRank. Please contact us if there is data license.\n\n\nIf you find our data, or code helpful, please cite the original paper:"
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-code #license-apache-2.0 #code #arxiv-2312.14852 #region-us \n",
"### How to use it\n\n\nYou can load and iterate through the dataset with the following two lines of code for the train split:\n\n\nEach sample consists of a programming problem formulation in English, some ground truth Python solutions, test cases that are defined by their inputs and outputs and function name if provided, as well as some metadata regarding the difficulty level (difficulty), topics of task (raw tags), algorithms (tags) as well as required programming skill types (skill\\_types) of the problem and its source.\n\n\nIf a sample has non empty 'input\\_output' feature, you can read it as a dictionary with keys 'inputs' and 'outputs' and 'fn\\_name' if it exists, and similarily you can parse the solutions into a list of solutions as shown in the code above.\n\n\nYou can also filter the dataset for the difficulty level: EASY, MEDIUM, MEDIUM\\_HARD, HARD and VERY\\_HARD, or filter the programming skill types: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, Sorting. Just pass the list of difficulties or skills as a list. E.g. if you want the most challenging problems, you need to select the VERY\\_HARD level:\n\n\nOr if you want the problems invovled with Range queries and Sorting, you need to select the skills Range queries and Sorting:",
"### Data Fields\n\n\nField: question, Type: string, Description: problem description\nField: solutions, Type: string, Description: some python solutions\nField: input\\_output, Type: string, Description: Json string with \"inputs\" and \"outputs\" of the test cases, might also include \"fn\\_name\" the name of the function\nField: difficulty, Type: string, Description: difficulty level of the problem\nField: picture\\_num, Type: string, Description: the number of pictures in the problem\nField: source, Type: string, Description: the source of the problem\nField: url, Type: string, Description: url of the source of the problem\nField: date, Type: string, Description: the date of the problem\nField: starter\\_code, Type: string, Description: starter code to include in prompts\nField: time\\_limit, Type: string, Description: the time consumption limit to solve the problem\nField: memory\\_limit, Type: string, Description: the memory consumption limit to solve the problem\nField: Expected Auxiliary Space, Type: string, Description: the extra auxiliary space expected to solve the problem\nField: Expected Time Complexity, Type: string, Description: the time complexity expected to solve the problem\nField: raw\\_tags, Type: string, Description: the topics of the programming task\nField: tags, Type: string, Description: the manually annoatated algorithms needed to solve the problem\nField: skill\\_types, Type: string, Description: the mapped programming skill types to solve the problem",
"### Data Splits\n\n\nThe dataset contains a train with 25443 samples and test splits with 1000 samples.",
"### Dataset Statistics\n\n\n* 26443 coding problems\n* 1.55M verified solutions\n* for tests split, the average number of test cases is 202.3\n* all files have ground-truth solutions in the test split\n\n\nDataset Creation\n----------------\n\n\nTo create the TACO dataset, the authors manually curated problems from open-access sites where programmers share problems with each other, including Aizu\nAtCoder, CodeChef, Codeforces, CodeWars, GeeksforGeeks, HackerEarth, HackerRank, Katti and LeetCode. For more details please refer to the original paper.\n\n\nLicense\n-------\n\n\nThe TACO dataset that is authored by BAAI, Shandong Normal University and Peking University is released under an Apache 2.0 License. However, the data also includes content licensed under other permissive licenses such as MIT License, or web-crawled data which is used under the terms of the CC BY 4.0 license (Creative Commons Attribution 4.0 International license).\n\n\nWe gratefully acknowledge the contributions of the following:\n\n\n* some AtCoder, Codeforces, CodeWars, Kattis, LeetCode material curated from APPS dataset (URL\n* some Aizu, AtCoder, CodeChef, Codeforces material curated from CodeContest dataset (URL\n* Codeforces materials are sourced from URL.\n* CodeChef materials are sourced from URL.\n* GeekforGeeks materials are sourced from URL\n* HackerEarth materials are curated from:\nDescription2Code Dataset,\nlicensed under the\nMIT open source license, copyright\nnot specified.\n* HackerRank materials are sourced from URL. We don't know what the legal rights or data licenses of HackerRank. Please contact us if there is data license.\n\n\nIf you find our data, or code helpful, please cite the original paper:"
] | [
92,
339,
351,
26,
399
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #language_creators-crowdsourced #language_creators-expert-generated #multilinguality-monolingual #size_categories-10K<n<100K #language-code #license-apache-2.0 #code #arxiv-2312.14852 #region-us \n### How to use it\n\n\nYou can load and iterate through the dataset with the following two lines of code for the train split:\n\n\nEach sample consists of a programming problem formulation in English, some ground truth Python solutions, test cases that are defined by their inputs and outputs and function name if provided, as well as some metadata regarding the difficulty level (difficulty), topics of task (raw tags), algorithms (tags) as well as required programming skill types (skill\\_types) of the problem and its source.\n\n\nIf a sample has non empty 'input\\_output' feature, you can read it as a dictionary with keys 'inputs' and 'outputs' and 'fn\\_name' if it exists, and similarily you can parse the solutions into a list of solutions as shown in the code above.\n\n\nYou can also filter the dataset for the difficulty level: EASY, MEDIUM, MEDIUM\\_HARD, HARD and VERY\\_HARD, or filter the programming skill types: Amortized analysis, Bit manipulation, Complete search, Data structures, Dynamic programming, Greedy algorithms, Range queries, Sorting. Just pass the list of difficulties or skills as a list. E.g. if you want the most challenging problems, you need to select the VERY\\_HARD level:\n\n\nOr if you want the problems invovled with Range queries and Sorting, you need to select the skills Range queries and Sorting:"
] |
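
The TACO card excerpt above walks through loading the dataset, filtering by difficulty level or skill type, and parsing the `solutions` and `input_output` fields, but the referenced code snippets are not present in this excerpt. The following is a minimal reconstruction of that flow; the repository id `BAAI/TACO` and the `difficulties`/`skills` keyword arguments are assumptions inferred from the card text rather than a verified API.

```python
# Hypothetical sketch of the loading/filtering flow described in the TACO card above.
# The dataset id and the `difficulties`/`skills` kwargs are assumptions from the card text.
import json
from datasets import load_dataset

# Load the train split and iterate through it.
taco = load_dataset("BAAI/TACO", split="train")

# Keep only the most challenging problems.
very_hard = load_dataset("BAAI/TACO", split="train", difficulties=["VERY_HARD"])

# Keep problems involving Range queries and Sorting.
rq_sorting = load_dataset("BAAI/TACO", split="train", skills=["Range queries", "Sorting"])

# Parse the JSON-encoded fields of one sample.
sample = taco[0]
solutions = json.loads(sample["solutions"])      # list of Python solutions
if sample["input_output"]:
    tests = json.loads(sample["input_output"])   # dict with "inputs", "outputs" and optionally "fn_name"
```
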
44a4f4766b68f7e3db6e316680d54dd1bff628bf | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | haris001/jsoncodes | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:en",
"license:apache-2.0",
"code",
"region:us"
] | 2023-12-20T12:09:57+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["question-answering"], "pretty_name": "j", "tags": ["code"]} | 2023-12-20T12:30:39+00:00 | [] | [
"en"
] | TAGS
#task_categories-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #code #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #code #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
42,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-English #license-apache-2.0 #code #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
9ff952bb5e1099dc4705173ad57aabb4ad885137 |
# Dataset Card for NTX v1 in the Aya format
This dataset is a format conversion from its original v1 format into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license and conditions.
It contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.
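
For orientation, a minimal loading sketch is shown below; the split and column names are not documented in this card, so they are inspected at runtime rather than assumed.

```
from datasets import load_dataset

# Load the instruction-formatted NTX data and look at one example.
ntx = load_dataset("tellarin-ai/ntx_llm_instructions")
print(ntx)                      # available splits and their sizes
first_split = next(iter(ntx))
print(ntx[first_split][0])      # one instruction-formatted example
```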
## Citation
If you utilize this dataset version, feel free to cite/footnote this Hugging Face dataset repo, but please also cite the original dataset publication.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Dataset Details
For the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: https://arxiv.org/abs/2303.18103.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
## Format Conversion Details
The templates used to reformat the dataset are in the ./templates-ntx directory. | tellarin-ai/ntx_llm_instructions | [
"task_categories:token-classification",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:hi",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pt",
"language:sv",
"language:tr",
"language:zh",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T12:12:39+00:00 | {"language": ["ar", "de", "en", "es", "fr", "hi", "it", "ja", "ko", "nl", "pt", "sv", "tr", "zh"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T14:58:24+00:00 | [
"2303.18103"
] | [
"ar",
"de",
"en",
"es",
"fr",
"hi",
"it",
"ja",
"ko",
"nl",
"pt",
"sv",
"tr",
"zh"
] | TAGS
#task_categories-token-classification #language-Arabic #language-German #language-English #language-Spanish #language-French #language-Hindi #language-Italian #language-Japanese #language-Korean #language-Dutch #language-Portuguese #language-Swedish #language-Turkish #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format
This dataset is a format conversion from its original v1 format into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license and conditions.
It contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.
If you utilize this dataset version, feel free to cite/footnote this huggingface dataset repo, but please also cite the original dataset publication.
BibTeX:
## Dataset Details
For the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: URL
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
## Format Conversion Details
The templates used to reformat the dataset are in the ./templates-ntx directory. | [
"# Dataset Card for NTX v1 in the Aya format\n\nThis dataset is a format conversion from its original v1 format into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license and conditions.\n\nIt contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.\n\nIf you utilize this dataset version, feel free to cite/footnote this huggingface dataset repo, but please also cite the original dataset publication.\n\nBibTeX:",
"## Dataset Details\n\nFor the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: URL\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.",
"## Format Conversion Details\n\nThe templates used to reformat the dataset are in the ./templates-ntx directory."
] | [
"TAGS\n#task_categories-token-classification #language-Arabic #language-German #language-English #language-Spanish #language-French #language-Hindi #language-Italian #language-Japanese #language-Korean #language-Dutch #language-Portuguese #language-Swedish #language-Turkish #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format\n\nThis dataset is a format conversion from its original v1 format into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license and conditions.\n\nIt contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.\n\nIf you utilize this dataset version, feel free to cite/footnote this huggingface dataset repo, but please also cite the original dataset publication.\n\nBibTeX:",
"## Dataset Details\n\nFor the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: URL\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.",
"## Format Conversion Details\n\nThe templates used to reformat the dataset are in the ./templates-ntx directory."
] | [
110,
115,
64,
30
] | [
"passage: TAGS\n#task_categories-token-classification #language-Arabic #language-German #language-English #language-Spanish #language-French #language-Hindi #language-Italian #language-Japanese #language-Korean #language-Dutch #language-Portuguese #language-Swedish #language-Turkish #language-Chinese #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format\n\nThis dataset is a format conversion from its original v1 format into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license and conditions.\n\nIt contains data in multiple languages and this version is intended for multi-lingual LLM construction/tuning.\n\nIf you utilize this dataset version, feel free to cite/footnote this huggingface dataset repo, but please also cite the original dataset publication.\n\nBibTeX:## Dataset Details\n\nFor the original NTX dataset for information extraction of numerical and temporal expressions and more details, please check the arXiv paper: URL\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.## Format Conversion Details\n\nThe templates used to reformat the dataset are in the ./templates-ntx directory."
] |
0950979108b301f907ded78b34ebba29f5ca4d10 |
# IOS App Icons
## Overview
This dataset contains images and captions of iOS app icons obtained from the iOS Icon Gallery. Each image is paired with a generated caption using a Blip Image Captioning model. The dataset is suitable for image captioning tasks and can be used to train and evaluate models for generating captions for iOS app icons.
## Images
The images are stored in the 'images' directory, and each image is uniquely identified with a filename (e.g., 'image_0.png'). The images have a resolution of 512x512 pixels.
## Data Format
The dataset is provided in the Hugging Face datasets format, with each sample containing the following information:
- `image_path`: Local file path to the image.
- `caption`: Generated caption for the corresponding image.
## Usage
You can use this dataset for training, fine-tuning, and evaluating image captioning models. The captions can be leveraged for tasks such as generating natural language descriptions for iOS app icons.
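
As a starting point, here is a minimal sketch for loading the dataset and inspecting one icon–caption pair; the column names are taken from the description above and may need adjusting to the hosted schema.

```python
from datasets import load_dataset

# Load the icon/caption pairs and look at one sample.
icons = load_dataset("ppierzc/ios-app-icons", split="train")
sample = icons[0]
print(sample.keys())      # check which columns are actually present
print(sample["caption"])  # the generated caption for this icon
```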
## Acknowledgments
- iOS Icon Gallery: [https://www.iosicongallery.com](https://www.iosicongallery.com)
- Blip Image Captioning model: [Salesforce/blip-image-captioning-large](https://huggingface.co/Salesforce/blip-image-captioning-large)
## License
This dataset is released under the [Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0). Please review the license for details. | ppierzc/ios-app-icons | [
"license:openrail",
"image-captioning",
"ios-icons",
"region:us"
] | 2023-12-20T12:46:16+00:00 | {"license": "openrail", "id": "ios-app-icons", "title": "IOS App Icons", "description": "This dataset contains images and captions of iOS app icons collected from the iOS Icon Gallery. The images have been processed using a Blip Image Captioning model to generate captions.\n", "tasks": ["image-captioning"], "tags": ["image-captioning", "ios-icons"], "created": "December 20, 2023", "citation": "Author, A. et al. (2023). Your Dataset Name. [Hugging Face Datasets](https://huggingface.co/datasets/your_dataset_name).", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "caption", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 367958490.476, "num_examples": 1819}, {"name": "test", "num_bytes": 24842350.0, "num_examples": 100}], "download_size": 338140473, "dataset_size": 392800840.476}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-21T11:46:46+00:00 | [] | [] | TAGS
#license-openrail #image-captioning #ios-icons #region-us
|
# IOS App Icons
## Overview
This dataset contains images and captions of iOS app icons obtained from the iOS Icon Gallery. Each image is paired with a generated caption using a Blip Image Captioning model. The dataset is suitable for image captioning tasks and can be used to train and evaluate models for generating captions for iOS app icons.
## Images
The images are stored in the 'images' directory, and each image is uniquely identified with a filename (e.g., 'image_0.png'). The images have a resolution of 512x512 pixels.
## Data Format
The dataset is provided in the Hugging Face datasets format, with each sample containing the following information:
- 'image_path': Local file path to the image.
- 'caption': Generated caption for the corresponding image.
## Usage
You can use this dataset for training, fine-tuning, and evaluating image captioning models. The captions can be leveraged for tasks such as generating natural language descriptions for iOS app icons.
## Acknowledgments
- iOS Icon Gallery: URL
- Blip Image Captioning model: Salesforce/blip-image-captioning-large
## License
This dataset is released under the Apache-2.0 License. Please review the license for details. | [
"# IOS App Icons",
"## Overview\n\nThis dataset contains images and captions of iOS app icons obtained from the iOS Icon Gallery. Each image is paired with a generated caption using a Blip Image Captioning model. The dataset is suitable for image captioning tasks and can be used to train and evaluate models for generating captions for iOS app icons.",
"## Images\n\nThe images are stored in the 'images' directory, and each image is uniquely identified with a filename (e.g., 'image_0.png'). The images have a resolution of 512x512 pixels.",
"## Data Format\n\nThe dataset is provided in the Hugging Face datasets format, with each sample containing the following information:\n\n- 'image_path': Local file path to the image.\n- 'caption': Generated caption for the corresponding image.",
"## Usage\n\nYou can use this dataset for training, fine-tuning, and evaluating image captioning models. The captions can be leveraged for tasks such as generating natural language descriptions for iOS app icons.",
"## Acknowledgments\n\n- iOS Icon Gallery: URL\n- Blip Image Captioning model: Salesforce/blip-image-captioning-large",
"## License\n\nThis dataset is released under the Apache-2.0 License. Please review the license for details."
] | [
"TAGS\n#license-openrail #image-captioning #ios-icons #region-us \n",
"# IOS App Icons",
"## Overview\n\nThis dataset contains images and captions of iOS app icons obtained from the iOS Icon Gallery. Each image is paired with a generated caption using a Blip Image Captioning model. The dataset is suitable for image captioning tasks and can be used to train and evaluate models for generating captions for iOS app icons.",
"## Images\n\nThe images are stored in the 'images' directory, and each image is uniquely identified with a filename (e.g., 'image_0.png'). The images have a resolution of 512x512 pixels.",
"## Data Format\n\nThe dataset is provided in the Hugging Face datasets format, with each sample containing the following information:\n\n- 'image_path': Local file path to the image.\n- 'caption': Generated caption for the corresponding image.",
"## Usage\n\nYou can use this dataset for training, fine-tuning, and evaluating image captioning models. The captions can be leveraged for tasks such as generating natural language descriptions for iOS app icons.",
"## Acknowledgments\n\n- iOS Icon Gallery: URL\n- Blip Image Captioning model: Salesforce/blip-image-captioning-large",
"## License\n\nThis dataset is released under the Apache-2.0 License. Please review the license for details."
] | [
23,
6,
78,
54,
56,
50,
36,
22
] | [
"passage: TAGS\n#license-openrail #image-captioning #ios-icons #region-us \n# IOS App Icons## Overview\n\nThis dataset contains images and captions of iOS app icons obtained from the iOS Icon Gallery. Each image is paired with a generated caption using a Blip Image Captioning model. The dataset is suitable for image captioning tasks and can be used to train and evaluate models for generating captions for iOS app icons.## Images\n\nThe images are stored in the 'images' directory, and each image is uniquely identified with a filename (e.g., 'image_0.png'). The images have a resolution of 512x512 pixels.## Data Format\n\nThe dataset is provided in the Hugging Face datasets format, with each sample containing the following information:\n\n- 'image_path': Local file path to the image.\n- 'caption': Generated caption for the corresponding image.## Usage\n\nYou can use this dataset for training, fine-tuning, and evaluating image captioning models. The captions can be leveraged for tasks such as generating natural language descriptions for iOS app icons.## Acknowledgments\n\n- iOS Icon Gallery: URL\n- Blip Image Captioning model: Salesforce/blip-image-captioning-large## License\n\nThis dataset is released under the Apache-2.0 License. Please review the license for details."
] |
49272bca87d93431e9da11d7854b222bc9cd49a1 | # Dataset Card for "guanaco-llama2-200"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | juosilva/guanaco-llama2-200 | [
"region:us"
] | 2023-12-20T12:59:36+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 338808, "num_examples": 200}], "download_size": 201258, "dataset_size": 338808}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T12:59:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "guanaco-llama2-200"
More Information needed | [
"# Dataset Card for \"guanaco-llama2-200\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"guanaco-llama2-200\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"guanaco-llama2-200\"\n\nMore Information needed"
] |
7b3b5bc3a107c4d7586d4f3dc86fb723509d2bd0 | # Dataset Card for "common_voice_15_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | macabdul9/common_voice_15_0 | [
"region:us"
] | 2023-12-20T13:21:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}, {"split": "other", "path": "data/other-*"}, {"split": "invalidated", "path": "data/invalidated-*"}]}], "dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}, {"name": "variant", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 772120424.664, "num_examples": 28406}, {"name": "validation", "num_bytes": 326586438.888, "num_examples": 10362}, {"name": "test", "num_bytes": 316959428.554, "num_examples": 10474}, {"name": "other", "num_bytes": 1232139588.177, "num_examples": 39983}, {"name": "invalidated", "num_bytes": 495453779.996, "num_examples": 15068}], "download_size": 2680425185, "dataset_size": 3143259660.279}} | 2023-12-20T13:25:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "common_voice_15_0"
More Information needed | [
"# Dataset Card for \"common_voice_15_0\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"common_voice_15_0\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"common_voice_15_0\"\n\nMore Information needed"
] |
3bfbb746cbc6751f2eae58725638e28be743927d |
# BEE-spoke-data/falcon-refinedweb-100k_en-long
A sample from [falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb):
- more than 2048 & less than 16384 gpt4 tiktoken tokens
- `en` only (via fasttext-langdetect)
- 100k samples
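
The filtering above can be reproduced roughly as sketched below. This is illustrative only, not the exact script used to build the sample; it assumes the `tiktoken` and `fasttext-langdetect` (`ftlangdetect`) packages and the `content` text column of the source dataset.

```python
import tiktoken
from datasets import load_dataset
from ftlangdetect import detect

enc = tiktoken.get_encoding("cl100k_base")  # the GPT-4 tokenizer

def keep(example):
    text = example["content"]
    n_tokens = len(enc.encode(text, disallowed_special=()))
    if not (2048 < n_tokens < 16384):
        return False
    return detect(text.replace("\n", " "))["lang"] == "en"

stream = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
subset = stream.filter(keep).take(100_000)
```
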
| BEE-spoke-data/falcon-refinedweb-100k_en-long | [
"task_categories:text-generation",
"source_datasets:tiiuae/falcon-refinedweb",
"language:en",
"license:odc-by",
"region:us"
] | 2023-12-20T13:40:35+00:00 | {"language": ["en"], "license": "odc-by", "source_datasets": "tiiuae/falcon-refinedweb", "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1748631587.0, "num_examples": 100000}], "download_size": 1035546649, "dataset_size": 1748631587.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-20T16:54:39+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us
|
# BEE-spoke-data/falcon-refinedweb-100k_en-long
A sample from falcon-refinedweb:
- more than 2048 & less than 16384 gpt4 tiktoken tokens
- 'en' only (via fasttext-langdetect)
- 100k samples
| [
"# BEE-spoke-data/falcon-refinedweb-100k_en-long\n\n\nA sample from falcon-refinedweb:\n\n- more than 2048 & less than 16384 gpt4 tiktoken tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] | [
"TAGS\n#task_categories-text-generation #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us \n",
"# BEE-spoke-data/falcon-refinedweb-100k_en-long\n\n\nA sample from falcon-refinedweb:\n\n- more than 2048 & less than 16384 gpt4 tiktoken tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] | [
47,
70
] | [
"passage: TAGS\n#task_categories-text-generation #source_datasets-tiiuae/falcon-refinedweb #language-English #license-odc-by #region-us \n# BEE-spoke-data/falcon-refinedweb-100k_en-long\n\n\nA sample from falcon-refinedweb:\n\n- more than 2048 & less than 16384 gpt4 tiktoken tokens\n- 'en' only (via fasttext-langdetect)\n- 100k samples"
] |
49a8baa14f1eaeaa3afa1b4bf1b9ba37a883e7cf |
# 🏟️ Long Code Arena (CI Fixing)
> 🛠️ CI Fixing: given logs of a failed GitHub Actions workflow and the corresponding repository snapshot, fix the
> repository contents in order to make the workflow pass.
This is the benchmark for the **CI Fixing** task as part of
🏟️ [**Long Code Arena** benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
To score your model on this dataset, you can use the **CI Fixing benchmark**
(https://github.com/JetBrains-Research/lca-baselines/tree/main/ci-fixing/ci-fixing-benchmark)
## How-to
1. List all the available configs
via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
and choose an appropriate one.
Current configs: `python`
2. Load the data
via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
```
from datasets import load_dataset
dataset = load_dataset("JetBrains-Research/lca-ci-fixing", split="test")
```
Note that all the data we have is considered to be in the test split.
**NOTE**: If you encounter any errors with loading the dataset on Windows, update the datasets library (was tested on datasets==2.16.1)
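
For step 1 above, the available configs can also be listed programmatically; a minimal sketch:

```
from datasets import get_dataset_config_names

configs = get_dataset_config_names("JetBrains-Research/lca-ci-fixing")
print(configs)  # currently: ['python']
```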
## Dataset Structure
This dataset contains logs of the failed GitHub Action workflows for some commits
followed by the commit that passes the workflow successfully.
Note that, unlike many other 🏟 Long Code Arena datasets, this dataset doesn't contain repositories.
* Our [CI Fixing benchmark](todo) (🚧 todo) clones the necessary repos to the user's local machine. The user should run their model to fix the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow runs for all the datapoints.
### Datapoint Schema
Each example has the following fields:
| Field | Description |
|---------------------|------------------------------------------------------------------------------------------------------------------------------|
| `contributor` | Username of the contributor that committed changes |
| `difficulty` | Difficulty of the problem (assessor-based. 0 means that fix requires only the code formatting) |
| `diff` | Contents of the diff between the failed and the successful commits |
| `head_branch` | Name of the original branch that the commit was pushed at |
| `id` | Unique ID of the datapoint |
| `language` | Main language of the repo |
| `logs` | List of dicts with keys `log` (logs of the failed job, particular step) and `step_name` (name of the failed step of the job) |
| `repo_name` | Name of the original repo (second part of the `owner/name` on GitHub) |
| `repo_owner`        | Owner of the original repo (first part of the `owner/name` on GitHub)                                                          |
| `sha_fail` | SHA of the failed commit |
| `sha_success` | SHA of the successful commit |
| `workflow` | Contents of the workflow file |
| `workflow_filename` | The name of the workflow file (without directories) |
| `workflow_name` | The name of the workflow |
| `workflow_path`     | The full path to the workflow file                                                                                             |
| `changed_files`     | List of files changed in the diff                                                                                              |
| `commit_link`       | URL of the commit corresponding to the failed job                                                                              |
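
To make the schema concrete, the following sketch walks one datapoint and prints the logs of the failed steps (field names as in the table above):

```
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-ci-fixing", split="test")

dp = dataset[0]
print(dp["repo_owner"], dp["repo_name"], dp["sha_fail"], "->", dp["sha_success"])
for entry in dp["logs"]:
    print("Failed step:", entry["step_name"])
    print(entry["log"][:300])  # beginning of the failure log
```
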
### Datapoint Example
```
{'contributor': 'Gallaecio',
'diff': 'diff --git a/scrapy/crawler.py b/scrapy/crawler.py/n<...>',
'difficulty': '1',
'head_branch': 'component-getters',
'id': 18,
'language': 'Python',
'logs': [{'log': '##[group]Run pip install -U tox\n<...>',
'step_name': 'checks (3.12, pylint)/4_Run check.txt'}],
'repo_name': 'scrapy',
'repo_owner': 'scrapy',
'sha_fail': '0f71221cf9875ed8ef3400e1008408e79b6691e6',
'sha_success': 'c1ba9ccdf916b89d875628ba143dc5c9f6977430',
'workflow': 'name: Checks\non: [push, pull_request]\n\n<...>',
'workflow_filename': 'checks.yml',
'workflow_name': 'Checks',
'workflow_path': '.github/workflows/checks.yml',
'changed_files': ["scrapy/crawler.py"],
'commit_link': "https://github.com/scrapy/scrapy/tree/0f71221cf9875ed8ef3400e1008408e79b6691e6"}
``` | JetBrains-Research/lca-ci-fixing | [
"region:us"
] | 2023-12-20T13:40:41+00:00 | {"configs": [{"config_name": "python", "data_files": [{"split": "test", "path": "data/python/*.json"}]}]} | 2024-01-29T14:02:10+00:00 | [] | [] | TAGS
#region-us
| ️ Long Code Arena (CI Fixing)
=============================
>
> ️ CI Fixing: given logs of a failed GitHub Actions workflow and the corresponding repository snapshot, fix the
> repository contents in order to make the workflow pass.
>
>
>
This is the benchmark for the CI Fixing task as part of
️ Long Code Arena benchmark.
To score your model on this dataset, you can use the CI Fixing benchmark
(URL
How-to
------
1. List all the available configs
via 'datasets.get\_dataset\_config\_names'
and choose an appropriate one.
Current configs: 'python'
2. Load the data
via 'load\_dataset':
Note that all the data we have is considered to be in the test split.
NOTE: If you encounter any errors with loading the dataset on Windows, update the datasets library (was tested on datasets==2.16.1)
Dataset Structure
-----------------
This dataset contains logs of the failed GitHub Action workflows for some commits
followed by the commit that passes the workflow successfully.
Note that, unlike many other Long Code Arena datasets, this dataset doesn't contain repositories.
* Our CI Fixing benchmark (todo) clones the necessary repos to the user's local machine. The user should run their model to fix the failing CI workflows, and the benchmark will push commits to GitHub, returning the results of the workflow runs for all the datapoints.
### Datapoint Schema
Each example has the following fields:
### Datapoint Example
| [
"### Datapoint Schema\n\n\nEach example has the following fields:",
"### Datapoint Example"
] | [
"TAGS\n#region-us \n",
"### Datapoint Schema\n\n\nEach example has the following fields:",
"### Datapoint Example"
] | [
6,
14,
6
] | [
"passage: TAGS\n#region-us \n### Datapoint Schema\n\n\nEach example has the following fields:### Datapoint Example"
] |
cab5f61d4323c1cad4345c823afb3406898cd120 | # Dataset Card for "autotrain-data-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | A2H0H0R1/autotrain-data-test | [
"region:us"
] | 2023-12-20T13:59:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "autotrain_image", "dtype": "image"}, {"name": "autotrain_label", "dtype": {"class_label": {"names": {"0": "cats", "1": "dogs"}}}}], "splits": [{"name": "train", "num_bytes": 60756.0, "num_examples": 10}, {"name": "validation", "num_bytes": 60756.0, "num_examples": 10}], "download_size": 124636, "dataset_size": 121512.0}} | 2023-12-20T14:00:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "autotrain-data-test"
More Information needed | [
"# Dataset Card for \"autotrain-data-test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-test\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-test\"\n\nMore Information needed"
] |
34d6f9cd7087481724e0937064f0a64ee572473d | # Dataset Card for "nectar_sft_binarized_subset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/nectar_sft_binarized_subset | [
"region:us"
] | 2023-12-20T14:03:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 6523287.421578591, "num_examples": 3542}, {"name": "test", "num_bytes": 723790.2056245713, "num_examples": 393}], "download_size": 3790838, "dataset_size": 7247077.627203162}} | 2023-12-20T14:06:43+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nectar_sft_binarized_subset"
More Information needed | [
"# Dataset Card for \"nectar_sft_binarized_subset\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nectar_sft_binarized_subset\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nectar_sft_binarized_subset\"\n\nMore Information needed"
] |
ea07843ad31373238f19110876cd0744db1f6e19 | Using SeamlessM4T to translate FiQA to Portuguese. | leonardo-avila/fiqa_pt | [
"language:pt",
"license:apache-2.0",
"region:us"
] | 2023-12-20T14:05:55+00:00 | {"language": ["pt"], "license": "apache-2.0"} | 2023-12-20T20:45:17+00:00 | [] | [
"pt"
] | TAGS
#language-Portuguese #license-apache-2.0 #region-us
| Using SeamlessM4T to translate FiQA to Portuguese. | [] | [
"TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] | [
20
] | [
"passage: TAGS\n#language-Portuguese #license-apache-2.0 #region-us \n"
] |
dc8176b6e21f5ea21cc419662d122ed0aa0cc654 |
# HotpotQA Dataset with GPT-3.5 Generated Questions
## Overview
This repository hosts an enhanced version of the HotpotQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.
## Dataset Format
Each entry in the dataset is formatted as follows:
```json
{
"answer": "This is the answer",
"context": {
"sentences": [["Sent 1"], ["Sent 21", "Sent 22"]],
"title": ["Title1", "Title 2"],
"questions": [["Ques 1"], ["Ques 21", "Ques 22"]], // newly added
"paraphrased_questions": [["Para Ques 1"], ["Para Ques 21", "Para Ques 22"]], // newly added
},
"id": "000001",
"level": "medium",
"question": "What is the answer?",
"supporting_facts": {
"sent_id": [0, 1, 3],
"title": ["Title of para 1", "Title of para 2", "Title of para 3"]
},
"type": "comparison"
}
```
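
For example, the generated questions can be paired with their supporting sentences as sketched below; the split name is an assumption, since the training split is not yet available (see the notices below).

```python
from datasets import load_dataset

# Split name assumed; the train split is still under computation (see below).
ds = load_dataset("scholarly-shadows-syndicate/hotpotqa_with_qa_gpt35", split="validation")

entry = ds[0]
ctx = entry["context"]
for title, sentences, questions in zip(ctx["title"], ctx["sentences"], ctx["questions"]):
    for sent, ques in zip(sentences, questions):
        print(f"[{title}] {ques} -> {sent}")
```
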
## Important Notices
### 1. Training Split Unavailability
As of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.
### 2. Commercial Usage Caution
Users of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.
### 3. Citation for Original Dataset
This enhanced dataset is based on the HotpotQA dataset. Users of this enhanced dataset should also cite the original HotpotQA dataset. For more information about the original dataset, please visit [HotpotQA Dataset on Hugging Face](https://huggingface.co/datasets/hotpot_qa).
## Acknowledgements
This dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of HotpotQA. We thank both parties for their contributions to the field of natural language processing and machine learning.
| scholarly-shadows-syndicate/hotpotqa_with_qa_gpt35 | [
"license:apache-2.0",
"region:us"
] | 2023-12-20T14:44:42+00:00 | {"license": "apache-2.0"} | 2024-01-12T15:50:05+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
# HotpotQA Dataset with GPT-3.5 Generated Questions
## Overview
This repository hosts an enhanced version of the HotpotQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.
## Dataset Format
Each entry in the dataset is formatted as follows:
## Important Notices
### 1. Training Split Unavailability
As of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.
### 2. Commercial Usage Caution
Users of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.
### 3. Citation for Original Dataset
This enhanced dataset is based on the HotpotQA dataset. Users of this enhanced dataset should also cite the original HotpotQA dataset. For more information about the original dataset, please visit HotpotQA Dataset on Hugging Face.
## Acknowledgements
This dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of HotpotQA. We thank both parties for their contributions to the field of natural language processing and machine learning.
| [
"# HotpotQA Dataset with GPT-3.5 Generated Questions",
"## Overview\n\nThis repository hosts an enhanced version of the HotpotQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.",
"## Dataset Format\n\nEach entry in the dataset is formatted as follows:",
"## Important Notices",
"### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.",
"### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.",
"### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the HotpotQA dataset. Users of this enhanced dataset should also cite the original HotpotQA dataset. For more information about the original dataset, please visit HotpotQA Dataset on Hugging Face.",
"## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of HotpotQA. We thank both parties for their contributions to the field of natural language processing and machine learning."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# HotpotQA Dataset with GPT-3.5 Generated Questions",
"## Overview\n\nThis repository hosts an enhanced version of the HotpotQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.",
"## Dataset Format\n\nEach entry in the dataset is formatted as follows:",
"## Important Notices",
"### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.",
"### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.",
"### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the HotpotQA dataset. Users of this enhanced dataset should also cite the original HotpotQA dataset. For more information about the original dataset, please visit HotpotQA Dataset on Hugging Face.",
"## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of HotpotQA. We thank both parties for their contributions to the field of natural language processing and machine learning."
] | [
14,
15,
83,
17,
5,
52,
73,
64,
61
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# HotpotQA Dataset with GPT-3.5 Generated Questions## Overview\n\nThis repository hosts an enhanced version of the HotpotQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.## Dataset Format\n\nEach entry in the dataset is formatted as follows:## Important Notices### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the HotpotQA dataset. Users of this enhanced dataset should also cite the original HotpotQA dataset. For more information about the original dataset, please visit HotpotQA Dataset on Hugging Face.## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of HotpotQA. We thank both parties for their contributions to the field of natural language processing and machine learning."
] |
8be781b7c9f252a99a57a405d92770ce8e0c943e |
# Dataset Card for NTX v1 in the Aya format - Arabic subset
This dataset is a format conversion for the Arabic data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions) or to the paper below.
**NOTE:** Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
## Citation
If you utilize this dataset version, feel free to cite/footnote the complete version at https://huggingface.co/datasets/tellarin-ai/ntx_llm_instructions, but please also cite the *original dataset publication*.
**BibTeX:**
```
@preprint{chen2023dataset,
title={Dataset and Baseline System for Multi-lingual Extraction and Normalization of Temporal and Numerical Expressions},
author={Sanxing Chen and Yongqiang Chen and Börje F. Karlsson},
year={2023},
eprint={2303.18103},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| tellarin-ai/ntx_llm_inst_arabic | [
"task_categories:token-classification",
"language:ar",
"license:cc-by-sa-4.0",
"arxiv:2303.18103",
"region:us"
] | 2023-12-20T14:59:57+00:00 | {"language": ["ar"], "license": "cc-by-sa-4.0", "task_categories": ["token-classification"]} | 2023-12-20T15:06:17+00:00 | [
"2303.18103"
] | [
"ar"
] | TAGS
#task_categories-token-classification #language-Arabic #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us
|
# Dataset Card for NTX v1 in the Aya format - Arabic subset
This dataset is a format conversion for the Arabic data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.
## Dataset Details
For the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.
NOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX.
If you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.
BibTeX:
| [
"# Dataset Card for NTX v1 in the Aya format - Arabic subset\n\nThis dataset is a format conversion for the Arabic data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
"TAGS\n#task_categories-token-classification #language-Arabic #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n",
"# Dataset Card for NTX v1 in the Aya format - Arabic subset\n\nThis dataset is a format conversion for the Arabic data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.",
"## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] | [
42,
57,
110
] | [
"passage: TAGS\n#task_categories-token-classification #language-Arabic #license-cc-by-sa-4.0 #arxiv-2303.18103 #region-us \n# Dataset Card for NTX v1 in the Aya format - Arabic subset\n\nThis dataset is a format conversion for the Arabic data from the original NTX into the Aya instruction format and it's released here under the CC-BY-SA 4.0 license.## Dataset Details\n\nFor the original NTX dataset, the conversion to the Aya instructions format, or more details, please refer to the full dataset in instruction form (URL or to the paper below.\n\nNOTE: Unfortunately, due to a conversion issue with numerical expressions, this version here only includes the temporal expressions part of NTX. \n\nIf you utilize this dataset version, feel free to cite/footnote the complete version at URL but please also cite the *original dataset publication*.\n\nBibTeX:"
] |