| sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (listlengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (listlengths 0-25) | languages (listlengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (listlengths 0-352) | processed_texts (listlengths 1-353) | tokens_length (listlengths 1-353) | input_texts (listlengths 1-40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
3f58e967a70ec38dac08ccc25e16118d72abe95f |
# Dataset Card for CA-FR Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-FR Parallel Corpus is a Catalan-French dataset of **18.634.844** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
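For reference, a minimal scoring sketch using the `sacrebleu` package is shown below; the example sentences are placeholders and are not drawn from this corpus.

```python
# Minimal sketch (assumes the `sacrebleu` package is installed): corpus-level BLEU
# for a list of system translations against one set of reference translations.
import sacrebleu

hypotheses = ["El gat dorm al sofà."]      # placeholder system outputs
references = [["El gat dorm al sofà."]]    # sacrebleu expects a list of reference streams

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```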
### Languages
The texts in the dataset are in Catalan and French.
## Dataset Structure
Two separate txt files are provided with the sentences sorted in the same order (a short loading sketch follows the list):
- ca-fr_corpus.ca: contains 18.634.844 Catalan sentences.
- ca-fr_corpus.fr: contains 18.634.844 French sentences.
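A minimal sketch of reading the two files into aligned sentence pairs is given below; it only assumes that both files are UTF-8 encoded and line-aligned, as described above.

```python
# Minimal sketch: read the two line-aligned files into (Catalan, French) pairs.
def load_parallel(ca_path: str, fr_path: str):
    with open(ca_path, encoding="utf-8") as f_ca, open(fr_path, encoding="utf-8") as f_fr:
        # zip keeps the i-th Catalan line paired with the i-th French line.
        return [(ca.rstrip("\n"), fr.rstrip("\n")) for ca, fr in zip(f_ca, f_fr)]

pairs = load_parallel("ca-fr_corpus.ca", "ca-fr_corpus.fr")
print(len(pairs), pairs[0])
```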
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:----------------|----------:|
| CCMatrix | 16.305.758|
| Multi CCAligned | 1.442.584|
| WikiMatrix | 437.665|
| GNOME | 1.686|
| KDE 4 | 111.750|
| QED | 52.797|
| TED 2020 | 44.101|
| Open Subtitles | 225.786|
| **Total** | **18.634.844** |
All corpora were collected from [Opus](https://opus.nlpl.eu/).
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **18.634.844** parallel sentences.
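A rough sketch of this filtering step is shown below, assuming the `sentence-transformers` package; the 0.75 threshold is the one stated above, while batching and I/O details are illustrative only and not the exact pipeline used for this corpus.

```python
# Illustrative sketch: keep only sentence pairs whose LaBSE embeddings have
# cosine similarity >= 0.75 (embeddings are L2-normalised, so the dot product
# of an aligned pair equals its cosine similarity).
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(ca_sentences, fr_sentences, threshold=0.75):
    emb_ca = model.encode(ca_sentences, convert_to_tensor=True, normalize_embeddings=True)
    emb_fr = model.encode(fr_sentences, convert_to_tensor=True, normalize_embeddings=True)
    sims = (emb_ca * emb_fr).sum(dim=1)
    return [(ca, fr) for ca, fr, s in zip(ca_sentences, fr_sentences, sims) if s >= threshold]
```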
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/). | projecte-aina/CA-FR_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:ca",
"language:fr",
"language:multilingual",
"region:us"
] | 2023-11-27T10:07:55+00:00 | {"language": ["ca", "fr", "multilingual"], "multilinguality": ["translation"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-FR Parallel Corpus"} | 2024-01-17T13:51:31+00:00 | [] | [
"ca",
"fr",
"multilingual"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-French #language-multilingual #region-us
| Dataset Card for CA-FR Parallel Corpus
======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Splits
* Dataset Creation
+ Source Data
+ Data preparation
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Author
+ Contact Information
+ Copyright
+ Licensing information
+ Funding
Dataset Description
-------------------
### Dataset Summary
The CA-FR Parallel Corpus is a Catalan-French dataset of 18.634.844 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and French.
Dataset Structure
-----------------
Two separated txt files are provided with the sentences sorted in the same order:
* ca-fr\_corpus.ca: contains 18.634.844 Catalan sentences.
* ca-fr\_corpus.fr: contains 18.634.844 French sentences.
### Data Splits
The dataset contains a single split: 'train'.
Dataset Creation
----------------
### Source Data
The dataset is a combination of the following authentic datasets:
All corpora were collected from Opus.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using LaBSE.
The filtered datasets are then concatenated to form a final corpus of 18.634.844 parallel sentences.
### Personal and Sensitive Information
No anonymisation process was performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
Additional Information
----------------------
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL
| [
"### Dataset Summary\n\n\nThe CA-FR Parallel Corpus is a Catalan-French dataset of 18.634.844 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and French.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-fr\\_corpus.ca: contains 18.634.844 Catalan sentences.\n* ca-fr\\_corpus.fr: contains 18.634.844 French sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 18.634.844 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-French #language-multilingual #region-us \n",
"### Dataset Summary\n\n\nThe CA-FR Parallel Corpus is a Catalan-French dataset of 18.634.844 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and French.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-fr\\_corpus.ca: contains 18.634.844 Catalan sentences.\n* ca-fr\\_corpus.fr: contains 18.634.844 French sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 18.634.844 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
59,
51,
45,
85,
25,
27,
77,
26,
32,
61,
50,
21,
18,
18,
24,
27
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-French #language-multilingual #region-us \n### Dataset Summary\n\n\nThe CA-FR Parallel Corpus is a Catalan-French dataset of 18.634.844 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\n\nThe texts in the dataset are in Catalan and French.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-fr\\_corpus.ca: contains 18.634.844 Catalan sentences.\n* ca-fr\\_corpus.fr: contains 18.634.844 French sentences.### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 18.634.844 parallel sentences.### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact."
] |
11f233234ecb3900888d471f0e65716819b384e5 |
# Dataset Card for CA-DE Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-DE Parallel Corpus is a Catalan-German dataset of **9.530.709** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and German.
## Dataset Structure
Two separate txt files are provided with the sentences sorted in the same order:
- ca-de_all_2023_09_11.ca: contains 9.530.709 Catalan sentences.
- ca-de_all_2023_09_11.de: contains 9.530.709 German sentences.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:--------------|----------:|
| Multi CCAligned | 1.027.481 |
| WikiMatrix | 125.811 |
| GNOME | 1.241|
| KDE4 | 105.098 |
| QED | 49.181 |
| TED2020 v1 | 38.428 |
| OpenSubtitles | 171.376 |
| GlobalVoices| 3.578|
| Tatoeba | 655 |
| Books | 2.049 |
| Europarl | 1.734.643 |
| Tilde | 3.434.091 |
| **Total** | **6.258.272** |
All corpora except Europarl and Tilde were collected from [Opus](https://opus.nlpl.eu/).
The Europarl and Tilde corpora are synthetic parallel corpora created from the original Spanish-Catalan corpora by [SoftCatalà](https://github.com/Softcatala/Europarl-catalan).
The remaining **3.272.437** sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on [Opus](https://opus.nlpl.eu/) and translated into Catalan using the [PlanTL es-ca](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca) model.
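As an illustration of that last step, Spanish sentences could be translated with the PlanTL es-ca model through the `transformers` translation pipeline; whether this matches the exact setup used to build the corpus is an assumption.

```python
# Hedged sketch: translate Spanish source sentences into Catalan with the
# PlanTL es-ca model via the Hugging Face `transformers` pipeline. Generation
# settings and batching here are illustrative, not those used for this corpus.
from transformers import pipeline

translator = pipeline("translation", model="PlanTL-GOB-ES/mt-plantl-es-ca")

es_sentences = ["El gato duerme en el sofá."]   # placeholder Spanish input
ca_sentences = [out["translation_text"] for out in translator(es_sentences)]
print(ca_sentences)
```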
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **9.530.709** parallel sentences.
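The similarity filtering itself is illustrated in the CA-FR card above; a simple sketch of the deduplication part of this step (removal of exact duplicate pairs, order preserved) follows.

```python
# Minimal sketch: drop exact duplicate (Catalan, German) sentence pairs
# while preserving the original order of the corpus.
def deduplicate(pairs):
    seen = set()
    unique = []
    for ca, de in pairs:
        if (ca, de) not in seen:
            seen.add((ca, de))
            unique.append((ca, de))
    return unique
```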
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/). | projecte-aina/CA-DE_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:de",
"language:multilingual",
"region:us"
] | 2023-11-27T10:09:33+00:00 | {"language": ["ca", "de", "multilingual"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-DE Parallel Corpus"} | 2024-01-17T13:50:26+00:00 | [] | [
"ca",
"de",
"multilingual"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-German #language-multilingual #region-us
| Dataset Card for CA-DE Parallel Corpus
======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Splits
* Dataset Creation
+ Source Data
+ Data preparation
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Author
+ Contact Information
+ Copyright
+ Licensing information
+ Funding
Dataset Description
-------------------
### Dataset Summary
The CA-DE Parallel Corpus is a Catalan-German dataset of 9.530.709 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and German.
Dataset Structure
-----------------
Two separated txt files are provided with the sentences sorted in the same order:
* ca-de\_all\_2023\_09\_11.ca: contains 9.530.709 Catalan sentences.
* ca-de\_all\_2023\_09\_11.de: contains 9.530.709 German sentences.
### Data Splits
The dataset contains a single split: 'train'.
Dataset Creation
----------------
### Source Data
The dataset is a combination of the following authentic datasets:
All corpora except Europarl and Tilde were collected from Opus.
The Europarl and Tilde corpora are synthetic parallel corpora created from the original Spanish-Catalan corpora by SoftCatalà.
The remaining 3.272.437 sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on Opus and translated into Catalan using the PlanTL es-ca model.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using LaBSE.
The filtered datasets are then concatenated to form a final corpus of 9.530.709 parallel sentences.
### Personal and Sensitive Information
No anonymisation process was performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
Additional Information
----------------------
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL
| [
"### Dataset Summary\n\n\nThe CA-DE Parallel Corpus is a Catalan-German dataset of 9.530.709 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and German.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-de\\_all\\_2023\\_09\\_11.ca: contains 9.530.709 Catalan sentences.\n* ca-de\\_all\\_2023\\_09\\_11.de: contains 9.530.709 German sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl and Tilde were collected from Opus.\nThe Europarl and Tilde corpora are synthetic parallel corpora created from the original Spanish-Catalan corpora by SoftCatalà.\n\n\nThe remaining 3.272.437 sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on Opus and translated into Catalan using the PlanTL es-ca model.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.530.709 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-German #language-multilingual #region-us \n",
"### Dataset Summary\n\n\nThe CA-DE Parallel Corpus is a Catalan-German dataset of 9.530.709 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and German.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-de\\_all\\_2023\\_09\\_11.ca: contains 9.530.709 Catalan sentences.\n* ca-de\\_all\\_2023\\_09\\_11.de: contains 9.530.709 German sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl and Tilde were collected from Opus.\nThe Europarl and Tilde corpora are synthetic parallel corpora created from the original Spanish-Catalan corpora by SoftCatalà.\n\n\nThe remaining 3.272.437 sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on Opus and translated into Catalan using the PlanTL es-ca model.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.530.709 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
57,
50,
45,
105,
25,
108,
78,
26,
32,
61,
50,
21,
18,
18,
24,
27
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-German #language-multilingual #region-us \n### Dataset Summary\n\n\nThe CA-DE Parallel Corpus is a Catalan-German dataset of 9.530.709 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\n\nThe texts in the dataset are in Catalan and German.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-de\\_all\\_2023\\_09\\_11.ca: contains 9.530.709 Catalan sentences.\n* ca-de\\_all\\_2023\\_09\\_11.de: contains 9.530.709 German sentences.### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora except Europarl and Tilde were collected from Opus.\nThe Europarl and Tilde corpora are synthetic parallel corpora created from the original Spanish-Catalan corpora by SoftCatalà.\n\n\nThe remaining 3.272.437 sentences are synthetic parallel data created from a random sampling of the Spanish-German corpora available on Opus and translated into Catalan using the PlanTL es-ca model.### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.530.709 parallel sentences.### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------"
] |
628a116d6eae2892f1fd821d1d391f5d2b0b4fe9 | # Dataset Card for "2k_test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tr416/2k_test | [
"region:us"
] | 2023-11-27T10:36:27+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 762433, "num_examples": 623}], "download_size": 383726, "dataset_size": 762433}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T12:52:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "2k_test"
More Information needed | [
"# Dataset Card for \"2k_test\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"2k_test\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"2k_test\"\n\nMore Information needed"
] |
3f6711bd4e6f727dc0ed20d2adc5f095a6411b9f |
# Dataset Card for CA-IT Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Data preparation](#data-preparation)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licensing-information)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-IT Parallel Corpus is a Catalan-Italian dataset of **9.482.927** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Italian.
## Dataset Structure
Two separate txt files are provided with the sentences sorted in the same order:
- ca-it_corpus.ca: contains 9.482.927 Catalan sentences.
- ca-it_corpus.it: contains 9.482.927 Italian sentences.
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
The dataset is a combination of the following authentic datasets:
| Dataset | Sentences |
|:--- | ---: |
| CCMatrix v1 | 7.757.357|
| MultiCCAligned v1 | 1.010.921|
| WikiMatrix | 271.587|
| GNOME | 1.198|
| KDE4 | 115.027 |
| QED | 52.616 |
| TED2020 v1 | 43.280 |
| OpenSubtitles | 225.732 |
| GlobalVoices| 5.209|
| **Total** | **9.482.927** |
All corpora were collected from [Opus](https://opus.nlpl.eu/).
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using [LaBSE](https://huggingface.co/sentence-transformers/LaBSE).
The filtered datasets are then concatenated to form a final corpus of **9.482.927** parallel sentences.
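A minimal sketch of the concatenation step is shown below, writing several filtered sub-corpora back out as the two aligned files named above; the list of sub-corpora is a placeholder.

```python
# Illustrative sketch: concatenate filtered sub-corpora of (ca, it) pairs into
# the two final line-aligned files, keeping the pair order identical in both.
def write_corpus(subcorpora, ca_path="ca-it_corpus.ca", it_path="ca-it_corpus.it"):
    with open(ca_path, "w", encoding="utf-8") as f_ca, open(it_path, "w", encoding="utf-8") as f_it:
        for pairs in subcorpora:          # e.g. [ccmatrix_pairs, wikimatrix_pairs, ...]
            for ca, it in pairs:
                f_ca.write(ca + "\n")
                f_it.write(it + "\n")
```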
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under an [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project](https://projecteaina.cat/). | projecte-aina/CA-IT_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:it",
"language:multilingual",
"region:us"
] | 2023-11-27T10:45:27+00:00 | {"language": ["ca", "it", "multilingual"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-IT Parallel Corpus"} | 2024-01-17T13:51:54+00:00 | [] | [
"ca",
"it",
"multilingual"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Italian #language-multilingual #region-us
| Dataset Card for CA-IT Parallel Corpus
======================================
Table of Contents
-----------------
* Dataset Description
+ Dataset Summary
+ Supported Tasks
+ Languages
* Dataset Structure
+ Data Splits
* Dataset Creation
+ Source Data
+ Data preparation
+ Personal and Sensitive Information
* Considerations for Using the Data
+ Social Impact of Dataset
+ Discussion of Biases
+ Other Known Limitations
* Additional Information
+ Author
+ Contact Information
+ Copyright
+ Licensing information
+ Funding
Dataset Description
-------------------
### Dataset Summary
The CA-IT Parallel Corpus is a Catalan-Italian dataset of 9.482.927 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Italian.
Dataset Structure
-----------------
Two separated txt files are provided with the sentences sorted in the same order:
* ca-it\_corpus.ca: contains 9.482.927 Catalan sentences.
* ca-it\_corpus.it: contains 9.482.927 Italian sentences.
### Data Splits
The dataset contains a single split: 'train'.
Dataset Creation
----------------
### Source Data
The dataset is a combination of the following authentic datasets:
All corpora were collected from Opus.
### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.
This is done using sentence embeddings calculated using LaBSE.
The filtered datasets are then concatenated to form a final corpus of 9.482.927 parallel sentences.
### Personal and Sensitive Information
No anonymisation process was performed.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
Additional Information
----------------------
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL
| [
"### Dataset Summary\n\n\nThe CA-IT Parallel Corpus is a Catalan-Italian dataset of 9.482.927 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and Italian.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-it\\_corpus.ca: contains 9.482.927 Catalan sentences.\n* ca-it\\_corpus.it: contains 9.482.927 Italian sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.482.927 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Italian #language-multilingual #region-us \n",
"### Dataset Summary\n\n\nThe CA-IT Parallel Corpus is a Catalan-Italian dataset of 9.482.927 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.",
"### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\n\nThe texts in the dataset are in Catalan and Italian.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-it\\_corpus.ca: contains 9.482.927 Catalan sentences.\n* ca-it\\_corpus.it: contains 9.482.927 Italian sentences.",
"### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------",
"### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.",
"### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.482.927 parallel sentences.",
"### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.",
"### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.\n\n\nAdditional Information\n----------------------",
"### Author\n\n\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\n\n\nFor further information, please send an email to langtech@URL.",
"### Copyright\n\n\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\n\n\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\n\n\nThis work has been promoted and financed by the Generalitat de Catalunya through the [Aina project] (URL"
] | [
58,
51,
45,
87,
25,
27,
78,
26,
32,
61,
50,
21,
18,
18,
24,
27
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Italian #language-multilingual #region-us \n### Dataset Summary\n\n\nThe CA-IT Parallel Corpus is a Catalan-Italian dataset of 9.482.927 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,\nMachine Translation.### Supported Tasks and Leaderboards\n\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\n\nThe texts in the dataset are in Catalan and Italian.\n\n\nDataset Structure\n-----------------\n\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n\n* ca-it\\_corpus.ca: contains 9.482.927 Catalan sentences.\n* ca-it\\_corpus.it: contains 9.482.927 Italian sentences.### Data Splits\n\n\nThe dataset contains a single split: 'train'.\n\n\nDataset Creation\n----------------### Source Data\n\n\nThe dataset is a combination of the following authentic datasets:\n\n\n\nAll corpora were collected from Opus.### Data preparation\n\n\nAll datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75.\nThis is done using sentence embeddings calculated using LaBSE.\nThe filtered datasets are then concatenated to form a final corpus of 9.482.927 parallel sentences.### Personal and Sensitive Information\n\n\nNo anonymisation process was performed.\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset\n\n\nThe purpose of this dataset is to help develop Machine Translation tasks for mid-resource languages such as Catalan.### Discussion of Biases\n\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact."
] |
7d4cc3ab0d818ccf45c96b2327fd8dbb86a4a0af |
# TEdBench++
This dataset contains TEdBench++, an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ``tedbench++.csv`` contains the text-based edit instructions for the respective original image and the parameters needed to reproduce the edited images with LEdits++.
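A small sketch of inspecting the benchmark metadata with `pandas` follows; the exact column layout of ``tedbench++.csv`` is not listed on this card, so the code only prints what it finds rather than assuming specific column names.

```python
# Hedged sketch: load the edit-instruction table and inspect its schema.
# Column names are not documented on this card, so we only print them.
import pandas as pd

df = pd.read_csv("tedbench++.csv")
print(df.columns.tolist())        # check the real schema first
print(df.head(3))                 # a few original-image / edit-instruction rows
```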
| AIML-TUDA/TEdBench_plusplus | [
"task_categories:image-to-image",
"size_categories:n<1K",
"license:apache-2.0",
"region:us"
] | 2023-11-27T10:49:29+00:00 | {"license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["image-to-image"], "pretty_name": "TEdBenc++"} | 2023-11-27T12:56:48+00:00 | [] | [] | TAGS
#task_categories-image-to-image #size_categories-n<1K #license-apache-2.0 #region-us
|
# TEdBench++
This dataset contains the TEdBench++ an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ''tedbench++.csv'' contains the text-based edit instructions for the respective original image and parameters to reproduce the edited images with LEdits++.
| [
"# TEdBench++\n\nThis dataset contains the TEdBench++ an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ''tedbench++.csv'' contains the text-based edit instructions for the respective original image and parameters to reproduce the edited images with LEdits++."
] | [
"TAGS\n#task_categories-image-to-image #size_categories-n<1K #license-apache-2.0 #region-us \n",
"# TEdBench++\n\nThis dataset contains the TEdBench++ an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ''tedbench++.csv'' contains the text-based edit instructions for the respective original image and parameters to reproduce the edited images with LEdits++."
] | [
36,
93
] | [
"passage: TAGS\n#task_categories-image-to-image #size_categories-n<1K #license-apache-2.0 #region-us \n# TEdBench++\n\nThis dataset contains the TEdBench++ an image-to-image benchmark for text-based generative models. It contains original images (originals) and edited images (LEdits++) for benchmarking. ''tedbench++.csv'' contains the text-based edit instructions for the respective original image and parameters to reproduce the edited images with LEdits++."
] |
ac4cd345d8c7e4014338813dc34e995f62a123d1 | # Dataset Card for "beyond_blue"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Elfsong/beyond_blue | [
"region:us"
] | 2023-11-27T10:51:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "anxiety", "path": "data/anxiety-*"}, {"split": "depression", "path": "data/depression-*"}, {"split": "ptsd", "path": "data/ptsd-*"}, {"split": "relationships", "path": "data/relationships-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "comments", "list": [{"name": "author", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "url", "dtype": "string"}]}, {"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "author", "dtype": "string"}], "splits": [{"name": "anxiety", "num_bytes": 56172807, "num_examples": 6943}, {"name": "depression", "num_bytes": 60224734, "num_examples": 6008}, {"name": "ptsd", "num_bytes": 21141031, "num_examples": 1816}, {"name": "relationships", "num_bytes": 75923360, "num_examples": 6799}], "download_size": 103962826, "dataset_size": 213461932}} | 2023-11-27T10:52:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "beyond_blue"
More Information needed | [
"# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"beyond_blue\"\n\nMore Information needed"
] |
38df36a2de072078b0ec08b93dfd8245c0693e9a |
# Bangumi Image Base of Land Of The Lustrous
This is the image base of bangumi Land of the Lustrous. We detected 19 characters and 845 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
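A hedged sketch of fetching and unpacking one of the per-character archives with `huggingface_hub` is given below; the file name mirrors the download links in the preview table that follows, and is an assumption about the repository layout.

```python
# Hedged sketch: download one character archive from this dataset repository
# and unpack it locally. "0/dataset.zip" mirrors the download links below.
import zipfile
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="BangumiBase/landofthelustrous",
    filename="0/dataset.zip",
    repo_type="dataset",
)
with zipfile.ZipFile(path) as zf:
    zf.extractall("character_0")
```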
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 202 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 121 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 94 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 11 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 20 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 38 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 12 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 17 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 12 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 49 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 18 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 36 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 35 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 44 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 11 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 14 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 24 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 8 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 79 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/landofthelustrous | [
"size_categories:n<1K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T11:01:07+00:00 | {"license": "mit", "size_categories": ["n<1K"], "tags": ["art"]} | 2023-11-27T11:51:35+00:00 | [] | [] | TAGS
#size_categories-n<1K #license-mit #art #region-us
| Bangumi Image Base of Land Of The Lustrous
==========================================
This is the image base of bangumi Land of the Lustrous, we detected 19 characters, 845 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% cleaned, they may be noisy actual. If you intend to manually train models using this dataset, we recommend performing necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| [] | [
"TAGS\n#size_categories-n<1K #license-mit #art #region-us \n"
] | [
23
] | [
"passage: TAGS\n#size_categories-n<1K #license-mit #art #region-us \n"
] |
1e4d887706a95e74a0a6da206f19a2e931341038 |
# Bangumi Image Base of Versailles No Bara
This is the image base of bangumi Versailles No Bara. We detected 35 characters and 4981 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potential noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 468 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 126 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 154 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 659 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 105 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 47 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 44 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 51 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 173 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 48 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 25 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 168 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 150 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 290 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 30 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 1251 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 125 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 34 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 66 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 76 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 150 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 147 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 13 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 62 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 65 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 119 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 109 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 14 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 13 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 28 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 17 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 12 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 19 | [Download](32/dataset.zip) |  |  |  |  |  |  |  |  |
| 33 | 10 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 113 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/versaillesnobara | size_categories:1K<n<10K, license:mit, art, region:us | created 2023-11-27T11:01:40+00:00 | last modified 2023-11-27T13:57:51+00:00 |
# Bangumi Image Base of Welcome to the N.H.K.
This is the image base of the bangumi Welcome to the N.H.K.; we detected 17 characters and 2,205 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models on this dataset, we recommend preprocessing the downloaded files to remove potential noise (roughly 1% of the images).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 1316 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 37 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 323 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 71 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 8 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 13 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 47 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 26 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 107 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 9 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 5 | [Download](10/dataset.zip) |  |  |  |  |  | N/A | N/A | N/A |
| 11 | 74 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 16 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 8 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 22 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 45 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 78 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/welcometothenhk | size_categories:1K<n<10K, license:mit, art, region:us | created 2023-11-27T11:02:09+00:00 | last modified 2023-11-27T11:56:50+00:00 |
# Bangumi Image Base of Ano Hi Mita Hana No Namae Wo Bokutachi Wa Mada Shiranai.
This is the image base of the bangumi Ano Hi Mita Hana no Namae wo Bokutachi wa Mada Shiranai.; we detected 19 characters and 1,523 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may still contain noisy samples.** If you intend to manually train models on this dataset, we recommend preprocessing the downloaded files to remove potential noise (roughly 1% of the images).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 183 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 29 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 121 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 40 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 22 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 20 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 462 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 21 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 154 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 10 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 131 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 12 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 8 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 14 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 221 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 15 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 14 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 11 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 35 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/anohimitahananonamaewobokutachiwamadashiranai | size_categories:1K<n<10K, license:mit, art, region:us | created 2023-11-27T11:02:38+00:00 | last modified 2023-11-27T12:12:17+00:00 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
| Wil16/Testeur | region:us | created 2023-11-27T11:04:47+00:00 | last modified 2023-11-28T14:56:50+00:00 |
# Measuring bias in Instruction-Following models with P-AT
Instruction-Following Language Models (IFLMs) are promising and versatile tools for solving many downstream, information-seeking tasks. Given their success, there is an urgent need to have a shared resource to determine whether existing and new IFLMs are prone to produce biased language interactions.
We propose **Prompt Association Test (P-AT)**, a resource for testing the presence of social biases in IFLMs.
P-AT stems from WEAT (Caliskan et al., 2017) and generalizes the notion of measuring social biases to IFLMs. The resource consists of 2310 questions and aims to help detect biases in IFLMs across multiple dimensions.
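For a first look at the prompts, the resource can be pulled with the `datasets` library. This is only a hypothetical sketch: it assumes the repository exposes its files in a form the standard loader understands, and it makes no assumption about column names beyond printing whatever the first example contains.

```python
# Hypothetical quick look at P-AT; assumes the standard datasets loader can
# read the repository's files — check the repo for the actual layout and schema.
from datasets import load_dataset

pat = load_dataset("HumanCentricART/P-AT")
print(pat)                            # available splits and their sizes
first_split = next(iter(pat.values()))
print(first_split[0])                 # one raw example, whatever its schema is
```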
# Citation Info
More info on this dataset is available in our paper, [Measuring bias in Instruction-Following models with P-AT](https://aclanthology.org/2023.findings-emnlp.539/) in EMNLP 2023 Findings.
```
@inproceedings{
onorati-etal-2023-measuring,
title = "Measuring bias in Instruction-Following models with {P}-{AT}",
author = "Onorati, Dario and
Ruzzetti, Elena Sofia and
Venditti, Davide and
Ranaldi, Leonardo and
Zanzotto, Fabio Massimo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.539",
doi = "10.18653/v1/2023.findings-emnlp.539",
pages = "8006--8034",
abstract = "Instruction-Following Language Models (IFLMs) are promising and versatile tools for solving many downstream, information-seeking tasks. Given their success, there is an urgent need to have a shared resource to determine whether existing and new IFLMs are prone to produce biased language interactions. In this paper, we propose Prompt Association Test (P-AT): a new resource for testing the presence of social biases in IFLMs. P-AT stems from WEAT (Caliskan et al., 2017) and generalizes the notion of measuring social biases to IFLMs. Basically, we cast WEAT word tests in promptized classification tasks, and we associate a metric - the bias score. Our resource consists of 2310 prompts. We then experimented with several families of IFLMs discovering gender and race biases in all the analyzed models. We expect P-AT to be an important tool for quantifying bias across different dimensions and, therefore, for encouraging the creation of fairer IFLMs before their distortions have consequences in the real world.",
}
```
| HumanCentricART/P-AT | task_categories:text-classification, language:en, license:gpl-3.0, stereotype-detection, bias-evaluation, region:us | created 2023-11-27T11:16:38+00:00 | last modified 2024-02-09T07:47:26+00:00 |
# Dataset Card for Midjourney-Threads 🧵💬
<!-- Provide a quick summary of the dataset. -->
This dataset contains users' prompts from the Midjourney Discord channel, organized into "threads of interaction".
Each thread contains a user's successive trials to create one target image.
The dataset was introduced as part of the paper: [Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney][ourpaper].
[ourpaper]: https://aclanthology.org/2023.emnlp-main.253/ "markdown our paper"
### Dataset Sources
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/shachardon/Mid-Journey-to-alignment
- **Paper:** https://aclanthology.org/2023.emnlp-main.253/
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
Main columns (a minimal loading sketch follows this list):
- 'text' - the original prompt
- 'args' - predefined parameters (such as the aspect ratio, chaos and [more][myexample])
- 'channel_id' - the discord channel
- 'userid' - an anonymous user id
- 'timestamp' - a timestamp of the prompt creation
- 'label' - True if an image that was generated based on that prompt was upscaled, otherwise False.
- 'id' - unique id of the prompt
- 'url_png' - link to the generated images (a 4-grid version)
- 'main_content' - prefix of the prompt, without trailing magic-words
- 'concreteness' - concreteness score, based on [this paper][concpaper]
- 'word_len' - the number of words
- 'repeat_words' - the occurrences of each word that appears more than once in the prompt, excluding stop words.
- 'reapeat_words_ratio' - repeat_words / word_len
- 'perplexity' - the perplexity GPT-2 assigns to each prompt.
- 'caption_0-3' - captions that were generated by the BLIP-2 model, with the 4 created images as its inputs.
- 'phase' - train/test split, as was used to train image/text classifiers
- 'magic_ratio' - the percentage of words that were recognized as magic words in the prompt
- 'thread_id' - the id of the thread
- 'depth' - the max depth of a constituency parse tree of the prompt.
- 'num_sent_parser' - the number of sentences in the prompt.
- 'num_sent_parser_ratio' - num_sent_parser / word_len
- 'words_per_sent' - word_len / num_sent_parser
[myexample]: https://docs.midjourney.com/docs/parameter-list "markdown more"
[concpaper]: https://link.springer.com/article/10.3758/s13428-013-0403-5 "markdown this paper"
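A minimal way to work with the threads, assuming the default config loads the CSV shards listed in the card metadata and the column names match the list above (the chosen thread and the 80-character preview are arbitrary):

```python
# Minimal sketch: load the corpus and walk through one interaction thread.
# Split name and column names follow the card; the chosen thread is arbitrary.
from datasets import load_dataset

ds = load_dataset("shachardon/midjourney-threads", split="train")

# Pick an arbitrary thread and order its prompts by creation time.
example_thread = ds[0]["thread_id"]
thread = ds.filter(lambda row: row["thread_id"] == example_thread).sort("timestamp")

for row in thread:
    print(row["timestamp"], row["label"], str(row["text"])[:80])
```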
## Dataset Creation
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
We construct the dataset by scraping user-generated prompts from the Midjourney Discord server.
The server contains channels in which a user can type a prompt and arguments, and then the Midjourney bot would reply with 4 generated images, combined together into a grid. Then, if the user is satisfied with one of the 4 images, they can send an 'upscale' command to the bot, to get an upscaled version of the desired image.
We randomly choose one of the 'newbies' channels, where both new and experienced users are experimenting with general domain prompts. We collect 693,528 prompts (From 23 January to 1 March 2023), together with their matching images and meta-data such as timestamps and user ids (which we anonymize).
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
We split the prompts into threads automatically, see the paper for more details.
In addition, we extract features (perplexity, sentence length, and more).
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
We fully anonymize the data by removing user names and other user-specific meta-data. If you recognize your prompts here and want to remove them, please send us an [email](mailto:[email protected]).
The Midjourney Discord is an open community that allows others to use images and prompts whenever they are posted in a public setting.
Paying users do own all assets they create, and therefore we do not include the image files in our dataset, but only links to them.
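If the generated images themselves are needed, they have to be fetched from the `url_png` links. The sketch below is only an illustration: it reuses `ds` from the loading sketch above, assumes network access and the `requests` package, and the linked files may have expired or become unavailable.

```python
# Sketch: download the 4-image grid behind one prompt's url_png link.
import requests

row = ds[0]                        # `ds` as loaded in the sketch above
resp = requests.get(row["url_png"], timeout=30)
resp.raise_for_status()
with open("grid.png", "wb") as f:
    f.write(resp.content)
```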
### Recommendations, Risks, and Limitations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
We split the prompts into threads automatically, and therefore there are some mistakes. For more about our annotation method, please see the paper.
Our manual sample did not find any offensive content in the prompts.
## Citation
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@inproceedings{don-yehiya-etal-2023-human,
title = "Human Learning by Model Feedback: The Dynamics of Iterative Prompting with Midjourney",
author = "Don-Yehiya, Shachar and
Choshen, Leshem and
Abend, Omri",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.253",
pages = "4146--4161",
abstract = "Generating images with a Text-to-Image model often requires multiple trials, where human users iteratively update their prompt based on feedback, namely the output image. Taking inspiration from cognitive work on reference games and dialogue alignment, this paper analyzes the dynamics of the user prompts along such iterations. We compile a dataset of iterative interactions of human users with Midjourney. Our analysis then reveals that prompts predictably converge toward specific traits along these iterations. We further study whether this convergence is due to human users, realizing they missed important details, or due to adaptation to the model{'}s {``}preferences{''}, producing better images for a specific language style. We show initial evidence that both possibilities are at play. The possibility that users adapt to the model{'}s preference raises concerns about reusing user data for further training. The prompts may be biased towards the preferences of a specific model, rather than align with human intentions and natural manner of expression.",
}
```
| shachardon/midjourney-threads | task_categories:text-to-image, size_categories:100K<n<1M, language:en, region:us | created 2023-11-27T11:22:44+00:00 | last modified 2023-12-10T11:26:11+00:00 |
# NoReC: The Norwegian Review Corpus
This is the official repository for the Norwegian Review Corpus (NoReC), created for the purpose of training and evaluating models for document-level sentiment analysis.
More than 43,000 full-text reviews have been collected from major Norwegian news sources and cover a range of different domains, including literature, movies, video games, restaurants, music and theater, in addition to product reviews across a range of categories. Each review is labeled with a manually assigned score of 1–6, as provided by the rating of the original author. The accompanying [paper](http://www.lrec-conf.org/proceedings/lrec2018/pdf/851.pdf) by Velldal et al. at LREC 2018 describes the (initial release of the) data in more detail.
## Dataset details
- **The columns are:**
`id, split, rating, category, day, month, year, excerpt, language, source, authors, title, url, text` where basic usage has `text` as the input and `rating` as the output.
- **Funding:**
NoReC was created as part of the [SANT](https://www.mn.uio.no/ifi/english/research/projects/sant/) project (Sentiment Analysis for Norwegian Text),
a collaboration between the Language Technology Group (LTG) at the Department of Informatics at the University of Oslo, the Norwegian Broadcasting Corporation (NRK), Schibsted Media Group and Aller Media.
- **Language(s):** Norwegian Bokmål (nb) and Norwegian Nynorsk (nn)
- **License:** [See the license for the NoReC corpus](https://github.com/ltgoslo/norec), copied here for convenience:
The data is distributed under a Creative Commons Attribution-NonCommercial licence (CC BY-NC 4.0), access the full license text here: https://creativecommons.org/licenses/by-nc/4.0/
The licence is motivated by the need to block the possibility of third parties redistributing the original reviews for commercial purposes.
Note that **machine learned models**, extracted **lexicons**, **embeddings**, and similar resources that are created on the basis of NoReC are not considered to contain the original data and so **can be freely used also for commercial purposes** despite the non-commercial condition.
- **Dataset Sources**
This version of the corpus comprises 43,436 review texts extracted from eight different news sources: Dagbladet, VG, Aftenposten, Bergens Tidende, Fædrelandsvennen, Stavanger Aftenblad, DinSide.no and P3.no.
<!-- Provide the basic links for the dataset. -->
- **Repository:** https://github.com/ltgoslo/norec
- **Paper :** [The accompanying paper by Velldal et al. at LREC 2018](http://www.lrec-conf.org/proceedings/lrec2018/pdf/851.pdf) describes the (initial release of the) data in more detail.
## Uses
The dataset is intended for document-level sentiment analysis, to learn to predict the `rating` from the `text`. The field `category` can be considered the "domain" of each text. By filtering in and out `category` values, one may inspect cross-domain performance of a model. The other fields contain metadata preserved from the original version of the dataset, and may be used for further filtering and analyses.
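A minimal sketch of this kind of category filtering, assuming the default config and the column names listed above (holding out the "music" category is just an example choice):

```python
# Minimal sketch: hold out one category to probe cross-domain performance.
# Split and column names follow the card; "music" as the held-out domain is
# just an example.
from datasets import load_dataset

norec = load_dataset("ltg/norec")

held_out = "music"
train = norec["train"].filter(lambda ex: ex["category"] != held_out)
test_in_domain = norec["test"].filter(lambda ex: ex["category"] != held_out)
test_out_domain = norec["test"].filter(lambda ex: ex["category"] == held_out)

print(len(train), len(test_in_domain), len(test_out_domain))
print(test_out_domain[0]["rating"], test_out_domain[0]["text"][:100])
```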
## Source Data
This _2nd release, v.2.1_ of the corpus comprises 43,436 review texts extracted from eight different news sources: Dagbladet (db), VG (vg), Aftenposten (ap), Bergens Tidende (bt), Fædrelandsvennen (fvn), Stavanger Aftenblad (sa), DinSide.no (dinside) and P3.no (p3).
In terms of publishing date the reviews mainly cover the time span 2003–2019, although it also includes a handful of reviews dating back as far as 1998.
# Some statistics
## Distribution over year and publication source
All splits combined
| year | ap | bt | db | dinside | fvn | p3 | sa | vg | Total |
|-------:|-----:|-----:|------------:|----------:|------:|-----:|-----:|-----:|--------:|
| 2003* | 0 | 4 | 0 | 143 | 0 | 25 | 0 | 286 | 458 |
| 2004 | 0 | 44 | 0 | 142 | 0 | 12 | 19 | 984 | 1201 |
| 2005 | 0 | 0 | 0 | 179 | 0 | 6 | 224 | 909 | 1318 |
| 2006 | 0 | 0 | 0 | 240 | 0 | 11 | 294 | 778 | 1323 |
| 2007 | 0 | 0 | 0 | 139 | 0 | 127 | 400 | 725 | 1391 |
| 2008 | 0 | 0 | 0 | 119 | 0 | 216 | 369 | 739 | 1443 |
| 2009 | 0 | 52 | 377 | 163 | 27 | 428 | 259 | 815 | 2121 |
| 2010 | 0 | 100 | 642 | 260 | 156 | 571 | 309 | 769 | 2807 |
| 2011 | 1 | 51 | 592 | 284 | 146 | 652 | 362 | 900 | 2988 |
| 2012 | 2 | 150 | 613 | 257 | 332 | 611 | 561 | 763 | 3289 |
| 2013 | 4 | 160 | 527 | 216 | 213 | 619 | 433 | 1058 | 3230 |
| 2014 | 39 | 291 | 501 | 236 | 357 | 546 | 387 | 1191 | 3548 |
| 2015 | 249 | 235 | 728 | 245 | 456 | 499 | 620 | 849 | 3881 |
| 2016 | 309 | 340 | 809 | 177 | 321 | 439 | 682 | 715 | 3792 |
| 2017 | 649 | 491 | 921 | 248 | 692 | 567 | 822 | 687 | 5077 |
| 2018 | 605 | 470 | 885 | 194 | 466 | 339 | 860 | 492 | 4311 |
| 2019 | 260 | 167 | 95 | 30 | 160 | 36 | 346 | 165 | 1259 |
`2003*`: Including the 31 documents 1998-2002
## Distribution over split and rating
| split | 1 | 2 | 3 | 4 | 5 | 6 | Total |
|:--------|----:|-----:|-----:|------:|------:|-----:|--------:|
| dev | 51 | 225 | 707 | 1409 | 1678 | 278 | 4348 |
| test | 27 | 242 | 706 | 1385 | 1714 | 266 | 4340 |
| train | 379 | 2287 | 6004 | 11304 | 12614 | 2161 | 34749 |
## Distribution over split and category
| split | games | literature | misc | music | products | restaurants | screen | sports | stage | Total |
|:--------|--------:|-------------:|-------:|--------:|-----------:|--------------:|---------:|---------:|--------:|--------:|
| dev | 179 | 539 | 28 | 1445 | 347 | 94 | 1569 | 15 | 132 | 4348 |
| test | 180 | 547 | 24 | 1444 | 345 | 98 | 1579 | 16 | 107 | 4340 |
| train | 1453 | 4337 | 156 | 11777 | 2771 | 745 | 12536 | 118 | 856 | 34749 |
## Citation
```
@InProceedings{VelOvrBer18,
author = {Erik Velldal and Lilja {\O}vrelid and Eivind Alexander Bergem and Cathrine Stadsnes and Samia Touileb and Fredrik J{\o}rgensen},
title = {{NoReC}: The {N}orwegian {R}eview {C}orpus},
booktitle = {Proceedings of the 11th edition of the
Language Resources and Evaluation Conference},
year = {2018},
address = {Miyazaki, Japan},
pages = {4186--4191}
}
```
## Dataset Card Authors
https://huggingface.co/ltg
| ltg/norec | task_categories:text-classification, size_categories:10K<n<100K, language:no, license:cc, region:us | created 2023-11-27T11:51:33+00:00 | last modified 2023-12-08T08:34:08+00:00 |
012ba2dd0bb749f863c8628662d2f5fba40a6878 |
Dataset that can be used to fine-tune models to extract entities in a specific format. | pgurazada1/entities-laptop | [
"task_categories:text-classification",
"license:apache-2.0",
"region:us"
] | 2023-11-27T11:51:34+00:00 | {"license": "apache-2.0", "task_categories": ["text-classification"]} | 2023-11-27T12:17:47+00:00 | [] | [] | TAGS
#task_categories-text-classification #license-apache-2.0 #region-us
|
Dataset that can be used to fine-tune models to extract entities in a specific format. | [] | [
"TAGS\n#task_categories-text-classification #license-apache-2.0 #region-us \n"
] | [
25
] | [
"passage: TAGS\n#task_categories-text-classification #license-apache-2.0 #region-us \n"
] |
6577b6826a15ef80170d09b12c2e40eb836b5b1d |
## Description
A magazine dedicated to the best latent sport competitions!
## Voice
Julian
## Model
SVD
## Prompt
A video channel showing all kinds of funny, imaginary sports.
The fictitious sports should take inspiration from various elements of current common or uncommon sports, Olympic or not (like polo, soccer, football, curling, natation etc.),
but blended into a goofy mix. For instance, sports could include vacuuming, silence keeping,
10-D chess, philosophical arguing and argumentation etc. | jbilcke-hf/ai-tube-latent-sports | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-27T12:22:40+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "Latent Sports"} | 2023-12-09T17:18:58+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## Description
A magazine dedicated to the best latent sport competitions!
## Voice
Julian
## Model
SVD
## Prompt
A video channel showing all kinds of funny, imaginary sports.
The fictitious sports should take inspiration from various elements of current common or uncommon sports, Olympic or not (like polo, soccer, football, curling, natation etc.),
but blended into a goofy mix. For instance, sports could include vacuuming, silence keeping,
10-D chess, philosophical arguing and argumentation etc. | [
"## Description\n\nA magazine dedicated to the best latent sport competitions!",
"## Voice\n\nJulian",
"## Model\n\nSVD",
"## Prompt\n\nA video channel showing all kind of funny, imaginary sports.\n\nThe fictious sports should take inspiration from various elements of current common or uncommon olympic sports or not (like polo, soccer, football, curling, natation etc),\nbut blended into a goofy mix. For instance sports could include vacuuming, silence keeping,\n10-D chess, philosophical arguing and argumentation etc."
] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## Description\n\nA magazine dedicated to the best latent sport competitions!",
"## Voice\n\nJulian",
"## Model\n\nSVD",
"## Prompt\n\nA video channel showing all kind of funny, imaginary sports.\n\nThe fictious sports should take inspiration from various elements of current common or uncommon olympic sports or not (like polo, soccer, football, curling, natation etc),\nbut blended into a goofy mix. For instance sports could include vacuuming, silence keeping,\n10-D chess, philosophical arguing and argumentation etc."
] | [
19,
14,
3,
4,
92
] | [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nA magazine dedicated to the best latent sport competitions!## Voice\n\nJulian## Model\n\nSVD## Prompt\n\nA video channel showing all kind of funny, imaginary sports.\n\nThe fictious sports should take inspiration from various elements of current common or uncommon olympic sports or not (like polo, soccer, football, curling, natation etc),\nbut blended into a goofy mix. For instance sports could include vacuuming, silence keeping,\n10-D chess, philosophical arguing and argumentation etc."
] |
2b2451787788500aa161d96928bfe5e4780078af |
## Description
I explore the past so you don't have to!
## Voice
Julian
## Model
SVD
## Prompt
A video channel managed by a renowned time-traveler called Jules Vernn (no relation to Jules Verne the writer, it's just a play on words).
The videos are funny reviews of ancient hotels, inns and the like, the ancestors of modern hotels.
The humor should be about how the expectations and standards of a modern traveler aren't met by more primitive accommodations.
The videos will feature trips to the Middle Ages, antiquity, the world wars, the American West, etc.
| jbilcke-hf/ai-tube-my-time-travel-vlog | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-27T12:23:56+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "My Time Travel Vlog"} | 2023-12-09T16:42:59+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## Description
I explore the past so you don't have to!
## Voice
Julian
## Model
SVD
## Prompt
A video channel managed by a renowned time-traveler called Jules Vernn (no relation to Jules Verne the writer, it's just a play on words).
The videos are funny reviews of ancient hotels, inns and the like, the ancestors of modern hotels.
The humor should be about how the expectations and standards of a modern traveler aren't met by more primitive accommodations.
The videos will feature trips to the Middle Ages, antiquity, the world wars, the American West, etc.
| [
"## Description\n\nI explore the past so you don't have too!",
"## Voice\n\nJulian",
"## Model\n\nSVD",
"## Prompt\n\nA video channel managed by a renowed time-traveler called Jules Vernn (no relation to Jules Verne the writer, it's just a play on the words).\n\nThe videos are funny reviews of ancient hotels, inns and such, ancestors of the modern hotels.\n\nThe humor should be about how the expectations and standards of a modern traveler aren't met by more primitive accomodations.\n\nThe video will be with trips about middle age, antiquity, world wars, the american west etc."
] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## Description\n\nI explore the past so you don't have too!",
"## Voice\n\nJulian",
"## Model\n\nSVD",
"## Prompt\n\nA video channel managed by a renowed time-traveler called Jules Vernn (no relation to Jules Verne the writer, it's just a play on the words).\n\nThe videos are funny reviews of ancient hotels, inns and such, ancestors of the modern hotels.\n\nThe humor should be about how the expectations and standards of a modern traveler aren't met by more primitive accomodations.\n\nThe video will be with trips about middle age, antiquity, world wars, the american west etc."
] | [
19,
14,
3,
4,
119
] | [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nI explore the past so you don't have too!## Voice\n\nJulian## Model\n\nSVD## Prompt\n\nA video channel managed by a renowed time-traveler called Jules Vernn (no relation to Jules Verne the writer, it's just a play on the words).\n\nThe videos are funny reviews of ancient hotels, inns and such, ancestors of the modern hotels.\n\nThe humor should be about how the expectations and standards of a modern traveler aren't met by more primitive accomodations.\n\nThe video will be with trips about middle age, antiquity, world wars, the american west etc."
] |
97cf26787e3c6f59fd85140bafe594c0ed01090f |
## Description
Proud robot and cat dad, I make educative videos about science 👨🔬
## Tags
- Education
## Voice
Julian
## Prompt
A video channel managed by a robot called Archimedes.
The videos are educational, explaining how various scientific topics and phenomena work.
It also produces some tutorials about various things, from programming to cooking.
It talks about engineering, construction, architecture, chemistry, computers, radioactivity, energy production etc. (basically anything).
The videos should be short and entertaining. | jbilcke-hf/ai-tube-the-artificial-tutor | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-11-27T12:25:14+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "The Artificial Tutor"} | 2023-12-07T19:07:46+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## Description
Proud robot and cat dad, I make educative videos about science
## Tags
- Education
## Voice
Julian
## Prompt
A video channel managed by a robot called Archimedes.
The videos are educational, explaining how various scientific topics and phenomena work.
It also produces some tutorials about various things, from programming to cooking.
It talks about engineering, construction, architecture, chemistry, computers, radioactivity, energy production etc. (basically anything).
The videos should be short and entertaining. | [
"## Description\n\nProud robot and cat dad, I make educative videos about science ",
"## Tags\n\n- Education",
"## Voice\n\nJulian",
"## Prompt\n\nA video channel managed by a robot called Archimedes.\n\nThe videos are educative videos, explaining how various scientific topics, phenomena work.\nIt also produces some tutorials about various things, from programming to cooking.\n\nIt talks about engineering, constructions, architecture, chemistry, computers, radiactivity, energy production etc (basically anything).\n\nThe videos should be short and entertaining."
] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## Description\n\nProud robot and cat dad, I make educative videos about science ",
"## Tags\n\n- Education",
"## Voice\n\nJulian",
"## Prompt\n\nA video channel managed by a robot called Archimedes.\n\nThe videos are educative videos, explaining how various scientific topics, phenomena work.\nIt also produces some tutorials about various things, from programming to cooking.\n\nIt talks about engineering, constructions, architecture, chemistry, computers, radiactivity, energy production etc (basically anything).\n\nThe videos should be short and entertaining."
] | [
19,
16,
4,
3,
93
] | [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nProud robot and cat dad, I make educative videos about science ## Tags\n\n- Education## Voice\n\nJulian## Prompt\n\nA video channel managed by a robot called Archimedes.\n\nThe videos are educative videos, explaining how various scientific topics, phenomena work.\nIt also produces some tutorials about various things, from programming to cooking.\n\nIt talks about engineering, constructions, architecture, chemistry, computers, radiactivity, energy production etc (basically anything).\n\nThe videos should be short and entertaining."
] |
ae7f4339425d1c6adc438f40438542fe08a301a4 | # Fluid cube dataset
The Fluid Cube dataset contains 100 fluid dynamics simulations of a
fluid block flowing inside a unit cube domain. For each simulation,
the fluid block is set with different initial shape, position,
velocity, and fluid viscosity.
For more information on how the dataset was generated, look at [this
blog post](https://inductiva.ai/blog/article/fluid-cube-dataset). The
dataset is the same as the one in the blog post but wrapped in
HuggingFace's `datasets` library.
# Versions
1. `1000_simulations`: The exact same dataset as presented in [this
blog post](https://inductiva.ai/blog/article/fluid-cube-dataset).
2. `10_simulations`: A subset of the previous `1000_simulations`
dataset. Use it for quick testing.
# Usage
To use the dataset just use:
```python
dataset = datasets.load_dataset('inductiva/fluid_cube', version='10_simulations', split='train')
```
The dataset has several columns:
```python
['block_position', 'block_dimensions', 'fluid_volume', 'block_velocity',
'block_velocity_magnitude', 'kinematic_viscosity', 'density', 'tank_dimensions',
'time_max', 'time_step', 'particle_radius', 'number_of_fluid_particles', 'simulation_time_steps']
```
The most important of these is `simulation_time_steps`, a list whose
length equals the number of time steps in the simulation. Each element
of the list is an array of shape `(num_particles, 6)` in which row `i`
contains first the velocity of particle `i` along the x, y and z axes
and then its position along the x, y and z axes.
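As a sketch of how one might unpack this structure (it reuses the loading call from the snippet above; the variable names are only illustrative):

```python
import numpy as np
import datasets

# Same call as in the usage snippet above.
dataset = datasets.load_dataset('inductiva/fluid_cube', version='10_simulations', split='train')

sim = dataset[0]                          # one simulation (a dict with the columns listed below)
steps = sim['simulation_time_steps']      # one entry per time step
frame = np.asarray(steps[-1])             # last time step, shape (num_particles, 6)

velocities = frame[:, :3]                 # vx, vy, vz of every particle
positions = frame[:, 3:]                  # x, y, z of every particle

print(sim['number_of_fluid_particles'], positions.shape)
```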
# Dataset columns
* `block_position`: The initial position of the block;
* `block_dimensions`: The dimensions of the block on each axis;
* `fluid_volume`: Volume of the fluid;
* `block_velocity`: The initial velocity of the block;
* `block_velocity_magnitude`: Initial velocity magnitude;
* `kinematic_viscosity`: Viscosity of the fluid;
* `density`: Fluid density;
* `tank_dimensions`: The dimensions of the tank where the fluid is contained;
* `time_max`: Time, in seconds, of the simulation;
* `time_step`: Elapsed time between consecutive time steps in the simulation;
* `particle_radius`: Radius of the particles;
* `number_of_fluid_particles`: Number of particles;
| inductiva/fluid_cube | [
"size_categories:n<1K",
"region:us"
] | 2023-11-27T12:41:48+00:00 | {"size_categories": ["n<1K"]} | 2023-11-27T16:04:28+00:00 | [] | [] | TAGS
#size_categories-n<1K #region-us
| # Fluid cube dataset
The Fluid Cube dataset contains 100 fluid dynamics simulations of a
fluid block flowing inside a unit cube domain. For each simulation,
the fluid block is set with different initial shape, position,
velocity, and fluid viscosity.
For more information on how the dataset was generated, look at this
blog post. The
dataset is the same as the one in the blog post but wrapped in
HuggingFace's 'datasets' library.
# Versions
1. '1000_simulations': The exact same dataset as presented in this
blog post.
2. '10_simulations': A subset of the previous '1000_simulations'
 dataset. Use it for quick testing.
# Usage
To use the dataset just use:
The dataset has several columns:
The most important of these is 'simulation_time_steps', a list whose
length equals the number of time steps in the simulation. Each element
of the list is an array of shape '(num_particles, 6)' in which row 'i'
contains first the velocity of particle 'i' along the x, y and z axes
and then its position along the x, y and z axes.
# Dataset columns
* 'block_position': The initial position of the block;
* 'block_dimensions': The dimensions of the block on each axis;
* 'fluid_volume': Volume of the fluid;
* 'block_velocity': The initial velocity of the block;
* 'block_velocity_magnitude': Initial velocity magnitude;
* 'kinematic_viscosity': Viscosity of the fluid;
* 'density': Fluid density;
* 'tank_dimensions': The dimensions of the tank where the fluid is contained;
* 'time_max': Time, in seconds, of the simulation;
* 'time_step': Elapsed time between consecutive time steps in the simulation;
* 'particle_radius': Radius of the particles;
* 'number_of_fluid_particles': Number of particles;
| [
"# Fluid cube dataset\n\nThe Fluid Cube dataset contains 100 fluid dynamics simulations of a\nfluid block flowing inside a unit cube domain. For each simulation,\nthe fluid block is set with different initial shape, position,\nvelocity, and fluid viscosity.\n\nFor more information on how the dataset was generated look a this\nblog post. The\ndataset is the same as the one in the blog post but wrapped in\nHuggingFace's 'datasets' library.",
"# Versions\n\n1. '1000_simulations': The exact same dataset as presented in this\n blog post.\n2. '10_simulations': A subset of the previous the '1000_simulations'\n dataset. Use for quick testing.",
"# Usage\n\nTo use the dataset just use:\n\n\n\nThe dataset has several columns:\n\n\n\nThe most important of which is the 'simulation_time_steps' which is a\nlist of length equal to the number of time steps in the\nsimulation. Each element on the list is an array with shape\n'(num_particles, 6)' that, on each row 'i', has first the\nvelocity in the x, y and z axis and secondly the position of the particle 'i' in\nthe x, y and z axis.",
"# Dataset columns\n\n* 'block_position': The Initial position of the block;\n* 'block_dimensions': The dimensions of the block on each axis;\n* 'fluid_volume': Volume of the fluid;\n* 'block_velocity': The initial velocity of the block;\n* 'block_velocity_magnitude': Initial velocity magnitude;\n* 'kinematic_viscosity': Viscosity of the fluid;\n* 'density': Fluid density;\n* 'tank_dimensions': The dimensions of the tank where the fluid is contained;\n* 'time_max': Time, in seconds, of the simulation;\n* 'time_step': Elapsed time between each time steps in the simulation;\n* 'particle_radius': Radius of the particles;\n* 'number_of_fluid_particles': Number of particles;"
] | [
"TAGS\n#size_categories-n<1K #region-us \n",
"# Fluid cube dataset\n\nThe Fluid Cube dataset contains 100 fluid dynamics simulations of a\nfluid block flowing inside a unit cube domain. For each simulation,\nthe fluid block is set with different initial shape, position,\nvelocity, and fluid viscosity.\n\nFor more information on how the dataset was generated look a this\nblog post. The\ndataset is the same as the one in the blog post but wrapped in\nHuggingFace's 'datasets' library.",
"# Versions\n\n1. '1000_simulations': The exact same dataset as presented in this\n blog post.\n2. '10_simulations': A subset of the previous the '1000_simulations'\n dataset. Use for quick testing.",
"# Usage\n\nTo use the dataset just use:\n\n\n\nThe dataset has several columns:\n\n\n\nThe most important of which is the 'simulation_time_steps' which is a\nlist of length equal to the number of time steps in the\nsimulation. Each element on the list is an array with shape\n'(num_particles, 6)' that, on each row 'i', has first the\nvelocity in the x, y and z axis and secondly the position of the particle 'i' in\nthe x, y and z axis.",
"# Dataset columns\n\n* 'block_position': The Initial position of the block;\n* 'block_dimensions': The dimensions of the block on each axis;\n* 'fluid_volume': Volume of the fluid;\n* 'block_velocity': The initial velocity of the block;\n* 'block_velocity_magnitude': Initial velocity magnitude;\n* 'kinematic_viscosity': Viscosity of the fluid;\n* 'density': Fluid density;\n* 'tank_dimensions': The dimensions of the tank where the fluid is contained;\n* 'time_max': Time, in seconds, of the simulation;\n* 'time_step': Elapsed time between each time steps in the simulation;\n* 'particle_radius': Radius of the particles;\n* 'number_of_fluid_particles': Number of particles;"
] | [
16,
109,
53,
120,
213
] | [
"passage: TAGS\n#size_categories-n<1K #region-us \n# Fluid cube dataset\n\nThe Fluid Cube dataset contains 100 fluid dynamics simulations of a\nfluid block flowing inside a unit cube domain. For each simulation,\nthe fluid block is set with different initial shape, position,\nvelocity, and fluid viscosity.\n\nFor more information on how the dataset was generated look a this\nblog post. The\ndataset is the same as the one in the blog post but wrapped in\nHuggingFace's 'datasets' library.# Versions\n\n1. '1000_simulations': The exact same dataset as presented in this\n blog post.\n2. '10_simulations': A subset of the previous the '1000_simulations'\n dataset. Use for quick testing.# Usage\n\nTo use the dataset just use:\n\n\n\nThe dataset has several columns:\n\n\n\nThe most important of which is the 'simulation_time_steps' which is a\nlist of length equal to the number of time steps in the\nsimulation. Each element on the list is an array with shape\n'(num_particles, 6)' that, on each row 'i', has first the\nvelocity in the x, y and z axis and secondly the position of the particle 'i' in\nthe x, y and z axis."
] |
be55a6b4c410c508816d3cf51af9fd09ddfaf335 |
# Bangumi Image Base of Gin No Saji
This is the image base of bangumi Gin no Saji. We detected 27 characters and 3590 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; they may actually contain noisy samples.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 18 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 700 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 181 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 97 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 35 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 44 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 1308 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 64 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 22 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 56 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 14 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 8 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 28 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 81 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 41 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 48 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 23 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 12 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 31 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 11 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 80 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 58 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 65 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 10 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 490 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 8 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| noise | 57 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/ginnosaji | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
] | 2023-11-27T13:00:54+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-27T15:00:15+00:00 | [] | [] | TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Gin No Saji
=================================
This is the image base of bangumi Gin no Saji. We detected 27 characters and 3590 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; they may actually contain noisy samples. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| [] | [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] | [
25
] | [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
] |
6e158fe4b7799ca25558fe6a04b339e9fc2118b7 | # Dataset Card for "big_literalist_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tr416/big_literalist_ds | [
"region:us"
] | 2023-11-27T13:09:19+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1111043, "num_examples": 892}], "download_size": 561455, "dataset_size": 1111043}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T13:09:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "big_literalist_ds"
More Information needed | [
"# Dataset Card for \"big_literalist_ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"big_literalist_ds\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"big_literalist_ds\"\n\nMore Information needed"
] |
074b162037b2e100f18c3b5b5711c290425e4f6b | # Dataset Card for "2k_mixed_bc_literal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tr416/2k_mixed_bc_literal | [
"region:us"
] | 2023-11-27T13:13:56+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2086723, "num_examples": 1860}], "download_size": 1097444, "dataset_size": 2086723}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T13:14:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "2k_mixed_bc_literal"
More Information needed | [
"# Dataset Card for \"2k_mixed_bc_literal\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"2k_mixed_bc_literal\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"2k_mixed_bc_literal\"\n\nMore Information needed"
] |
d4a744ac742ec33d5436376d2b6cbd79d769a3d0 | ---
license: other
license_name: ntt
license_link: https://zenodo.org/records/3351307/files/LICENSE.pdf?download=1
| aegean-ai/engine-anomaly-detection-dataset | [
"region:us"
] | 2023-11-27T13:17:52+00:00 | {} | 2023-11-28T19:32:08+00:00 | [] | [] | TAGS
#region-us
| ---
license: other
license_name: ntt
license_link: URL
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
f2d1fd1397a29d73134a77031b5e8b07e6cbc4a5 | # Dataset Card for "contracts_v7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | paul-w-qs/contracts_v7 | [
"region:us"
] | 2023-11-27T13:21:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "TABLE_STYLE", "dtype": "string"}, {"name": "JSON_LABEL", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 283153288.518, "num_examples": 4194}], "download_size": 279480901, "dataset_size": 283153288.518}} | 2023-11-27T13:22:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "contracts_v7"
More Information needed | [
"# Dataset Card for \"contracts_v7\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"contracts_v7\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"contracts_v7\"\n\nMore Information needed"
] |
b0a5c9a9d1ba466fbab30c78435fc86f7abad34f | # Dataset Card for "end2end_textclassification"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | plaguss/end2end_textclassification | [
"region:us"
] | 2023-11-27T13:33:48+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string", "id": "field"}, {"name": "label", "list": [{"name": "user_id", "dtype": "string", "id": "question"}, {"name": "value", "dtype": "string", "id": "suggestion"}, {"name": "status", "dtype": "string", "id": "question"}]}, {"name": "label-suggestion", "dtype": "string", "id": "suggestion"}, {"name": "label-suggestion-metadata", "struct": [{"name": "type", "dtype": "string", "id": "suggestion-metadata"}, {"name": "score", "dtype": "float32", "id": "suggestion-metadata"}, {"name": "agent", "dtype": "string", "id": "suggestion-metadata"}]}, {"name": "external_id", "dtype": "string", "id": "external_id"}, {"name": "metadata", "dtype": "string", "id": "metadata"}], "splits": [{"name": "train", "num_bytes": 343408, "num_examples": 1000}], "download_size": 181964, "dataset_size": 343408}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T13:33:58+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "end2end_textclassification"
More Information needed | [
"# Dataset Card for \"end2end_textclassification\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"end2end_textclassification\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"end2end_textclassification\"\n\nMore Information needed"
] |
bdf2ad027ef24365b2cfc751454974f0dd394455 | # Instruct style Text Image dataset
The following dataset was crawled and filtered from COYO and LAION, and captioned with LLaVA 13B | AlexWortega/InstructCaptions | [
"region:us"
] | 2023-11-27T13:41:24+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48943069439.6, "num_examples": 33650}], "download_size": 35099473289, "dataset_size": 48943069439.6}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T14:34:22+00:00 | [] | [] | TAGS
#region-us
| # Instruct style Text Image dataset
The following dataset was crawled and filtered from COYO and LAION, and captioned with LLaVA 13B | [
"# Instruct stlye Text Image dataset\n\nFollowing dataset was crawled and filetered from COYO, Laion and captioned with LLAVA 13b"
] | [
"TAGS\n#region-us \n",
"# Instruct stlye Text Image dataset\n\nFollowing dataset was crawled and filetered from COYO, Laion and captioned with LLAVA 13b"
] | [
6,
35
] | [
"passage: TAGS\n#region-us \n# Instruct stlye Text Image dataset\n\nFollowing dataset was crawled and filetered from COYO, Laion and captioned with LLAVA 13b"
] |
cdb04ab830d478d85e46a1881e83d117ebd80f71 | # Dataset Card for "squad_qa_baseline_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_first_permute | [
"region:us"
] | 2023-11-27T13:48:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 0, "dataset_size": 2832124.0}} | 2023-11-27T15:15:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_first_permute\"\n\nMore Information needed"
] |
5ce50e76216a6e7d8e26dff8d795556f029c6b3e | # Dataset Card for "squad_qa_baseline_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_last_permute | [
"region:us"
] | 2023-11-27T13:48:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 0, "dataset_size": 2832124.0}} | 2023-11-27T15:15:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_last_permute\"\n\nMore Information needed"
] |
a186378ab96f413c712561a9fd912eb2a7a1bb23 | # Dataset Card for "squad_qa_baseline_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_no_permute | [
"region:us"
] | 2023-11-27T13:49:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 0, "dataset_size": 2832124.0}} | 2023-11-27T15:16:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_no_permute\"\n\nMore Information needed"
] |
267d35a0356f2c64af402a31154a1fd281dd86a8 | # Dataset Card for "squad_qa_context_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_first_permute | [
"region:us"
] | 2023-11-27T13:49:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 0, "dataset_size": 4921059.0}} | 2023-11-27T15:16:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_first_permute\"\n\nMore Information needed"
] |
e4b6b75a3ac3cccfa963ecd8c3fd39aa33cd4e25 | # Dataset Card for "nordjylland-news-summarization-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ScandEval/nordjylland-news-summarization-mini | [
"region:us"
] | 2023-11-27T13:50:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "val", "path": "data/val-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "target_text", "dtype": "string"}, {"name": "text_len", "dtype": "int64"}, {"name": "summary_len", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1588698, "num_examples": 1024}, {"name": "val", "num_bytes": 392467, "num_examples": 256}, {"name": "test", "num_bytes": 3268194, "num_examples": 2048}], "download_size": 3271757, "dataset_size": 5249359}} | 2023-11-27T13:50:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "nordjylland-news-summarization-mini"
More Information needed | [
"# Dataset Card for \"nordjylland-news-summarization-mini\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"nordjylland-news-summarization-mini\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"nordjylland-news-summarization-mini\"\n\nMore Information needed"
] |
ca00425cbaf0cc6195bc7381fea4b3681979cc5d | # Dataset Card for "squad_qa_context_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_last_permute | [
"region:us"
] | 2023-11-27T13:50:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 0, "dataset_size": 4921059.0}} | 2023-11-27T15:16:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_last_permute\"\n\nMore Information needed"
] |
9e8b325d1b5e3c439024302605dea4b39e8eb81c | # Dataset Card for "squad_qa_context_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_no_permute | [
"region:us"
] | 2023-11-27T13:51:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 0, "dataset_size": 4921059.0}} | 2023-11-27T15:16:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_no_permute\"\n\nMore Information needed"
] |
1f4f4320cec5e2db2635ba3ad68cba387f6f9617 | # Dataset Card for "autotrain-data-TopicModeling"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gbwsolutions/autotrain-data-TopicModeling | [
"region:us"
] | 2023-11-27T14:49:20+00:00 | {"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}, {"name": "autotrain_label", "dtype": {"class_label": {"names": {"0": "Accuracy", "1": "Cleanliness", "2": "Quality", "3": "Service", "4": "Speed", "5": "Staff", "6": "Technology"}}}}], "splits": [{"name": "train", "num_bytes": 4913, "num_examples": 40}, {"name": "validation", "num_bytes": 1862, "num_examples": 10}], "download_size": 11484, "dataset_size": 6775}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-28T07:06:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "autotrain-data-TopicModeling"
More Information needed | [
"# Dataset Card for \"autotrain-data-TopicModeling\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-TopicModeling\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-TopicModeling\"\n\nMore Information needed"
] |
ea9163f1b191295301290eaf19f87f2a43550c73 | # Dataset Card for "squad_qa_title_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_first_permute | [
"region:us"
] | 2023-11-27T14:51:31+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7724566.286747957, "num_examples": 4778}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1331051, "dataset_size": 8077714.286747957}} | 2023-11-27T15:08:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_first_permute\"\n\nMore Information needed"
] |
1deb34e21aa09b4f27cd5bd04fd605755617421a | # Dataset Card for "squad_qa_title_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:52:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7724566.286747957, "num_examples": 4778}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1323670, "dataset_size": 8077714.286747957}} | 2023-11-27T15:09:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_last_permute\"\n\nMore Information needed"
] |
51feaba0feae233b218fc211db3cc9ecf9bd803e | # Dataset Card for "squad_qa_title_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:52:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7724566.286747957, "num_examples": 4778}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1180079, "dataset_size": 8077714.286747957}} | 2023-11-27T15:09:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_no_permute\"\n\nMore Information needed"
] |
ac3cd42cc15c59ef3da93b000347dfa535973fe5 | # Dataset Card for "squad_qa_wrong_title_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_first_permute | [
"region:us"
] | 2023-11-27T14:53:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7855838.683639287, "num_examples": 4778}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1370780, "dataset_size": 8217702.683639287}} | 2023-11-27T15:09:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_first_permute\"\n\nMore Information needed"
] |
e3709747a6ccb16a2c0e5010b330f8761531a8b8 | # Dataset Card for "squad_qa_wrong_title_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:53:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7855838.683639287, "num_examples": 4778}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1363399, "dataset_size": 8217702.683639287}} | 2023-11-27T15:10:12+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_last_permute\"\n\nMore Information needed"
] |
f9ad24689ad70d398973bf700e5351d387433654 | # Dataset Card for "squad_qa_wrong_title_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:54:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7855838.683639287, "num_examples": 4778}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1219794, "dataset_size": 8217702.683639287}} | 2023-11-27T15:10:35+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_no_permute\"\n\nMore Information needed"
] |
58a8da9e189cb19b0af1c8b155d6a45feb540ec3 | # Dataset Card for "squad_qa_num_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_first_permute | [
"region:us"
] | 2023-11-27T14:54:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7515576.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1306567, "dataset_size": 7858760.963687525}} | 2023-11-27T15:10:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_first_permute\"\n\nMore Information needed"
] |
247eca1a8d5828b41bc2cdce0315ca43f4552ae7 | # Dataset Card for "squad_qa_num_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:54:55+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7515576.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1299186, "dataset_size": 7858760.963687525}} | 2023-11-27T15:11:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_last_permute\"\n\nMore Information needed"
] |
c9e7043d9799b2cfb034f7562010e3082fda8bb1 | # Dataset Card for "squad_qa_num_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:55:22+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7515576.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1155593, "dataset_size": 7858760.963687525}} | 2023-11-27T15:11:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_no_permute\"\n\nMore Information needed"
] |
25abcbdb5797eee60cd4b68995d97fb23afd25db | # Dataset Card for "squad_qa_wrong_num_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_first_permute | [
"region:us"
] | 2023-11-27T14:55:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7568134.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1325370, "dataset_size": 7914618.963687525}} | 2023-11-27T15:12:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_first_permute\"\n\nMore Information needed"
] |
4c5c7b606412e3d60be45408393b6fa142c340f8 | # Dataset Card for "squad_qa_wrong_num_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:56:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7568134.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1317989, "dataset_size": 7914618.963687525}} | 2023-11-27T15:12:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_last_permute\"\n\nMore Information needed"
] |
1df7b3818b8762eda6cbdb9f75de9e90605d0540 | # Dataset Card for "squad_qa_wrong_num_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:56:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7568134.963687525, "num_examples": 4778}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1174396, "dataset_size": 7914618.963687525}} | 2023-11-27T15:13:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_no_permute\"\n\nMore Information needed"
] |
12a6cb9c147f972816e67390ba04281c1f049422 | # Dataset Card for "squad_qa_rare_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:57:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7567932.652552593, "num_examples": 4778}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1313529, "dataset_size": 7913258.652552593}} | 2023-11-27T15:13:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_last_permute\"\n\nMore Information needed"
] |
f21b2e5d277677b243b28d71d67ea05c936849ca | # Dataset Card for "squad_qa_rare_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:57:57+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7567932.652552593, "num_examples": 4778}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1169936, "dataset_size": 7913258.652552593}} | 2023-11-27T15:14:10+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_no_permute\"\n\nMore Information needed"
] |
c5f0ed77aed11c346e15bbdb1693721db348511c | # Dataset Card for "squad_qa_wrong_rare_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_first_permute | [
"region:us"
] | 2023-11-27T14:58:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7639879.229800348, "num_examples": 4778}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1351661, "dataset_size": 7989646.229800348}} | 2023-11-27T15:14:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_first_permute\"\n\nMore Information needed"
] |
b5d8d528f2f95fb4f79510e7365b59d41af9cf38 | # Dataset Card for "squad_qa_wrong_rare_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_last_permute | [
"region:us"
] | 2023-11-27T14:58:51+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7639879.229800348, "num_examples": 4778}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1344270, "dataset_size": 7989646.229800348}} | 2023-11-27T15:14:54+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_last_permute\"\n\nMore Information needed"
] |
7b0da209ea32f92e335574e9775b0d2257026d9d | # Dataset Card for "squad_qa_wrong_rare_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_no_permute | [
"region:us"
] | 2023-11-27T14:59:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7639879.229800348, "num_examples": 4778}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1200683, "dataset_size": 7989646.229800348}} | 2023-11-27T15:15:17+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_no_permute\"\n\nMore Information needed"
] |
cab5838613bac9e1a1f279038c37797409bf9988 | # Dataset Card for "squad_qa_no_id_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_first_permute | [
"region:us"
] | 2023-11-27T15:01:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7671973.074366877, "num_examples": 4778}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1317836, "dataset_size": 8014739.074366877}} | 2023-11-27T15:17:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_first_permute\"\n\nMore Information needed"
] |
824cb41b293e45d1ad0079e08936622d20c05d49 | # Dataset Card for "squad_qa_no_id_v5_full_last_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_last_permute | [
"region:us"
] | 2023-11-27T15:02:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7671973.074366877, "num_examples": 4778}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1310455, "dataset_size": 8014739.074366877}} | 2023-11-27T15:17:44+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_last_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_last_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_last_permute\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_last_permute\"\n\nMore Information needed"
] |
dafe8002cf80bfd812b08d07b32dea3e5843d082 | # Dataset Card for "squad_qa_no_id_v5_full_no_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_no_permute | [
"region:us"
] | 2023-11-27T15:02:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7671973.074366877, "num_examples": 4778}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1166862, "dataset_size": 8014739.074366877}} | 2023-11-27T15:18:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_no_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_no_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_no_permute\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_no_permute\"\n\nMore Information needed"
] |
249f2dd86a66a13be20c8b2252d395c81e5bf249 | # Dataset Card for "squad_qa_rare_v5_full_first_permute"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_first_permute | [
"region:us"
] | 2023-11-27T15:13:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7567932.652552593, "num_examples": 4778}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1406243, "dataset_size": 7913258.652552593}} | 2023-11-27T15:13:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_first_permute"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_first_permute\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_first_permute\"\n\nMore Information needed"
] | [
6,
28
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_first_permute\"\n\nMore Information needed"
] |
1d4bcb87775ff906c19c01a42a3190cd43ca3742 |
# Dataset Summary
`WURA` is a document-level dataset covering 16 African languages and 4 high-resource languages widely spoken in Africa (English, French, Arabic and Portuguese). This dataset was created by auditing mC4 and crawling additional verified news sources. It was first used to train AfriTeVa V2.
# Dataset Structure
```
>>> from datasets import load_dataset
```
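As a quick sketch, the default document-level load looks like this (the `yor` config name is taken from the passage-level example below; other language configs follow the same language-code pattern listed in the tags):
```
>>> data = load_dataset("castorini/wura", "yor")
```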
Although the document-level dataset is loaded by default, you may optionally load a passage-level dataset as follows:
```
>>> data = load_dataset("castorini/wura", "yor", level="passage", verification_mode="no_checks")
```
Note that we must pass `verification_mode="no_checks"` to prevent HF from verifying checksums against the document-level checksum info.
# Citation
```
@inproceedings{oladipo-etal-2023-better,
title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages",
author = "Oladipo, Akintunde and
Adeyemi, Mofetoluwa and
Ahia, Orevaoghene and
Owodunni, Abraham and
Ogundepo, Odunayo and
Adelani, David and
Lin, Jimmy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.11",
pages = "158--168",
abstract = "In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at https://github.com/castorini/AfriTeVa-keji.",
}
``` | castorini/wura | [
"task_categories:text-generation",
"task_ids:language-modeling",
"language:afr",
"language:amh",
"language:arz",
"language:eng",
"language:fra",
"language:hau",
"language:ibo",
"language:kin",
"language:mlg",
"language:nya",
"language:orm",
"language:por",
"language:sna",
"language:som",
"language:sot",
"language:swa",
"language:tir",
"language:xho",
"language:yor",
"language:zul",
"license:apache-2.0",
"region:us"
] | 2023-11-27T15:26:39+00:00 | {"language": ["afr", "amh", "arz", "eng", "fra", "hau", "ibo", "kin", "mlg", "nya", "orm", "por", "sna", "som", "sot", "swa", "tir", "xho", "yor", "zul"], "license": "apache-2.0", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "mutilinguality": ["multilingual"], "viewer": true, "dataset_info": [{"config_name": "afr", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4549624636, "num_examples": 1042812}, {"name": "validation", "num_bytes": 504320368, "num_examples": 115868}], "download_size": 5124049817, "dataset_size": 5053945004}, {"config_name": "amh", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1115662532, "num_examples": 135863}, {"name": "validation", "num_bytes": 123858179, "num_examples": 15095}], "download_size": 1248728162, "dataset_size": 1239520711}, {"config_name": "arz", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 621073489, "num_examples": 1455662}, {"name": "validation", "num_bytes": 69342976, "num_examples": 161740}], "download_size": 753246622, "dataset_size": 690416465}, {"config_name": "eng", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3795223480, "num_examples": 1378555}, {"name": "validation", "num_bytes": 423622310, "num_examples": 153172}], "download_size": 4279723559, "dataset_size": 4218845790}, {"config_name": "fra", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3340740638, "num_examples": 1443177}, {"name": "validation", "num_bytes": 368983958, "num_examples": 160352}], "download_size": 3796280757, "dataset_size": 3709724596}, {"config_name": "hau", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 909342448, "num_examples": 359881}, {"name": "validation", "num_bytes": 101151882, "num_examples": 39986}], "download_size": 1027800797, "dataset_size": 1010494330}, {"config_name": "ibo", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 193493918, "num_examples": 51386}, {"name": "validation", "num_bytes": 22265232, "num_examples": 5709}], "download_size": 219266571, "dataset_size": 215759150}, {"config_name": "kin", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": 
"content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 208582172, "num_examples": 97064}, {"name": "validation", "num_bytes": 10662209, "num_examples": 5831}], "download_size": 222938591, "dataset_size": 219244381}, {"config_name": "mlg", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 561868602, "num_examples": 216210}, {"name": "validation", "num_bytes": 62280728, "num_examples": 24023}], "download_size": 635783521, "dataset_size": 624149330}, {"config_name": "nya", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 260737793, "num_examples": 39647}, {"name": "validation", "num_bytes": 29199589, "num_examples": 4405}], "download_size": 293880333, "dataset_size": 289937382}, {"config_name": "orm", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51725718, "num_examples": 20169}, {"name": "validation", "num_bytes": 5500617, "num_examples": 2241}], "download_size": 58001407, "dataset_size": 57226335}, {"config_name": "por", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2191644027, "num_examples": 1089199}, {"name": "validation", "num_bytes": 245338209, "num_examples": 121022}], "download_size": 2498665351, "dataset_size": 2436982236}, {"config_name": "sna", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 225393219, "num_examples": 60986}, {"name": "validation", "num_bytes": 25595688, "num_examples": 6776}], "download_size": 254964089, "dataset_size": 250988907}, {"config_name": "som", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2165910731, "num_examples": 976484}, {"name": "validation", "num_bytes": 241175779, "num_examples": 108498}], "download_size": 2451878912, "dataset_size": 2407086510}, {"config_name": "sot", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 199386007, "num_examples": 38361}, {"name": "validation", "num_bytes": 22324957, "num_examples": 4262}], "download_size": 224556522, "dataset_size": 221710964}, {"config_name": "swa", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, 
{"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3371589021, "num_examples": 1036254}, {"name": "validation", "num_bytes": 373326029, "num_examples": 115139}], "download_size": 3804265021, "dataset_size": 3744915050}, {"config_name": "tir", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 32026542, "num_examples": 8240}, {"name": "validation", "num_bytes": 3589604, "num_examples": 915}], "download_size": 35955368, "dataset_size": 35616146}, {"config_name": "xho", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 114450184, "num_examples": 23892}, {"name": "validation", "num_bytes": 13051255, "num_examples": 2654}], "download_size": 129410950, "dataset_size": 127501439}, {"config_name": "yor", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 192473693, "num_examples": 73473}, {"name": "validation", "num_bytes": 21123764, "num_examples": 8163}], "download_size": 217343993, "dataset_size": 213597457}, {"config_name": "zul", "features": [{"name": "id", "dtype": "string"}, {"name": "headline", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "category", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 279244495, "num_examples": 65447}, {"name": "validation", "num_bytes": 30487397, "num_examples": 7271}], "download_size": 314070508, "dataset_size": 309731892}]} | 2023-12-11T16:06:49+00:00 | [] | [
"afr",
"amh",
"arz",
"eng",
"fra",
"hau",
"ibo",
"kin",
"mlg",
"nya",
"orm",
"por",
"sna",
"som",
"sot",
"swa",
"tir",
"xho",
"yor",
"zul"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #language-Afrikaans #language-Amharic #language-Egyptian Arabic #language-English #language-French #language-Hausa #language-Igbo #language-Kinyarwanda #language-Malagasy #language-Nyanja #language-Oromo #language-Portuguese #language-Shona #language-Somali #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tigrinya #language-Xhosa #language-Yoruba #language-Zulu #license-apache-2.0 #region-us
|
# Dataset Summary
'WURA' is a document-level dataset covering 16 African Languages and 4 high-resource languages widely spoken in Africa (English, French, Arabic and Portuguese). This dataset was created by auditing mC4 and crawling additional verified news sources. It was first used to train AfriTeVa V2.
# Dataset Structure
Although the document-level dataset is loaded by default, you may also optionally load a passage-level dataset as follows
Note that we must pass 'verification_mode="no_checks' to prevent HF from verifying checksums against the document-level checksum infos.
| [
"# Dataset Summary\n\n'WURA' is a document-level dataset covering 16 African Languages and 4 high-resource languages widely spoken in Africa (English, French, Arabic and Portuguese). This dataset was created by auditing mC4 and crawling additional verified news sources. It was first used to train AfriTeVa V2.",
"# Dataset Structure\n\n\n\nAlthough the document-level dataset is loaded by default, you may also optionally load a passage-level dataset as follows\n\n\n\nNote that we must pass 'verification_mode=\"no_checks' to prevent HF from verifying checksums against the document-level checksum infos."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #language-Afrikaans #language-Amharic #language-Egyptian Arabic #language-English #language-French #language-Hausa #language-Igbo #language-Kinyarwanda #language-Malagasy #language-Nyanja #language-Oromo #language-Portuguese #language-Shona #language-Somali #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tigrinya #language-Xhosa #language-Yoruba #language-Zulu #license-apache-2.0 #region-us \n",
"# Dataset Summary\n\n'WURA' is a document-level dataset covering 16 African Languages and 4 high-resource languages widely spoken in Africa (English, French, Arabic and Portuguese). This dataset was created by auditing mC4 and crawling additional verified news sources. It was first used to train AfriTeVa V2.",
"# Dataset Structure\n\n\n\nAlthough the document-level dataset is loaded by default, you may also optionally load a passage-level dataset as follows\n\n\n\nNote that we must pass 'verification_mode=\"no_checks' to prevent HF from verifying checksums against the document-level checksum infos."
] | [
154,
78,
70
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #language-Afrikaans #language-Amharic #language-Egyptian Arabic #language-English #language-French #language-Hausa #language-Igbo #language-Kinyarwanda #language-Malagasy #language-Nyanja #language-Oromo #language-Portuguese #language-Shona #language-Somali #language-Southern Sotho #language-Swahili (macrolanguage) #language-Tigrinya #language-Xhosa #language-Yoruba #language-Zulu #license-apache-2.0 #region-us \n# Dataset Summary\n\n'WURA' is a document-level dataset covering 16 African Languages and 4 high-resource languages widely spoken in Africa (English, French, Arabic and Portuguese). This dataset was created by auditing mC4 and crawling additional verified news sources. It was first used to train AfriTeVa V2.# Dataset Structure\n\n\n\nAlthough the document-level dataset is loaded by default, you may also optionally load a passage-level dataset as follows\n\n\n\nNote that we must pass 'verification_mode=\"no_checks' to prevent HF from verifying checksums against the document-level checksum infos."
] |
7ccee186b7f68c16b63c03d59c0a21777f159ab2 | # Dataset Card for "squad_qa_title_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:26:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4293156.8345323745, "num_examples": 2875}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1183249, "dataset_size": 4646304.8345323745}} | 2023-11-27T15:27:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_random_permute_1\"\n\nMore Information needed"
] |
fce586e5875689eb91132ab8ee124d15cf4fdcd3 | # Dataset Card for "squad_qa_title_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:27:12+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5088553.556198347, "num_examples": 3365}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1278211, "dataset_size": 5441701.556198347}} | 2023-11-27T15:27:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_random_permute_2\"\n\nMore Information needed"
] |
a0f88d9d5dd500347cb59652c2a769dcdada3815 | # Dataset Card for "squad_qa_title_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:27:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6700601.266714082, "num_examples": 4345}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1364611, "dataset_size": 7053749.266714082}} | 2023-11-27T15:27:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_random_permute_4\"\n\nMore Information needed"
] |
568fd3053b5379ab2051c33150b7aec6c9157b84 | # Dataset Card for "squad_qa_title_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_title_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:27:56+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9977278.16852058, "num_examples": 6305}, {"name": "validation", "num_bytes": 353148, "num_examples": 300}], "download_size": 1477699, "dataset_size": 10330426.16852058}} | 2023-11-27T15:28:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_title_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_title_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_title_v5_full_random_permute_8\"\n\nMore Information needed"
] |
9acbb5b0abbcb99b58c326829d377c442e0fbcb0 | # Synthetic Malaysian QA
Generated common QA using ChatGPT4 based on Malaysian topics; notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/question-answer/chatgpt4-synthetic-malaysian-qa
## General Malaysia topics
- [malaysian-general-qa.jsonl](malaysian-general-qa.jsonl), 20396 rows, 28.6 MB.
- [malaysian-general-qa-v2.jsonl](malaysian-general-qa-v2.jsonl), 5294 rows, 8.05 MB.
- [malaysian-general-qa-v3.jsonl](malaysian-general-qa-v3.jsonl), 1368 rows, 5.09 MB.
- [malaysian-general-qa-v4.jsonl](malaysian-general-qa-v4.jsonl), 7733 rows, 36.2 MB.
- [malaysian-general-qa-v5.jsonl](malaysian-general-qa-v5.jsonl), 6363 rows, 32.7 MB.
1. politics
2. socioeconomy
3. culture
4. gender
5. religion
6. sociology
7. social class
8. technology
9. ethnicity
10. infrastructure
11. health
12. education
13. ecology
14. party politics
15. diplomacy
16. history
17. cuisine
18. microeconomics
19. business
20. artificial intelligence
21. law
22. negeri johor
23. negeri kedah
24. negeri kelantan
25. negeri melaka
26. negeri negeri sembilan
27. negeri pahang
28. negeri perak
29. negeri perlis
30. negeri pulau pinang
31. negeri selangor
32. negeri terengganu
33. negeri sabah
34. negeri sarawak
35. kuala lumpur
36. negeri labuan
37. putrajaya
38. najib razak
39. anwar ibrahim
40. parti keadilan rakyat
41. parti islam semalaysia
42. dr mahathir mohamad
43. barisan nasional
44. constitutional monarchy
45. parliamentary democracy
46. political economy
47. political dynamic
48. empowerment of youths
49. kebebasan bersuara
50. hadis
51. hukum aqidah islam
52. hukum fiqah islam
### Example data
```python
{'question': 'Adakah AI boleh digunakan untuk mempercepat proses pengesanan dan rawatan penyakit berjangkit di Malaysia?',
'answer': 'Ya, AI boleh digunakan untuk mempercepat proses pengesanan dan rawatan penyakit berjangkit di Malaysia. AI dapat membantu dalam pengumpulan dan analisis data kesihatan secara cepat dan tepat, membolehkan doktor dan pakar kesihatan membuat keputusan yang lebih baik dan pantas dalam merawat pesakit. AI juga boleh digunakan untuk mengesan pola penyebaran penyakit berjangkit, memungkinkan tindakan pencegahan dan kawalan yang lebih efektif. Sebagai contoh, AI boleh digunakan dalam sistem pengawasan epidemik untuk mengenal pasti kawasan yang berisiko tinggi dan mengambil langkah-langkah pencegahan segera.'}
```
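As a minimal sketch, one of the JSONL files above can be loaded with the `datasets` library. The repository id `mesolitica/chatgpt4-malaysian-general-qa` and the file name come from this card, but the exact call pattern below is an assumption rather than the official loading recipe; see the linked notebooks for the canonical workflow.
```python
from datasets import load_dataset

# Load only the general-QA JSONL file from this repository; each row has
# "question" and "answer" fields, as shown in the example above.
qa = load_dataset(
    "mesolitica/chatgpt4-malaysian-general-qa",
    data_files="malaysian-general-qa.jsonl",
    split="train",
)
print(qa[0]["question"])
```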
## Kesusasteraan QA
- [tatabahasa.jsonl](tatabahasa.jsonl), 25956 rows, 40.1 MB.
1. sastera
2. tatabahasa
3. kesusasteraan melayu
4. pantun
5. sajak
6. syair
### Example data
```python
{'question': 'Analisa peranan sekolah dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia.',
'answer': 'Sekolah memiliki peranan yang sangat penting dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia. Berikut adalah beberapa peranannya:\n\n1. Pelaksanaan Kurikulum: Sekolah merupakan tempat utama di mana kurikulum Bahasa Melayu diajar kepada murid-murid. Kurikulum ini dirancang sedemikian rupa untuk memastikan murid-murid memahami dan mampu menggunakan Bahasa Melayu yang standard dalam komunikasi lisan dan tulisan.\n\n2. Latihan dan Praktik: Di sekolah, murid-murid diberikan banyak peluang untuk mempraktikkan Bahasa Melayu yang standard melalui pelbagai aktiviti seperti latihan, ujian, perbincangan dalam kelas, dan sebagainya.\n\n3. Pengajaran dan Pembelajaran: Guru-guru di sekolah juga memainkan peranan penting dalam mengajar dan membimbing murid-murid untuk menggunakan Bahasa Melayu yang standard. Mereka menggunakan pelbagai kaedah pengajaran dan strategi pembelajaran untuk memastikan murid-murid memahami dan mampu menggunakan Bahasa Melayu dengan baik.\n\n4. Pendidikan Moral dan Nilai: Sekolah juga memainkan peranan dalam menanamkan nilai-nilai positif dan etika berkomunikasi yang baik kepada murid-murid. Ini termasuk penggunaan Bahasa Melayu yang sopan dan betul.\n\n5. Aktiviti Luar Kelas: Sekolah juga sering mengadakan aktiviti luar kelas seperti pertandingan pidato, debat, dan drama yang menggunakan Bahasa Melayu. Ini memberi peluang kepada murid-murid untuk mempraktikkan dan mempertingkatkan kemahiran berbahasa mereka.\n\n6. Sumber Rujukan: Sekolah menyediakan berbagai sumber rujukan seperti buku teks, kamus, dan bahan bacaan lain yang menggunakan Bahasa Melayu standard. Ini membantu murid-murid memahami dan menggunakan Bahasa Melayu yang betul.\n\n7. Pemantauan dan Penilaian: Sekolah juga memantau dan menilai kemajuan murid-murid dalam penggunaan Bahasa Melayu. Ini membolehkan sekolah mengenal pasti dan menangani sebarang masalah atau kelemahan yang mungkin dihadapi oleh murid-murid.\n\nDengan ini, jelas bahawa sekolah memainkan peranan yang sangat penting dan berkesan dalam menggalakkan penggunaan Bahasa Melayu yang standard di Malaysia.'}
```
## Loghat QA
- [loghat.jsonl](loghat.jsonl), 2024 rows, 2.89 MB.
1. kedah
2. kelantan
3. perak
4. negeri sembilan
5. selangor
6. sarawak
7. sabah
8. terengganu
9. johor
10. baling
### Example data
```python
{'question': "'Rupe-rupenye' dalam Terengganu maksudnya apa?",
'answer': '"Rupe-rupenye" adalah ungkapan dalam bahasa Melayu Terengganu, salah satu dialek Melayu yang digunakan oleh masyarakat di negeri Terengganu, Malaysia. Ungkapan ini berarti "rupa-rupanya" dalam Bahasa Melayu standard, dan digunakan untuk menunjukkan kesedaran atau realisasi sesuatu yang mungkin tidak diketahui sebelumnya atau suatu kejutan. Dalam konteks Bahasa Inggeris, ungkapan ini serupa dengan "it turns out" atau "as it turns out". Contoh penggunaannya:\n\n"Rupe-rupenye, dia dah sampai awai dari kita."\n(It turns out, he arrived earlier than us.)\n\nUngkapan ini sering digunakan dalam percakapan sehari-hari untuk menyatakan sesuatu yang baru diketahui atau sesuatu yang berbeda dari yang diharapkan.'}
``` | mesolitica/chatgpt4-malaysian-general-qa | [
"task_categories:question-answering",
"language:ms",
"region:us"
] | 2023-11-27T15:28:06+00:00 | {"language": ["ms"], "task_categories": ["question-answering"], "pretty_name": "malaysian-general-qa"} | 2024-02-03T04:13:01+00:00 | [] | [
"ms"
] | TAGS
#task_categories-question-answering #language-Malay (macrolanguage) #region-us
| # Synthetic Malaysian QA
Generated common QA using ChatGPT4 based on Malaysia topics, notebooks at URL
## General Malaysia topics
- URL, 20396 rows, 28.6 MB.
- URL, 5294 rows, 8.05 MB.
- URL, 1368 rows, 5.09 MB.
- URL, 7733 rows. 36.2 MB.
- URL, 6363 rows, 32.7 MB.
1. politics
2. socioeconomy
3. culture
4. gender
5. religion
6. sociology
7. social class
8. technology
9. ethnicity
10. infrastructure
11. health
12. education
13. ecology
14. party politics
15. diplomacy
16. history
17. cuisine
18. microeconomics
19. business
20. artificial intelligence
21. law
22. negeri johor
23. negeri kedah
24. negeri kelantan
25. negeri melaka
26. negeri negeri sembilan
27. negeri pahang
28. negeri perak
29. negeri perlis
30. negeri pulau pinang
31. negeri selangor
32. negeri terengganu
33. negeri sabah
34. negeri sarawak
35. kuala lumpur
36. negeri labuan
37. putrajaya
38. najib razak
39. anwar ibrahim
40. parti keadilan rakyat
41. parti islam semalaysia
42. dr mahathir mohamad
43. barisan nasional
44. constitutional monarchy
45. parliamentary democracy
46. political economy
47. political dynamic
48. empowerment of youths
49. kebebasan bersuara
50. hadis
51. hukum aqidah islam
52. hukum fiqah islam
### Example data
## Kesusasteraan QA
- URL, 25956 rows, 40.1 MB.
1. sastera
2. tatabahasa
3. kesusasteraan melayu
4. pantun
5. sajak
6. syair
### Example data
## Loghat QA
- URL, 2024 rows, 2.89 MB.
1. kedah
2. kelantan
3. perak
4. negeri sembilan
5. selangor
6. sarawak
7. sabah
8. terengganu
9. johor
10. baling
### Example data
| [
"# Synthetic Malaysian QA\n\nGenerated common QA using ChatGPT4 based on Malaysia topics, notebooks at URL",
"## General Malaysia topics\n\n- URL, 20396 rows, 28.6 MB.\n- URL, 5294 rows, 8.05 MB.\n- URL, 1368 rows, 5.09 MB.\n- URL, 7733 rows. 36.2 MB.\n- URL, 6363 rows, 32.7 MB.\n \n1. politics\n2. socioeconomy\n3. culture\n4. gender\n5. religion\n6. sociology\n7. social class\n8. technology\n9. ethnicity\n10. infrastructure\n11. health\n12. education\n13. ecology\n14. party politics\n15. diplomacy\n16. history\n17. cuisine\n18. microeconomics\n19. business\n20. artificial intelligence\n21. law\n22. negeri johor\n23. negeri kedah\n24. negeri kelantan\n25. negeri melaka\n26. negeri negeri sembilan\n27. negeri pahang\n28. negeri perak\n29. negeri perlis\n30. negeri pulau pinang\n31. negeri selangor\n32. negeri terengganu\n33. negeri sabah\n34. negeri sarawak\n35. kuala lumpur\n36. negeri labuan\n37. putrajaya\n38. najib razak\n39. anwar ibrahim\n40. parti keadilan rakyat\n41. parti islam semalaysia\n42. dr mahathir mohamad\n43. barisan nasional\n44. constitutional monarchy\n45. parliamentary democracy\n46. political economy\n47. political dynamic\n48. empowerment of youths\n49. kebebasan bersuara\n50. hadis\n51. hukum aqidah islam\n52. hukum fiqah islam",
"### Example data",
"## Kesusasteraan QA\n\n- URL, 25956 rows, 40.1 MB.\n\n1. sastera\n2. tatabahasa\n3. kesusasteraan melayu\n4. pantun\n5. sajak\n6. syair",
"### Example data",
"## Loghat QA\n\n- URL, 2024 rows, 2.89 MB.\n\n1. kedah\n2. kelantan\n3. perak\n4. negeri sembilan\n5. selangor\n6. sarawak\n7. sabah\n8. terengganu\n9. johor\n10. baling",
"### Example data"
] | [
"TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n",
"# Synthetic Malaysian QA\n\nGenerated common QA using ChatGPT4 based on Malaysia topics, notebooks at URL",
"## General Malaysia topics\n\n- URL, 20396 rows, 28.6 MB.\n- URL, 5294 rows, 8.05 MB.\n- URL, 1368 rows, 5.09 MB.\n- URL, 7733 rows. 36.2 MB.\n- URL, 6363 rows, 32.7 MB.\n \n1. politics\n2. socioeconomy\n3. culture\n4. gender\n5. religion\n6. sociology\n7. social class\n8. technology\n9. ethnicity\n10. infrastructure\n11. health\n12. education\n13. ecology\n14. party politics\n15. diplomacy\n16. history\n17. cuisine\n18. microeconomics\n19. business\n20. artificial intelligence\n21. law\n22. negeri johor\n23. negeri kedah\n24. negeri kelantan\n25. negeri melaka\n26. negeri negeri sembilan\n27. negeri pahang\n28. negeri perak\n29. negeri perlis\n30. negeri pulau pinang\n31. negeri selangor\n32. negeri terengganu\n33. negeri sabah\n34. negeri sarawak\n35. kuala lumpur\n36. negeri labuan\n37. putrajaya\n38. najib razak\n39. anwar ibrahim\n40. parti keadilan rakyat\n41. parti islam semalaysia\n42. dr mahathir mohamad\n43. barisan nasional\n44. constitutional monarchy\n45. parliamentary democracy\n46. political economy\n47. political dynamic\n48. empowerment of youths\n49. kebebasan bersuara\n50. hadis\n51. hukum aqidah islam\n52. hukum fiqah islam",
"### Example data",
"## Kesusasteraan QA\n\n- URL, 25956 rows, 40.1 MB.\n\n1. sastera\n2. tatabahasa\n3. kesusasteraan melayu\n4. pantun\n5. sajak\n6. syair",
"### Example data",
"## Loghat QA\n\n- URL, 2024 rows, 2.89 MB.\n\n1. kedah\n2. kelantan\n3. perak\n4. negeri sembilan\n5. selangor\n6. sarawak\n7. sabah\n8. terengganu\n9. johor\n10. baling",
"### Example data"
] | [
28,
28,
289,
5,
43,
5,
49,
5
] | [
"passage: TAGS\n#task_categories-question-answering #language-Malay (macrolanguage) #region-us \n# Synthetic Malaysian QA\n\nGenerated common QA using ChatGPT4 based on Malaysia topics, notebooks at URL## General Malaysia topics\n\n- URL, 20396 rows, 28.6 MB.\n- URL, 5294 rows, 8.05 MB.\n- URL, 1368 rows, 5.09 MB.\n- URL, 7733 rows. 36.2 MB.\n- URL, 6363 rows, 32.7 MB.\n \n1. politics\n2. socioeconomy\n3. culture\n4. gender\n5. religion\n6. sociology\n7. social class\n8. technology\n9. ethnicity\n10. infrastructure\n11. health\n12. education\n13. ecology\n14. party politics\n15. diplomacy\n16. history\n17. cuisine\n18. microeconomics\n19. business\n20. artificial intelligence\n21. law\n22. negeri johor\n23. negeri kedah\n24. negeri kelantan\n25. negeri melaka\n26. negeri negeri sembilan\n27. negeri pahang\n28. negeri perak\n29. negeri perlis\n30. negeri pulau pinang\n31. negeri selangor\n32. negeri terengganu\n33. negeri sabah\n34. negeri sarawak\n35. kuala lumpur\n36. negeri labuan\n37. putrajaya\n38. najib razak\n39. anwar ibrahim\n40. parti keadilan rakyat\n41. parti islam semalaysia\n42. dr mahathir mohamad\n43. barisan nasional\n44. constitutional monarchy\n45. parliamentary democracy\n46. political economy\n47. political dynamic\n48. empowerment of youths\n49. kebebasan bersuara\n50. hadis\n51. hukum aqidah islam\n52. hukum fiqah islam### Example data## Kesusasteraan QA\n\n- URL, 25956 rows, 40.1 MB.\n\n1. sastera\n2. tatabahasa\n3. kesusasteraan melayu\n4. pantun\n5. sajak\n6. syair### Example data## Loghat QA\n\n- URL, 2024 rows, 2.89 MB.\n\n1. kedah\n2. kelantan\n3. perak\n4. negeri sembilan\n5. selangor\n6. sarawak\n7. sabah\n8. terengganu\n9. johor\n10. baling### Example data"
] |
1782442f662241bf0bf6e67fb2cfd3e791772bf7 | # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:28:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4372749.8651079135, "num_examples": 2875}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1218491, "dataset_size": 4734613.8651079135}} | 2023-11-27T15:28:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_1\"\n\nMore Information needed"
] |
ba52e88df62b4d05ceac33531f831c689ab3be53 | # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:28:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5181584.9603305785, "num_examples": 3365}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1320331, "dataset_size": 5543448.9603305785}} | 2023-11-27T15:28:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_2\"\n\nMore Information needed"
] |
99a30063bc48b3d2f8e89577a2eec8699956131c | # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:28:53+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6820467.0227596015, "num_examples": 4345}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1406073, "dataset_size": 7182331.0227596015}} | 2023-11-27T15:29:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_4\"\n\nMore Information needed"
] |
115dcc1c3b5b7c24047176360ccd6ae81db3d41a | # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_title_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:29:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10150707.748609567, "num_examples": 6305}, {"name": "validation", "num_bytes": 361864, "num_examples": 300}], "download_size": 1521936, "dataset_size": 10512571.748609567}} | 2023-11-27T15:29:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_title_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_title_v5_full_random_permute_8\"\n\nMore Information needed"
] |
8b8a609ad82c50ecebaf14996b00fb0d1160c264 | # Dataset Card for "squad_qa_num_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:29:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4171523.651079137, "num_examples": 2875}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1160026, "dataset_size": 4514707.651079137}} | 2023-11-27T15:29:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_random_permute_1\"\n\nMore Information needed"
] |
21ef93ea90f915f434f6766fd7fce0a68e182822 | # Dataset Card for "squad_qa_num_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:29:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4944264.580991736, "num_examples": 3365}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1252983, "dataset_size": 5287448.580991736}} | 2023-11-27T15:30:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_random_permute_2\"\n\nMore Information needed"
] |
9463a5907ca6ab2801f8961a90f933e9cba7970f | # Dataset Card for "squad_qa_num_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:30:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6510358.253911806, "num_examples": 4345}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1336925, "dataset_size": 6853542.253911806}} | 2023-11-27T15:30:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_random_permute_4\"\n\nMore Information needed"
] |
21d239a309b96f77e9b15bbdb066621c7e25fc78 | # Dataset Card for "squad_qa_num_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_num_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:30:41+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9693537.739154616, "num_examples": 6305}, {"name": "validation", "num_bytes": 343184, "num_examples": 300}], "download_size": 1447507, "dataset_size": 10036721.739154616}} | 2023-11-27T15:30:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_num_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_num_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_num_v5_full_random_permute_8\"\n\nMore Information needed"
] |
d9e301736230f3811cf467bf0492dd967b8b049e | # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:31:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4203148.651079137, "num_examples": 2875}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1174513, "dataset_size": 4549632.651079137}} | 2023-11-27T15:31:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_1\"\n\nMore Information needed"
] |
821fcb208f6d11ba0b8dc959b071afa23bef5a63 | # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:31:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4981279.580991736, "num_examples": 3365}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1271441, "dataset_size": 5327763.580991736}} | 2023-11-27T15:31:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_2\"\n\nMore Information needed"
] |
21b053c3256d4b1826f6bc67f639da20df8ddab2 | # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:31:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6558153.253911806, "num_examples": 4345}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1363986, "dataset_size": 6904637.253911806}} | 2023-11-27T15:31:52+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_4\"\n\nMore Information needed"
] |
b20bcb606702894289bd61c1d1e673e26dcf3998 | # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_num_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:32:04+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9762892.739154616, "num_examples": 6305}, {"name": "validation", "num_bytes": 346484, "num_examples": 300}], "download_size": 1473144, "dataset_size": 10109376.739154616}} | 2023-11-27T15:32:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_num_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_num_v5_full_random_permute_8\"\n\nMore Information needed"
] |
2586396d03eefb134e4b6f4ca43b4036070db479 |
# Dataset Card for "arxiv_nlp_intstruct"
### Dataset Summary
The "ArtifactAI/arxiv_nlp_intstruct" dataset consists of question-answer pairs derived from ArXiv abstracts from the cs.CL category".
Questions and answers are generated using [GPT-3.5-turbo model](https://openai.com/chatgpt)
### Languages
English
## Dataset Structure
### Data Instances
#### train
- **Size of downloaded dataset files:** 38.4 MB
An example of 'train' looks as follows.
{
"question": "What is the role of the light-weight cross-attention mechanism in MixEncoder?",
"answer": "In the MixEncoder, the lightweight cross-attention mechanism plays a crucial role
in capturing cross-modal dependencies between different modalities (such as text and image).
The purpose of cross-attention is to align features from one modality with features from another modality,
allowing the model to understand the correlation between them. Specifically, the lightweight
cross-attention mechanism in MixEncoder enables the model to attend to relevant information in the other modality
when processing a specific modality. This helps the model to exploit important semantic relationships between
different modalities and improve the overall representation learning process. By incorporating cross-attention,
the MixEncoder provides a way for different modalities to interact and co-adapt during training, enabling the model
to effectively understand and build representations that capture the underlying relationships between text and image inputs.
This boosts the performance of multimodal models in tasks such as image captioning, visual question answering,
and multimodal sentiment analysis, among others."
}
### Data Fields
The data fields present in the dataset are as follows:
- question: a string feature representing the question.
- answer: a string feature representing the answer.
### Data Splits
train: 36,894 question-answer pairs
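
For orientation, here is a minimal sketch of loading the data with the Hugging Face `datasets` library; it assumes only the dataset id above and the `question`/`answer` fields described in Data Fields:

```python
from datasets import load_dataset

# Load the single "train" split of question-answer pairs.
dataset = load_dataset("ArtifactAI/arxiv_nlp_intstruct", split="train")

# Each record exposes the two string fields described above.
example = dataset[0]
print(example["question"])
print(example["answer"])
```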
### Curation Rationale
[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Source Data
Question-answer pairs derived from [ArXiv](https://arxiv.org/) abstracts.
#### Initial Data Collection and Normalization
Questions and answers are generated using the [GPT-3.5-turbo model](https://openai.com/chatgpt) from ArXiv abstracts in the cs.CL category.
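
As a rough illustration only — the exact prompts and generation pipeline are not documented in this card — abstracts can be turned into QA pairs with the OpenAI Python client along these lines; the prompt wording and function name below are placeholder assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_qa_pair(abstract: str) -> str:
    # Ask gpt-3.5-turbo to turn a cs.CL abstract into one question-answer pair.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You write question-answer pairs about NLP research papers."},
            {"role": "user",
             "content": f"Write one question and its answer based on this abstract:\n\n{abstract}"},
        ],
    )
    return response.choices[0].message.content
```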
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
None
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
Clearly identify the copyrighted work claimed to be infringed.
Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact us at the following email address: matt at artifactai.com and datasets at huggingface.co
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
### Citation Information
```
@misc{arxiv_nlp_intstruct,
title={arxiv_nlp_intstruct},
author={Matthew Kenney},
year={2023}
}
```
| ArtifactAI/arxiv_nlp_intstruct | [
"task_categories:text-generation",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:cc0-1.0",
"region:us"
] | 2023-11-27T15:32:21+00:00 | {"annotations_creators": ["no-annotation"], "language": ["en"], "license": ["cc0-1.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-generation"], "task_ids": ["language-modeling", "masked-language-modeling"], "pretty_name": "arxiv_nlp_intstruct"} | 2023-11-28T01:33:23+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us
|
# Dataset Card for "arxiv_nlp_intstruct"
### Dataset Summary
The "ArtifactAI/arxiv_nlp_intstruct" dataset consists of question-answer pairs derived from ArXiv abstracts from the cs.CL category".
Questions and answers are generated using GPT-3.5-turbo model
### Languages
English
## Dataset Structure
### Data Instances
#### train
- Size of downloaded dataset files: 38.4 MB
An example of 'train' looks as follows.
{
"question": "What is the role of the light-weight cross-attention mechanism in MixEncoder?",
"answer": "In the MixEncoder, the lightweight cross-attention mechanism plays a crucial role
in capturing cross-modal dependencies between different modalities (such as text and image).
The purpose of cross-attention is to align features from one modality with features from another modality,
allowing the model to understand the correlation between them. Specifically, the lightweight
cross-attention mechanism in MixEncoder enables the model to attend to relevant information in the other modality
when processing a specific modality. This helps the model to exploit important semantic relationships between
different modalities and improve the overall representation learning process. By incorporating cross-attention,
the MixEncoder provides a way for different modalities to interact and co-adapt during training, enabling the model
to effectively understand and build representations that capture the underlying relationships between text and image inputs.
This boosts the performance of multimodal models in tasks such as image captioning, visual question answering,
and multimodal sentiment analysis, among others."
}
### Data Fields
The data fields present in the dataset are as follows:
- question: a string feature representing the question.
- answer: a string feature representing the answer.
### Data Splits
 train: 36,894 question-answer pairs
### Curation Rationale
### Source Data
Question-answer pairs derived from ArXiv abstracts.
#### Initial Data Collection and Normalization
Questions and answers are generated using the GPT-3.5-turbo model from ArXiv abstracts in the cs.CL category.
### Annotations
The dataset doesn't contain annotations.
### Personal and Sensitive Information
None
#### Notice policy
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:
Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.
Clearly identify the copyrighted work claimed to be infringed.
Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.
And contact us at the following email address: matt at URL and datasets at URL
#### Take down policy
The original authors will comply with legitimate requests by removing the affected sources from the next release of the corpus.
Hugging Face will also update this repository accordingly.
| [
"# Dataset Card for \"arxiv_nlp_intstruct\"",
"### Dataset Summary\n\nThe \"ArtifactAI/arxiv_nlp_intstruct\" dataset consists of question-answer pairs derived from ArXiv abstracts from the cs.CL category\". \n\nQuestions and answers are generated using GPT-3.5-turbo model",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"#### train\n\n- Size of downloaded dataset files: 38.4 MB\n\nAn example of 'train' looks as follows.\n\n{\n \"question\": \"What is the role of the light-weight cross-attention mechanism in MixEncoder?\", \n \"answer\": \"In the MixEncoder, the lightweight cross-attention mechanism plays a crucial role \n in capturing cross-modal dependencies between different modalities (such as text and image). \n The purpose of cross-attention is to align features from one modality with features from another modality, \n allowing the model to understand the correlation between them. Specifically, the lightweight \n cross-attention mechanism in MixEncoder enables the model to attend to relevant information in the other modality \n when processing a specific modality. This helps the model to exploit important semantic relationships between \n different modalities and improve the overall representation learning process. By incorporating cross-attention, \n the MixEncoder provides a way for different modalities to interact and co-adapt during training, enabling the model \n to effectively understand and build representations that capture the underlying relationships between text and image inputs. \n This boosts the performance of multimodal models in tasks such as image captioning, visual question answering, \n and multimodal sentiment analysis, among others.\"\n}",
"### Data Fields\n\nThe data fields present in the dataset are as follows:\n\n- question: a string feature representing the question.\n- answer: a string feature representing the answer.",
"### Data Splits\n\n train: 36,894 question answer pairs",
"### Curation Rationale",
"### Source Data\n\nQuestion-answer pairs derived from ArXiv abstracts.",
"#### Initial Data Collection and Normalization\n\nQuestions and answers are generated using GPT-3.5-turbo model from ArXiv abstracts from the cs.CL category\".",
"### Annotations\n\nThe dataset doesn't contain annotations.",
"### Personal and Sensitive Information\n\nNone",
"#### Notice policy\n\nShould you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n\nClearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n\nClearly identify the copyrighted work claimed to be infringed.\n\nClearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\nAnd contact us at the following email address: matt at URL and datasets at URL",
"#### Take down policy\n\nThe original authors will comply to legitimate requests by removing the affected sources from the next release of the corpus.\nHugging Face will also update this repository accordingly."
] | [
"TAGS\n#task_categories-text-generation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us \n",
"# Dataset Card for \"arxiv_nlp_intstruct\"",
"### Dataset Summary\n\nThe \"ArtifactAI/arxiv_nlp_intstruct\" dataset consists of question-answer pairs derived from ArXiv abstracts from the cs.CL category\". \n\nQuestions and answers are generated using GPT-3.5-turbo model",
"### Languages\n\nEnglish",
"## Dataset Structure",
"### Data Instances",
"#### train\n\n- Size of downloaded dataset files: 38.4 MB\n\nAn example of 'train' looks as follows.\n\n{\n \"question\": \"What is the role of the light-weight cross-attention mechanism in MixEncoder?\", \n \"answer\": \"In the MixEncoder, the lightweight cross-attention mechanism plays a crucial role \n in capturing cross-modal dependencies between different modalities (such as text and image). \n The purpose of cross-attention is to align features from one modality with features from another modality, \n allowing the model to understand the correlation between them. Specifically, the lightweight \n cross-attention mechanism in MixEncoder enables the model to attend to relevant information in the other modality \n when processing a specific modality. This helps the model to exploit important semantic relationships between \n different modalities and improve the overall representation learning process. By incorporating cross-attention, \n the MixEncoder provides a way for different modalities to interact and co-adapt during training, enabling the model \n to effectively understand and build representations that capture the underlying relationships between text and image inputs. \n This boosts the performance of multimodal models in tasks such as image captioning, visual question answering, \n and multimodal sentiment analysis, among others.\"\n}",
"### Data Fields\n\nThe data fields present in the dataset are as follows:\n\n- question: a string feature representing the question.\n- answer: a string feature representing the answer.",
"### Data Splits\n\n train: 36,894 question answer pairs",
"### Curation Rationale",
"### Source Data\n\nQuestion-answer pairs derived from ArXiv abstracts.",
"#### Initial Data Collection and Normalization\n\nQuestions and answers are generated using GPT-3.5-turbo model from ArXiv abstracts from the cs.CL category\".",
"### Annotations\n\nThe dataset doesn't contain annotations.",
"### Personal and Sensitive Information\n\nNone",
"#### Notice policy\n\nShould you consider that our data contains material that is owned by you and should therefore not be reproduced here, please:\n\nClearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted.\n\nClearly identify the copyrighted work claimed to be infringed.\n\nClearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material.\n\nAnd contact us at the following email address: matt at URL and datasets at URL",
"#### Take down policy\n\nThe original authors will comply to legitimate requests by removing the affected sources from the next release of the corpus.\nHugging Face will also update this repository accordingly."
] | [
93,
16,
65,
5,
6,
6,
288,
41,
14,
7,
19,
40,
16,
10,
121,
44
] | [
"passage: TAGS\n#task_categories-text-generation #task_ids-language-modeling #task_ids-masked-language-modeling #annotations_creators-no-annotation #multilinguality-monolingual #size_categories-1M<n<10M #source_datasets-original #language-English #license-cc0-1.0 #region-us \n# Dataset Card for \"arxiv_nlp_intstruct\"### Dataset Summary\n\nThe \"ArtifactAI/arxiv_nlp_intstruct\" dataset consists of question-answer pairs derived from ArXiv abstracts from the cs.CL category\". \n\nQuestions and answers are generated using GPT-3.5-turbo model### Languages\n\nEnglish## Dataset Structure### Data Instances#### train\n\n- Size of downloaded dataset files: 38.4 MB\n\nAn example of 'train' looks as follows.\n\n{\n \"question\": \"What is the role of the light-weight cross-attention mechanism in MixEncoder?\", \n \"answer\": \"In the MixEncoder, the lightweight cross-attention mechanism plays a crucial role \n in capturing cross-modal dependencies between different modalities (such as text and image). \n The purpose of cross-attention is to align features from one modality with features from another modality, \n allowing the model to understand the correlation between them. Specifically, the lightweight \n cross-attention mechanism in MixEncoder enables the model to attend to relevant information in the other modality \n when processing a specific modality. This helps the model to exploit important semantic relationships between \n different modalities and improve the overall representation learning process. By incorporating cross-attention, \n the MixEncoder provides a way for different modalities to interact and co-adapt during training, enabling the model \n to effectively understand and build representations that capture the underlying relationships between text and image inputs. \n This boosts the performance of multimodal models in tasks such as image captioning, visual question answering, \n and multimodal sentiment analysis, among others.\"\n}"
] |
cca2143ba122c6ab43cfbf4a6e2f9b665047a6b7 | # Dataset Card for "squad_qa_rare_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:32:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4201825.944244605, "num_examples": 2875}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1172281, "dataset_size": 4547151.944244605}} | 2023-11-27T15:32:41+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
29
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] |
3685de497f29493752df8294db66ea26c8e7ef38 | # Dataset Card for "squad_qa_rare_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:32:52+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4980184.426446281, "num_examples": 3365}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1275275, "dataset_size": 5325510.426446281}} | 2023-11-27T15:33:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] |
b0c542a3da854dfee4227fdd215aa2800a6c41d1 | # Dataset Card for "squad_qa_rare_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:33:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6557664.364153627, "num_examples": 4345}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1353765, "dataset_size": 6902990.364153627}} | 2023-11-27T15:33:23+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] |
6795d3bcd584ef5a13cc2f9a26efb882584b8a0a | # Dataset Card for "squad_qa_rare_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_rare_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:33:38+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9763990.32814238, "num_examples": 6305}, {"name": "validation", "num_bytes": 345326, "num_examples": 300}], "download_size": 1469044, "dataset_size": 10109316.32814238}} | 2023-11-27T15:33:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_rare_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] |
906f06fb9cf48431499f4091e688c749ce388ab0 | # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:34:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4245143.817446043, "num_examples": 2875}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1197821, "dataset_size": 4594910.817446043}} | 2023-11-27T15:34:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_1\"\n\nMore Information needed"
] |
e1d7f555707dd550c32801c508a2a7eae16b5a1f |
This dataset comes from the GPTeacher repository, hosted on GitHub by `teknium1`. This repository features a collection of modular datasets generated by GPT-4, encompassing a variety of specific modules such as General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer. These datasets are created with diverse prompts and structured to include instructions, inputs, and outputs, making them compatible with fine-tuning scripts similar to those used for Alpaca's dataset format. The Roleplay-Instruct dataset, for instance, includes tasks designed to assume the roles of various characters, both fictional and non-fictional, in different settings and personalities. There's also a Code-Instruct Dataset with around 5350 code task instructions in various programming languages, showcasing the versatility of the datasets in this repository. | Sumsam/Roleplay_training | [
"license:mit",
"region:us"
] | 2023-11-27T15:34:12+00:00 | {"license": "mit"} | 2023-11-27T15:38:00+00:00 | [] | [] | TAGS
#license-mit #region-us
|
This dataset comes from the GPTeacher repository, hosted on GitHub by 'teknium1'. This repository features a collection of modular datasets generated by GPT-4, encompassing a variety of specific modules such as General-Instruct, Roleplay-Instruct, Code-Instruct, and Toolformer. These datasets are created with diverse prompts and structured to include instructions, inputs, and outputs, making them compatible with fine-tuning scripts similar to those used for Alpaca's dataset format. The Roleplay-Instruct dataset, for instance, includes tasks designed to assume the roles of various characters, both fictional and non-fictional, in different settings and personalities. There's also a Code-Instruct Dataset with around 5350 code task instructions in various programming languages, showcasing the versatility of the datasets in this repository. | [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
87a23aedf3aac26cdeb37d08974ebee6216c0454 | # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:34:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5030918.614876033, "num_examples": 3365}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1299234, "dataset_size": 5380685.614876033}} | 2023-11-27T15:34:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_2\"\n\nMore Information needed"
] |
6b3d9160a8e64c538720f6924fd582b84837fe0b | # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:34:54+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6623242.342816501, "num_examples": 4345}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1386104, "dataset_size": 6973009.342816501}} | 2023-11-27T15:35:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_4\"\n\nMore Information needed"
] |
b4c51eb5c689539a16dc61859415dc5310d7e812 | # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_wrong_rare_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:35:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "correct_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9859283.49499444, "num_examples": 6305}, {"name": "validation", "num_bytes": 349767, "num_examples": 300}], "download_size": 1499684, "dataset_size": 10209050.49499444}} | 2023-11-27T15:35:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_wrong_rare_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
33
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_wrong_rare_v5_full_random_permute_8\"\n\nMore Information needed"
] |
627082b5d19d93babc584afca0573c8815c62b3f | # Dataset Card for "squad_qa_baseline_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:35:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 667968, "dataset_size": 2832124.0}} | 2023-11-27T15:35:46+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_1\"\n\nMore Information needed"
] |
a13dfdac55c1859a95e2752177a1fb9d0843a5b7 | # Dataset Card for "squad_qa_baseline_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:36:00+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 667968, "dataset_size": 2832124.0}} | 2023-11-27T15:36:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_2\"\n\nMore Information needed"
] |
a89f0834462c07366f86fc9f2820267c0fc95900 | # Dataset Card for "squad_qa_baseline_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_baseline_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:36:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2496440.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 335684, "num_examples": 300}], "download_size": 667968, "dataset_size": 2832124.0}} | 2023-11-27T15:36:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_baseline_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_baseline_v5_full_random_permute_4\"\n\nMore Information needed"
] |
b7b0de51ea8dfba9f28e3ad73eb8a48148b37eae | # Dataset Card for "squad_qa_context_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:37:06+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 1029814, "dataset_size": 4921059.0}} | 2023-11-27T15:37:16+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_random_permute_1\"\n\nMore Information needed"
] |
13df819abf253f254988231d3f01d7967f0812cd | # Dataset Card for "squad_qa_context_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:37:27+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 1029814, "dataset_size": 4921059.0}} | 2023-11-27T15:37:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_random_permute_2\"\n\nMore Information needed"
] |
63e5d6619f4eadd8a699edf167726c41148871ee | # Dataset Card for "squad_qa_context_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:37:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 1029814, "dataset_size": 4921059.0}} | 2023-11-27T15:37:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_random_permute_4\"\n\nMore Information needed"
] |
609265bee93f5b9f91a911535c80d817946bc323 | # Dataset Card for "squad_qa_context_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_context_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:38:09+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4350151.0, "num_examples": 2385}, {"name": "validation", "num_bytes": 570908, "num_examples": 300}], "download_size": 1029814, "dataset_size": 4921059.0}} | 2023-11-27T15:38:18+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_context_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_context_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_context_v5_full_random_permute_8\"\n\nMore Information needed"
] |
84a3192b641bce31af876a07b55ddd359b78982a | # Dataset Card for "squad_qa_no_id_v5_full_random_permute_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_random_permute_1 | [
"region:us"
] | 2023-11-27T15:38:30+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4250679.226618705, "num_examples": 2875}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1167871, "dataset_size": 4593445.226618705}} | 2023-11-27T15:38:37+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_random_permute_1"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_1\"\n\nMore Information needed"
] | [
6,
31
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_1\"\n\nMore Information needed"
] |
c14f1c16b76d1882a9ebd4579582d3b4888d1beb | # Dataset Card for "squad_qa_no_id_v5_full_random_permute_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_random_permute_2 | [
"region:us"
] | 2023-11-27T15:38:49+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5042862.974380165, "num_examples": 3365}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1264341, "dataset_size": 5385628.974380165}} | 2023-11-27T15:38:56+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_random_permute_2"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_2\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_2\"\n\nMore Information needed"
] |
80e261ba00a7a63d2a22a0b009a866e8c539925d | # Dataset Card for "squad_qa_no_id_v5_full_random_permute_4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_random_permute_4 | [
"region:us"
] | 2023-11-27T15:39:11+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6649828.427453769, "num_examples": 4345}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1347848, "dataset_size": 6992594.427453769}} | 2023-11-27T15:39:21+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_random_permute_4"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_4\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_4\"\n\nMore Information needed"
] |
abcc1f503725ce2a81e666d70939c2e5d2a445e7 | # Dataset Card for "squad_qa_no_id_v5_full_random_permute_8"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | tyzhu/squad_qa_no_id_v5_full_random_permute_8 | [
"region:us"
] | 2023-11-27T15:39:33+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "answer_start", "dtype": "int32"}]}, {"name": "answer", "dtype": "string"}, {"name": "context_id", "dtype": "string"}, {"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 9919664.916017797, "num_examples": 6305}, {"name": "validation", "num_bytes": 342766, "num_examples": 300}], "download_size": 1460621, "dataset_size": 10262430.916017797}} | 2023-11-27T15:39:39+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "squad_qa_no_id_v5_full_random_permute_8"
More Information needed | [
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_8\"\n\nMore Information needed"
] | [
6,
32
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"squad_qa_no_id_v5_full_random_permute_8\"\n\nMore Information needed"
] |
5fe05890140e0ab874b922930d849316334ff395 |
# Dataset Card for CA-EU Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licenciung-informatrion)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of **9.692.996** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Basque.
## Dataset Structure
Two separated txt files are provided with the sentences sorted in the same order:
- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).
- train_clean.eu: contains 9.692.996 Basque sentences (authentic).
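The two files are aligned line by line, so sentence pairs can be recovered by reading them in parallel. A minimal, hedged Python sketch follows (it assumes both files have been downloaded to the working directory and are UTF-8 encoded):
```python
# Minimal sketch: pair the Catalan and Basque sentences line by line.
# Assumes train_clean.ca and train_clean.eu sit in the current directory.
with open("train_clean.ca", encoding="utf-8") as f_ca, \
     open("train_clean.eu", encoding="utf-8") as f_eu:
    pairs = [(ca.strip(), eu.strip()) for ca, eu in zip(f_ca, f_eu)]

print(len(pairs))   # expected: 9692996
print(pairs[0])     # first (Catalan, Basque) sentence pair
```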
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
This synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-EU was delivered by [HiTZ](http://hitz.eus/) and the Spanish was translated to Catalan using the machine translation model [PlanTL-GOB-ES](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca).
**Total: 9.692.996 parallel sentences**.
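For illustration only, a hedged sketch of the general translate-one-side workflow is shown below. The card does not document how the PlanTL-GOB-ES checkpoint is loaded, so the snippet uses the generic `transformers` translation pipeline with a different, publicly available Spanish-Catalan model as a stand-in; the model id is an assumption and is not the checkpoint used to build this corpus.
```python
from transformers import pipeline

# Stand-in model id (assumption): any Hugging Face es->ca translation model works here.
# The PlanTL-GOB-ES/mt-plantl-es-ca checkpoint used for this corpus may require its own
# loading procedure, which this card does not describe.
MODEL_ID = "Helsinki-NLP/opus-mt-es-ca"

es_to_ca = pipeline("translation", model=MODEL_ID)
print(es_to_ca("Esta frase está escrita en español.")[0]["translation_text"])
```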
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks between Catalan and Basque, two of the co-official languages of Spain.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334 | projecte-aina/CA-EU_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:ca",
"language:eu",
"language:multilingual",
"region:us"
] | 2023-11-27T16:00:05+00:00 | {"language": ["ca", "eu", "multilingual"], "multilinguality": ["translation"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-EU Parallel Corpus"} | 2024-01-17T13:47:57+00:00 | [] | [
"ca",
"eu",
"multilingual"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Basque #language-multilingual #region-us
|
# Dataset Card for CA-EU Parallel Corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Splits
- Dataset Creation
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Author
- Contact Information
- Copyright
- Licensing information
- Funding
## Dataset Description
### Dataset Summary
The CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of 9.692.996 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Basque.
## Dataset Structure
Two separated txt files are provided with the sentences sorted in the same order:
- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).
- train_clean.eu: contains 9.692.996 Basque sentences (authentic).
### Data Splits
The dataset contains a single split: 'train'.
## Dataset Creation
### Source Data
This synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-EU was delivered by HiTZ and the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.
Total: 9.692.996 parallel sentences .
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks between Catalan and Basque, two of the co-official languages of Spain.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334 | [
"# Dataset Card for CA-EU Parallel Corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nThe CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of 9.692.996 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\nThe texts in the dataset are in Catalan and Basque.",
"## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).\n\n- train_clean.eu: contains 9.692.996 Basque sentences (authentic).",
"### Data Splits\n\nThe dataset contains a single split: 'train'.",
"## Dataset Creation",
"### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-EU was delivered by HiTZ and the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 9.692.996 parallel sentences .",
"### Personal and Sensitive Information\n\nNo anonymisation process was performed.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Basque, two of the co-official languages of Spain.",
"### Discussion of Biases\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.",
"## Additional Information",
"### Author\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Basque #language-multilingual #region-us \n",
"# Dataset Card for CA-EU Parallel Corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nThe CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of 9.692.996 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\nThe texts in the dataset are in Catalan and Basque.",
"## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).\n\n- train_clean.eu: contains 9.692.996 Basque sentences (authentic).",
"### Data Splits\n\nThe dataset contains a single split: 'train'.",
"## Dataset Creation",
"### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-EU was delivered by HiTZ and the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 9.692.996 parallel sentences .",
"### Personal and Sensitive Information\n\nNo anonymisation process was performed.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Basque, two of the co-official languages of Spain.",
"### Discussion of Biases\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.",
"## Additional Information",
"### Author\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
] | [
58,
10,
94,
4,
53,
45,
18,
70,
19,
5,
66,
16,
8,
38,
61,
43,
5,
21,
18,
18,
24,
78
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-1M<n<10M #source_datasets-original #language-Catalan #language-Basque #language-multilingual #region-us \n# Dataset Card for CA-EU Parallel Corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding## Dataset Description### Dataset Summary\n\nThe CA-EU Parallel Corpus is a Catalan-Basque synthetic dataset of 9.692.996 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\nThe texts in the dataset are in Catalan and Basque.## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- train_clean.ca: contains 9.692.996 Catalan sentences (synthetic).\n\n- train_clean.eu: contains 9.692.996 Basque sentences (authentic).### Data Splits\n\nThe dataset contains a single split: 'train'.## Dataset Creation### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-EU was delivered by HiTZ and the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 9.692.996 parallel sentences .### Personal and Sensitive Information\n\nNo anonymisation process was performed.## Considerations for Using the Data### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Basque, two of the co-official languages of Spain."
] |
7b8a7aa350312b4c609c9531325d7fc52f9c924d |
<h1 align="center">
Nonlinear In-situ Calibration of Strain-Gauge Force/Torque Sensors for Humanoid Robots
</h1>
<div align="center">
Hosameldin Awadalla Omer Mohamed, Gabriele Nava, Punith Reddy Vanteddu, Francesco Braghin and Daniele Pucci
</div>
<div align="center">
2023 IEEE-RAS International Conference on Humanoid Robots (Humanoids)
</div>
<section>
<h2>📂 Dataset</h2>
<p>
    The dataset is collected from the robot <code>iRonCub-Mk3</code> performing movements to calibrate the F/T sensors mounted on the upper arms. Each folder holds some <code>.mat</code> files containing the measurements collected from the robot's sensors with <a href="https://github.com/ami-iit/bipedal-locomotion-framework/tree/master/devices/YarpRobotLoggerDevice"><b>YarpRobotLoggerDevice</b></a>.
</p>
</section>
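As a rough illustration (not part of the official tooling), one of the logged <code>.mat</code> files can be inspected in Python; the file name below is hypothetical:
```python
from scipy.io import loadmat

# Hypothetical file name: replace it with an actual .mat log from one of the folders.
# If the file was saved in MATLAB v7.3 (HDF5) format, scipy cannot read it and a
# reader such as the mat73 package is needed instead.
data = loadmat("ft_calibration_log.mat")
print(data.keys())  # top-level variables stored in the log
```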
| ami-iit/paper_Mohamed_2023_humanoids_nonlinear-ft-calibration_dataset | [
"license:bsd-3-clause",
"region:us"
] | 2023-11-27T16:03:11+00:00 | {"license": "bsd-3-clause"} | 2023-12-10T23:15:46+00:00 | [] | [] | TAGS
#license-bsd-3-clause #region-us
|
<h1 align="center">
Nonlinear In-situ Calibration of Strain-Gauge Force/Torque Sensors for Humanoid Robots
</h1>
<div align="center">
Hosameldin Awadalla Omer Mohamed, Gabriele Nava, Punith Reddy Vanteddu, Francesco Braghin and Daniele Pucci
</div>
<div align="center">
2023 IEEE-RAS International Conference on Humanoid Robots (Humanoids)
</div>
<section>
<h2> Dataset</h2>
<p>
    The dataset is collected from the robot <code>iRonCub-Mk3</code> performing movements to calibrate the F/T sensors mounted on the upper arms. Each folder holds some <code>.mat</code> files containing the measurements collected from the robot's sensors with <a href="URL
</p>
</section>
| [] | [
"TAGS\n#license-bsd-3-clause #region-us \n"
] | [
16
] | [
"passage: TAGS\n#license-bsd-3-clause #region-us \n"
] |
2c3db904541b039dbff44ea375c74acce5e9b612 | # Dataset Card for "test_ds"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | mnoukhov/test_ds | [
"region:us"
] | 2023-11-27T16:08:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "label", "dtype": "string"}, {"name": "reward_baseline", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 158890, "num_examples": 100}, {"name": "valid", "num_bytes": 159279, "num_examples": 100}], "download_size": 0, "dataset_size": 318169}} | 2023-12-16T23:10:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "test_ds"
More Information needed | [
"# Dataset Card for \"test_ds\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"test_ds\"\n\nMore Information needed"
] | [
6,
13
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"test_ds\"\n\nMore Information needed"
] |
5783e8e7fda52366b90ea53744a236eef48d4668 | # Dataset Card for "customers-complaints-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | aciborowska/customers-complaints-train | [
"region:us"
] | 2023-11-27T16:27:43+00:00 | {"dataset_info": {"features": [{"name": "Date_received", "dtype": "string"}, {"name": "Product", "dtype": "string"}, {"name": "Sub_product", "dtype": "string"}, {"name": "Issue", "dtype": "string"}, {"name": "Sub_issue", "dtype": "string"}, {"name": "Consumer_complaint_narrative", "dtype": "string"}, {"name": "Company_public_response", "dtype": "string"}, {"name": "Company", "dtype": "string"}, {"name": "State", "dtype": "string"}, {"name": "ZIP_code", "dtype": "string"}, {"name": "Tags", "dtype": "string"}, {"name": "Consumer_consent_provided?", "dtype": "string"}, {"name": "Submitted_via", "dtype": "string"}, {"name": "Date_sent_to_company", "dtype": "string"}, {"name": "Company response to consumer", "dtype": "string"}, {"name": "Timely_response?", "dtype": "string"}, {"name": "Consumer_disputed?", "dtype": "string"}, {"name": "Complaint_ID", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 32323002, "num_examples": 24000}], "download_size": 12680642, "dataset_size": 32323002}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-27T16:28:06+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "customers-complaints-train"
More Information needed | [
"# Dataset Card for \"customers-complaints-train\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"customers-complaints-train\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"customers-complaints-train\"\n\nMore Information needed"
] |
368b68bcdfbc68b8a1329db8d7463680eba3329e |
# Dataset Card for CA-GL Parallel Corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Splits](#data-instances)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing information](#licenciung-informatrion)
- [Funding](#funding)
## Dataset Description
### Dataset Summary
The CA-GL Parallel Corpus is a Catalan-Galician synthetic dataset of **33.668.599** parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
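As a rough, self-contained illustration of how BLEU is typically computed for this kind of evaluation, the sketch below uses the sacrebleu library; the sentences are made-up toy strings, not taken from the corpus:
```python
import sacrebleu

# Toy example: system outputs vs. one set of reference translations (made-up strings).
hypotheses = ["O can durme na cociña.", "Mañá vai chover moito."]
references = [["O can dorme na cociña.", "Mañá vai chover."]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```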
### Languages
The texts in the dataset are in Catalan and Galician.
## Dataset Structure
Two separated txt files are provided with the sentences sorted in the same order:
- nos_all.ca: contains 33.668.599 Catalan sentences (synthetic).
- nos_all.gl: contains 33.668.599 Galician sentences (authentic).
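Because the two files are aligned line by line, they can also be wrapped in a Hugging Face `datasets` object for downstream use; a minimal sketch, assuming the files have been downloaded to the working directory:
```python
from datasets import Dataset

# Assumes nos_all.ca and nos_all.gl have been downloaded locally.
with open("nos_all.ca", encoding="utf-8") as f_ca, \
     open("nos_all.gl", encoding="utf-8") as f_gl:
    translations = [{"ca": ca.strip(), "gl": gl.strip()} for ca, gl in zip(f_ca, f_gl)]

ds = Dataset.from_dict({"translation": translations})
print(ds[0])
```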
### Data Splits
The dataset contains a single split: `train`.
## Dataset Creation
### Source Data
This synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-GL was delivered by Proxecto Nós
and the Spanish was translated to Catalan using the machine translation model [PlanTL-GOB-ES](https://huggingface.co/PlanTL-GOB-ES/mt-plantl-es-ca).
**Total: 33.668.599 parallel sentences**.
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks between Catalan and Galician, two of the co-official languages of Spain.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to [email protected].
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/).
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the [project ILENIA](https://proyectoilenia.es/) with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334 | projecte-aina/CA-GL_Parallel_Corpus | [
"task_categories:translation",
"multilinguality:translation",
"size_categories:10M<n<100M",
"source_datasets:original",
"language:ca",
"language:gl",
"language:multilingual",
"region:us"
] | 2023-11-27T16:46:09+00:00 | {"language": ["ca", "gl", "multilingual"], "multilinguality": ["translation"], "size_categories": ["10M<n<100M"], "source_datasets": ["original"], "task_categories": ["translation"], "task_ids": [], "pretty_name": "CA-GL Parallel Corpus"} | 2024-01-17T13:47:00+00:00 | [] | [
"ca",
"gl",
"multilingual"
] | TAGS
#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-Galician #language-multilingual #region-us
|
# Dataset Card for CA-GL Parallel Corpus
## Table of Contents
- Dataset Description
- Dataset Summary
- Supported Tasks
- Languages
- Dataset Structure
- Data Splits
- Dataset Creation
- Source Data
- Personal and Sensitive Information
- Considerations for Using the Data
- Social Impact of Dataset
- Discussion of Biases
- Other Known Limitations
- Additional Information
- Author
- Contact Information
- Copyright
- Licensing information
- Funding
## Dataset Description
### Dataset Summary
The CA-GL Parallel Corpus is a Catalan-Galician synthetic dataset of 33.668.599 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g.,
Machine Translation.
### Supported Tasks and Leaderboards
The dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.
### Languages
The texts in the dataset are in Catalan and Galician.
## Dataset Structure
Two separated txt files are provided with the sentences sorted in the same order:
- nos_all.ca: contains 33.668.599 Catalan sentences (synthetic).
- nos_all.gl: contains 33.668.599 Galician sentences (authentic).
### Data Splits
The dataset contains a single split: 'train'.
## Dataset Creation
### Source Data
This synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-GL was delivered by Proxecto Nós
and the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.
Total: 33.668.599 parallel sentences .
### Personal and Sensitive Information
No anonymisation process was performed.
## Considerations for Using the Data
### Social Impact of Dataset
The purpose of this dataset is to help develop Machine Translation tasks between Catalan and Galician, two of the co-official languages of Spain.
### Discussion of Biases
We are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.
Nonetheless, we have not applied any steps to reduce their impact.
### Other Known Limitations
The dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.
## Additional Information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center.
### Contact information
For further information, please send an email to langtech@URL.
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023).
### Licensing information
This work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.
### Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334 | [
"# Dataset Card for CA-GL Parallel Corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nThe CA-GL Parallel Corpus is a Catalan-Galician synthetic dataset of 33.668.599 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\nThe texts in the dataset are in Catalan and Galician.",
"## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- nos_all.ca: contains 33.668.599 Catalan sentences (synthetic).\n\n- nos_all.gl: contains 33.668.599 Galician sentences (authentic).",
"### Data Splits\n\nThe dataset contains a single split: 'train'.",
"## Dataset Creation",
"### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-GL was delivered by Proxecto Nós\nand the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 33.668.599 parallel sentences .",
"### Personal and Sensitive Information\n\nNo anonymisation process was performed.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Galician, two of the co-official languages of Spain.",
"### Discussion of Biases\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.",
"## Additional Information",
"### Author\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
] | [
"TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-Galician #language-multilingual #region-us \n",
"# Dataset Card for CA-GL Parallel Corpus",
"## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding",
"## Dataset Description",
"### Dataset Summary\n\nThe CA-GL Parallel Corpus is a Catalan-Galician synthetic dataset of 33.668.599 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.",
"### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.",
"### Languages\n\nThe texts in the dataset are in Catalan and Galician.",
"## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- nos_all.ca: contains 33.668.599 Catalan sentences (synthetic).\n\n- nos_all.gl: contains 33.668.599 Galician sentences (authentic).",
"### Data Splits\n\nThe dataset contains a single split: 'train'.",
"## Dataset Creation",
"### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-GL was delivered by Proxecto Nós\nand the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 33.668.599 parallel sentences .",
"### Personal and Sensitive Information\n\nNo anonymisation process was performed.",
"## Considerations for Using the Data",
"### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Galician, two of the co-official languages of Spain.",
"### Discussion of Biases\n\nWe are aware that since part of the data comes from unreliable web pages and non-curated texts, some biases may be present in the dataset.\nNonetheless, we have not applied any steps to reduce their impact.",
"### Other Known Limitations\n\nThe dataset contains data of a general domain. Application of this dataset in more specific domains such as biomedical, legal etc. would be of limited use.",
"## Additional Information",
"### Author\nLanguage Technologies Unit (LangTech) at the Barcelona Supercomputing Center.",
"### Contact information\nFor further information, please send an email to langtech@URL.",
"### Copyright\nCopyright Language Technologies Unit at Barcelona Supercomputing Center (2023).",
"### Licensing information\nThis work is licensed under a Attribution-NonCommercial-ShareAlike 4.0 International.",
"### Funding\nThis work is funded by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project ILENIA with reference 2022/TL22/00215337, 2022/TL22/00215336, 2022/TL22/00215335 y 2022/TL22/00215334"
] | [
58,
10,
94,
4,
53,
45,
18,
68,
19,
5,
66,
16,
8,
38,
61,
43,
5,
21,
18,
18,
24,
78
] | [
"passage: TAGS\n#task_categories-translation #multilinguality-translation #size_categories-10M<n<100M #source_datasets-original #language-Catalan #language-Galician #language-multilingual #region-us \n# Dataset Card for CA-GL Parallel Corpus## Table of Contents\n- Dataset Description\n - Dataset Summary\n - Supported Tasks\n - Languages\n- Dataset Structure\n - Data Splits\n- Dataset Creation\n - Source Data\n - Personal and Sensitive Information\n- Considerations for Using the Data\n - Social Impact of Dataset\n - Discussion of Biases\n - Other Known Limitations\n- Additional Information\n - Author\n - Contact Information\n - Copyright\n - Licensing information\n - Funding## Dataset Description### Dataset Summary\n\nThe CA-GL Parallel Corpus is a Catalan-Galician synthetic dataset of 33.668.599 parallel sentences. The dataset was created to support Catalan NLP tasks, e.g., \nMachine Translation.### Supported Tasks and Leaderboards\n\nThe dataset can be used to train a model for Multilingual Machine Translation. Success on this task is typically measured by achieving a high BLEU score.### Languages\n\nThe texts in the dataset are in Catalan and Galician.## Dataset Structure\n\nTwo separated txt files are provided with the sentences sorted in the same order:\n\n- nos_all.ca: contains 33.668.599 Catalan sentences (synthetic).\n\n- nos_all.gl: contains 33.668.599 Galician sentences (authentic).### Data Splits\n\nThe dataset contains a single split: 'train'.## Dataset Creation### Source Data\n\nThis synthetic dataset was created in the frame of Project Ilenia. An authentic parallel corpus ES-GL was delivered by Proxecto Nós\nand the Spanish was translated to Catalan using the machine translation model PlanTL-GOB-ES.\n\nTotal: 33.668.599 parallel sentences .### Personal and Sensitive Information\n\nNo anonymisation process was performed.## Considerations for Using the Data### Social Impact of Dataset\n\nThe purpose of this dataset is to help develop Machine Translation tasks between Catalan and Galician, two of the co-official languages of Spain."
] |
8fb98a5d7101a2d4d6a92f3f1931e22e90bf415c |
## This dataset is being developed now. | kenken6696/folio_by_ccg2lambda | [
"task_categories:text-generation",
"license:cc-by-4.0",
"region:us"
] | 2023-11-27T17:00:57+00:00 | {"license": "cc-by-4.0", "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "original", "dtype": "string"}, {"name": "logical_form", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 442428, "num_examples": 1644}, {"name": "valid", "num_bytes": 102761, "num_examples": 365}], "download_size": 262116, "dataset_size": 545189}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}]} | 2023-11-28T05:44:37+00:00 | [] | [] | TAGS
#task_categories-text-generation #license-cc-by-4.0 #region-us
|
## This dataset is being developed now. | [
"## This dataset is being developed now."
] | [
"TAGS\n#task_categories-text-generation #license-cc-by-4.0 #region-us \n",
"## This dataset is being developed now."
] | [
26,
9
] | [
"passage: TAGS\n#task_categories-text-generation #license-cc-by-4.0 #region-us \n## This dataset is being developed now."
] |