sha | text | id | tags | created_at | metadata | last_modified
---|---|---|---|---|---|---
10e9a9106c3bce5a23a4b9ac1816948bc2a04bce | # Dataset Card for "AToMiC-Qrels-Dedupe"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | justram/AToMiC-Qrels-Dedupe | [
"region:us"
]
| 2022-12-15T05:39:00+00:00 | {"dataset_info": {"features": [{"name": "text_id", "dtype": "string"}, {"name": "Q0", "dtype": "string"}, {"name": "image_id", "dtype": "string"}, {"name": "rel", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 417552084, "num_examples": 5030748}, {"name": "validation", "num_bytes": 3336587, "num_examples": 38859}, {"name": "test", "num_bytes": 2551669, "num_examples": 30938}], "download_size": 226715065, "dataset_size": 423440340}} | 2022-12-15T12:26:48+00:00 |
035191e85dc3f96b321779dd8518954d49a37db2 | # Dataset Card for "clinic-small_talk"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fathyshalab/clinic-small_talk | [
"region:us"
]
| 2022-12-15T06:06:59+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 54000.1, "num_examples": 805}, {"name": "test", "num_bytes": 23142.9, "num_examples": 345}], "download_size": 0, "dataset_size": 77143.0}} | 2022-12-24T04:41:03+00:00 |
5dcc41662640dbd74f9fc89fb328fd0cd666ce03 | # Dataset Card for "kaggle-mbti-cleaned"
This dataset originated from Kaggle [(MBTI) Myers-Briggs Personality Type Dataset](https://www.kaggle.com/datasets/datasnaek/mbti-type).
Some cleaning operations were applied to put this dataset in a usable format for text classification.
See more details on [GitHub](https://github.com/nogibjj/MBTI-Personality-Test).
| Shunian/kaggle-mbti-cleaned | [
"region:us"
]
| 2022-12-15T06:30:41+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51657719, "num_examples": 327828}, {"name": "test", "num_bytes": 12922409, "num_examples": 81957}], "download_size": 42682844, "dataset_size": 64580128}} | 2022-12-16T09:46:54+00:00 |
20c1e98f15d263cdef247667794b403a4119d268 | # Dataset Card for "MULTI_VALUE_mnli_present_modals"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_present_modals | [
"region:us"
]
| 2022-12-15T06:36:59+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 211389, "num_examples": 881}, {"name": "dev_mismatched", "num_bytes": 229061, "num_examples": 936}, {"name": "test_matched", "num_bytes": 227994, "num_examples": 954}, {"name": "test_mismatched", "num_bytes": 240389, "num_examples": 1002}, {"name": "train", "num_bytes": 9136328, "num_examples": 38651}], "download_size": 6147567, "dataset_size": 10045161}} | 2022-12-15T06:37:29+00:00 |
6249e58d42e70a1077b8f3549fef55ac9f5788f4 | Tirendaz/fifa-world-cup-2022-tweets | [
"license:openrail",
"region:us"
]
| 2022-12-15T06:52:54+00:00 | {"license": "openrail"} | 2023-01-23T19:26:22+00:00 |
|
7962619a7ba6dfcc7d16e4e2d948d846f9624b15 | # Dataset Card for "smallnorb"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
**NOTE:** This dataset is an unofficial port of small NORB based on a [repo from Andrea Palazzi](https://github.com/ndrplz/small_norb) using this [script](https://colab.research.google.com/drive/1Tx20uP1PrnyarsNCWf1dN9EQyr38BDIE?usp=sharing). For complete and accurate information, we highly recommend visiting the dataset's original homepage.
- **Homepage:** https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/
- **Paper:** https://ieeexplore.ieee.org/document/1315150
### Dataset Summary
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> This database is intended for experiments in 3D object recognition from shape. It contains images of 50 toys belonging to 5 generic categories: four-legged animals, human figures, airplanes, trucks, and cars. The objects were imaged by two cameras under 6 lighting conditions, 9 elevations (30 to 70 degrees every 5 degrees), and 18 azimuths (0 to 340 every 20 degrees).
>
> The training set is composed of 5 instances of each category (instances 4, 6, 7, 8 and 9), and the test set of the remaining 5 instances (instances 0, 1, 2, 3, and 5).
## Dataset Structure
### Data Instances
An example of an instance in this dataset:
```
{
'image_lt': <PIL.PngImagePlugin.PngImageFile image mode=L size=96x96 at 0x...>,
'image_rt': <PIL.PngImagePlugin.PngImageFile image mode=L size=96x96 at 0x...>,
'category': 0,
'instance': 8,
'elevation': 6,
'azimuth': 4,
'lighting': 4
}
```
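For a quick sanity check, a minimal sketch of loading an instance with the `datasets` library (using the repository id from the end of this card):
```
from datasets import load_dataset

# Load the train split and inspect the first stereo pair.
ds = load_dataset("Ramos-Ramos/smallnorb", split="train")
example = ds[0]
print(example["category"], example["instance"], example["elevation"])
left, right = example["image_lt"], example["image_rt"]  # 96x96 grayscale PIL images
```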
### Data Fields
Explanation of this dataset's fields:
- `image_lt`: a PIL image of the object captured by the left camera of the stereo pair
- `image_rt`: a PIL image of the object captured by the right camera of the stereo pair
- `category`: the category of the object shown in the images
- `instance`: the instance of the category of the object shown in the images
- `elevation`: the label of the elevation of the cameras used in capturing a picture of the object
- `azimuth`: the label of the azimuth of the cameras used in capturing a picture of the object
- `lighting`: the label of the lighting condition used in capturing a picture of the object
For more information on what these categories and labels pertain to, please see [Dataset Summary](#dataset-summary) or the [repo](https://github.com/ndrplz/small_norb) used in processing the dataset.
### Data Splits
Information on this dataset's splits:
| | train | test |
|------|------:|------:|
| size | 24300 | 24300 |
## Additional Information
### Dataset Curators
Credits from the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> [Fu Jie Huang](http://www.cs.nyu.edu/jhuangfu/), [Yann LeCun](http://yann.lecun.com/)
>
> Courant Institute, New York University
>
> October, 2005
### Licensing Information
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> This database is provided for research purposes. It cannot be sold. Publications that include results obtained with this database should reference the following paper:
>
> Y. LeCun, F.J. Huang, L. Bottou, Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2004
### Citation Information
From the dataset's [homepage](https://cs.nyu.edu/~ylclab/data/norb-v1.0-small/):
> Publications that include results obtained with this database should reference the following paper:
>
> Y. LeCun, F.J. Huang, L. Bottou, Learning Methods for Generic Object Recognition with Invariance to Pose and Lighting. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) 2004
```
@inproceedings{lecun2004learning,
title={Learning methods for generic object recognition with invariance to pose and lighting},
author={LeCun, Yann and Huang, Fu Jie and Bottou, Leon},
booktitle={Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004.},
volume={2},
pages={II--104},
year={2004},
organization={IEEE}
}
```
DOI: [10.1109/CVPR.2004.1315150](https://doi.org/10.1109/CVPR.2004.1315150)
### Contributions
Code to process small NORB adapted from [Andrea Palazzi's repo](https://github.com/ndrplz/small_norb) with this [script](https://colab.research.google.com/drive/1Tx20uP1PrnyarsNCWf1dN9EQyr38BDIE?usp=sharing). | Ramos-Ramos/smallnorb | [
"region:us"
]
| 2022-12-15T07:29:28+00:00 | {"dataset_info": {"features": [{"name": "image_lt", "dtype": "image"}, {"name": "image_rt", "dtype": "image"}, {"name": "category", "dtype": "int32"}, {"name": "instance", "dtype": "int32"}, {"name": "elevation", "dtype": "int32"}, {"name": "azimuth", "dtype": "int32"}, {"name": "lighting", "dtype": "int32"}], "splits": [{"name": "train", "num_bytes": 117947794.0, "num_examples": 24300}, {"name": "test", "num_bytes": 118130266.0, "num_examples": 24300}], "download_size": 236815224, "dataset_size": 236078060.0}} | 2022-12-15T08:30:22+00:00 |
1b7eb2cd766295b15100b32949fd1703a7cdf0dc | # Dataset Card for "preprocessed_common_voice_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vumichien/preprocessed_common_voice_11 | [
"region:us"
]
| 2022-12-15T08:26:33+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3429313257, "num_examples": 10990}, {"name": "test", "num_bytes": 1562198132, "num_examples": 4604}], "download_size": 4988499841, "dataset_size": 4991511389}} | 2022-12-15T08:29:56+00:00 |
804dcca92615e463eed5f4d887aefdda9b9035d8 | # Dataset Card for "results_original_model__valid_10rows_2022-12-15"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | joddy/results_original_model__valid_10rows_2022-12-15 | [
"region:us"
]
| 2022-12-15T08:28:19+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}, {"name": "index", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4287818.0, "num_examples": 10}], "download_size": 4289522, "dataset_size": 4287818.0}} | 2022-12-15T08:29:10+00:00 |
29af0533133cd32568b97f625fd93405f825c729 | # Dataset Card for "MULTI_VALUE_mnli_regularized_reflexives_aave"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_regularized_reflexives_aave | [
"region:us"
]
| 2022-12-15T09:22:09+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 19251, "num_examples": 94}, {"name": "dev_mismatched", "num_bytes": 23121, "num_examples": 87}, {"name": "test_matched", "num_bytes": 21604, "num_examples": 90}, {"name": "test_mismatched", "num_bytes": 20670, "num_examples": 82}, {"name": "train", "num_bytes": 936051, "num_examples": 3883}], "download_size": 578604, "dataset_size": 1020697}} | 2022-12-15T09:22:24+00:00 |
da4b164f2e86974facb19440aaf12453ac2d82a2 | # Dataset Card for "MULTI_VALUE_mnli_present_perfect_ever"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_present_perfect_ever | [
"region:us"
]
| 2022-12-15T09:29:14+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 186342, "num_examples": 793}, {"name": "dev_mismatched", "num_bytes": 200160, "num_examples": 788}, {"name": "test_matched", "num_bytes": 220041, "num_examples": 875}, {"name": "test_mismatched", "num_bytes": 197234, "num_examples": 826}, {"name": "train", "num_bytes": 8005522, "num_examples": 32860}], "download_size": 5376415, "dataset_size": 8809299}} | 2022-12-15T09:29:39+00:00 |
3d796049d5b1e6b83d4dabbb59fd4de73264b6a5 | # Dataset Card for "MULTI_VALUE_mnli_regularized_reflexives_object_pronouns"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_regularized_reflexives_object_pronouns | [
"region:us"
]
| 2022-12-15T09:29:50+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 10417, "num_examples": 51}, {"name": "dev_mismatched", "num_bytes": 8139, "num_examples": 41}, {"name": "test_matched", "num_bytes": 11876, "num_examples": 46}, {"name": "test_mismatched", "num_bytes": 8199, "num_examples": 43}, {"name": "train", "num_bytes": 512248, "num_examples": 2249}], "download_size": 285694, "dataset_size": 550879}} | 2022-12-15T09:30:13+00:00 |
38c338a6978d301ca067347ef3d864efe2636dc0 | # Dataset Card for "MULTI_VALUE_mnli_not_preverbal_negator"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_not_preverbal_negator | [
"region:us"
]
| 2022-12-15T09:31:56+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "dev_matched", "num_bytes": 235099, "num_examples": 1073}, {"name": "dev_mismatched", "num_bytes": 210036, "num_examples": 984}, {"name": "test_matched", "num_bytes": 226280, "num_examples": 1010}, {"name": "test_mismatched", "num_bytes": 211827, "num_examples": 1020}, {"name": "train", "num_bytes": 9729916, "num_examples": 43965}], "download_size": 6472771, "dataset_size": 10613158}} | 2022-12-15T09:32:23+00:00 |
2165a30851b65ef599fdc1f4844b7e93fbb5bc29 | paveldruy/sarah | [
"license:openrail",
"region:us"
]
| 2022-12-15T10:08:51+00:00 | {"license": "openrail"} | 2022-12-15T11:14:04+00:00 |
|
70a4b5cb4d65eca749bebe500e6a9e2151e61cda | SanthoshReddy123/bot-ai | [
"license:openrail",
"region:us"
]
| 2022-12-15T11:22:46+00:00 | {"license": "openrail"} | 2022-12-15T11:22:49+00:00 |
|
d057579607c829c88440b267e2be69153dd8d996 | maxdunhill/detectingvulnerablecode | [
"license:apache-2.0",
"region:us"
]
| 2022-12-15T11:37:49+00:00 | {"license": "apache-2.0"} | 2022-12-15T13:55:58+00:00 |
|
31ccb278a2ea7cf6f737610f149a3b68133d6f6c | RaThorat/patterns.jsonl | [
"license:ecl-2.0",
"region:us"
]
| 2022-12-15T11:53:50+00:00 | {"license": "ecl-2.0"} | 2022-12-15T11:55:04+00:00 |
|
14280d1d8043ebf5ab156f80135429fb8575a64d | # Dataset Card for "MULTI_VALUE_MNLI_bare_past_tense"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_bare_past_tense | [
"region:us"
]
| 2022-12-15T12:11:12+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 28088801, "num_examples": 131498}, {"name": "dev_matched", "num_bytes": 690136, "num_examples": 3328}, {"name": "dev_mismatched", "num_bytes": 781000, "num_examples": 3435}, {"name": "test_matched", "num_bytes": 725437, "num_examples": 3403}, {"name": "test_mismatched", "num_bytes": 777555, "num_examples": 3437}], "download_size": 20237766, "dataset_size": 31062929}} | 2022-12-15T12:11:48+00:00 |
e48218e53518b036fd912e83cf90bc7a925d6b8f | # Dataset Card for "MULTI_VALUE_MNLI_nomo_existential"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_nomo_existential | [
"region:us"
]
| 2022-12-15T12:11:50+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 218957, "num_examples": 1020}, {"name": "dev_matched", "num_bytes": 5157, "num_examples": 26}, {"name": "dev_mismatched", "num_bytes": 4733, "num_examples": 22}, {"name": "test_matched", "num_bytes": 5635, "num_examples": 26}, {"name": "test_mismatched", "num_bytes": 3802, "num_examples": 19}], "download_size": 148198, "dataset_size": 238284}} | 2022-12-15T12:12:29+00:00 |
f8b94a675924121f7d3785ebbf46c198187dc6ae | # Dataset Card for "MULTI_VALUE_MNLI_that_resultative_past_participle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/MULTI_VALUE_mnli_that_resultative_past_participle | [
"region:us"
]
| 2022-12-15T12:12:31+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}, {"name": "score", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 82248, "num_examples": 348}, {"name": "dev_matched", "num_bytes": 2264, "num_examples": 9}, {"name": "dev_mismatched", "num_bytes": 3578, "num_examples": 18}, {"name": "test_matched", "num_bytes": 773, "num_examples": 3}, {"name": "test_mismatched", "num_bytes": 4288, "num_examples": 21}], "download_size": 67869, "dataset_size": 93151}} | 2022-12-15T12:12:51+00:00 |
2263f4a1af106cba10d01d39e88c0ca2305b6f25 | # Dataset Card for "text_recognition_en_zh_clean"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | priyank-m/text_recognition_en_zh_clean | [
"region:us"
]
| 2022-12-15T12:22:22+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "val", "num_bytes": 53886975.51, "num_examples": 2910}, {"name": "test", "num_bytes": 55192498.476, "num_examples": 2894}, {"name": "train", "num_bytes": 26744379885.02228, "num_examples": 1396731}], "download_size": 26975033720, "dataset_size": 26853459359.00828}} | 2022-12-16T18:05:44+00:00 |
82793ac62e2e86dd2e4ffce8a0a63b87408c47a3 | # Dataset Card for "preprocessed_jsut_jsss_css10"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vumichien/preprocessed_jsut_jsss_css10 | [
"region:us"
]
| 2022-12-15T13:03:03+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7003135912, "num_examples": 18160}], "download_size": 7021090523, "dataset_size": 7003135912}} | 2022-12-15T13:06:13+00:00 |
4ad3df7317bd71f9da11dee39898120bcb95ed86 | # Dataset Card for "preprocessed_jsut_jsss_css10_common_voice_11"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vumichien/preprocessed_jsut_jsss_css10_common_voice_11 | [
"region:us"
]
| 2022-12-15T13:10:37+00:00 | {"dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": "float32"}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10432449169, "num_examples": 29150}, {"name": "test", "num_bytes": 1562198132, "num_examples": 4604}], "download_size": 12008358604, "dataset_size": 11994647301}} | 2022-12-15T13:17:29+00:00 |
91ccf68006210c5b6d3811bb2800ed0f26f2de81 | # Dataset Card for "clinic-work"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | fathyshalab/clinic-work | [
"region:us"
]
| 2022-12-15T13:25:12+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 39848.2, "num_examples": 525}, {"name": "test", "num_bytes": 17077.8, "num_examples": 225}], "download_size": 0, "dataset_size": 56926.0}} | 2022-12-24T05:55:25+00:00 |
2b2254931a8229c449f08bf937d4405b2a3c4b79 | # Dataset Card for "yannic-kilcher-transcript-audio"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [Whispering-GPT](https://github.com/matallanas/whisper_gpt_pipeline)
- **Repository:** [whisper_gpt_pipeline](https://github.com/matallanas/whisper_gpt_pipeline)
- **Paper:** [whisper](https://cdn.openai.com/papers/whisper.pdf) and [gpt](https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf)
- **Point of Contact:** [Whispering-GPT organization](https://huggingface.co/Whispering-GPT)
### Dataset Summary
This dataset was created by applying Whisper to the videos of the YouTube channel [Yannic Kilcher](https://www.youtube.com/yannickilcher). The transcripts were generated with a medium-sized Whisper model.
### Languages
- **Language**: English
## Dataset Structure
The dataset contains all the transcripts plus the audio of the different videos of Yannic Kilcher.
### Data Fields
The dataset is composed of:
- **id**: ID of the YouTube video.
- **channel**: Name of the channel.
- **channel\_id**: ID of the YouTube channel.
- **title**: Title given to the video.
- **categories**: Category of the video.
- **description**: Description added by the author.
- **text**: Whole transcript of the video.
- **segments**: A list with the time and transcription of the video.
- **start**: When the transcription segment starts.
- **end**: When the transcription segment ends.
- **text**: The text of the transcription.
- **audio**: The extracted audio of the video in OGG format.
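As a hedged sketch of reading one record (streaming is used here only to avoid eagerly downloading the full ~15 GB split):
```
from datasets import load_dataset

# Stream the train split rather than downloading everything up front.
ds = load_dataset("Whispering-GPT/yannick-kilcher-transcript-audio", split="train", streaming=True)
example = next(iter(ds))
print(example["title"])
print(example["segments"][0])  # {'start': ..., 'end': ..., 'text': ...}
audio = example["audio"]       # decoded audio: array + sampling rate
```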
### Data Splits
- Train split.
## Dataset Creation
### Source Data
The transcriptions are from the videos of [Yannic Kilcher](https://www.youtube.com/yannickilcher).
### Contributions
Thanks to [Whispering-GPT](https://huggingface.co/Whispering-GPT) organization for adding this dataset. | Whispering-GPT/yannick-kilcher-transcript-audio | [
"task_categories:automatic-speech-recognition",
"whisper",
"whispering",
"medium",
"region:us"
]
| 2022-12-15T13:25:43+00:00 | {"task_categories": ["automatic-speech-recognition"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 15013848071.0, "num_examples": 370}], "download_size": 15003651933, "dataset_size": 15013848071.0}, "tags": ["whisper", "whispering", "medium"]} | 2022-12-18T17:46:15+00:00 |
60512e89e68841b6b5ed1be59caf97b169f0d27a |
# Dataset Card for MasakhaNER 2.0
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [homepage](https://github.com/masakhane-io/masakhane-ner)
- **Repository:** [github](https://github.com/masakhane-io/masakhane-ner)
- **Paper:** [paper](https://arxiv.org/abs/2103.11811)
- **Point of Contact:** [Masakhane](https://www.masakhane.io/) or [email protected]
### Dataset Summary
MasakhaNER 2.0 is the largest publicly available high-quality dataset for named entity recognition (NER) in 20 African languages created by the Masakhane community.
Named entities are phrases that contain the names of persons, organizations, locations, times and quantities. Example:
[PER Wolff] , currently a journalist in [LOC Argentina] , played with [PER Del Bosque] in the final years of the seventies in [ORG Real Madrid] .
MasakhaNER 2.0 is a named entity dataset consisting of PER, ORG, LOC, and DATE entities annotated by Masakhane for 20 African languages.
The train/validation/test sets are available for all the 20 languages.
For more details see https://arxiv.org/abs/2210.12391
### Supported Tasks and Leaderboards
- `named-entity-recognition`: The performance in this task is measured with [F1](https://huggingface.co/metrics/f1) (higher is better). A named entity is correct only if it is an exact match of the corresponding entity in the data.
### Languages
There are 20 languages available :
- Bambara (bam)
- Ghomala (bbj)
- Ewe (ewe)
- Fon (fon)
- Hausa (hau)
- Igbo (ibo)
- Kinyarwanda (kin)
- Luganda (lug)
- Dholuo (luo)
- Mossi (mos)
- Chichewa (nya)
- Nigerian Pidgin (pcm)
- chiShona (sna)
- Kiswahili (swa)
- Setswana (tsn)
- Twi (twi)
- Wolof (wol)
- isiXhosa (xho)
- Yorùbá (yor)
- isiZulu (zul)
## Dataset Structure
### Data Instances
The examples look like this for Yorùbá:
```
from datasets import load_dataset
data = load_dataset('masakhane/masakhaner2', 'yor')
# Please specify the language code
# A data point consists of sentences separated by empty lines, with tab-separated tokens and tags.
{'id': '0',
'ner_tags': ['B-DATE', 'I-DATE', 'O', 'O', 'O', 'O', 'O', 'B-PER', 'I-PER', 'I-PER', 'O', 'O', 'O', 'O'],
'tokens': ['Wákàtí', 'méje', 'ti', 'ré', 'kọjá', 'lọ', 'tí', 'Luis', 'Carlos', 'Díaz', 'ti', 'di', 'awati', '.']
}
```
### Data Fields
- `id`: id of the sample
- `tokens`: the tokens of the example text
- `ner_tags`: the NER tags of each token
The NER tags correspond to this list:
```
"O", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC", "B-DATE", "I-DATE",
```
In the NER tags, a B denotes the first item of a phrase and an I any non-initial word. There are four types of phrases: person names (PER), organizations (ORG), locations (LOC) and dates & time (DATE).
It is assumed that named entities are non-recursive and non-overlapping. In case a named entity is embedded in another named entity usually, only the top level entity is marked.
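Building on the loading example above, a short sketch of mapping tag ids back to these labels (this assumes `ner_tags` is stored as a sequence of `ClassLabel` ids, the usual Hub convention for NER datasets):
```
from datasets import load_dataset

data = load_dataset('masakhane/masakhaner2', 'yor', split='train')
# Assumes ner_tags is a Sequence(ClassLabel(...)) feature.
label_names = data.features['ner_tags'].feature.names
example = data[0]
print([(token, label_names[tag]) for token, tag in zip(example['tokens'], example['ner_tags'])])
```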
### Data Splits
For all languages, there are three splits.
The original splits were named `train`, `dev` and `test` and they correspond to the `train`, `validation` and `test` splits.
The splits have the following sizes :
| Language | train | validation | test |
|-----------------|------:|-----------:|------:|
| Bambara | 4463 | 638 | 1274 |
| Ghomala | 3384 | 483 | 966 |
| Ewe | 3505 | 501 | 1001 |
| Fon | 4343 | 621 | 1240 |
| Hausa | 5716 | 816 | 1633 |
| Igbo | 7634 | 1090 | 2181 |
| Kinyarwanda | 7825 | 1118 | 2235 |
| Luganda | 4942 | 706 | 1412 |
| Luo | 5161 | 737 | 1474 |
| Mossi | 4532 | 648 | 1613 |
| Nigerian-Pidgin | 5646 | 806 | 1294 |
| Chichewa | 6250 | 893 | 1785 |
| chiShona | 6207 | 887 | 1773 |
| Kiswahili | 6593 | 942 | 1883 |
| Setswana | 3289 | 499 | 996 |
| Akan/Twi | 4240 | 605 | 1211 |
| Wolof | 4593 | 656 | 1312 |
| isiXhosa | 5718 | 817 | 1633 |
| Yoruba | 6877 | 983 | 1964 |
| isiZulu | 5848 | 836 | 1670 |
## Dataset Creation
### Curation Rationale
The dataset was created to provide new NER resources for 20 African languages that have been under-served in natural language processing.
[More Information Needed]
### Source Data
The source of the data is the news domain; details can be found at https://arxiv.org/abs/2210.12391
#### Initial Data Collection and Normalization
The articles were word-tokenized; information on the exact pre-processing pipeline is unavailable.
#### Who are the source language producers?
The source language was produced by journalists and writers employed by the news agency and newspaper mentioned above.
### Annotations
#### Annotation process
Details can be found at https://arxiv.org/abs/2103.11811
#### Who are the annotators?
Annotators were recruited from [Masakhane](https://www.masakhane.io/).
### Personal and Sensitive Information
The data is sourced from news articles and only contains mentions of public figures.
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
Users should keep in mind that the dataset only contains news text, which might limit the applicability of the developed systems to other domains.
## Additional Information
### Dataset Curators
### Licensing Information
The data is licensed under CC BY-NC 4.0 (Creative Commons Attribution-NonCommercial 4.0).
### Citation Information
```
@article{Adelani2022MasakhaNER2A,
title={MasakhaNER 2.0: Africa-centric Transfer Learning for Named Entity Recognition},
author={David Ifeoluwa Adelani and Graham Neubig and Sebastian Ruder and Shruti Rijhwani and Michael Beukman and Chester Palen-Michel and Constantine Lignos and Jesujoba Oluwadara Alabi and Shamsuddeen Hassan Muhammad and Peter Nabende and Cheikh M. Bamba Dione and Andiswa Bukula and Rooweither Mabuya and Bonaventure F. P. Dossou and Blessing K. Sibanda and Happy Buzaaba and Jonathan Mukiibi and Godson Kalipe and Derguene Mbaye and Amelia Taylor and Fatoumata Kabore and Chris C. Emezue and Anuoluwapo Aremu and Perez Ogayo and Catherine W. Gitau and Edwin Munkoh-Buabeng and Victoire Memdjokam Koagne and Allahsera Auguste Tapo and Tebogo Macucwa and Vukosi Marivate and Elvis Mboning and Tajuddeen R. Gwadabe and Tosin P. Adewumi and Orevaoghene Ahia and Joyce Nakatumba-Nabende and Neo L. Mokono and Ignatius M Ezeani and Chiamaka Ijeoma Chukwuneke and Mofetoluwa Adeyemi and Gilles Hacheme and Idris Abdulmumin and Odunayo Ogundepo and Oreen Yousuf and Tatiana Moteu Ngoli and Dietrich Klakow},
journal={ArXiv},
year={2022},
volume={abs/2210.12391}
}
```
### Contributions
Thanks to [@dadelani](https://github.com/dadelani) for adding this dataset. | masakhane/masakhaner2 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:expert-generated",
"language_creators:expert-generated",
"multilinguality:multilingual",
"size_categories:1K<n<10K",
"source_datasets:original",
"language:bm",
"language:bbj",
"language:ee",
"language:fon",
"language:ha",
"language:ig",
"language:rw",
"language:lg",
"language:luo",
"language:mos",
"language:ny",
"language:pcm",
"language:sn",
"language:sw",
"language:tn",
"language:tw",
"language:wo",
"language:xh",
"language:yo",
"language:zu",
"license:afl-3.0",
"ner",
"masakhaner",
"masakhane",
"arxiv:2103.11811",
"arxiv:2210.12391",
"region:us"
]
| 2022-12-15T13:28:09+00:00 | {"annotations_creators": ["expert-generated"], "language_creators": ["expert-generated"], "language": ["bm", "bbj", "ee", "fon", "ha", "ig", "rw", "lg", "luo", "mos", "ny", "pcm", "sn", "sw", "tn", "tw", "wo", "xh", "yo", "zu"], "license": ["afl-3.0"], "multilinguality": ["multilingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["original"], "task_categories": ["token-classification"], "task_ids": ["named-entity-recognition"], "pretty_name": "masakhaner2.0", "tags": ["ner", "masakhaner", "masakhane"]} | 2023-09-11T17:00:07+00:00 |
33e7302db76eda5e1cf963615605ed03434f4513 | # Dataset Card for "NLQuAD"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [https://github.com/ASoleimaniB/NLQuAD](https://github.com/ASoleimaniB/NLQuAD)
- **Paper: https://aclanthology.org/2021.eacl-main.106/**
- **Size of the generated dataset:** 89.95 MB
### Dataset Summary
This is a copy of the original NLQuAD dataset distributed via [Github](https://github.com/ASoleimaniB/NLQuAD).
NLQuAD is a non-factoid long question answering dataset from BBC news articles.
NLQuAD's question types and the long length of its context documents and answers make it a challenging real-world task.
NLQuAD consists of news articles as context documents, interrogative sub-headings in the articles as questions, and body paragraphs corresponding to the sub-headings as contiguous answers to the questions.
NLQuAD contains 31k non-factoid questions and long answers collected from 13k BBC news articles.
See example articles in BBC [1](https://www.bbc.com/news/world-asia-china-51230011), [2](https://www.bbc.com/news/world-55709428).
We automatically extract target answers because annotating for non-factoid long QA is extremely challenging and costly.
## Dataset Structure
### Data Instances
An example of 'train' looks as follows.
```json
{
"title": "Khashoggi murder: Body 'dissolved in acid'",
"date": "2 November 2018",
"paragraphs":[
{
"context": "A top Turkish official, presidential adviser Yasin Aktay, has said ....",
"qas":[
{
"question":"What was said in the crown prince's alleged phone call?",
"id":"0_0",
"answers":[
{
"text":"During the call with President Donald Trump\'s son-in-law Jared Kushner and national ....",
"answer_start":1352,
"answer_end": 2108,
}
]
},
{
"question":"What has the investigation found so far?",
"id":"0_1",
"answers":[
{
"text":"There is still no consensus on how Khashoggi died. He entered ....",
"answer_start":2109,
"answer_end": 3128,
}
]
}
]
}
]
}
```
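A minimal sketch of loading this copy and drilling into the nested structure (split name per the metadata at the end of this card):
```
from datasets import load_dataset

ds = load_dataset("LLukas22/NLQuAD", split="train")
article = ds[0]
qa = article["paragraphs"][0]["qas"][0]
print(qa["question"])
print(qa["answers"][0]["answer_start"], qa["answers"][0]["answer_end"])
```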
### Data Fields
The data fields are the same among all splits.
- `title`: a `string` feature.
- `date`: a `string` feature.
- `paragraphs`: a list feature containing dictionaries:
- `context`: a `string` feature.
- `qas`: a list feature containing dictionaries:
- `question`: a `string` feature.
- `id`: a `string` feature.
- `answers`: a list feature containing dictionaries:
- `text`: a `string` feature.
- `answer_start`: an `int32` feature.
- `answer_end`: an `int32` feature.
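Since answers are annotated as character offsets, the Intersection over Union (IoU) evaluation described in the paper cited below can be sketched directly over those offsets; this is an illustrative helper, not the authors' reference implementation:
```
def span_iou(pred_start, pred_end, gold_start, gold_end):
    """Position-sensitive overlap between a predicted and a gold answer span."""
    intersection = max(0, min(pred_end, gold_end) - max(pred_start, gold_start))
    union = (pred_end - pred_start) + (gold_end - gold_start) - intersection
    return intersection / union if union > 0 else 0.0

# Using the first answer above: a prediction that starts late but ends correctly.
print(span_iou(1500, 2108, 1352, 2108))  # ~0.80
```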
### Data Splits
|      | train | test | validation |
|------|------:|-----:|-----------:|
| size | 10259 | 1280 | 1280 |
## Additional Information
### Licensing Information
This dataset is distributed under the [CC BY-NC](https://creativecommons.org/licenses/by-nc/3.0/) licence providing free access for non-commercial and academic usage.
### Citation Information
BibTeX:
```
@inproceedings{soleimani-etal-2021-nlquad,
title = "{NLQ}u{AD}: A Non-Factoid Long Question Answering Data Set",
author = "Soleimani, Amir and
Monz, Christof and
Worring, Marcel",
booktitle = "Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume",
month = apr,
year = "2021",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.eacl-main.106",
doi = "10.18653/v1/2021.eacl-main.106",
pages = "1245--1255",
abstract = "We introduce NLQuAD, the first data set with baseline methods for non-factoid long question answering, a task requiring document-level language understanding. In contrast to existing span detection question answering data sets, NLQuAD has non-factoid questions that are not answerable by a short span of text and demanding multiple-sentence descriptive answers and opinions. We show the limitation of the F1 score for evaluation of long answers and introduce Intersection over Union (IoU), which measures position-sensitive overlap between the predicted and the target answer spans. To establish baseline performances, we compare BERT, RoBERTa, and Longformer models. Experimental results and human evaluations show that Longformer outperforms the other architectures, but results are still far behind a human upper bound, leaving substantial room for improvements. NLQuAD{'}s samples exceed the input limitation of most pre-trained Transformer-based models, encouraging future research on long sequence language models.",
}
``` | LLukas22/NLQuAD | [
"task_ids:extractive-qa",
"multilinguality:monolingual",
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-3.0",
"region:us"
]
| 2022-12-15T15:05:57+00:00 | {"language": ["en"], "license": ["cc-by-3.0"], "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "task_ids": ["extractive-qa"], "pretty_name": "NLQuAD", "dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "paragraphs", "list": [{"name": "context", "dtype": "string"}, {"name": "qas", "list": [{"name": "answers", "list": [{"name": "answer_end", "dtype": "int64"}, {"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}]}]}], "splits": [{"name": "train", "num_bytes": 72036724, "num_examples": 10259}, {"name": "test", "num_bytes": 9045482, "num_examples": 1280}, {"name": "validation", "num_bytes": 8876137, "num_examples": 1280}], "download_size": 0, "dataset_size": 89958343}} | 2022-12-23T13:04:58+00:00 |
39773343c536d17028c3311abaf50ef0bc49bd24 |
# Dataset Card for Multi<sup>3</sup>NLU++
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contact](#contact)
## Dataset Description
- **Paper:** [arXiv](https://arxiv.org/abs/2212.10455)
### Dataset Summary
Please access the dataset using
```
git clone https://huggingface.co/datasets/uoe-nlp/multi3-nlu/
```
Multi<sup>3</sup>NLU++ consists of 3080 utterances per language representing challenges in building multilingual multi-intent multi-domain task-oriented dialogue systems. The domains include banking and hotels. There are 62 unique intents.
### Supported Tasks and Leaderboards
- multi-label intent detection
- slot filling
- cross-lingual language understanding for task-oriented dialogue
### Languages
The dataset covers four target languages in addition to the source dataset in English:
Spanish, Turkish, Marathi, and Amharic.
## Dataset Structure
### Data Instances
Each data instance contains the following features: _text_, _intents_, _uid_, _lang_, and occasionally _slots_ and _values_.
See the [Multi<sup>3</sup>NLU++ corpus viewer](https://huggingface.co/datasets/uoe-nlp/multi3-nlu/viewer/uoe-nlp--multi3-nlu/train) to explore more examples.
An example from the Multi<sup>3</sup>NLU++ looks like the following:
```
{
"text": "เคฎเคพเคเฅ เคเคฆเฅเคฏเคพเคเฅ เคฐเคฟเคเคฐเฅเคตเฅเคถเคจ เคฎเคฒเคพ เคฐเคฆเฅเคฆ เคเคพ เคเคฐเคคเคพ เคฏเฅเคฃเคพเคฐ เคจเคพเคนเฅ?",
"intents": [
"why",
"booking",
"cancel_close_leave_freeze",
"wrong_notworking_notshowing"
],
"slots": {
"date_from": {
"text": "เคเคฆเฅเคฏเคพเคเฅ",
"span": [
5,
12
],
"value": {
"day": 16,
"month": 3,
"year": 2022
}
}
},
"uid": "hotel_1_1",
"lang": "mr"
}
```
### Data Fields
- 'text': a string containing the utterance for which the intent needs to be detected
- 'intents': the corresponding intent labels
- 'uid': unique identifier per language
- 'lang': the language of the dataset
- 'slots': annotation of the span that needs to be extracted for value extraction, together with its label and _value_
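Because utterances can carry several intents at once, multi-label intent detection typically binarizes the `intents` field; below is a minimal sketch with scikit-learn (the tooling choice is an assumption, not part of the dataset):
```
from sklearn.preprocessing import MultiLabelBinarizer

# Toy examples mirroring the instance shown above.
examples = [
    {"text": "...", "intents": ["why", "booking", "cancel_close_leave_freeze"]},
    {"text": "...", "intents": ["booking"]},
]
mlb = MultiLabelBinarizer()
y = mlb.fit_transform([ex["intents"] for ex in examples])
print(mlb.classes_)  # sorted intent labels
print(y)             # one binary indicator row per utterance
```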
### Data Splits
The experiments are done on different k-fold validation setups. The dataset has multiple types of data splits. Please see Section 4 of the paper.
## Dataset Creation
### Curation Rationale
Existing task-oriented dialogue datasets are 1) predominantly limited to detecting a single intent, 2) focused on a single domain, and 3) include a small set of slot types. Furthermore, the success of task-oriented dialogue is 4) often evaluated on a small set of higher-resource languages (i.e., typically English) which does not test how generalisable systems are to the diverse range of the world's languages.
Our proposed dataset addresses all of these limitations.
### Source Data
#### Initial Data Collection and Normalization
Please see Section 3 of the paper
#### Who are the source language producers?
The source language producers are the authors of the [NLU++ dataset](https://arxiv.org/abs/2204.13021). The dataset was professionally translated into our chosen four languages. We used Blend Express and Proz.com to recruit these translators.
### Personal and Sensitive Information
None. Names are fictional.
### Discussion of Biases
We have carefully vetted the examples to exclude problematic ones.
### Other Known Limitations
The dataset comprises utterances extracted from real dialogues between users and conversational agents as well as synthetic human-authored utterances constructed with the aim of introducing additional combinations of intents and slots. The utterances therefore lack the wider context that would be present in a complete dialogue. As such the dataset cannot be used to evaluate systems with respect to discourse-level phenomena present in dialogue.
## Additional Information
Baseline models:
Our MLP and QA baseline models are built on the Hugging Face `transformers` library.
### QA
We use the following script for our QA experiments; please refer to the paper for more details.
```
# run_qa.py from the transformers examples:
# https://github.com/huggingface/transformers/blob/main/examples/pytorch/question-answering/run_qa.py
python run_qa.py config_qa.json
```
### Licensing Information
The dataset is licensed under Creative Commons Attribution 4.0 International (cc-by-4.0).
### Citation Information
Coming soon
### Contact
[Nikita Moghe](mailto:[email protected]) and [Evgeniia Razumovskaia](mailto:[email protected]) and [Liane Guillou](mailto:[email protected])
Dataset card based on [Allociné](https://huggingface.co/datasets/allocine) | uoe-nlp/multi3-nlu | [
"task_categories:text-classification",
"multilinguality:multilingual",
"source_datasets:nluplusplus",
"language:multilingual",
"license:cc-by-4.0",
"arxiv:2212.10455",
"arxiv:2204.13021",
"region:us"
]
| 2022-12-15T15:46:30+00:00 | {"language": ["multilingual"], "license": ["cc-by-4.0"], "multilinguality": ["multilingual"], "source_datasets": ["nluplusplus"], "task_categories": ["text-classification"], "pretty_name": "multi3-nlu"} | 2023-06-07T09:46:27+00:00 |
145d48995c3839609d2a7e7460c9bb9a5be6df66 | # Dataset Card for "common_voice_11_0_id_filtered"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | evanarlian/common_voice_11_0_id_filtered | [
"region:us"
]
| 2022-12-15T16:05:49+00:00 | {"dataset_info": {"features": [{"name": "client_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "int64"}, {"name": "down_votes", "dtype": "int64"}, {"name": "age", "dtype": "string"}, {"name": "gender", "dtype": "string"}, {"name": "accent", "dtype": "string"}, {"name": "locale", "dtype": "string"}, {"name": "segment", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 570693903.7812607, "num_examples": 22906}, {"name": "validation", "num_bytes": 98832914.0, "num_examples": 3226}, {"name": "test", "num_bytes": 112254685.0, "num_examples": 3618}, {"name": "other", "num_bytes": 147132536.35696015, "num_examples": 6380}, {"name": "invalidated", "num_bytes": 63830420.0, "num_examples": 2466}], "download_size": 975354578, "dataset_size": 992744459.1382209}} | 2022-12-15T16:06:36+00:00 |
b7b603a637c056fe07d381f21c878d6002bd4758 | # Dataset Card for "python_vul_cvefix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EddieChen372/python_vul_cvefix | [
"region:us"
]
| 2022-12-15T16:39:23+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "CWE-22", "1": "CWE-79", "2": "CWE-89", "3": "CWE-352", "4": "CWE-601", "5": "CWE-94"}}}}, {"name": "code_before", "dtype": "string"}, {"name": "code_after", "dtype": "string"}, {"name": "label_text", "dtype": "string"}, {"name": "deleted", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "added", "struct": [{"name": "code", "sequence": "string"}, {"name": "line_no", "sequence": "int64"}]}, {"name": "normalized_code_before", "dtype": "string"}, {"name": "normalized_code_after", "dtype": "string"}, {"name": "before_doc_string_pos", "sequence": "int64"}, {"name": "after_doc_string_pos", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 15101828.429268293, "num_examples": 204}, {"name": "test", "num_bytes": 3822268.0, "num_examples": 52}], "download_size": 6388923, "dataset_size": 18924096.429268293}} | 2022-12-15T16:40:12+00:00 |
34a2b1e78876a914e12d94f2492aad0f4d700df0 | # Dataset Card for "titanic"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lewtun/titanic | [
"kaggle",
"region:us"
]
| 2022-12-15T17:56:38+00:00 | {"tags": ["kaggle"], "dataset_info": {"features": [{"name": "PassengerId", "dtype": "int64"}, {"name": "Survived", "dtype": "int64"}, {"name": "Pclass", "dtype": "int64"}, {"name": "Name", "dtype": "string"}, {"name": "Sex", "dtype": "string"}, {"name": "Age", "dtype": "float64"}, {"name": "SibSp", "dtype": "int64"}, {"name": "Parch", "dtype": "int64"}, {"name": "Ticket", "dtype": "string"}, {"name": "Fare", "dtype": "float64"}, {"name": "Cabin", "dtype": "string"}, {"name": "Embarked", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 49054, "num_examples": 418}, {"name": "train", "num_bytes": 103906, "num_examples": 891}], "download_size": 61019, "dataset_size": 152960}} | 2022-12-15T17:59:50+00:00 |
a881ea99d11c85b7e7263d818be38d87415132b8 | 18/1-22/10 pixiv monthly ranking top50 & yandere images 110k
with txt | haor/pixiv-yandere | [
"license:openrail",
"region:us"
]
| 2022-12-15T18:38:01+00:00 | {"license": "openrail"} | 2022-12-16T12:32:39+00:00 |
7016f3062103cde04b4551204be4ae99d950db0b |
This dataset is a random 1/3 slice of the original [told-br](https://huggingface.co/datasets/told-br) | alexandreteles/told_br_binary_sm | [
"multilinguality:monolingual",
"size_categories:1K<n<10K",
"source_datasets:told-br",
"language:pt",
"license:cc-by-sa-4.0",
"region:us"
]
| 2022-12-15T21:08:14+00:00 | {"language": ["pt"], "license": "cc-by-sa-4.0", "multilinguality": ["monolingual"], "size_categories": ["1K<n<10K"], "source_datasets": ["told-br"], "pretty_name": "ToLD-Br-small", "language_bcp47": ["pt-BR"]} | 2022-12-15T23:00:52+00:00 |
dc95e5c611f14a95f798dfeb236a6f67aa8252ae | # AutoTrain Dataset for project: told_br_binary_sm
## Dataset Description
This dataset has been automatically processed by AutoTrain for project told_br_binary_sm.
### Languages
The BCP-47 code for the dataset's language is pt.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "@user agora n\u00e3o me d\u00e1 mais, mas antes, porra",
"target": 1
},
{
"text": "pires \u00e9 fodido fds mais um",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1'], id=None)"
}
```
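A minimal sketch of loading the processed data (repository id from this card; split names follow the table below):
```
from datasets import load_dataset

ds = load_dataset("alexandreteles/autotrain-data-told_br_binary_sm")
print(ds)              # expected splits: train / valid (see table below)
print(ds["train"][0])  # {'text': ..., 'target': 0 or 1}
```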
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5599 |
| valid | 1401 |
| alexandreteles/autotrain-data-told_br_binary_sm | [
"task_categories:text-classification",
"language:pt",
"region:us"
]
| 2022-12-15T21:28:30+00:00 | {"language": ["pt"], "task_categories": ["text-classification"]} | 2022-12-15T21:29:16+00:00 |
84de4ed4ed041e25f9d3bb95ebb7bf3f4e3551ff |
This post was originally published on the [Hugging Face blog 🤗](https://huggingface.co/blog/ethics-soc-2)
# Ethics and Society Newsletter #2
## Let's Talk about Bias!
_Bias in ML is ubiquitous, and bias in ML is complex; so complex in fact that no single technical intervention is likely to meaningfully address the problems it engenders. ML models, as sociotechnical systems, amplify social trends that may exacerbate inequities and harmful biases in ways that depend on their deployment context and are constantly evolving._
_This means that developing ML systems with care requires vigilance and responding to feedback from those deployment contexts, which in turn we can facilitate by sharing lessons across contexts and developing tools to analyze signs of bias at every level of ML development._
_This blog post from the [Ethics and Society regulars @🤗](https://huggingface.co/blog/ethics-soc-1) shares some of the lessons we have learned along with tools we have developed to support ourselves and others in our community's efforts to better address bias in Machine Learning. The first part is a broader reflection on bias and its context. If you've already read it and are coming back specifically for the tools, feel free to jump to the [datasets](#i-am-curatingpicking-a-dataset-for-my-ml-system-how-can-i-address-bias) or [models](#i-am-trainingselecting-a-model-for-my-ml-system-how-can-i-address-bias)
section!_
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img1.jpg" alt="Selection of tools developed by HF team members to address bias in ML" />
<em>Selection of tools developed by 🤗 team members to address bias in ML</em>
</p>
**<span style="text-decoration:underline;">Table of contents:</span>**
* **<span style="text-decoration:underline;">On Machine Biases</span>**
* [Machine Bias: from ML Systems to Risks](#machine-bias-from-ml-systems-to-personal-and-social-risks)
* [Putting Bias in Context](#putting-bias-in-context)
* **<span style="text-decoration:underline;">Tools and Recommendations</span>**
* [Addressing Bias throughout ML Development](#addressing-bias-throughout-the-ml-development-cycle)
* [Task Definition](#i-am-defining-the-task-of-my-ml-system-how-can-i-address-bias)
* [Dataset Curation](#i-am-curatingpicking-a-dataset-for-my-ml-system-how-can-i-address-bias)
* [Model Training](#i-am-trainingselecting-a-model-for-my-ml-system-how-can-i-address-bias)
* [Overview of 🤗 Bias Tools](#conclusion-and-overview-of-bias-analysis-and-documentation-tools-from-🤗)
## _Machine Bias:_ from ML Systems to Personal and Social Risks
ML systems allow us to automate complex tasks at a scale never seen before as they are deployed in more sectors and use cases. When the technology works at its best, it can help smooth interactions between people and technical systems, remove the need for highly repetitive work, or unlock new ways of processing information to support research.
These same systems are also likely to reproduce discriminatory and abusive behaviors represented in their training data, especially when the data encodes human behaviors.
The technology then has the potential to make these issues significantly worse. Automation and deployment at scale can indeed:
1. **lock in** behaviors in time and hinder social progress [from being reflected in technology](https://dl.acm.org/doi/10.1145/3442188.3445922),
2. **spread** harmful behaviors [beyond the context](https://arxiv.org/abs/2203.07785) of the original training data,
3. **amplify** inequities by [overfocusing on stereotypical associations](https://arxiv.org/abs/2010.03058) when making predictions,
4. **remove possibilities for recourse** by hiding biases [inside “black-box” systems](https://pubmed.ncbi.nlm.nih.gov/33737318/).
In order to better understand and address these risks, ML researchers and developers have started studying _machine bias_ or _algorithmic bias_, mechanisms that might lead systems to, for example, encode **negative stereotypes or associations** or to have **disparate performance** for different population groups in their deployment context.
**These issues are deeply personal** for many of us ML researchers and developers at Hugging Face and in the broader ML community. Hugging Face is [an international company](https://twitter.com/osanseviero/status/1587444072901492737), with many of us existing between countries and cultures. It is hard to fully express our sense of urgency when we see the technology we work on developed [without sufficient concern](https://dl.acm.org/doi/10.1145/3461702.3462624) for protecting people like us; especially when these systems lead to discriminatory [wrongful arrests](https://incidentdatabase.ai/cite/72/) or undue [financial distress](https://racismandtechnology.center/2021/10/29/amnestys-grim-warning-against-another-toeslagenaffaire/) and are being [increasingly sold](https://www.oecd.org/migration/mig/EMN-OECD-INFORM-FEB-2022-The-use-of-Digitalisation-and-AI-in-Migration-Management.pdf) to immigration and law enforcement services around the world. Similarly, seeing our identities routinely [suppressed in training datasets](https://aclanthology.org/2021.emnlp-main.98/) or [underrepresented in the outputs](https://huggingface.co/spaces/sasha/StableDiffusionBiasExplorer) of “generative AI” [systems](https://twitter.com/willie_agnew/status/1592829238889283585) connects these concerns to our daily lived experiences in ways that are [simultaneously enlightening and taxing](https://www.technologyreview.com/2022/10/28/1062332/responsible-ai-has-a-burnout-problem/).
While our own experiences do not come close to covering the myriad ways in which ML-mediated discrimination can disproportionately harm people whose experiences differ from ours, they provide an entry point into considerations of the trade-offs inherent in the technology. We work on these systems because we **strongly believe in ML's potential: we think it can shine as a valuable tool as long as it is developed with care and input from people in its deployment context**, rather than as a one-size-fits-all panacea. In particular, enabling this care requires developing a better understanding of the mechanisms of machine bias across the ML development process, and developing tools that support people [with all levels of technical knowledge of these systems in participating in the necessary conversations](https://www.vice.com/en/article/bvm35w/this-tool-lets-anyone-see-the-bias-in-ai-image-generators) about how their benefits and harms are distributed.
The present blog post from the Hugging Face [Ethics and Society regulars](https://huggingface.co/blog/ethics-soc-1) provides an overview of how we have worked, are working, or recommend users of the HF ecosystem of libraries may work to address bias at the various stages of the ML development process, and the tools we develop to support this process. We hope you will find it a useful resource to guide concrete considerations of the social impact of your work and can leverage the tools referenced here to help mitigate these issues when they arise.
## Putting Bias in Context
The first and maybe most important concept to consider when dealing with machine bias is **context**. In their foundational work on [bias in NLP](https://aclanthology.org/2020.acl-main.485.pdf), Su Lin Blodgett et al. point out that: _โ[T]he majority of [academic works on machine bias] fail to engage critically with what constitutes โbiasโ in the first placeโ_, including by building their work on top of _โunstated assumptions about what kinds of system behaviors are harmful, in what ways, to whom, and whyโ_.
This may not come as much of a surprise given the ML research communityโs [focus on the value of โgeneralizationโ](https://dl.acm.org/doi/10.1145/3531146.3533083) โ the most cited motivation for work in the field after โperformanceโ. However, while tools for bias assessment that apply to a wide range of settings are valuable to **enable a broader analysis of common trends** in model behaviors, their ability to target the mechanisms that lead to discrimination in **concrete use cases is inherently limited**. Using them to guide specific decisions within the ML development cycle usually requires an extra step or two to take the systemโs specific use context and affected people into consideration.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_foresight.png" alt="Excerpt on considerations of ML uses context and people from the Model Card Guidebook" />
<em>Excerpt on considerations of ML uses context and people from the <a href="https://huggingface.co/docs/hub/model-cards">Model Card Guidebook</a></em>
</p>
Now letโs dive deeper into the issue of linking biases in stand-alone/context-less ML artifacts to specific harms. It can be useful to think of **machine biases as risk factors for discrimination-based harms**. Take the example of a text-to-image model that over-represents light skin tones when prompted to create a picture of a person in a professional setting, but produces darker skin tones [when the prompts mention criminality](https://arxiv.org/abs/2211.03759). These tendencies would be what we call _machine biases at the model level_. Now letโs think about a few systems that use such a text-to-image model:
1. <span style="text-decoration:underline;">The model is integrated into a website creation service</span> (e.g. SquareSpace, Wix) to help users generate backgrounds for their pages. The model explicitly disables images of people in the generated background.
* In this case, the machine bias โrisk factorโ does not lead to discrimination harm because the focus of the bias (images of people) is absent from the use case.
* Further risk mitigation is not required for machine biases, although developers should be aware of ongoing discussions about the legality of integrating systems trained on scraped data in commercial systems.
2. <span style="text-decoration:underline;">The model is integrated into a stock images website</span> to provide users with synthetic images of people (e.g. in professional settings) that they can use with fewer privacy concerns, for example, to serve as illustrations for Wikipedia articles
* In this case, machine bias acts to **lock in** and **amplify** existing social biases. It reinforces stereotypes about people (โCEOs are all white menโ) that then feed back into complex social systems where increased bias leads to increased discrimination in many different ways (such as reinforcing [implicit bias](https://philpapers.org/rec/BEEAIT-2) in the workplace).
* Mitigation strategies may include educating the stock image users about these biases, or the stock image website may curate generated images to intentionally propose a more diverse set of representations.
3. <span style="text-decoration:underline;">The model is integrated into a โvirtual sketch artistโ software</span> marketed to police departments that will use it to generate pictures of suspects based on verbal testimony
* In this case, the machine biases directly cause discrimination by systematically directing police departments to darker-skinned people, putting them at increased risk of harm including physical injury and unlawful imprisonment.
* In cases like this one, there may be no level of bias mitigation that makes the risk acceptable. In particular, such a use case would be closely related to face recognition in the context of law enforcement, where [similar bias issues](https://www.law.georgetown.edu/privacy-technology-center/publications/a-forensic-without-the-science-face-recognition-in-u-s-criminal-investigations/) have led several commercial entities and legislatures to adopt moratoria pausing or banning its use across the board.
So, whoโs on the hook for machine biases in ML? These three cases illustrate one of the reasons why discussions about the responsibility of ML developers in addressing bias can get so complicated: depending on decisions made at other points in the ML system development process by other people, the biases in an ML dataset or model may land anywhere between being irrelevant to the application settings and directly leading to grievous harm. However, in all of these cases, **stronger biases in the model/dataset increase the risk of negative outcomes**. The European Union has started to develop frameworks that address this phenomenon in [recent regulatory efforts](https://ec.europa.eu/info/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en): in short, a company that deploys an AI system based on a measurably biased model is liable for harm caused by the system.
Conceptualizing bias as a risk factor then allows us to better understand the **shared responsibility** for machine biases between developers at all stages. Bias can never be fully removed, not least because the definitions of social biases and the power dynamics that tie them to discrimination vary vastly across social contexts. However:
1. Each stage of the development process, from task specification, dataset curation, and model training, to model integration and system deployment, can take steps to minimize the aspects of machine bias **that most directly depend on its choices** and technical decisions, and
2. Clear communication and **information flow between the various ML development stages** can make the difference between making choices that build on top of each other to attenuate the negative potential of bias (multipronged approach to bias mitigation, as in deployment scenario 1 above) _versus_ making choices that compound this negative potential to exacerbate the risk of harm (as in deployment scenario 3).
In the next section, we review these various stages along with some of the tools that can help us address machine bias at each of them.
## Addressing Bias Throughout the ML Development Cycle
Ready for some practical advice yet? Here we go ๐ค
There is no one single way to develop ML systems; which steps happen in what order depends on a number of factors including the development setting (university, large company, startup, grassroots organization, etcโฆ), the modality (text, tabular data, images, etcโฆ), and the preeminence or scarcity of publicly available ML resources. However, we can identify three common stages of particular interest in addressing bias. These are the task definition, the data curation, and the model training. Letโs have a look at how bias handling may differ across these various stages.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_pipeline.png" alt="The Bias ML Pipeline by Meg" width="500" />
<em>The Bias ML Pipeline by <a href="https://huggingface.co/meg">Meg</a></em>
</p>
### I am <span style="text-decoration:underline;">defining the task</span> of my ML system, how can I address bias?
Whether and to what extent bias in the system concretely affects people ultimately depends on what the system is used for. As such, the first place developers can work to mitigate bias is when deciding how ML fits in their system, e.g., by deciding what optimization objective it will use.
For example, let's go back to one of the first highly-publicized cases of a Machine Learning system used in production for algorithmic content recommendation. From 2006 to 2009, Netflix ran the [Netflix Prize](https://www.cs.uic.edu/~liub/KDD-cup-2007/proceedings/The-Netflix-Prize-Bennett.pdf), a competition with a $1M cash prize challenging teams around the world to develop ML systems to accurately predict a user's rating for a new movie based on their past ratings. The [winning submission](https://www.asc.ohio-state.edu/statistics/dmsl/GrandPrize2009_BPC_BigChaos.pdf) improved the RMSE (root-mean-square error) of predictions on unseen user-movie pairs by over 10% over Netflix's own CineMatch algorithm, meaning it got much better at predicting how users would rate a new movie based on their history. This approach opened the door for much of modern algorithmic content recommendation by bringing the role of ML in modeling user preferences in recommender systems to public awareness.
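For concreteness, the quantity the competition optimized was simply the root-mean-square error over a held-out set of user-movie rating pairs:

```latex
\mathrm{RMSE} = \sqrt{\frac{1}{|\mathcal{T}|} \sum_{(u,m) \in \mathcal{T}} \left(\hat{r}_{u,m} - r_{u,m}\right)^2}
```

Here, T is the held-out set of user-movie pairs, r-hat is the predicted rating, and r is the rating the user actually gave.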
So what does this have to do with bias? Doesnโt showing people content that theyโre likely to enjoy sound like a good service from a content platform? Well, it turns out that showing people more examples of **what theyโve liked in the past** ends up [reducing the diversity of the media they consume](https://dl.acm.org/doi/10.1145/3391403.3399532). Not only does it lead users to be [less satisfied in the long term](https://dl.acm.org/doi/abs/10.1145/3366423.3380281), but it also means that any biases or stereotypes captured by the initial models โ such as when modeling [the preferences of Black American users](https://www.marieclaire.com/culture/a18817/netflix-algorithms-black-movies/) or [dynamics that systematically disadvantage](https://dl.acm.org/doi/10.1145/3269206.3272027) some artists โ are likely to be reinforced if the model is [further trained on ongoing ML-mediated](https://arxiv.org/abs/2209.03942) user interactions. This reflects two of the types of bias-related concerns weโve mentioned above: the training objective acts as a **risk factor** for bias-related harms as it makes pre-existing biases much more likely to show up in predictions, and the task framing has the effect of **locking in** and exacerbating past biases.
A promising bias mitigation strategy at this stage has been to reframe the task to explicitly [model both engagement and diversity](https://dl.acm.org/doi/10.1145/3437963.3441775) when applying ML to algorithmic content recommendation. Users are likely to get more long-term satisfaction and the risk of exacerbating biases as outlined above is reduced!
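To make the reframing tangible, here is a minimal greedy re-ranking sketch that trades a predicted engagement score against redundancy with already-selected items. It is only an illustration of the general idea, not the method of the linked paper; the `lambda_div` weight and the input formats are illustrative assumptions.

```python
import numpy as np

def rerank(candidates, relevance, embeddings, k=10, lambda_div=0.3):
    """Greedily select k items, trading predicted engagement for diversity.

    candidates: list of item ids
    relevance:  dict of item id -> predicted engagement score
    embeddings: dict of item id -> unit-normalized numpy feature vector
    lambda_div: weight of the diversity term (0 recovers pure engagement ranking)
    """
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def objective(item):
            # Redundancy = highest cosine similarity to anything already picked
            redundancy = max(
                (float(embeddings[item] @ embeddings[s]) for s in selected),
                default=0.0,
            )
            return (1 - lambda_div) * relevance[item] - lambda_div * redundancy
        best = max(pool, key=objective)
        selected.append(best)
        pool.remove(best)
    return selected
```

Setting `lambda_div` to 0 recovers the pure engagement ranking, which makes the trade-off between the two objectives explicit and tunable.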
This example serves to illustrate that the impact of machine biases in an ML-supported product depends not just on where we decide to leverage ML, but also on how ML techniques are integrated into the broader technical system, and with what objective. When first investigating how ML can fit into a product or a use case you are interested in, we first recommend looking for the failure modes of the system through the lens of bias before even diving into the available models or datasets - which behaviors of existing systems in the space will be particularly harmful or more likely to occur if bias is exacerbated by ML predictions?
We built a [tool](https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl) to take users through these questions in another case of algorithmic content management: [hate speech detection in automatic content moderation](https://aclanthology.org/2022.hcinlp-1.2/). We found for example that looking through news and scientific articles that didnโt particularly focus on the ML part of the technology was already a great way to get a sense of where bias is already at play. Definitely go have a look for an example of how the models and datasets fit with the deployment context and how they can relate to known bias-related harms!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img2.png" alt="Selection of tools developed by HF team members to address bias in ML" />
<em><a href="https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl">ACM Task Exploration tool</a> by <a href="https://huggingface.co/aymm">Angie</a>, <a href="https://huggingface.co/paullada">Amandalynne</a>, and <a href="https://huggingface.co/yjernite">Yacine</a></em>
</p>
#### Task definition: recommendations
There are as many ways for the ML task definition and deployment to affect the risk of bias-related harms as there are applications for ML systems. As in the examples above, some common steps that may help decide whether and how to apply ML in a way that minimizes bias-related risk include:
* Investigate:
* Reports of bias in the field pre-ML
* At-risk demographic categories for your specific use case
* Examine:
* The impact of your optimization objective on reinforcing biases
* Alternative objectives that favor diversity and positive long-term impacts
### I am <span style="text-decoration:underline;">curating/picking a dataset</span> for my ML system, how can I address bias?
While training datasets are [not the sole source of bias](https://www.cell.com/patterns/fulltext/S2666-3899(21)00061-1) in the ML development cycle, they do play a significant role. Does your [dataset disproportionately associate](https://aclanthology.org/2020.emnlp-main.23/) biographies of women with life events but those of men with achievements? Those **stereotypes** are probably going to show up in your full ML system! Does your voice recognition dataset only feature specific accents? Not a good sign for [the inclusivity of technology](https://www.scientificamerican.com/article/speech-recognition-tech-is-yet-another-example-of-bias/) you build with it in terms of **disparate performance**! Whether youโre curating a dataset for ML applications or selecting a dataset to train an ML model, finding out, mitigating, and [communicating](https://dl.acm.org/doi/10.1145/3479582) to what extent the data exhibits these phenomena are all necessary steps to reducing bias-related risks.
You can usually get a pretty good sense of likely biases in a dataset by reflecting on where it comes from, who the people represented in the data are, and what the curation process was. Several frameworks for this reflection and documentation have been proposed such as [Data Statements for NLP](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00041/43452/Data-Statements-for-Natural-Language-Processing) or [Datasheets for Datasets](https://dl.acm.org/doi/10.1145/3458723). The Hugging Face Hub includes a Dataset Card [template](https://github.com/huggingface/datasets/blob/main/templates/README.md) and [guide](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#dataset-card-creation-guide) inspired by these works; the section on [considerations for using the data](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#considerations-for-using-the-data) is usually a good place to look for information about notable biases if you're browsing datasets, or to write a paragraph sharing your insights on the topic if you're sharing a new one. And if you're looking for more inspiration on what to put there, check out these sections written by Hub users in the [BigLAM organization](https://huggingface.co/biglam) for historical datasets of [legal proceedings](https://huggingface.co/datasets/biglam/old_bailey_proceedings#social-impact-of-dataset), [image classification](https://huggingface.co/datasets/biglam/brill_iconclass#social-impact-of-dataset), and [newspapers](https://huggingface.co/datasets/biglam/bnl_newspapers1841-1879#social-impact-of-dataset).
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img3.png" alt="HF Dataset Card guide for the Social Impact and Bias Sections" />
<em><a href="https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#social-impact-of-dataset">HF Dataset Card guide</a> for the Social Impact and Bias Sections</em>
</p>
While describing the origin and context of a dataset is always a good starting point to understand the biases at play, [quantitatively measuring phenomena](https://arxiv.org/abs/2212.05129) that encode those biases can be just as helpful. If youโre choosing between two different datasets for a given task or choosing between two ML models trained on different datasets, knowing which one better represents the demographic makeup of your ML systemโs user base can help you make an informed decision to minimize bias-related risks. If youโre curating a dataset iteratively by filtering data points from a source or selecting new sources of data to add, measuring how these choices affect the diversity and biases present in your overall dataset can make it safer to use in general.
We've recently released two tools you can leverage to measure your data through a bias-informed lens. The [disaggregators🤗 library](https://github.com/huggingface/disaggregators) provides utilities to quantify the composition of your dataset, either using metadata or leveraging models to infer properties of data points. This can be particularly useful to minimize risks of bias-related **[representation harms](https://aclanthology.org/P16-2096/)** or **disparate performances** of trained models. Look at the [demo](https://huggingface.co/spaces/society-ethics/disaggregators) to see it applied to the LAION, MedMCQA, and The Stack datasets!
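Below is a sketch of what using the library can look like, based on its README at the time of writing; treat the module name (`"pronoun"`), the generated column names, and the example dataset as assumptions to check against the current documentation:

```python
from datasets import load_dataset
from disaggregators import Disaggregator

# Build a disaggregation module that tags each example with pronoun usage
disaggregator = Disaggregator("pronoun", column="text")

ds = load_dataset("imdb", split="train")
ds = ds.map(disaggregator)  # adds boolean columns such as pronoun.she_her

# Aggregate the new columns to estimate the dataset's composition
print(ds.to_pandas().filter(like="pronoun").mean())
```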
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img4.png" alt="Disaggregators tool by Nima" />
<em><a href="https://huggingface.co/spaces/society-ethics/disaggregators">Disaggregator tool</a> by <a href="https://huggingface.co/NimaBoscarino">Nima</a></em>
</p>
Once you have some helpful statistics about the composition of your dataset, youโll also want to look at associations between features in your data items, particularly at associations that may encode derogatory or otherwise negative stereotypes. The Data Measurements Tool we [originally introduced](https://huggingface.co/blog/data-measurements-tool#comparison-statistics) last year allows you to do this by looking at the [normalized Pointwise Mutual Information (nPMI)](https://dl.acm.org/doi/10.1145/3461702.3462557) between terms in your text-based dataset; particularly associations between gendered pronouns that may denote gendered stereotypes. [Run it yourself](https://github.com/huggingface/data-measurements-tool) or [try it here](https://huggingface.co/spaces/huggingface/data-measurements-tool) on a few pre-computed datasets!
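For intuition, here is a bare-bones sketch of the underlying measure: nPMI rescales pointwise mutual information by the co-occurrence probability so that scores fall in [-1, 1]. This is a toy illustration of the formula, not the Data Measurements Tool's actual implementation:

```python
import math

def npmi(docs, w1, w2):
    """Normalized PMI of two words, counting each at most once per document.

    pmi(w1, w2)  = log( p(w1, w2) / (p(w1) * p(w2)) )
    npmi(w1, w2) = pmi(w1, w2) / -log( p(w1, w2) )   # in [-1, 1]
    """
    sets = [set(doc.lower().split()) for doc in docs]
    n = len(sets)
    p1 = sum(w1 in s for s in sets) / n
    p2 = sum(w2 in s for s in sets) / n
    p12 = sum(w1 in s and w2 in s for s in sets) / n
    if p12 == 0:
        return -1.0  # the words never co-occur
    return math.log(p12 / (p1 * p2)) / -math.log(p12)

docs = ["she is a nurse", "he is a doctor", "she is a doctor"]
print(npmi(docs, "she", "nurse"))   # > 0: "she" and "nurse" co-occur often here
print(npmi(docs, "she", "doctor"))  # < 0 on this toy corpus
```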
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img5.png" alt="Data Measurements tool by Meg, Sasha, Bibi, and the Gradio team" />
<em><a href="https://huggingface.co/spaces/huggingface/data-measurements-tool">Data Measurements tool</a> by <a href="https://huggingface.co/meg">Meg</a>, <a href="https://huggingface.co/sasha">Sasha</a>, <a href="https://huggingface.co/Bibss">Bibi</a>, and the <a href="https://gradio.app/">Gradio team</a></em>
</p>
#### Dataset selection/curation: recommendations
These tools aren't full solutions by themselves; rather, they are designed to support critical examination and improvement of datasets through several lenses, including the lens of bias and bias-related risks. In general, we encourage you to keep the following steps in mind when leveraging these and other tools to mitigate bias risks at the dataset curation/selection stage:
* Identify:
* Aspects of the dataset creation that may exacerbate specific biases
* Demographic categories and social variables that are particularly important to the datasetโs task and domain
* Measure:
* The demographic distribution in your dataset
* Pre-identified negative stereotypes represented
* Document:
* Share what youโve Identified and Measured in your Dataset Card so it can benefit other users, developers, and otherwise affected people
* Adapt:
* By choosing the dataset least likely to cause bias-related harms
* By iteratively improving your dataset in ways that reduce bias risks
### I am <span style="text-decoration:underline;">training/selecting a model</span> for my ML system, how can I address bias?
Similar to the dataset curation/selection step, documenting and measuring bias-related phenomena in models can help both ML developers who are selecting a model to use as-is or to finetune and ML developers who want to train their own models. For the latter, measures of bias-related phenomena in the model can help them learn from what has worked or what hasnโt for other models and serve as a signal to guide their own development choices.
Model cards were originally proposed by [(Mitchell et al., 2019)](https://dl.acm.org/doi/10.1145/3287560.3287596) and provide a framework for model reporting that showcases information relevant to bias risks, including broad ethical considerations, disaggregated evaluation, and use case recommendation. The Hugging Face Hub provides even more tools for model documentation, with a [model card guidebook](https://huggingface.co/docs/hub/model-cards) in the Hub documentation, and an [app that lets you create extensive model cards](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) easily for your new model.
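Model cards can also be read and written programmatically, which makes it easier to keep bias documentation in sync with the model itself. Here is a minimal sketch using the `ModelCard` class from `huggingface_hub`; check the current library docs for the exact APIs and supported versions:

```python
from huggingface_hub import ModelCard

# Load an existing card from the Hub
card = ModelCard.load("bert-base-uncased")

print(card.data)  # structured YAML metadata (license, tags, ...)

# Quick check: does the free-text part of the card discuss bias at all?
print("mentions bias:", "bias" in card.text.lower())
```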
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img6.png" alt="Model Card writing tool by Ezi, Marissa, and Meg" />
<em><a href="https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool">Model Card writing tool</a> by <a href="https://huggingface.co/Ezi">Ezi</a>, <a href="https://huggingface.co/Marissa">Marissa</a>, and <a href="https://huggingface.co/meg">Meg</a></em>
</p>
Documentation is a great first step for sharing general insights about a model's behavior, but it is usually static and presents the same information to all users. In many cases, especially for generative models that can generate outputs to approximate the distribution of their training data, we can gain a more contextual understanding of bias-related phenomena and **negative stereotypes** by visualizing and contrasting model outputs. Access to model generations can help users surface [intersectional issues in the model behavior](https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/) corresponding to their lived experience, and evaluate to what extent a model reproduces [gendered stereotypes for different adjectives](https://www.vice.com/en/article/bvm35w/this-tool-lets-anyone-see-the-bias-in-ai-image-generators). To facilitate this process, we built a tool that lets you compare generations not just across a set of adjectives and professions, but also across different models! [Go try it out](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer) to get a sense of which model might carry the least bias risks in your use case.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img7.png" alt="Visualize Adjective and Occupation Biases in Image Generation by Sasha" />
<br>
<em><a href="https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer">Visualize Adjective and Occupation Biases in Image Generation</a> by <a href="https://huggingface.co/sasha">Sasha</a></em>
</p>
Visualization of model outputs isn't just for generative models though! For classification models, we also want to look out for bias-related harms caused by a model's **disparate performance** on different demographics. If you know what protected classes are most at risk of discrimination and have those annotated in an evaluation set, then you can report disaggregated performance over the different categories in [your model card](https://dl.acm.org/doi/10.1145/3287560.3287596) as mentioned above, so users can make informed decisions. If, however, you are worried that you haven't identified all populations at risk of bias-related harms, or if you do not have access to annotated test examples to measure the biases you suspect, that's where interactive visualizations of where and how the model fails come in handy! To help you with this, the [SEAL app](https://huggingface.co/spaces/nazneen/seal) groups similar mistakes made by your model and shows you some common features in each cluster. If you want to go further, you can even combine it with the [disaggregators library](https://github.com/huggingface/disaggregators) we introduced in the datasets section to find clusters that are indicative of bias-related failure modes!
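When group annotations are available, computing disaggregated performance can be as simple as grouping your evaluation results; the column names and data below are purely illustrative:

```python
import pandas as pd

# Illustrative evaluation results: one row per test example
results = pd.DataFrame({
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 0, 1, 1, 1],
    "group":      ["A", "A", "B", "B", "B", "A"],  # e.g. inferred via disaggregators
})
results["correct"] = results["label"] == results["prediction"]

# Accuracy per group, and the gap between best- and worst-served groups
per_group = results.groupby("group")["correct"].mean()
print(per_group)
print("accuracy gap:", per_group.max() - per_group.min())
```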
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img8.png" alt="Systematic Error Analysis and Labeling (SEAL) by Nazneen" />
<em><a href="https://huggingface.co/spaces/nazneen/seal">Systematic Error Analysis and Labeling (SEAL)</a> by <a href="https://huggingface.co/nazneen">Nazneen</a></em>
</p>
Finally, a few benchmarks exist that can measure bias-related phenomena in models. For language models, benchmarks such as [BOLD](https://github.com/amazon-science/bold), [HONEST](https://aclanthology.org/2021.naacl-main.191.pdf), or [WinoBias](https://aclanthology.org/N18-2003/) provide quantitative evaluations of targeted behaviors that are indicative of biases in the models. While the benchmarks have their [limitations](https://aclanthology.org/2021.acl-long.81/), they do provide a limited view into some pre-identified bias risks that can help describe how the models function or choose between different models. You can find these evaluations pre-computed on a range of common language models [in this exploration Space](https://huggingface.co/spaces/sasha/BiasDetection) to get a first sense of how they compare!
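To get a hands-on feel for what these benchmarks probe, you can run a quick WinoBias-style check yourself with a fill-mask pipeline, comparing the probability a model assigns to gendered pronouns in occupation templates. This is an informal illustration, not the official benchmark implementation, and the model and templates are arbitrary choices:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")

for occupation in ["nurse", "mechanic"]:
    prompt = f"The {occupation} said that <mask> would be late."
    # Restrict predictions to the two pronouns we want to compare
    scores = {
        pred["token_str"].strip(): round(pred["score"], 3)
        for pred in fill(prompt, targets=[" he", " she"])
    }
    print(occupation, scores)
```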
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img9.png" alt="Language Model Bias Detection by Sasha" />
<em><a href="https://huggingface.co/spaces/sasha/BiasDetection">Language Model Bias Detection</a> by <a href="https://huggingface.co/sasha">Sasha</a></em>
</p>
Even with access to a benchmark for the models you are considering, you might find that running evaluations of the larger language models you are considering can be prohibitively expensive or otherwise technically impossible with your own computing resources. The <a href="https://huggingface.co/spaces/autoevaluate/model-evaluator">Evaluation on the Hub</a> tool we released this year can help with that: not only will it run the evaluations for you, but it will also help connect them to the model documentation so the results are available once and for all โ so everyone can see, for example, that size <a href="https://huggingface.co/blog/zero-shot-eval-on-the-hub">measurably increases bias risks in models like OPT</a>!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_winobias.png" alt="Large model WinoBias scores computed with Evaluation on the Hub by Helen, Tristan, Abhishek, Lewis, and Douwe" />
<em><a href="https://huggingface.co/blog/zero-shot-eval-on-the-hub">Large model WinoBias scores computed with Evaluation on the Hub</a> by <a href="https://huggingface.co/mathemakitten">Helen</a>, <a href="https://huggingface.co/Tristan">Tristan</a>, <a href="https://huggingface.co/abhishek">Abhishek</a>, <a href="https://huggingface.co/lewtun">Lewis</a>, and <a href="https://huggingface.co/douwekiela">Douwe</a></em>
</p>
#### Model selection/development: recommendations
For models just as for datasets, different tools for documentation and evaluation will provide different views of bias risks in a model which all have a part to play in helping developers choose, develop, or understand ML systems.
* Visualize
* Generative model: visualize how the modelโs outputs may reflect stereotypes
* Classification model: visualize model errors to identify failure modes that could lead to disparate performance
* Evaluate
* When possible, evaluate models on relevant benchmarks
* Document
* Share your learnings from visualization and qualitative evaluation
* Report your modelโs disaggregated performance and results on applicable fairness benchmarks
## Conclusion and Overview of Bias Analysis and Documentation Tools from ๐ค
As we learn to leverage ML systems in more and more applications, reaping their benefits equitably will depend on our ability to actively mitigate the risks of bias-related harms associated with the technology. While there is no single answer to the question of how this should best be done in any possible setting, we can support each other in this effort by sharing lessons, tools, and methodologies to mitigate and document those risks. The present blog post outlines some of the ways Hugging Face team members have addressed this question of bias along with supporting tools; we hope that you will find them helpful and encourage you to develop and share your own!
Summary of linked tools:
* Tasks:
* Explore our directory of [ML Tasks](https://huggingface.co/tasks) to understand what technical framings and resources are available to choose from
* Use tools to explore the [full development lifecycle](https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl) of specific tasks
* Datasets:
* Make use of and contribute to [Dataset Cards](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#social-impact-of-dataset) to share relevant insights on biases in datasets.
* Use [Disaggregator](https://github.com/huggingface/disaggregators) to look for [possible disparate performance](https://huggingface.co/spaces/society-ethics/disaggregators)
* Look at aggregated [measurements of your dataset](https://huggingface.co/spaces/huggingface/data-measurements-tool) including nPMI to surface possible stereotypical associations
* Models:
* Make use of and contribute to [Model Cards](https://huggingface.co/docs/hub/model-cards) to share relevant insights on biases in models.
* Use [Interactive Model Cards](https://huggingface.co/spaces/nazneen/interactive-model-cards) to visualize performance discrepancies
* Look at [systematic model errors](https://huggingface.co/spaces/nazneen/seal) and look out for known social biases
* Use [Evaluate](https://github.com/huggingface/evaluate) and [Evaluation on the Hub](https://huggingface.co/spaces/autoevaluate/model-evaluator) to explore [language model biases](https://huggingface.co/blog/evaluating-llm-bias) including in [large models](https://huggingface.co/blog/zero-shot-eval-on-the-hub)
* Use a [Text-to-image bias explorer](https://huggingface.co/spaces/sasha/StableDiffusionBiasExplorer) to compare image generation modelsโ biases
* Compare language models with the [Bias Score Card](https://huggingface.co/spaces/sasha/BiasDetection)
Thanks for reading! ๐ค
~ Yacine, on behalf of the Ethics and Society regulars
Cite as:
```
@inproceedings{hf_ethics_soc_blog_2,
  author    = {Yacine Jernite and
               Alexandra Sasha Luccioni and
               Irene Solaiman and
               Giada Pistilli and
               Nathan Lambert and
               Ezi Ozoani and
               Brigitte Tousignant and
               Margaret Mitchell},
  title     = {Hugging Face Ethics and Society Newsletter 2: Let's Talk about Bias!},
  booktitle = {Hugging Face Blog},
  year      = {2022},
  url       = {https://doi.org/10.57967/hf/0208},
  doi       = {10.57967/hf/0208}
}
```
"license:cc-by-4.0",
"arxiv:2203.07785",
"arxiv:2010.03058",
"arxiv:2211.03759",
"arxiv:2209.03942",
"arxiv:2212.05129",
"doi:10.57967/hf/0208",
"region:us"
]
| 2022-12-15T21:30:38+00:00 | {"license": "cc-by-4.0"} | 2022-12-15T22:05:09+00:00 |
fe54a6270b82e5076657694d18b4930ddbedc9a8 |
This post was originally published on the [Hugging Face blog ๐ค](https://huggingface.co/blog/ethics-soc-2)
# Ethics and Society Newsletter #2
## Letโs Talk about Bias!
_Bias in ML is ubiquitous, and Bias in ML is complex; so complex in fact that no single technical intervention is likely to meaningfully address the problems it engenders. ML models, as sociotechnical systems, amplify social trends that may exacerbate inequities and harmful biases in ways that depend on their deployment context and are constantly evolving._
_This means that developing ML systems with care requires vigilance and responding to feedback from those deployment contexts, which in turn we can facilitate by sharing lessons across contexts and developing tools to analyze signs of bias at every level of ML development._
_This blog post from the [Ethics and Society regulars @๐ค](https://huggingface.co/blog/ethics-soc-1) shares some of the lessons we have learned along with tools we have developed to support ourselves and others in our communityโs efforts to better address bias in Machine Learning. The first part is a broader reflection on bias and its context. If youโve already read it and are coming back specifically for the tools, feel free to jump to the [datasets](#i-am-curatingpicking-a-dataset-for-my-ml-system-how-can-i-address-bias) or [models](#i-am-trainingselecting-a-model-for-my-ml-system-how-can-i-address-bias)
section!_
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img1.jpg" alt="Selection of tools developed by HF team members to address bias in ML" />
<em>Selection of tools developed by ๐ค team members to address bias in ML</em>
</p>
**<span style="text-decoration:underline;">Table of contents:</span>**
* **<span style="text-decoration:underline;">On Machine Biases</span>**
* [Machine Bias: from ML Systems to Risks](#machine-bias-from-ml-systems-to-personal-and-social-risks)
* [Putting Bias in Context](#putting-bias-in-context)
* **<span style="text-decoration:underline;">Tools and Recommendations</span>**
* [Addressing Bias throughout ML Development](#addressing-bias-throughout-the-ml-development-cycle)
* [Task Definition](#i-am-defining-the-task-of-my-ml-system-how-can-i-address-bias)
* [Dataset Curation](#i-am-curatingpicking-a-dataset-for-my-ml-system-how-can-i-address-bias)
* [Model Training](#i-am-trainingselecting-a-model-for-my-ml-system-how-can-i-address-bias)
* [Overview of ๐ค Bias Tools](#conclusion-and-overview-of-bias-analysis-and-documentation-tools-from-๐ค)
## _Machine Bias:_ from ML Systems to Personal and Social Risks
ML systems allow us to automate complex tasks at a scale never seen before as they are deployed in more sectors and use cases. When the technology works at its best, it can help smooth interactions between people and technical systems, remove the need for highly repetitive work, or unlock new ways of processing information to support research.
These same systems are also likely to reproduce discriminatory and abusive behaviors represented in their training data, especially when the data encodes human behaviors.
The technology then has the potential to make these issues significantly worse. Automation and deployment at scale can indeed:
1. **lock in** behaviors in time and hinder social progress [from being reflected in technology](https://dl.acm.org/doi/10.1145/3442188.3445922),
2. **spread** harmful behaviors [beyond the context](https://arxiv.org/abs/2203.07785) of the original training data,
3. **amplify** inequities by [overfocusing on stereotypical associations](https://arxiv.org/abs/2010.03058) when making predictions,
4. **remove possibilities for recourse** by hiding biases [inside โblack-boxโ systems](https://pubmed.ncbi.nlm.nih.gov/33737318/).
In order to better understand and address these risks, ML researchers and developers have started studying _machine bias_ or _algorithmic bias_, mechanisms that might lead systems to, for example, encode **negative stereotypes or associations** or to have **disparate performance** for different population groups in their deployment context.
**These issues are deeply personal** for many of us ML researchers and developers at Hugging Face and in the broader ML community. Hugging Face is [an international company](https://twitter.com/osanseviero/status/1587444072901492737), with many of us existing between countries and cultures. It is hard to fully express our sense of urgency when we see the technology we work on developed [without sufficient concern](https://dl.acm.org/doi/10.1145/3461702.3462624) for protecting people like us; especially when these systems lead to discriminatory [wrongful arrests](https://incidentdatabase.ai/cite/72/) or undue [financial distress](https://racismandtechnology.center/2021/10/29/amnestys-grim-warning-against-another-toeslagenaffaire/) and are being [increasingly sold](https://www.oecd.org/migration/mig/EMN-OECD-INFORM-FEB-2022-The-use-of-Digitalisation-and-AI-in-Migration-Management.pdf) to immigration and law enforcement services around the world. Similarly, seeing our identities routinely [suppressed in training datasets](https://aclanthology.org/2021.emnlp-main.98/) or [underrepresented in the outputs](https://huggingface.co/spaces/sasha/StableDiffusionBiasExplorer) of โgenerative AIโ [systems ](https://twitter.com/willie_agnew/status/1592829238889283585)connects these concerns to our daily lived experiences in ways that are [simultaneously enlightening and taxing](https://www.technologyreview.com/2022/10/28/1062332/responsible-ai-has-a-burnout-problem/).
While our own experiences do not come close to covering the myriad ways in which ML-mediated discrimination can disproportionately harm people whose experiences differ from ours, they provide an entry point into considerations of the trade-offs inherent in the technology. We work on these systems because we **strongly believe in MLโs potential โ we think it can shine as a valuable tool as long as it is developed with care and input from people in its deployment context**, rather than as a one-size-fits-all panacea. In particular, enabling this care requires developing a better understanding of the mechanisms of machine bias across the ML development process, and developing tools that support people [with all levels of technical knowledge of these systems in participating in the necessary conversations](https://www.vice.com/en/article/bvm35w/this-tool-lets-anyone-see-the-bias-in-ai-image-generators) about how their benefits and harms are distributed.
The present blog post from the Hugging Face [Ethics and Society regulars](https://huggingface.co/blog/ethics-soc-1) provides an overview of how we have worked, are working, or recommend users of the HF ecosystem of libraries may work to address bias at the various stages of the ML development process, and the tools we develop to support this process. We hope you will find it a useful resource to guide concrete considerations of the social impact of your work and can leverage the tools referenced here to help mitigate these issues when they arise.
## Putting Bias in Context
The first and maybe most important concept to consider when dealing with machine bias is **context**. In their foundational work on [bias in NLP](https://aclanthology.org/2020.acl-main.485.pdf), Su Lin Blodgett et al. point out that: _โ[T]he majority of [academic works on machine bias] fail to engage critically with what constitutes โbiasโ in the first placeโ_, including by building their work on top of _โunstated assumptions about what kinds of system behaviors are harmful, in what ways, to whom, and whyโ_.
This may not come as much of a surprise given the ML research communityโs [focus on the value of โgeneralizationโ](https://dl.acm.org/doi/10.1145/3531146.3533083) โ the most cited motivation for work in the field after โperformanceโ. However, while tools for bias assessment that apply to a wide range of settings are valuable to **enable a broader analysis of common trends** in model behaviors, their ability to target the mechanisms that lead to discrimination in **concrete use cases is inherently limited**. Using them to guide specific decisions within the ML development cycle usually requires an extra step or two to take the systemโs specific use context and affected people into consideration.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_foresight.png" alt="Excerpt on considerations of ML uses context and people from the Model Card Guidebook" />
<em>Excerpt on considerations of ML uses context and people from the <a href="https://huggingface.co/docs/hub/model-cards">Model Card Guidebook</a></em>
</p>
Now letโs dive deeper into the issue of linking biases in stand-alone/context-less ML artifacts to specific harms. It can be useful to think of **machine biases as risk factors for discrimination-based harms**. Take the example of a text-to-image model that over-represents light skin tones when prompted to create a picture of a person in a professional setting, but produces darker skin tones [when the prompts mention criminality](https://arxiv.org/abs/2211.03759). These tendencies would be what we call _machine biases at the model level_. Now letโs think about a few systems that use such a text-to-image model:
1. <span style="text-decoration:underline;">The model is integrated into a website creation service</span> (e.g. SquareSpace, Wix) to help users generate backgrounds for their pages. The model explicitly disables images of people in the generated background.
* In this case, the machine bias โrisk factorโ does not lead to discrimination harm because the focus of the bias (images of people) is absent from the use case.
* Further risk mitigation is not required for machine biases, although developers should be aware of ongoing discussions about the legality of integrating systems trained on scraped data in commercial systems.
2. <span style="text-decoration:underline;">The model is integrated into a stock images website</span> to provide users with synthetic images of people (e.g. in professional settings) that they can use with fewer privacy concerns, for example, to serve as illustrations for Wikipedia articles
* In this case, machine bias acts to **lock in** and **amplify** existing social biases. It reinforces stereotypes about people (โCEOs are all white menโ) that then feed back into complex social systems where increased bias leads to increased discrimination in many different ways (such as reinforcing [implicit bias](https://philpapers.org/rec/BEEAIT-2) in the workplace).
* Mitigation strategies may include educating the stock image users about these biases, or the stock image website may curate generated images to intentionally propose a more diverse set of representations.
3. <span style="text-decoration:underline;">The model is integrated into a โvirtual sketch artistโ software</span> marketed to police departments that will use it to generate pictures of suspects based on verbal testimony
* In this case, the machine biases directly cause discrimination by systematically directing police departments to darker-skinned people, putting them at increased risk of harm including physical injury and unlawful imprisonment.
* In cases like this one, there may be no level of bias mitigation that makes the risk acceptable. In particular, such a use case would be closely related to face recognition in the context of law enforcement, where [similar bias issues](https://www.law.georgetown.edu/privacy-technology-center/publications/a-forensic-without-the-science-face-recognition-in-u-s-criminal-investigations/) have led several commercial entities and legislatures to adopt moratoria pausing or banning its use across the board.
So, whoโs on the hook for machine biases in ML? These three cases illustrate one of the reasons why discussions about the responsibility of ML developers in addressing bias can get so complicated: depending on decisions made at other points in the ML system development process by other people, the biases in an ML dataset or model may land anywhere between being irrelevant to the application settings and directly leading to grievous harm. However, in all of these cases, **stronger biases in the model/dataset increase the risk of negative outcomes**. The European Union has started to develop frameworks that address this phenomenon in [recent regulatory efforts](https://ec.europa.eu/info/business-economy-euro/doing-business-eu/contract-rules/digital-contracts/liability-rules-artificial-intelligence_en): in short, a company that deploys an AI system based on a measurably biased model is liable for harm caused by the system.
Conceptualizing bias as a risk factor then allows us to better understand the **shared responsibility** for machine biases between developers at all stages. Bias can never be fully removed, not least because the definitions of social biases and the power dynamics that tie them to discrimination vary vastly across social contexts. However:
1. Each stage of the development process, from task specification, dataset curation, and model training, to model integration and system deployment, can take steps to minimize the aspects of machine bias** that most directly depend on its choices** and technical decisions, and
2. Clear communication and **information flow between the various ML development stages** can make the difference between making choices that build on top of each other to attenuate the negative potential of bias (multipronged approach to bias mitigation, as in deployment scenario 1 above) _versus_ making choices that compound this negative potential to exacerbate the risk of harm (as in deployment scenario 3).
In the next section, we review these various stages along with some of the tools that can help us address machine bias at each of them.
## Addressing Bias Throughout the ML Development Cycle
Ready for some practical advice yet? Here we go ๐ค
There is no one single way to develop ML systems; which steps happen in what order depends on a number of factors including the development setting (university, large company, startup, grassroots organization, etcโฆ), the modality (text, tabular data, images, etcโฆ), and the preeminence or scarcity of publicly available ML resources. However, we can identify three common stages of particular interest in addressing bias. These are the task definition, the data curation, and the model training. Letโs have a look at how bias handling may differ across these various stages.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_pipeline.png" alt="The Bias ML Pipeline by Meg" width="500" />
<em>The Bias ML Pipeline by <a href="https://huggingface.co/meg">Meg</a></em>
</p>
### I am <span style="text-decoration:underline;">defining the task</span> of my ML system, how can I address bias?
Whether and to what extent bias in the system concretely affects people ultimately depends on what the system is used for. As such, the first place developers can work to mitigate bias is when deciding how ML fits in their system, e.g., by deciding what optimization objective it will use.
For example, letโs go back to one of the first highly-publicized cases of a Machine Learning system used in production for algorithmic content recommendation. From 2006 to 2009, Netflix ran the [Netflix Prize](https://www.cs.uic.edu/~liub/KDD-cup-2007/proceedings/The-Netflix-Prize-Bennett.pdf), a competition with a 1M$ cash prize challenging teams around the world to develop ML systems to accurately predict a userโs rating for a new movie based on their past ratings. The [winning submission](https://www.asc.ohio-state.edu/statistics/dmsl/GrandPrize2009_BPC_BigChaos.pdf) improved the RMSE (Root-mean-square-error) of predictions on unseen user-movie pairs by over 10% over Netflixโs own CineMatch algorithm, meaning it got much better at predicting how users would rate a new movie based on their history. This approach opened the door for much of modern algorithmic content recommendation by bringing the role of ML in modeling user preferences in recommender systems to public awareness.
So what does this have to do with bias? Doesnโt showing people content that theyโre likely to enjoy sound like a good service from a content platform? Well, it turns out that showing people more examples of **what theyโve liked in the past** ends up [reducing the diversity of the media they consume](https://dl.acm.org/doi/10.1145/3391403.3399532). Not only does it lead users to be [less satisfied in the long term](https://dl.acm.org/doi/abs/10.1145/3366423.3380281), but it also means that any biases or stereotypes captured by the initial models โ such as when modeling [the preferences of Black American users](https://www.marieclaire.com/culture/a18817/netflix-algorithms-black-movies/) or [dynamics that systematically disadvantage](https://dl.acm.org/doi/10.1145/3269206.3272027) some artists โ are likely to be reinforced if the model is [further trained on ongoing ML-mediated](https://arxiv.org/abs/2209.03942) user interactions. This reflects two of the types of bias-related concerns weโve mentioned above: the training objective acts as a **risk factor** for bias-related harms as it makes pre-existing biases much more likely to show up in predictions, and the task framing has the effect of **locking in** and exacerbating past biases.
A promising bias mitigation strategy at this stage has been to reframe the task to explicitly [model both engagement and diversity](https://dl.acm.org/doi/10.1145/3437963.3441775) when applying ML to algorithmic content recommendation. Users are likely to get more long-term satisfaction and the risk of exacerbating biases as outlined above is reduced!
This example serves to illustrate that the impact of machine biases in an ML-supported product depends not just on where we decide to leverage ML, but also on how ML techniques are integrated into the broader technical system, and with what objective. When first investigating how ML can fit into a product or a use case you are interested in, we first recommend looking for the failure modes of the system through the lens of bias before even diving into the available models or datasets - which behaviors of existing systems in the space will be particularly harmful or more likely to occur if bias is exacerbated by ML predictions?
We built a [tool](https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl) to take users through these questions in another case of algorithmic content management: [hate speech detection in automatic content moderation](https://aclanthology.org/2022.hcinlp-1.2/). We found for example that looking through news and scientific articles that didnโt particularly focus on the ML part of the technology was already a great way to get a sense of where bias is already at play. Definitely go have a look for an example of how the models and datasets fit with the deployment context and how they can relate to known bias-related harms!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img2.png" alt="Selection of tools developed by HF team members to address bias in ML" />
<em><a href="https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl">ACM Task Exploration tool</a> by <a href="https://huggingface.co/aymm">Angie</a>, <a href="https://huggingface.co/paullada">Amandalynne</a>, and <a href="https://huggingface.co/yjernite">Yacine</a></em>
</p>
#### Task definition: recommendations
There are as many ways for the ML task definition and deployment to affect the risk of bias-related harms as there are applications for ML systems. As in the examples above, some common steps that may help decide whether and how to apply ML in a way that minimizes bias-related risk include:
* Investigate:
* Reports of bias in the field pre-ML
* At-risk demographic categories for your specific use case
* Examine:
* The impact of your optimization objective on reinforcing biases
* Alternative objectives that favor diversity and positive long-term impacts
### I am <span style="text-decoration:underline;">curating/picking a dataset</span> for my ML system, how can I address bias?
While training datasets are [not the sole source of bias](https://www.cell.com/patterns/fulltext/S2666-3899(21)00061-1) in the ML development cycle, they do play a significant role. Does your [dataset disproportionately associate](https://aclanthology.org/2020.emnlp-main.23/) biographies of women with life events but those of men with achievements? Those **stereotypes** are probably going to show up in your full ML system! Does your voice recognition dataset only feature specific accents? Not a good sign for [the inclusivity of technology](https://www.scientificamerican.com/article/speech-recognition-tech-is-yet-another-example-of-bias/) you build with it in terms of **disparate performance**! Whether youโre curating a dataset for ML applications or selecting a dataset to train an ML model, finding out, mitigating, and [communicating](https://dl.acm.org/doi/10.1145/3479582) to what extent the data exhibits these phenomena are all necessary steps to reducing bias-related risks.
You can usually get a pretty good sense of likely biases in a dataset by reflecting on where it comes from, who are the people represented on the data, and what the curation process was. Several frameworks for this reflection and documentation have been proposed such as [Data Statements for NLP](https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00041/43452/Data-Statements-for-Natural-Language-Processing) or [Datasheets for Datasets](https://dl.acm.org/doi/10.1145/3458723). The Hugging Face Hub includes a Dataset Card [template](https://github.com/huggingface/datasets/blob/main/templates/README.md) and [guide](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#dataset-card-creation-guide) inspired by these works; the section on [considerations for using the data](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#considerations-for-using-the-data) is usually a good place to look for information about notable biases if youโre browsing datasets, or to write a paragraph sharing your insights on the topic if youโre sharing a new one. And if youโre looking for more inspiration on what to put there, check out these sections written by Hub users in the [BigLAM organization](https://huggingface.co/biglam) for historical datasets of [legal proceedings](https://huggingface.co/datasets/biglam/old_bailey_proceedings#social-impact-of-dataset), [image classification](https://huggingface.co/datasets/biglam/brill_iconclass#social-impact-of-dataset), and [newspapers](https://huggingface.co/datasets/biglam/bnl_newspapers1841-1879#social-impact-of-dataset).
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img3.png" alt="HF Dataset Card guide for the Social Impact and Bias Sections" />
<em><a href="https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#social-impact-of-dataset">HF Dataset Card guide</a> for the Social Impact and Bias Sections</em>
</p>
While describing the origin and context of a dataset is always a good starting point to understand the biases at play, [quantitatively measuring phenomena](https://arxiv.org/abs/2212.05129) that encode those biases can be just as helpful. If you're choosing between two different datasets for a given task or choosing between two ML models trained on different datasets, knowing which one better represents the demographic makeup of your ML system's user base can help you make an informed decision to minimize bias-related risks. If you're curating a dataset iteratively by filtering data points from a source or selecting new sources of data to add, measuring how these choices affect the diversity and biases present in your overall dataset can make it safer to use in general.
We've recently released two tools you can leverage to measure your data through a bias-informed lens. The [disaggregators 🤗 library](https://github.com/huggingface/disaggregators) provides utilities to quantify the composition of your dataset, using either metadata or leveraging models to infer properties of data points. This can be particularly useful to minimize risks of bias-related **[representation harms](https://aclanthology.org/P16-2096/)** or **disparate performances** of trained models. Look at the [demo](https://huggingface.co/spaces/society-ethics/disaggregators) to see it applied to the LAION, MedMCQA, and The Stack datasets!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img4.png" alt="Disaggregators tool by Nima" />
<em><a href="https://huggingface.co/spaces/society-ethics/disaggregators">Disaggregator tool</a> by <a href="https://huggingface.co/NimaBoscarino">Nima</a></em>
</p>
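For a quick sense of how the library is used, here is a minimal sketch modeled on its README; the `pronoun` module and the IMDB dataset are illustrative choices, not recommendations (check the repository for the modules available for your data):

```python
from datasets import load_dataset
from disaggregators import Disaggregator

# Disaggregate a text dataset by (inferred) pronoun usage; "pronoun" is
# one of the modules shipped with the library.
disaggregator = Disaggregator("pronoun", column="text")

ds = load_dataset("imdb", split="train")
ds = ds.map(disaggregator)  # adds one boolean feature per pronoun category

# Share of examples in each category, to inspect the dataset's composition
print(ds.to_pandas().filter(like="pronoun").mean())
```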
Once you have some helpful statistics about the composition of your dataset, you'll also want to look at associations between features in your data items, particularly at associations that may encode derogatory or otherwise negative stereotypes. The Data Measurements Tool we [originally introduced](https://huggingface.co/blog/data-measurements-tool#comparison-statistics) last year allows you to do this by looking at the [normalized Pointwise Mutual Information (nPMI)](https://dl.acm.org/doi/10.1145/3461702.3462557) between terms in your text-based dataset, particularly associations between gendered pronouns that may denote gendered stereotypes. [Run it yourself](https://github.com/huggingface/data-measurements-tool) or [try it here](https://huggingface.co/spaces/huggingface/data-measurements-tool) on a few pre-computed datasets!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img5.png" alt="Data Measurements tool by Meg, Sasha, Bibi, and the Gradio team" />
<em><a href="https://huggingface.co/spaces/huggingface/data-measurements-tool">Data Measurements tool</a> by <a href="https://huggingface.co/meg">Meg</a>, <a href="https://huggingface.co/sasha">Sasha</a>, <a href="https://huggingface.co/Bibss">Bibi</a>, and the <a href="https://gradio.app/">Gradio team</a></em>
</p>
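For intuition about the scores the tool reports: nPMI rescales pointwise mutual information by the co-occurrence probability so that values fall in [-1, 1]. A rough illustration from raw counts (not the tool's actual implementation) looks like this:

```python
import math
from collections import Counter

def npmi(cooc, unigrams, total, x, y):
    """Normalized PMI: -1 = never co-occur, 0 = independent, 1 = always together.
    Assumes all counts involved are nonzero."""
    p_xy = cooc[(x, y)] / total
    p_x, p_y = unigrams[x] / total, unigrams[y] / total
    return math.log(p_xy / (p_x * p_y)) / -math.log(p_xy)

# Toy counts in which "nurse" co-occurs with "she" far more than with "he"
unigrams = Counter({"she": 500, "he": 500, "nurse": 60})
cooc = Counter({("she", "nurse"): 45, ("he", "nurse"): 5})
print(npmi(cooc, unigrams, 10_000, "she", "nurse"))  # strong positive association
print(npmi(cooc, unigrams, 10_000, "he", "nurse"))   # much weaker association
```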
#### Dataset selection/curation: recommendations
These tools aren't full solutions by themselves; rather, they are designed to support critical examination and improvement of datasets through several lenses, including the lens of bias and bias-related risks. In general, we encourage you to keep the following steps in mind when leveraging these and other tools to mitigate bias risks at the dataset curation/selection stage:
* Identify:
* Aspects of the dataset creation that may exacerbate specific biases
* Demographic categories and social variables that are particularly important to the dataset's task and domain
* Measure:
* The demographic distribution in your dataset
* Pre-identified negative stereotypes represented
* Document:
* Share what you've Identified and Measured in your Dataset Card so it can benefit other users, developers, and otherwise affected people
* Adapt:
* By choosing the dataset least likely to cause bias-related harms
* By iteratively improving your dataset in ways that reduce bias risks
### I am <span style="text-decoration:underline;">training/selecting a model</span> for my ML system, how can I address bias?
Similar to the dataset curation/selection step, documenting and measuring bias-related phenomena in models can help both ML developers who are selecting a model to use as-is or to finetune and ML developers who want to train their own models. For the latter, measures of bias-related phenomena in the model can help them learn from what has worked or what hasn't for other models and serve as a signal to guide their own development choices.
Model cards were originally proposed by [(Mitchell et al., 2019)](https://dl.acm.org/doi/10.1145/3287560.3287596) and provide a framework for model reporting that showcases information relevant to bias risks, including broad ethical considerations, disaggregated evaluation, and use case recommendation. The Hugging Face Hub provides even more tools for model documentation, with a [model card guidebook](https://huggingface.co/docs/hub/model-cards) in the Hub documentation, and an [app that lets you create extensive model cards](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) easily for your new model.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img6.png" alt="Model Card writing tool by Ezi, Marissa, and Meg" />
<em><a href="https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool">Model Card writing tool</a> by <a href="https://huggingface.co/Ezi">Ezi</a>, <a href="https://huggingface.co/Marissa">Marissa</a>, and <a href="https://huggingface.co/meg">Meg</a></em>
</p>
Documentation is a great first step for sharing general insights about a model's behavior, but it is usually static and presents the same information to all users. In many cases, especially for generative models that can generate outputs to approximate the distribution of their training data, we can gain a more contextual understanding of bias-related phenomena and **negative stereotypes** by visualizing and contrasting model outputs. Access to model generations can help users surface [intersectional issues in the model behavior](https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/) corresponding to their lived experience, and evaluate to what extent a model reproduces [gendered stereotypes for different adjectives](https://www.vice.com/en/article/bvm35w/this-tool-lets-anyone-see-the-bias-in-ai-image-generators). To facilitate this process, we built a tool that lets you compare generations not just across a set of adjectives and professions, but also across different models! [Go try it out](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer) to get a sense of which model might carry the least bias risks in your use case.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img7.png" alt="Visualize Adjective and Occupation Biases in Image Generation by Sasha" />
<br>
<em><a href="https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer">Visualize Adjective and Occupation Biases in Image Generation</a> by <a href="https://huggingface.co/sasha">Sasha</a></em>
</p>
Visualization of model outputs isn't just for generative models though! For classification models, we also want to look out for bias-related harms caused by a model's **disparate performance** on different demographics. If you know what protected classes are most at risk of discrimination and have those annotated in an evaluation set, then you can report disaggregated performance over the different categories in [your model card](https://dl.acm.org/doi/10.1145/3287560.3287596) as mentioned above, so users can make informed decisions. If, however, you are worried that you haven't identified all populations at risk of bias-related harms, or if you do not have access to annotated test examples to measure the biases you suspect, that's where interactive visualizations of where and how the model fails come in handy! To help you with this, the [SEAL app](https://huggingface.co/spaces/nazneen/seal) groups similar mistakes made by your model and shows you some common features in each cluster. If you want to go further, you can even combine it with the [disaggregators library](https://github.com/huggingface/disaggregators) we introduced in the datasets section to find clusters that are indicative of bias-related failure modes!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img8.png" alt="Systematic Error Analysis and Labeling (SEAL) by Nazneen" />
<em><a href="https://huggingface.co/spaces/nazneen/seal">Systematic Error Analysis and Labeling (SEAL)</a> by <a href="https://huggingface.co/nazneen">Nazneen</a></em>
</p>
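When you do have group annotations, the core of a disaggregated evaluation is simply grouping before aggregating; here is a minimal sketch with made-up data and column names:

```python
import pandas as pd

# One row per test example: model prediction, gold label, and a
# (hypothetical) annotated demographic attribute.
eval_df = pd.DataFrame({
    "prediction": [1, 0, 1, 1, 0, 1],
    "label":      [1, 0, 0, 1, 1, 1],
    "group":      ["A", "A", "A", "B", "B", "B"],
})

# Overall accuracy can hide performance gaps...
print("overall:", (eval_df.prediction == eval_df.label).mean())

# ...while disaggregating by group surfaces them.
per_group = (
    eval_df.assign(correct=eval_df.prediction == eval_df.label)
           .groupby("group")["correct"].mean()
)
print(per_group)
```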
Finally, a few benchmarks exist that can measure bias-related phenomena in models. For language models, benchmarks such as [BOLD](https://github.com/amazon-science/bold), [HONEST](https://aclanthology.org/2021.naacl-main.191.pdf), or [WinoBias](https://aclanthology.org/N18-2003/) provide quantitative evaluations of targeted behaviors that are indicative of biases in the models. While the benchmarks have their [limitations](https://aclanthology.org/2021.acl-long.81/), they do provide a limited view into some pre-identified bias risks that can help describe how the models function or choose between different models. You can find these evaluations pre-computed on a range of common language models [in this exploration Space](https://huggingface.co/spaces/sasha/BiasDetection) to get a first sense of how they compare!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img9.png" alt="Language Model Bias Detection by Sasha" />
<em><a href="https://huggingface.co/spaces/sasha/BiasDetection">Language Model Bias Detection</a> by <a href="https://huggingface.co/sasha">Sasha</a></em>
</p>
Even with access to a benchmark for the models you are considering, you might find that running evaluations of the larger language models can be prohibitively expensive or otherwise technically impossible with your own computing resources. The <a href="https://huggingface.co/spaces/autoevaluate/model-evaluator">Evaluation on the Hub</a> tool we released this year can help with that: not only will it run the evaluations for you, but it will also help connect them to the model documentation so the results are available once and for all; everyone can see, for example, that size <a href="https://huggingface.co/blog/zero-shot-eval-on-the-hub">measurably increases bias risks in models like OPT</a>!
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img_winobias.png" alt="Large model WinoBias scores computed with Evaluation on the Hub by Helen, Tristan, Abhishek, Lewis, and Douwe" />
<em><a href="https://huggingface.co/blog/zero-shot-eval-on-the-hub">Large model WinoBias scores computed with Evaluation on the Hub</a> by <a href="https://huggingface.co/mathemakitten">Helen</a>, <a href="https://huggingface.co/Tristan">Tristan</a>, <a href="https://huggingface.co/abhishek">Abhishek</a>, <a href="https://huggingface.co/lewtun">Lewis</a>, and <a href="https://huggingface.co/douwekiela">Douwe</a></em>
</p>
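If you would rather run such measurements locally, the [🤗 Evaluate](https://github.com/huggingface/evaluate) library ships several bias-related measurements. A minimal sketch with its `toxicity` measurement (the first call downloads the hate-speech classifier it relies on):

```python
import evaluate

toxicity = evaluate.load("toxicity", module_type="measurement")

# Score a handful of model generations; in practice you would loop over
# prompts that vary a demographic term and compare the score distributions.
generations = [
    "She worked as a nurse at the local hospital.",
    "He worked as an engineer at the plant.",
]
results = toxicity.compute(predictions=generations)
print(results["toxicity"])  # one score per input text
```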
#### Model selection/development: recommendations
For models just as for datasets, different tools for documentation and evaluation will provide different views of bias risks in a model which all have a part to play in helping developers choose, develop, or understand ML systems.
* Visualize
* Generative model: visualize how the model's outputs may reflect stereotypes
* Classification model: visualize model errors to identify failure modes that could lead to disparate performance
* Evaluate
* When possible, evaluate models on relevant benchmarks
* Document
* Share your learnings from visualization and qualitative evaluation
* Report your model's disaggregated performance and results on applicable fairness benchmarks
## Conclusion and Overview of Bias Analysis and Documentation Tools from ๐ค
As we learn to leverage ML systems in more and more applications, reaping their benefits equitably will depend on our ability to actively mitigate the risks of bias-related harms associated with the technology. While there is no single answer to the question of how this should best be done in any possible setting, we can support each other in this effort by sharing lessons, tools, and methodologies to mitigate and document those risks. The present blog post outlines some of the ways Hugging Face team members have addressed this question of bias along with supporting tools; we hope that you will find them helpful and encourage you to develop and share your own!
Summary of linked tools:
* Tasks:
* Explore our directory of [ML Tasks](https://huggingface.co/tasks) to understand what technical framings and resources are available to choose from
* Use tools to explore the [full development lifecycle](https://huggingface.co/spaces/hf-task-exploration/ExploreACMnaacl) of specific tasks
* Datasets:
* Make use of and contribute to [Dataset Cards](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md#social-impact-of-dataset) to share relevant insights on biases in datasets.
* Use [Disaggregator](https://github.com/huggingface/disaggregators) to look for [possible disparate performance](https://huggingface.co/spaces/society-ethics/disaggregators)
* Look at aggregated [measurements of your dataset](https://huggingface.co/spaces/huggingface/data-measurements-tool) including nPMI to surface possible stereotypical associations
* Models:
* Make use of and contribute to [Model Cards](https://huggingface.co/docs/hub/model-cards) to share relevant insights on biases in models.
* Use [Interactive Model Cards](https://huggingface.co/spaces/nazneen/interactive-model-cards) to visualize performance discrepancies
* Look at [systematic model errors](https://huggingface.co/spaces/nazneen/seal) and look out for known social biases
* Use [Evaluate](https://github.com/huggingface/evaluate) and [Evaluation on the Hub](https://huggingface.co/spaces/autoevaluate/model-evaluator) to explore [language model biases](https://huggingface.co/blog/evaluating-llm-bias) including in [large models](https://huggingface.co/blog/zero-shot-eval-on-the-hub)
* Use a [Text-to-image bias explorer](https://huggingface.co/spaces/sasha/StableDiffusionBiasExplorer) to compare image generation models' biases
* Compare LM models with Bias [Score Card](https://huggingface.co/spaces/sasha/BiasDetection)
Thanks for reading! 🤗
~ Yacine, on behalf of the Ethics and Society regulars
Cite as:
```
@inproceedings{hf_ethics_soc_blog_2,
author = {Yacine Jernite and
Alexandra Sasha Luccioni and
Irene Solaiman and
Giada Pistilli and
Nathan Lambert and
Ezi Ozoani and
Brigitte Toussignant and
Margaret Mitchell},
title = {Hugging Face Ethics and Society Newsletter 2: Let's Talk about Bias!},
booktitle = {Hugging Face Blog},
year = {2022},
url = {https://doi.org/10.57967/hf/0214},
doi = {10.57967/hf/0214}
}
``` | society-ethics/BlogPostBias | [
"license:cc-by-4.0",
"arxiv:2203.07785",
"arxiv:2010.03058",
"arxiv:2211.03759",
"arxiv:2209.03942",
"arxiv:2212.05129",
"doi:10.57967/hf/0214",
"region:us"
]
| 2022-12-15T21:55:16+00:00 | {"license": "cc-by-4.0"} | 2022-12-16T14:54:32+00:00 |
4e17be0aeca4ff010c61c0cbaa53beaf67b225fa | # AutoTrain Dataset for project: told_br_binary_sm_bertimbau
## Dataset Description
This dataset has been automatically processed by AutoTrain for project told_br_binary_sm_bertimbau.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "@user agora n\u00e3o me d\u00e1 mais, mas antes, porra",
"target": 1
},
{
"text": "pires \u00e9 fodido fds mais um",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['0', '1'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 5599 |
| valid | 1401 |
| alexandreteles/autotrain-data-told_br_binary_sm_bertimbau | [
"task_categories:text-classification",
"region:us"
]
| 2022-12-15T22:29:35+00:00 | {"task_categories": ["text-classification"]} | 2022-12-15T22:30:17+00:00 |
07c68c26cba253dd00e08d69ef9f9cc1d4f262bf | # Dataset Card for "clinic-utility"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
```
@inproceedings{larson-etal-2019-evaluation,
title = "An Evaluation Dataset for Intent Classification and Out-of-Scope Prediction",
author = "Larson, Stefan and
Mahendran, Anish and
Peper, Joseph J. and
Clarke, Christopher and
Lee, Andrew and
Hill, Parker and
Kummerfeld, Jonathan K. and
Leach, Kevin and
Laurenzano, Michael A. and
Tang, Lingjia and
Mars, Jason",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
year = "2019",
url = "https://www.aclweb.org/anthology/D19-1131"
}
``` | fathyshalab/clinic-utility | [
"region:us"
]
| 2022-12-15T23:22:54+00:00 | {"dataset_info": {"features": [{"name": "Unnamed: 0", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "label_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 33764.5, "num_examples": 525}, {"name": "test", "num_bytes": 14470.5, "num_examples": 225}], "download_size": 0, "dataset_size": 48235.0}} | 2023-05-15T07:51:36+00:00 |
b26be3e5d8bbcafbf6f1f5a28ed17e66d194fa67 | # Dataset Card for "butterflies_10k_names_multiple"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | sasha/butterflies_10k_names_multiple | [
"region:us"
]
| 2022-12-15T23:37:13+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "description", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "sim_score", "dtype": "float64"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 260929983.907, "num_examples": 7061}], "download_size": 268647797, "dataset_size": 260929983.907}} | 2022-12-15T23:37:48+00:00 |
a4a1edf3442726bbd653f0ab6c5077b4700fd609 | oscar127/ImagenesOscar | [
"region:us"
]
| 2022-12-15T23:52:59+00:00 | {} | 2022-12-16T00:00:48+00:00 |
|
9255cf3d7c8184c91fdfcbacda3bd0f0ea4af2b6 | # Dataset Card for "mnli_AppE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/mnli_AppE | [
"region:us"
]
| 2022-12-16T01:33:18+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 77770885, "num_examples": 383973}, {"name": "dev_matched", "num_bytes": 1943955, "num_examples": 9770}, {"name": "dev_mismatched", "num_bytes": 2070244, "num_examples": 9824}, {"name": "test_matched", "num_bytes": 1943860, "num_examples": 9673}, {"name": "test_mismatched", "num_bytes": 2071907, "num_examples": 9842}], "download_size": 0, "dataset_size": 85800851}} | 2022-12-16T01:37:06+00:00 |
513e54e089197671f12d4acc87f3c9b6be0aebd6 | annotations_creators:
- expert-generated
language_creators:
- other
license:
- mit
pretty_name: mnist extended
size_categories:
- 10K<n<100K
task_categories:
- image-classification
| seftontycho/mnist-extended-1 | [
"region:us"
]
| 2022-12-16T01:33:35+00:00 | {} | 2022-12-16T01:40:20+00:00 |
593c095f859fcc32d9f09c78a676fa6344fabf7c | # Dataset Card for "mnli_ChcE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/mnli_ChcE | [
"region:us"
]
| 2022-12-16T01:37:48+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 75380580, "num_examples": 383734}, {"name": "dev_matched", "num_bytes": 1887265, "num_examples": 9770}, {"name": "dev_mismatched", "num_bytes": 2007388, "num_examples": 9824}, {"name": "test_matched", "num_bytes": 1884526, "num_examples": 9673}, {"name": "test_mismatched", "num_bytes": 2008710, "num_examples": 9842}], "download_size": 56286590, "dataset_size": 83168469}} | 2022-12-16T01:38:13+00:00 |
253e0ec30f757fb87e8186d742028ff5e46d9dc0 | # Dataset Card for "mnli_CollSgE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/mnli_CollSgE | [
"region:us"
]
| 2022-12-16T01:39:15+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 70113002, "num_examples": 383726}, {"name": "dev_matched", "num_bytes": 1755041, "num_examples": 9770}, {"name": "dev_mismatched", "num_bytes": 1850578, "num_examples": 9821}, {"name": "test_matched", "num_bytes": 1749414, "num_examples": 9671}, {"name": "test_mismatched", "num_bytes": 1855882, "num_examples": 9840}], "download_size": 52803650, "dataset_size": 77323917}} | 2022-12-16T01:39:31+00:00 |
1608d4a261af219df1c1f406695cc2a46af2649b | # Dataset Card for "mnli_IndE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/mnli_IndE | [
"region:us"
]
| 2022-12-16T01:40:49+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 75917760, "num_examples": 383924}, {"name": "dev_matched", "num_bytes": 1900229, "num_examples": 9770}, {"name": "dev_mismatched", "num_bytes": 2016344, "num_examples": 9824}, {"name": "test_matched", "num_bytes": 1896266, "num_examples": 9672}, {"name": "test_mismatched", "num_bytes": 2021206, "num_examples": 9841}], "download_size": 56783020, "dataset_size": 83751805}} | 2022-12-16T01:41:06+00:00 |
264c7c9f6a91bf5a72606a9df2c37f3efed5b1b6 | # Dataset Card for "mnli_MULTI"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | liuyanchen1015/mnli_MULTI | [
"region:us"
]
| 2022-12-16T01:43:06+00:00 | {"dataset_info": {"features": [{"name": "premise", "dtype": "string"}, {"name": "hypothesis", "dtype": "string"}, {"name": "label", "dtype": "int64"}, {"name": "idx", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 79281363, "num_examples": 384388}, {"name": "dev_matched", "num_bytes": 1983976, "num_examples": 9779}, {"name": "dev_mismatched", "num_bytes": 2092314, "num_examples": 9823}, {"name": "test_matched", "num_bytes": 1976499, "num_examples": 9672}, {"name": "test_mismatched", "num_bytes": 2096238, "num_examples": 9841}], "download_size": 58746057, "dataset_size": 87430390}} | 2022-12-16T01:43:32+00:00 |
d57fc6787ed02b6d192380c5dc5c94d05f4e25a6 | wheart/aiclonex | [
"license:openrail",
"region:us"
]
| 2022-12-16T01:51:01+00:00 | {"license": "openrail"} | 2022-12-16T02:14:39+00:00 |
|
14b2e946e81322a64f2502200afea23968b6e610 | lyakaap/balanced-cc100-ja | [
"license:mit",
"region:us"
]
| 2022-12-16T02:52:52+00:00 | {"license": "mit"} | 2022-12-16T03:06:39+00:00 |
|
b20791392860e0919ab7c0f94b6f590da8c5f350 |
This dataset is extracted from the Visual Novel "Milk inside a bag of milk inside a bag of milk."
Please refer to the `milk_dialog_dataset.ipynb` file to see how the dataset was pre-processed. | alexandreteles/milk | [
"multilinguality:monolingual",
"language:en",
"license:other",
"region:us"
]
| 2022-12-16T03:10:11+00:00 | {"language": ["en"], "license": "other", "multilinguality": ["monolingual"], "pretty_name": "milk", "language_bcp47": ["en-US"]} | 2022-12-27T17:49:14+00:00 |
14a593ecd030fb46cf8ec1892df92460cbc60d24 | earlybyrd/plp_q_n_a | [
"region:us"
]
| 2022-12-16T03:54:11+00:00 | {} | 2023-01-14T05:30:14+00:00 |
|
43e39604a2a1ff24fe3857ef13c112368a6e45ac | # Dataset Card for "common_voice_11_0_th_w2v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Botnoi/common_voice_11_0_th_w2v2 | [
"region:us"
]
| 2022-12-16T04:21:21+00:00 | {"dataset_info": {"features": [{"name": "input_values", "sequence": "float32"}, {"name": "input_length", "dtype": "int64"}, {"name": "labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 12075286848.556108, "num_examples": 42779}, {"name": "validation", "num_bytes": 1632681586.0, "num_examples": 5465}, {"name": "test", "num_bytes": 1714070928.0, "num_examples": 5465}], "download_size": 14452183131, "dataset_size": 15422039362.556108}} | 2023-02-01T12:25:03+00:00 |
3a4e8d615ac28f007759c9f142f5602f63c8ef32 | weneatt/eddie | [
"license:apache-2.0",
"region:us"
]
| 2022-12-16T05:04:04+00:00 | {"license": "apache-2.0"} | 2022-12-16T05:08:07+00:00 |
|
508b9a210f555efa14a597876a29acd09005f2d7 | xianbao/test-dataset-1 | [
"license:apache-2.0",
"region:us"
]
| 2022-12-16T07:10:37+00:00 | {"license": "apache-2.0"} | 2022-12-16T07:10:39+00:00 |
|
4c3562fbc67afe83a70873fc782a5c1411d1b7fd | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
* Dataset: squad
* Config: plain_text
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@florence](https://huggingface.co/florence) for evaluating this model. | autoevaluate/autoeval-eval-squad-plain_text-58f506-2493576894 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-16T07:23:54+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad"], "eval_info": {"task": "extractive_question_answering", "model": "Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad", "metrics": [], "dataset_name": "squad", "dataset_config": "plain_text", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-16T07:28:12+00:00 |
18fe77b882f32df0575bfb911dda5a3a212d492a | # Dataset Card for "OSD-Dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
This is a reformat of Huggingface Project's [SD Multiplayer Dataset](https://huggingface.co/datasets/huggingface-projects/sd-multiplayer-data)
It converts the image bucket into Parquet format. The text column is the prompt plus its timestamp, truncated to minute precision.
The model finetuned on it is [here](https://huggingface.co/BirdL/OSD-Model) | BirdL/OSD-Dataset | [
"region:us"
]
| 2022-12-16T07:30:34+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7440671071.55, "num_examples": 198771}], "download_size": 7196594621, "dataset_size": 7440671071.55}} | 2022-12-19T19:43:20+00:00 |
0f465b676513506f29f15c39ec19ccdcb0687562 | # Dataset Card for AutoTrain Evaluator
This repository contains model predictions generated by [AutoTrain](https://huggingface.co/autotrain) for the following task and dataset:
* Task: Question Answering
* Model: Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad
* Dataset: squad_v2
* Config: squad_v2
* Split: validation
To run new evaluation jobs, visit Hugging Face's [automatic model evaluator](https://huggingface.co/spaces/autoevaluate/model-evaluator).
## Contributions
Thanks to [@Florence Gundidza](https://huggingface.co/Florence Gundidza) for evaluating this model. | autoevaluate/autoeval-eval-squad_v2-squad_v2-878283-2493776900 | [
"autotrain",
"evaluation",
"region:us"
]
| 2022-12-16T07:35:38+00:00 | {"type": "predictions", "tags": ["autotrain", "evaluation"], "datasets": ["squad_v2"], "eval_info": {"task": "extractive_question_answering", "model": "Jiqing/bert-large-uncased-whole-word-masking-finetuned-squad-finetuned-squad", "metrics": ["precision", "recall"], "dataset_name": "squad_v2", "dataset_config": "squad_v2", "dataset_split": "validation", "col_mapping": {"context": "context", "question": "question", "answers-text": "answers.text", "answers-answer_start": "answers.answer_start"}}} | 2022-12-16T07:40:00+00:00 |
62dfdc0c261fd6029582f165b8d772f05712253b | # Dataset Card for "yannick-kilcher-transcript-wav"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | matallanas/yannick-kilcher-transcript-wav | [
"region:us"
]
| 2022-12-16T07:49:11+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "channel", "dtype": "string"}, {"name": "channel_id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "categories", "sequence": "string"}, {"name": "tags", "sequence": "string"}, {"name": "description", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "segments", "list": [{"name": "start", "dtype": "float64"}, {"name": "end", "dtype": "float64"}, {"name": "text", "dtype": "string"}]}, {"name": "audio", "dtype": "audio"}], "splits": [{"name": "train", "num_bytes": 144437989292.0, "num_examples": 370}], "download_size": 127955407676, "dataset_size": 144437989292.0}} | 2022-12-16T10:11:10+00:00 |
9253555e8c5ec0890c85f4cddfe8053aaef49247 | # Dataset Card for "kaggle-mbti-cleaned-augmented"
This dataset is built upon [Shunian/kaggle-mbti-cleaned](https://huggingface.co/datasets/Shunian/kaggle-mbti-cleaned) to address the sample imbalance problem.
Thanks to the [Parrot Paraphraser](https://github.com/PrithivirajDamodaran/Parrot_Paraphraser) and [NLP AUG](https://github.com/makcedward/nlpaug), some of the skewness issues are addressed in the training data, growing it from 328,660 samples to 478,389 samples in total.
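As an illustration, paraphrase-based augmentation with Parrot looks roughly like the sketch below, following the usage documented in the Parrot README (the model tag is the one that README ships with):

```python
from parrot import Parrot

# T5-based paraphraser documented in the Parrot README
parrot = Parrot(model_tag="prithivida/parrot_paraphraser_on_T5")

phrase = "I enjoy reading about personality types."
# augment() returns a list of (paraphrase, score) tuples, or None if nothing is found
for paraphrase in parrot.augment(input_phrase=phrase) or []:
    print(paraphrase)
```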
View [GitHub](https://github.com/nogibjj/MBTI-Personality-Test) for more information | Shunian/kaggle-mbti-cleaned-augmented | [
"region:us"
]
| 2022-12-16T09:30:11+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": "int64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 74489242, "num_examples": 478389}, {"name": "test", "num_bytes": 12922409, "num_examples": 81957}], "download_size": 56815784, "dataset_size": 87411651}} | 2022-12-16T09:46:26+00:00 |
926a3f36d6d4066dea2cca2736494af477dc1864 | # Dataset Card for "swiss_parliament_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | yanickschraner/swiss_parliament_corpus | [
"region:us"
]
| 2022-12-16T10:13:00+00:00 | {"dataset_info": {"features": [{"name": "client_id", "dtype": "int64"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "sentence", "dtype": "string"}, {"name": "up_votes", "dtype": "float64"}, {"name": "down_votes", "dtype": "float64"}, {"name": "age", "dtype": "float64"}, {"name": "gender", "dtype": "float64"}, {"name": "accent", "dtype": "float64"}, {"name": "iou_estimate", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 24373100536.732, "num_examples": 90324}, {"name": "test", "num_bytes": 824083440.94, "num_examples": 3332}], "download_size": 14083003405, "dataset_size": 25197183977.671997}} | 2022-12-16T13:44:59+00:00 |
805bb77bfa67c7dab48f9c17923caa64dbd779ef | kazuox/SD | [
"license:unknown",
"region:us"
]
| 2022-12-16T11:00:32+00:00 | {"license": "unknown"} | 2023-08-08T18:24:28+00:00 |
|
cc49a7493af2928b5fc8bf1f92e335cc8e073c4c | iurypedroso/dataset | [
"license:mit",
"region:us"
]
| 2022-12-16T11:50:30+00:00 | {"license": "mit"} | 2022-12-16T11:54:02+00:00 |
|
d667b8e3420edef64bbf14e8c26030d156e2e012 | Hi, this is my first dataset. If you know how to make it better, please leave a comment | sbad/biographySummaries | [
"license:cc",
"region:us"
]
| 2022-12-16T12:47:25+00:00 | {"license": "cc"} | 2022-12-17T10:15:55+00:00 |
b804913a330ab772fa9340c9e53ab50d688a4d5e | # Dataset Card for "test_torch"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | polinaeterna/test_torch | [
"region:us"
]
| 2022-12-16T14:26:59+00:00 | {"dataset_info": {"features": [{"name": "data", "dtype": "float64"}, {"name": "text", "dtype": "string"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 368, "num_examples": 16}], "download_size": 1376, "dataset_size": 368}} | 2022-12-16T14:27:09+00:00 |
c74be08773a357d75a86242bbac2d6b5581e9d3d | Ash-Hun/korean_slangData | [
"license:mit",
"region:us"
]
| 2022-12-16T15:33:02+00:00 | {"license": "mit"} | 2022-12-16T15:34:04+00:00 |
|
e22ecaab32e11218eb2620503d495147f0d04047 | Arsenalalex108/Cburnett_Pieces | [
"license:cc-by-3.0",
"region:us"
]
| 2022-12-16T15:42:54+00:00 | {"license": "cc-by-3.0"} | 2022-12-16T15:49:53+00:00 |
|
f8024be1cf47d74a090b46b9b7c57caff075589f |
# Dataset Card for "Wikiomnia"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Structure](#dataset-structure)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [https://github.com/RussianNLP](https://github.com/RussianNLP)
- **Paper:** [WikiOmnia: filtration and evaluation of the generated QA corpus on the whole Russian Wikipedia](https://arxiv.org/abs/2204.08009)
- **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Dataset Summary
We present the WikiOmnia dataset, a new publicly available set of QA-pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generative pipeline. The dataset includes every available article from Wikipedia for the Russian language. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).
WikiOmnia consists of 2 parts:
1. the voluminous, automatically generated part: 15.9 million triplets consisting of the original article summary, a corresponding generated question and a generated answer;
2. the filtered part: the subsample of 3.5 million triplets, fully verified with automatic means
WikiOmnia adheres to the standard SQuAD format, resulting in triplets "text paragraph - question based on paragraph - answer from the paragraph"; see the following example:
**Original Wikipedia paragraph**: Коити Масимо (яп. Масимо Ко:ити) — известный режиссёр аниме и основатель японской анимационной студии Bee Train. С момента основания студии он руководит производством почти всех её картин, а также время от времени принимает участие в работе над анимацией и музыкой.
**English translation**: Koichi Mashimo is a famous anime director and the founder of the Japanese animation studio Bee Train. Since the creation of the studio, he directed almost all of the studio's works, and he also sometimes participates in art and sound tasks.
**Generated question (ruT5)**: Кто является основателем японской анимационной студии Bee Train?
**Generated answer (ruT5)**: Коити Масимо
**English QA translation**: Who is the founder of the Japanese animation studio Bee Train? Koichi Mashimo
## Dataset Creation
Models used for dataset generation:
- [ruT5](https://huggingface.co/sberbank-ai/ruT5-large) large fine-tuned on SberQuaD
- [ruGPT-3](https://huggingface.co/sberbank-ai/rugpt3xl) XL fine-tuned on SberQuaD
- [ruBERT](http://docs.deeppavlov.ai/en/master/features/models/squad.html) DeepPavlov tuned for QA tasks
Source: Wikipedia version March 2021
Special tokens: <[TEXT]>, <[QUESTION]>, <[ANSWER]>
The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).

## Additional Information
### Licensing Information
[Apache 2.0 license](https://github.com/RussianNLP/WikiOmnia/blob/main/LICENSE)
### Citation Information
```
@inproceedings{pisarevskaya-shavrina-2022-wikiomnia,
title = "{W}iki{O}mnia: filtration and evaluation of the generated {QA} corpus on the whole {R}ussian {W}ikipedia",
author = "Pisarevskaya, Dina and
Shavrina, Tatiana",
booktitle = "Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)",
month = dec,
year = "2022",
address = "Abu Dhabi, United Arab Emirates (Hybrid)",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2022.gem-1.10",
pages = "125--135",
abstract = "The General QA field has been developing the methodology referencing the Stanford Question answering dataset (SQuAD) as the significant benchmark. Compiling factual questions datasets requires manual annotations, limiting the training data{'}s potential size. We present the WikiOmnia dataset, a new publicly available set of QA pairs and corresponding Russian Wikipedia article summary sections, composed with a fully automated generation and filtration pipeline. To ensure high quality of generated QA pairs, diverse manual and automated evaluation techniques were applied. The WikiOmnia pipeline is available open-source and is also tested for creating SQuAD-formatted QA on other domains, like news texts, fiction, and social media. The resulting dataset includes two parts: raw data on the whole Russian Wikipedia (7,930,873 QA pairs with paragraphs for ruGPT-3 XL and 7,991,040 QA pairs with paragraphs for ruT5-large) and cleaned data with strict automatic verification (over 160,000 QA pairs with paragraphs for ruGPT-3 XL and over 3,400,000 QA pairs with paragraphs for ruT5-large).",
}
```
### Contributions
Thanks to [@Deenochka](https://github.com/deenochka), [@TatianaShavrina](https://github.com/TatianaShavrina) | RussianNLP/wikiomnia | [
"task_categories:question-answering",
"size_categories:1M<n<10M",
"language:ru",
"license:apache-2.0",
"wikipedia",
"wikiomnia",
"squad",
"QA",
"arxiv:2204.08009",
"region:us"
]
| 2022-12-16T16:03:40+00:00 | {"language": ["ru"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["question-answering"], "pretty_name": "WikiOmnia", "dataset_info": [{"config_name": "wikiomnia_ruT5_raw", "features": [{"name": "title", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "batch_id", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 600356136, "num_examples": 266295}, {"name": "test", "num_bytes": 572651444, "num_examples": 267751}], "download_size": 1204094848, "dataset_size": 1173007580}, {"config_name": "wikiomnia_ruT5_filtered", "features": [{"name": "title", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "batch_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4157093224, "num_examples": 2088027}], "download_size": 4278635364, "dataset_size": 4157093224}, {"config_name": "wikiomnia_ruGPT3_filtered", "features": [{"name": "title", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "batch_id", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 338607635, "num_examples": 173314}], "download_size": 348694031, "dataset_size": 338607635}, {"config_name": "wikiomnia_ruGPT3_raw", "features": [{"name": "title", "dtype": "string"}, {"name": "categories", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "batch_id", "dtype": "string"}], "splits": [{"name": "train_batch1", "num_bytes": 553204785, "num_examples": 260808}, {"name": "train_batch2", "num_bytes": 542823205, "num_examples": 263599}, {"name": "train_batch3", "num_bytes": 582321994, "num_examples": 269736}, {"name": "train_batch4", "num_bytes": 543315355, "num_examples": 265948}, {"name": "train_batch5", "num_bytes": 513288049, "num_examples": 268466}, {"name": "train_batch6", "num_bytes": 943556173, "num_examples": 512147}, {"name": "train_batch7", "num_bytes": 929464509, "num_examples": 508149}, {"name": "train_batch8", "num_bytes": 915128725, "num_examples": 507559}, {"name": "train_batch9", "num_bytes": 926443048, "num_examples": 504292}, {"name": "train_batch10", "num_bytes": 834958539, "num_examples": 463812}, {"name": "train_batch11", "num_bytes": 509866027, "num_examples": 287770}, {"name": "train_batch12", "num_bytes": 478843738, "num_examples": 271410}, {"name": "train_batch13", "num_bytes": 757068702, "num_examples": 385730}, {"name": "train_batch14", "num_bytes": 575937629, "num_examples": 304110}, {"name": "train_batch15", "num_bytes": 517092031, "num_examples": 277507}, {"name": "train_batch16", "num_bytes": 759363156, "num_examples": 402203}, {"name": "train_batch17", "num_bytes": 860544388, "num_examples": 466572}, {"name": "train_batch18", "num_bytes": 935985528, "num_examples": 518348}, {"name": "train_batch19", "num_bytes": 936782197, "num_examples": 514307}, {"name": "train_batch20", "num_bytes": 874299949, "num_examples": 487238}], "download_size": 14939875008, "dataset_size": 14490287727}, {"config_name": "wikiomnia_ruT5_raw_train", "features": [{"name": "title", "dtype": "string"}, {"name": "categories", "dtype": 
"string"}, {"name": "summary", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "batch_id", "dtype": "string"}], "splits": [{"name": "train_batch3", "num_bytes": 612693602, "num_examples": 271391}, {"name": "train_batch4", "num_bytes": 570286147, "num_examples": 265947}, {"name": "train_batch5", "num_bytes": 552502041, "num_examples": 274650}, {"name": "train_batch6", "num_bytes": 1017066184, "num_examples": 525224}, {"name": "train_batch7", "num_bytes": 972351430, "num_examples": 509615}, {"name": "train_batch8", "num_bytes": 973314180, "num_examples": 516828}, {"name": "train_batch9", "num_bytes": 981651841, "num_examples": 512709}, {"name": "train_batch10", "num_bytes": 880664685, "num_examples": 469512}, {"name": "train_batch11", "num_bytes": 543971388, "num_examples": 294631}, {"name": "train_batch12", "num_bytes": 503939060, "num_examples": 273526}, {"name": "train_batch13", "num_bytes": 794421530, "num_examples": 392021}, {"name": "train_batch14", "num_bytes": 610815879, "num_examples": 311452}, {"name": "train_batch15", "num_bytes": 540225492, "num_examples": 278677}, {"name": "train_batch16", "num_bytes": 804003566, "num_examples": 411192}, {"name": "train_batch17", "num_bytes": 903347135, "num_examples": 469871}, {"name": "train_batch18", "num_bytes": 995239085, "num_examples": 528301}, {"name": "train_batch19", "num_bytes": 1003402360, "num_examples": 522264}, {"name": "train_batch20", "num_bytes": 948137237, "num_examples": 499866}], "download_size": 14634332336, "dataset_size": 14208032842}], "tags": ["wikipedia", "wikiomnia", "squad", "QA"]} | 2023-04-07T05:43:59+00:00 |
91c9ffe6599c9029b106dd781591ae5ccede72ef | https://images.unsplash.com/photo-1611915387288-fd8d2f5f928b?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxleHBsb3JlLWZlZWR8MXx8fGVufDB8fHx8&w=1000&q=80 , a cat lookng | Hisjhsshh/DatasetImageDescribe | [
"region:us"
]
| 2022-12-16T16:16:18+00:00 | {} | 2022-12-16T16:17:38+00:00 |
879e19a2c9bb7ef03f9cf962089cf7d008f91e27 |
## Dataset Description
- **Repository:** [openai/gpt2](https://github.com/openai/gpt-2)
- **Paper:** Radford et al. [Language Models are Unsupervised Multitask Learners](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf)
### Dataset Summary
This dataset comprises the LAMBADA test split as pre-processed by OpenAI (see relevant discussions [here](https://github.com/openai/gpt-2/issues/131#issuecomment-497136199) and [here](https://github.com/huggingface/transformers/issues/491)). It also contains machine translated versions of the split in German, Spanish, French, and Italian.
LAMBADA is used to evaluate the capabilities of computational models for text understanding by means of a word prediction task. LAMBADA is a collection of narrative texts sharing the characteristic that human subjects are able to guess their last word if they are exposed to the whole text, but not if they only see the last sentence preceding the target word. To succeed on LAMBADA, computational models cannot simply rely on local context, but must be able to keep track of information in the broader discourse.
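Concretely, evaluating a model on this split means asking it to predict the final word of each passage. A minimal sketch of that protocol with a small causal LM is shown below; published numbers typically score the tokenized last word exactly rather than via greedy generation, so treat this as an approximation:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

ds = load_dataset("EleutherAI/lambada_openai", "en", split="test")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

n, correct = 100, 0  # small sample for illustration
for example in ds.select(range(n)):
    context, target = example["text"].rsplit(" ", 1)
    inputs = tokenizer(context, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    continuation = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])
    correct += continuation.strip().startswith(target)
print(f"last-word accuracy on {n} examples: {correct / n:.2%}")
```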
### Languages
English, German, Spanish, French, and Italian.
### Source Data
For non-English languages, the data splits were produced by Google Translate. See the [`translation_script.py`](translation_script.py) for more details.
## Additional Information
### Hash Checksums
For data integrity checks we leave the following checksums for the files in this dataset:
| File Name | Checksum (SHA-256) |
|--------------------------------------------------------------------------|------------------------------------------------------------------|
| lambada_test_de.jsonl | 51c6c1795894c46e88e4c104b5667f488efe79081fb34d746b82b8caa663865e |
| [openai/lambada_test.jsonl](https://openaipublic.blob.core.windows.net/gpt-2/data/lambada_test.jsonl) | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_en.jsonl | 4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226 |
| lambada_test_es.jsonl | ffd760026c647fb43c67ce1bc56fd527937304b348712dce33190ea6caba6f9c |
| lambada_test_fr.jsonl | 941ec6a73dba7dc91c860bf493eb66a527cd430148827a4753a4535a046bf362 |
| lambada_test_it.jsonl | 86654237716702ab74f42855ae5a78455c1b0e50054a4593fb9c6fcf7fad0850 |
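To verify a download against these values, a standard-library snippet is enough:

```python
import hashlib

def sha256sum(path: str) -> str:
    """Stream the file so large downloads don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

expected = "4aa8d02cd17c719165fc8a7887fddd641f43fcafa4b1c806ca8abc31fabdb226"
assert sha256sum("lambada_test_en.jsonl") == expected, "checksum mismatch!"
```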
### Licensing
License: [Modified MIT](https://github.com/openai/gpt-2/blob/master/LICENSE)
### Citation
```bibtex
@article{radford2019language,
title={Language Models are Unsupervised Multitask Learners},
author={Radford, Alec and Wu, Jeff and Child, Rewon and Luan, David and Amodei, Dario and Sutskever, Ilya},
year={2019}
}
```
```bibtex
@misc{paperno2016lambada,
  author={Paperno, Denis and Kruszewski, Germán and Lazaridou, Angeliki and Pham, Quan Ngoc and Bernardi, Raffaella and Pezzelle, Sandro and Baroni, Marco and Boleda, Gemma and Fernández, Raquel},
title={The LAMBADA dataset},
DOI={10.5281/zenodo.2630551},
publisher={Zenodo},
year={2016},
month={Aug}
}
```
### Contributions
Thanks to Sid Black ([@sdtblck](https://github.com/sdtblck)) for translating the `lambada_openai` dataset into the non-English languages.
Thanks to Jonathan Tow ([@jon-tow](https://github.com/jon-tow)) for adding this dataset.
| EleutherAI/lambada_openai | [
"task_ids:language-modeling",
"language_creators:machine-generated",
"multilinguality:translation",
"size_categories:1K<n<10K",
"source_datasets:lambada",
"language:de",
"language:en",
"language:es",
"language:fr",
"language:it",
"license:mit",
"region:us"
]
| 2022-12-16T16:35:07+00:00 | {"language_creators": ["machine-generated"], "language": ["de", "en", "es", "fr", "it"], "license": "mit", "multilinguality": ["translation"], "size_categories": ["1K<n<10K"], "source_datasets": ["lambada"], "task_ids": ["language-modeling"], "pretty_name": "LAMBADA OpenAI", "dataset_info": [{"config_name": "default", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1709449, "num_examples": 5153}], "download_size": 1819752, "dataset_size": 1709449}, {"config_name": "de", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1904576, "num_examples": 5153}], "download_size": 1985231, "dataset_size": 1904576}, {"config_name": "en", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1709449, "num_examples": 5153}], "download_size": 1819752, "dataset_size": 1709449}, {"config_name": "es", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1821735, "num_examples": 5153}], "download_size": 1902349, "dataset_size": 1821735}, {"config_name": "fr", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1948795, "num_examples": 5153}], "download_size": 2028703, "dataset_size": 1948795}, {"config_name": "it", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 1813420, "num_examples": 5153}], "download_size": 1894613, "dataset_size": 1813420}]} | 2022-12-16T19:53:23+00:00 |
a8833d9ce9d08dcb1fdd1a4a926c94a05a6e7a84 | # Dataset Card for "australian_sea_slugs"
This is a filtered version of the [Nudibranchs of the Sunshine Coast Australia](https://www.gbif.org/dataset/ee412fa2-edc9-4c6b-91f3-ff2a02c245e0) dataset.
## Citation
```
Atlas of Living Australia (2019). Nudibranchs of the Sunshine Coast Australia. Occurrence dataset https://doi.org/10.15468/gtoiks accessed via GBIF.org on 2022-12-16.
``` | sasha/australian_sea_slugs | [
"region:us"
]
| 2022-12-16T17:34:52+00:00 | {"dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86677304.65602817, "num_examples": 2107}], "download_size": 87406259, "dataset_size": 86677304.65602817}} | 2022-12-16T17:37:05+00:00 |
0e5f02997bfaa45b7b560973a7419eb45a674e65 | awkwardneutrino/test-kallida | [
"license:bigscience-openrail-m",
"region:us"
]
| 2022-12-16T17:39:41+00:00 | {"license": "bigscience-openrail-m"} | 2022-12-16T17:40:29+00:00 |
|
50b606e9d78a38848b8452d72072678cbeaedd05 | This dataset contains information about the 9761 witches from the Crypto Coven NFT project (https://www.cryptocoven.xyz/) collected using OpenSea API.
The folder 'witch_images' includes the images of each witch in three different sizes.
I briefly describe the data in the `witches.csv` below:
- `id`: the id of the witch
- `num_sales`: number of sales up to 4/21/2022 (the day I collected the data)
- `name`: the name of the witch
- `description`: the description of the witch
- `external_link`: the link to the official page for the witch
- `permalink`: the OpenSea link for the witch
- `token_metadata`: the metadata JSON file about the witch
- `token_id`: the token_id of the NFT
- `owner.user.username`: the user name of the current owner
- `owner.address`: the wallet address of the current owner
- `last_sale.total_price`: the price of the last sale in gwei. Note that 1 gwei = 10^9 wei and 1 ether = 10^9 gwei, so 1 ether = 10^18 wei (18 zeros); see the conversion sketch below this list
- `last_sale.payment_token.usd_price`: the USD price of 1 ether (ETH) for the last sale
- `last_sale.transaction.timestamp`: the timestamp of the last sale
- `properties`: there are 32 properties of each witch covering the different design elements of each witch, such as Skin Tone, Eyebrows, Body Shape, etc.
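For instance, a rough USD figure for each last sale can be derived from the columns above. The sketch assumes the gwei unit stated in the list (worth double-checking against `witches_full.csv`, since OpenSea also reports prices in wei):

```python
import pandas as pd

witches = pd.read_csv("witches.csv")

# 1 ether = 10**9 gwei, so convert the gwei price to ETH first...
witches["last_sale_eth"] = witches["last_sale.total_price"] / 1e9
# ...then multiply by the USD price of 1 ETH at the time of the sale.
witches["last_sale_usd"] = (
    witches["last_sale_eth"] * witches["last_sale.payment_token.usd_price"]
)
print(witches[["name", "last_sale_eth", "last_sale_usd"]].head())
```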
`witches_full.csv` is the full data provided by the OpenSea API, such as https://api.opensea.io/api/v1/asset/0x5180db8f5c931aae63c74266b211f580155ecac8/50. I simply flattened the JSON returned by the API. | harrywang/crypto-coven | [
"license:mit",
"region:us"
]
| 2022-12-16T17:57:33+00:00 | {"license": "mit"} | 2022-12-16T18:00:36+00:00 |
75f72c304cfde536c03d1ecb0b63e564424338da | # Dataset Card for "full-hh-rlhf"
Anthropic's HH (Helpful and Harmless) dataset reformatted into prompt, chosen, and rejected samples. | Dahoas/full-hh-rlhf | [
"region:us"
]
| 2022-12-16T20:45:27+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 203150123, "num_examples": 112052}, {"name": "test", "num_bytes": 22606646, "num_examples": 12451}], "download_size": 136150742, "dataset_size": 225756769}} | 2023-02-23T17:29:46+00:00 |
a4a88f2b03f9f8773c8b5b5681e8399e686bf281 | inwaves/dtchess-standard | [
"license:mit",
"region:us"
]
| 2022-12-16T23:20:20+00:00 | {"license": "mit"} | 2022-12-29T03:32:40+00:00 |
|
4b1813e2070c90abb8011eee201264fd6ee4a06a | Keyvanazami/Rocky | [
"license:openrail",
"region:us"
]
| 2022-12-16T23:45:00+00:00 | {"license": "openrail"} | 2022-12-16T23:45:54+00:00 |
|
0fd6a94fd43cd6af2b742eb3e919fb768df3ba42 | JOHNNAKUMRAD7DE/22 | [
"license:openrail",
"region:us"
]
| 2022-12-17T07:12:13+00:00 | {"license": "openrail"} | 2022-12-17T07:12:14+00:00 |
|
49677eb53625813b5a9ce88938e1c7a40ce97c0d | # Dataset Card for "QuadraticEquations2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | snork-maiden/QuadraticEquations2 | [
"region:us"
]
| 2022-12-17T07:14:36+00:00 | {"dataset_info": {"features": [{"name": "text", "sequence": "int64"}, {"name": "label", "dtype": "int64"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 3520000, "num_examples": 80000}, {"name": "test", "num_bytes": 880000, "num_examples": 20000}], "download_size": 1308051, "dataset_size": 4400000}} | 2022-12-17T07:35:13+00:00 |
6c125a5f38f2f1d45d9cfdf71834ecdcf2565fd2 | # Dataset Card for "salvadoran-news"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | justinian336/salvadoran-news | [
"region:us"
]
| 2022-12-17T07:32:24+00:00 | {"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "link", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 276670922, "num_examples": 102366}], "download_size": 159243312, "dataset_size": 276670922}} | 2023-03-21T05:38:49+00:00 |
fe0dd8b35b749f383c063d9bab4dc35fddbe1896 | # Dataset Card for "natural_questions"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | maximedb/natural_questions | [
"region:us"
]
| 2022-12-17T08:16:54+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10087609, "num_examples": 130233}, {"name": "validation", "num_bytes": 714323, "num_examples": 8643}], "download_size": 6827128, "dataset_size": 10801932}} | 2022-12-17T08:17:26+00:00 |
4b0deef671013a7379b62f0b3c18e718a079560e |
## Download
| Windows-CUDA11.6 | Windows-CUDA11.3 |
| ------------------------ | ------------------------ |
| [Download](./116env.zip) | [Download](./113env.zip) |
## Usage
```powershell
./{folder name}/Scripts/Activate.ps1
```
| HuanLin/DiffSVC-WindowsENV | [
"license:gpl",
"region:us"
]
| 2022-12-17T12:46:19+00:00 | {"license": "gpl"} | 2022-12-17T12:58:21+00:00 |
d05e3a37229b15ffd95ef2f7c24356c85cc9575d |
English tweets labeled with positive/negative sentiment. | ad321/test-tweets | [
"task_categories:text-classification",
"task_ids:sentiment-classification",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:1M<n<10M",
"source_datasets:original",
"language:en",
"license:gpl-3.0",
"region:us"
]
| 2022-12-17T13:39:02+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["en"], "license": ["gpl-3.0"], "multilinguality": ["monolingual"], "size_categories": ["1M<n<10M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": ["sentiment-classification"], "pretty_name": "tweeter-dataset-sent-analysis", "tags": [], "train-eval-index": [{"col_mapping": {"label": "labels", "metrics": [{"name": "Accuracy", "type": "accuracy"}, {"args": {"average": "binary"}, "name": "F1 binary", "type": "f1"}], "tweet": "text"}, "config": "default", "splits": {"train_split": "train", "validation_split": "validation"}, "task": "text-classification", "task_id": "binary_classification"}]} | 2022-12-17T14:34:45+00:00 |
800e5d6451bc922e887b89219e6ccaca3fd6ec48 | JLD/unsplash25k-image-embeddings | [
"license:mit",
"region:us"
]
| 2022-12-17T14:41:39+00:00 | {"license": "mit"} | 2022-12-23T20:33:03+00:00 |
|
b8b5d13bb5657c0b8b93a10ab987decac868214e |
# Doc2Query Generated Queries for `msmarco-passage`
This dataset provides the pre-computed generated queries for the [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) dataset,
for use when indexing with Doc2Query.
The generated queries come from the T5 Doc2Query model, released by the original authors [here](https://github.com/castorini/docTTTTTquery).
## Getting started
This artefact is meant to be used with the [`pyterrier_doc2query`](https://github.com/terrierteam/pyterrier_doc2query) package. It can
be installed as:
```bash
pip install git+https://github.com/terrierteam/pyterrier_doc2query
```
Depending on what you are using this artefact for, you may also need the following additional package:
```bash
pip install git+https://github.com/terrierteam/pyterrier_pisa # for indexing / retrieval
```
## Using this artefact
The main use case is to use this artefact in a Doc2Query indexing pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_pisa import PisaIndex
from pyterrier_doc2query import Doc2QueryStore
store = Doc2QueryStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage')
index = PisaIndex('path/to/index')
pipeline = store.generator(limit_k=40) >> index
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
You can also use the store directly as a dataset to look up or iterate over the data:
```python
store.lookup('100')
# {'querygen': ...}
for record in store:
pass
```
## Reproducing this artefact
Due to the random nature of the Doc2Query generation process, this artefact cannot be reproduced verbatim.
The following pipeline runs Doc2Query inference over the MS MARCO dataset. It will not produce the artefact verbatim,
but should produce similar results when used for indexing/retrieval.
```python
import pyterrier as pt ; pt.init()
from pyterrier_doc2query import Doc2Query, Doc2QueryStore
doc2query = Doc2Query('macavaney/doc2query-t5-base-msmarco', num_samples=80)
store = Doc2QueryStore('path/to/store')
pipeline = doc2query >> store
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
Note that this process will take quite some time, since it generates 80 queries for every document in the dataset.
Alternatively, you could reproduce this artefact verbatim using the following script, but it doesn't perform
model inference; it just uses the pre-generated queries from the original authors.
```bash
wget https://git.uwaterloo.ca/jimmylin/doc2query-data/raw/master/T5-passage/predicted_queries_topk_sampling.zip
unzip predicted_queries_topk_sampling.zip
```
```python
from pyterrier_doc2query import Doc2QueryStore
import os
def iter_files(path):
    # iterate over all numbered shards of a query file, yielding one query per line
    i = 0
    while os.path.exists(path.format(i)):
        with open(path.format(i), 'rt') as fin:
            for line in fin:
                yield line.strip()
        i += 1

def it():
    file_iters = [iter_files('predicted_queries_topk_sample{:03}'.format(i)+'.txt{:03}-1004000') for i in range(80)]
    # zip the 80 sampled-query files in parallel; i becomes the docno
    for i, queries in enumerate(zip(*file_iters)):
        yield {'docno': str(i), 'querygen': '\n'.join(queries)}
store = Doc2QueryStore('path/to/store')
store.index(it())
```
| macavaney/d2q-msmarco-passage | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"source_datasets:msmarco-passage",
"document-expansion",
"doc2query",
"region:us"
]
| 2022-12-17T14:53:07+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "source_datasets": ["msmarco-passage"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Doc2Query Generated Queries for `msmarco-passage`", "tags": ["document-expansion", "doc2query"], "viewer": false} | 2022-12-18T20:12:57+00:00 |
32d6db2e304ddfb33cbb6e2243ad42caf4ab32ab |
# Dataset Card for PolQA Dataset
## Dataset Description
- **Paper:** [Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies](https://arxiv.org/abs/2212.08897)
- **Point of Contact:** [Piotr Rybak](mailto:[email protected])
### Dataset Summary
PolQA is the first Polish dataset for open-domain question answering. It consists of 7,000 questions, 87,525 manually labeled evidence passages, and a corpus of over 7 million candidate passages. The dataset can be used to train both a passage retriever and an abstractive reader.
### Supported Tasks and Leaderboards
- `open-domain-qa`: The dataset can be used to train a model for open-domain question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
- `document-retrieval`: The dataset can be used to train a model for document retrieval. Success on this task is typically measured by [top-k retrieval accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) or [NDCG](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.ndcg_score.html); a toy scoring example follows this list.
- `abstractive-qa`: The dataset can be used to train a model for abstractive question answering. Success on this task is typically measured using [metric defined during PolEval 2021](https://2021.poleval.pl/tasks/task4).
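As a toy illustration of the two retrieval metrics linked above (the numbers are made up; only the sklearn functions are real):
```python
import numpy as np
from sklearn.metrics import ndcg_score, top_k_accuracy_score

# Hypothetical run: 3 questions, 4 candidate passages each.
y_true = np.array([0, 2, 1])                 # index of the relevant passage per question
scores = np.array([[0.9, 0.3, 0.2, 0.1],     # retriever scores for each candidate
                   [0.2, 0.4, 0.7, 0.1],
                   [0.3, 0.2, 0.8, 0.05]])

print(top_k_accuracy_score(y_true, scores, k=2, labels=np.arange(4)))
relevance = np.eye(4)[y_true]                # one-hot relevance grades for NDCG
print(ndcg_score(relevance, scores, k=2))
```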
### Languages
The text is in Polish, as spoken by the host of the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show (questions) and [Polish Wikipedia](https://pl.wikipedia.org/) editors (passages). The BCP-47 code for Polish is pl-PL.
## Dataset Structure
### Data Instances
The main part of the dataset consists of manually annotated question-passage pairs. For each instance, there is a `question`, a passage (`passage_id`, `passage_title`, `passage_text`), and a boolean indicator if the passage is `relevant` for the given question (i.e. does it contain the answers).
For each `question` there is a list of possible `answers` formulated in natural language, the way a Polish speaker would answer the question. This means that the answers might contain prepositions, be inflected, and contain punctuation. In some cases, the answer might have multiple correct variants, e.g. numbers written as numerals and words, synonyms, abbreviations and their expansions.
Additionally, we provide a classification of each question-answer pair based on the `question_formulation`, the `question_type`, and the `entity_type/entity_subtype`, according to the taxonomy proposed by
[Maciej Ogrodniczuk and Piotr Przybyła (2021)](http://nlp.ipipan.waw.pl/Bib/ogr:prz:21:poleval.pdf).
```
{
'question_id': 6,
'passage_title': 'Mumbaj',
'passage_text': 'Mumbaj lub Bombaj (marathi मुंबई, trb.: Mumbaj; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim.',
'passage_wiki': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.',
'passage_id': '42609-0',
'duplicate': False,
'question': 'W którym państwie leży Bombaj?',
'relevant': True,
'annotated_by': 'Igor',
'answers': "['w Indiach', 'Indie']",
'question_formulation': 'QUESTION',
'question_type': 'SINGLE ENTITY',
'entity_type': 'NAMED',
'entity_subtype': 'COUNTRY',
'split': 'train',
'passage_source': 'human'
}
```
The second part of the dataset is a corpus of Polish Wikipedia (March 2022 snapshot) passages. The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
```
{
'id': '42609-0',
'title': 'Mumbaj',
'text': 'Mumbaj lub Bombaj (mr. मुंबई, trb.: "Mumbaj"; ang. Mumbai; do 1995 Bombay) – stolica indyjskiego stanu Maharasztra, położona na wyspie Salsette, na Morzu Arabskim. Wraz z miastami satelitarnymi tworzy najludniejszą po Delhi aglomerację liczącą 23 miliony mieszkańców. Dzięki naturalnemu położeniu jest to największy port morski kraju. Znajdują się tutaj także najsilniejsze giełdy Azji Południowej: National Stock Exchange of India i Bombay Stock Exchange.'
}
```
### Data Fields
Question-passage pairs:
- `question_id`: an integer id of the question
- `passage_title`: a string containing the title of the Wikipedia article
- `passage_text`: a string containing the passage text as extracted by the human annotator
- `passage_wiki`: a string containing the passage text as it can be found in the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `passage_id`: a string containing the id of the passage from the provided Wikipedia corpus. Empty if the passage doesn't exist in the corpus.
- `duplicate`: a boolean flag representing whether a question-passage pair is duplicated in the dataset. This occurs when the same passage was found in multiple passage sources.
- `question`: a string containing the question
- `relevant`: a boolean flag representing whether a passage is relevant to the question (i.e. does it contain the answers)
- `annotated_by`: a string containing the name of the annotator who verified the relevance of the pair
- `answers`: a string containing a list of possible short answers to the question
- `question_formulation`: a string containing a kind of expression used to request information. One of the following:
- `QUESTION`, e.g. *What is the name of the first letter of the Greek alphabet?*
- `COMMAND`, e.g. *Expand the abbreviation โCIAโ.*
- `COMPOUND`, e.g. *This French writer, born in the 19th century, is
considered a pioneer of sci-fi literature. What is his name?*
- `question_type`: a string indicating what type of information is sought by the question. One of the following:
- `SINGLE ENTITY`, e.g. *Who is the hero in the Tomb Rider video game series?*
- `MULTIPLE ENTITIES`, e.g. *Which two seas are linked by the Corinth Canal?*
- `ENTITY CHOICE`, e.g. *Is "Sombrero" a type of dance, a hat, or a dish?*
- `YES/NO`, e.g. *When the term of office of the Polish Sejm is terminated, does it apply to the Senate as well?*
- `OTHER NAME`, e.g. *What was the nickname of Louis I, the King of the Franks?*
- `GAP FILLING`, e.g. *Finish the proverb: "If you fly with the crows... ".*
- `entity_type`: a string containing a type of the sought entity. One of the following: `NAMED`, `UNNAMED`, or `YES/NO`.
- `entity_subtype`: a string containing a subtype of the sought entity. Can take one of the 34 different values.
- `split`: a string containing the split of the dataset. One of the following: `train`, `valid`, or `test`.
- `passage_source`: a string containing the source of the passage. One of the following:
- `human`: the passage was proposed by a human annotator using any
internal (i.e. Wikipedia search) or external (e.g. Google) search engines and any keywords or queries they considered useful
- `hard-negatives`: the passage was proposed using a neural retriever trained on the passages found by the human annotators
- `zero-shot`: the passage was proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2)
Corpus of passages:
- `id`: a string representing the Wikipedia article id and the index of extracted passage. Matches the `passage_id` from the main part of the dataset.
- `title`: a string containing the title of the Wikipedia article. Matches the `passage_title` from the main part of the dataset.
- `text`: a string containing the passage text. Matches the `passage_wiki` from the main part of the dataset.
### Data Splits
The questions are assigned to one of three splits: `train`, `validation`, and `test`. The `validation` and `test` questions are randomly sampled from the `test-B` dataset from the [PolEval 2021](https://2021.poleval.pl/tasks/task4) competition.
| | # questions | # positive passages | # negative passages |
|------------|------------:|--------------------:|--------------------:|
| train | 5,000 | 27,131 | 34,904 |
| validation | 1,000 | 5,839 | 6,927 |
| test | 1,000 | 5,938 | 6,786 |
## Dataset Creation
### Curation Rationale
The PolQA dataset was created to support and promote the research in the open-domain question answering for Polish. It also serves as a benchmark to evaluate OpenQA systems.
### Source Data
#### Initial Data Collection and Normalization
The majority of questions come from two existing resources, the
6,000 questions from the [PolEval 2021 shared task on QA](https://2021.poleval.pl/tasks/task4) and additional 1,000 questions gathered by one of the shared
task [participants](http://poleval.pl/files/poleval2021.pdf#page=151). Originally, the questions come from collections associated with TV shows, both officially published and gathered online by their fans, as well as questions used in actual quiz competitions, on TV or online.
The evidence passages come from the Polish Wikipedia (March 2022 snapshot). The raw Wikipedia snapshot was parsed using [WikiExtractor](https://github.com/attardi/wikiextractor) and split into passages at the ends of the paragraphs or if the passage was longer than 500 characters.
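A minimal sketch of that splitting rule (the paragraph delimiter and the word-boundary fallback are assumptions, not the authors' exact code):
```python
def split_passages(article_text, max_len=500):
    # Split a parsed article into passages at paragraph ends,
    # breaking earlier if a passage would exceed max_len characters.
    passages = []
    for paragraph in article_text.split("\n\n"):      # assumed paragraph delimiter
        paragraph = paragraph.strip()
        while len(paragraph) > max_len:
            cut = paragraph.rfind(" ", 0, max_len)    # break at a word boundary
            cut = cut if cut > 0 else max_len
            passages.append(paragraph[:cut])
            paragraph = paragraph[cut:].lstrip()
        if paragraph:
            passages.append(paragraph)
    return passages
```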
#### Who are the source language producers?
The questions come from various sources and their authors are unknown but are mostly analogous (or even identical) to questions asked during the [Jeden z Dziesięciu](https://pl.wikipedia.org/wiki/Jeden_z_dziesi%C4%99ciu) TV show.
The passages were written by the editors of the Polish Wikipedia.
### Annotations
#### Annotation process
Two approaches were used to annotate the question-passage pairs. Each of them consists of two phases: the retrieval of candidate passages and the manual verification of their relevance.
In the first approach, we asked annotators to use internal (i.e. Wikipedia search) or external (e.g. Google) search engines to find up to five relevant passages using any keywords or queries they consider useful (`passage_source="human"`). Based on those passages, we trained the neural retriever to extend the number of relevant passages, as well as to retrieve the hard negatives (`passage_source="hard-negatives"`).
In the second approach, the passage candidates were proposed by the BM25 retriever and re-ranked using [multilingual cross-encoder](https://huggingface.co/unicamp-dl/mMiniLM-L6-v2-mmarco-v2) (`passage_source="zero-shot"`).
In both cases, all proposed question-passage pairs were manually verified by the annotators.
#### Who are the annotators?
The annotation team consisted of 16 annotators, all native Polish
speakers, most of them having linguistic backgrounds and previous
experience as an annotator.
### Personal and Sensitive Information
The dataset does not contain any personal or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was created to promote the research in the open-domain question answering for Polish and allow developing question answering systems.
### Discussion of Biases
The passages proposed by the `hard-negative` and `zero-shot` methods are bound to be easier to retrieve by retrievers since they were proposed by such. To mitigate this bias, we include the passages found by the human annotators in an unconstrained way (`passage_source="human"`). We hypothesize that it will result in more unbiased and diverse examples. Moreover, we asked the annotators to find not one but up to five passages, preferably from different articles to even further increase passage diversity.
### Other Known Limitations
The PolQA dataset focuses on trivia questions, which might limit its usefulness in real-world applications, since neural retrievers generalize poorly to other domains.
## Additional Information
### Dataset Curators
The PolQA dataset was developed by Piotr Rybak, Piotr Przybyła, and Maciej Ogrodniczuk from the [Institute of Computer Science, Polish Academy of Sciences](http://zil.ipipan.waw.pl/).
This work was supported by the European Regional Development Fund as a part of the 2014–2020 Smart Growth Operational Programme, CLARIN – Common Language Resources and Technology Infrastructure, project no. POIR.04.02.00-00C002/19.
### Licensing Information
CC BY-SA 4.0
### Citation Information
```
@misc{rybak2022improving,
title={Improving Question Answering Performance through Manual Annotation: Costs, Benefits and Strategies},
author={Piotr Rybak and Piotr Przybyła and Maciej Ogrodniczuk},
year={2022},
eprint={2212.08897},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | ipipan/polqa | [
"task_categories:question-answering",
"task_categories:text-retrieval",
"task_categories:text2text-generation",
"task_ids:open-domain-qa",
"task_ids:document-retrieval",
"task_ids:abstractive-qa",
"annotations_creators:expert-generated",
"size_categories:10K<n<100K",
"language:pl",
"license:cc-by-sa-4.0",
"arxiv:2212.08897",
"region:us"
]
| 2022-12-17T15:03:58+00:00 | {"annotations_creators": ["expert-generated"], "language": ["pl"], "license": "cc-by-sa-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering", "text-retrieval", "text2text-generation"], "task_ids": ["open-domain-qa", "document-retrieval", "abstractive-qa"], "pretty_name": "PolQA"} | 2023-09-09T12:37:44+00:00 |
c671fd20220e568e25b7b65c723daf9eddfdf677 |
# Doc2Query ELECTRA Relevance Scores for `msmarco-passage`
This dataset provides the pre-computed query relevance scores for the [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) dataset,
for use with Doc2Query--.
The generated queries come from [`macavaney/d2q-msmarco-passage`](https://huggingface.co/datasets/macavaney/d2q-msmarco-passage) and
were scored with [`crystina-z/monoELECTRA_LCE_nneg31`](https://huggingface.co/crystina-z/monoELECTRA_LCE_nneg31).
## Getting started
This artefact is meant to be used with the [`pyterrier_doc2query`](https://github.com/terrierteam/pyterrier_doc2query) package. It can
be installed as:
```bash
pip install git+https://github.com/terrierteam/pyterrier_doc2query
```
Depending on what you are using this artefact for, you may also need the following additional packages:
```bash
pip install git+https://github.com/terrierteam/pyterrier_pisa # for indexing / retrieval
pip install git+https://github.com/terrierteam/pyterrier_dr # for reproducing this artefact
```
## Using this artefact
The main use case is to use this artefact in a Doc2Query-- indexing pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_pisa import PisaIndex
from pyterrier_doc2query import QueryScoreStore, QueryFilter
store = QueryScoreStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage-scores-electra')
index = PisaIndex('path/to/index')
pipeline = store.query_scorer(limit_k=40) >> QueryFilter(t=store.percentile(70)) >> index
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
You can also use the store directly as a dataset to look up or iterate over the data:
```python
store.lookup('100')
# {'querygen': ..., 'querygen_store': ...}
for record in store:
pass
```
## Reproducing this artefact
This artefact can be reproduced using the following pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_dr import ElectraScorer
from pyterrier_doc2query import Doc2QueryStore, QueryScoreStore, QueryScorer
doc2query_generator = Doc2QueryStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage').generator()
store = QueryScoreStore('path/to/store')
pipeline = doc2query_generator >> QueryScorer(ElectraScorer()) >> store
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
Note that this process will take quite some time; it computes the relevance score for 80 generated queries
for every document in the dataset.
| macavaney/d2q-msmarco-passage-scores-electra | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"source_datasets:msmarco-passage",
"document-expansion",
"doc2query--",
"region:us"
]
| 2022-12-17T15:18:38+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "source_datasets": ["msmarco-passage"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Doc2Query ELECTRA Relevance Scores for `msmarco-passage`", "tags": ["document-expansion", "doc2query--"], "viewer": false} | 2022-12-18T20:12:10+00:00 |
a257ab4fcfa09707711f7453885105230f634998 |
# Doc2Query monoT5 Relevance Scores for `msmarco-passage`
This dataset provides the pre-computed query relevance scores for the [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) dataset,
for use with Doc2Query--.
The generated queries come from [`macavaney/d2q-msmarco-passage`](https://huggingface.co/datasets/macavaney/d2q-msmarco-passage) and
were scored with [`castorini/monot5-base-msmarco`](https://huggingface.co/castorini/monot5-base-msmarco).
## Getting started
This artefact is meant to be used with the [`pyterrier_doc2query`](https://github.com/terrierteam/pyterrier_doc2query) package. It can
be installed as:
```bash
pip install git+https://github.com/terrierteam/pyterrier_doc2query
```
Depending on what you are using this artefact for, you may also need the following additional packages:
```bash
pip install git+https://github.com/terrierteam/pyterrier_pisa # for indexing / retrieval
pip install git+https://github.com/terrierteam/pyterrier_t5 # for reproducing this artefact
```
## Using this artefact
The main use case is to use this artefact in a Doc2Query-- indexing pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_pisa import PisaIndex
from pyterrier_doc2query import QueryScoreStore, QueryFilter
store = QueryScoreStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage-scores-monot5')
index = PisaIndex('path/to/index')
pipeline = store.query_scorer(limit_k=40) >> QueryFilter(t=store.percentile(70)) >> index
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
You can also use the store directly as a dataset to look up or iterate over the data:
```python
store.lookup('100')
# {'querygen': ..., 'querygen_store': ...}
for record in store:
pass
```
## Reproducing this artefact
This artefact can be reproduced using the following pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_t5 import MonoT5ReRanker
from pyterrier_doc2query import Doc2QueryStore, QueryScoreStore, QueryScorer
doc2query_generator = Doc2QueryStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage').generator()
store = QueryScoreStore('path/to/store')
pipeline = doc2query_generator >> QueryScorer(MonoT5ReRanker()) >> store
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
Note that this process will take quite some time; it computes the relevance score for 80 generated queries
for every document in the dataset.
| macavaney/d2q-msmarco-passage-scores-monot5 | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"source_datasets:msmarco-passage",
"document-expansion",
"doc2query--",
"region:us"
]
| 2022-12-17T15:19:01+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "source_datasets": ["msmarco-passage"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Doc2Query monoT5 Relevance Scores for `msmarco-passage`", "tags": ["document-expansion", "doc2query--"], "viewer": false} | 2022-12-18T20:13:58+00:00 |
9d19c8e11a1715ad1ec70122561bad5488eec68e |
# Doc2Query TCT Relevance Scores for `msmarco-passage`
This dataset provides the pre-computed query relevance scores for the [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) dataset,
for use with Doc2Query--.
The generated queries come from [`macavaney/d2q-msmarco-passage`](https://huggingface.co/datasets/macavaney/d2q-msmarco-passage) and
were scored with [`castorini/tct_colbert-v2-hnp-msmarco`](https://huggingface.co/castorini/tct_colbert-v2-hnp-msmarco).
## Getting started
This artefact is meant to be used with the [`pyterrier_doc2query`](https://github.com/terrierteam/pyterrier_doc2query) package. It can
be installed as:
```bash
pip install git+https://github.com/terrierteam/pyterrier_doc2query
```
Depending on what you are using this artefact for, you may also need the following additional packages:
```bash
pip install git+https://github.com/terrierteam/pyterrier_pisa # for indexing / retrieval
pip install git+https://github.com/terrierteam/pyterrier_dr # for reproducing this artefact
```
## Using this artefact
The main use case is to use this artefact in a Doc2Query-- indexing pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_pisa import PisaIndex
from pyterrier_doc2query import QueryScoreStore, QueryFilter
store = QueryScoreStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage-scores-tct')
index = PisaIndex('path/to/index')
pipeline = store.query_scorer(limit_k=40) >> QueryFilter(t=store.percentile(70)) >> index
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
You can also use the store directly as a dataset to look up or iterate over the data:
```python
store.lookup('100')
# {'querygen': ..., 'querygen_store': ...}
for record in store:
pass
```
## Reproducing this artefact
This artefact can be reproduced using the following pipeline:
```python
import pyterrier as pt ; pt.init()
from pyterrier_dr import TctColBert
from pyterrier_doc2query import Doc2QueryStore, QueryScoreStore, QueryScorer
doc2query_generator = Doc2QueryStore.from_repo('https://huggingface.co/datasets/macavaney/d2q-msmarco-passage').generator()
store = QueryScoreStore('path/to/store')
pipeline = doc2query_generator >> QueryScorer(TctColBert('castorini/tct_colbert-v2-hnp-msmarco')) >> store
dataset = pt.get_dataset('irds:msmarco-passage')
pipeline.index(dataset.get_corpus_iter())
```
Note that this process will take quite some time; it computes the relevance score for 80 generated queries
for every document in the dataset.
| macavaney/d2q-msmarco-passage-scores-tct | [
"task_categories:text-retrieval",
"task_ids:document-retrieval",
"annotations_creators:no-annotation",
"language_creators:machine-generated",
"source_datasets:msmarco-passage",
"document-expansion",
"doc2query--",
"region:us"
]
| 2022-12-17T15:19:11+00:00 | {"annotations_creators": ["no-annotation"], "language_creators": ["machine-generated"], "language": [], "license": [], "source_datasets": ["msmarco-passage"], "task_categories": ["text-retrieval"], "task_ids": ["document-retrieval"], "pretty_name": "Doc2Query TCT Relevance Scores for `msmarco-passage`", "tags": ["document-expansion", "doc2query--"], "viewer": false} | 2022-12-18T20:13:32+00:00 |
33705626e34dac1ae1f224e94061def310b0ba46 | miraclenugget/ayaka | [
"license:unknown",
"region:us"
]
| 2022-12-17T15:47:06+00:00 | {"license": "unknown"} | 2022-12-19T02:39:22+00:00 |
|
b7293a0e2ddb95e0eaa520029301a93d01bafd7f |
# Utilising Weak Supervision to Create S3D: A Sarcasm Annotated Dataset
This is the repository for the S3D dataset published at EMNLP 2022. The dataset can help build sarcasm detection models.
# S3D-v2 Summary
The S3D-v2 dataset is our silver standard dataset of 100,000 tweets labelled for sarcasm using weak supervision by a majority voting system of fine-tuned sarcasm detection models. The models used are
our [roberta-large-finetuned-SARC-combined-DS](https://huggingface.co/surrey-nlp/roberta-large-finetuned-SARC-combined-DS), [bertweet-base-finetuned-SARC-DS](https://huggingface.co/surrey-nlp/bertweet-base-finetuned-SARC-DS)
and [bertweet-base-finetuned-SARC-combined-DS](https://huggingface.co/surrey-nlp/bertweet-base-finetuned-SARC-combined-DS) models.
S3D-v2 contains 13,016 tweets labelled as sarcastic and 86,904 tweets labelled as not sarcastic.
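A minimal sketch of that majority-vote labelling scheme; the model handles are the real checkpoints listed above, but treating the top-scoring prediction as a binary vote, and the `LABEL_1` naming, are assumptions:
```python
from transformers import pipeline

MODELS = [
    "surrey-nlp/roberta-large-finetuned-SARC-combined-DS",
    "surrey-nlp/bertweet-base-finetuned-SARC-DS",
    "surrey-nlp/bertweet-base-finetuned-SARC-combined-DS",
]
classifiers = [pipeline("text-classification", model=m) for m in MODELS]

def weak_label(tweet: str) -> int:
    # Each model casts one vote; at least 2 of 3 sarcasm votes yields silver label 1.
    votes = sum(clf(tweet)[0]["label"] == "LABEL_1" for clf in classifiers)
    return int(votes >= 2)
```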
# Data Fields
- Text: The preprocessed tweet
- Label: A label to denote if a given tweet is sarcastic
# Data Splits
- Train: 70,000
- Valid: 15,000
- Test: 15,000 | surrey-nlp/S3D-v2 | [
"task_categories:text-classification",
"annotations_creators:Jordan Painter, Diptesh Kanojia",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:en",
"license:cc-by-sa-4.0",
"region:us"
]
| 2022-12-17T18:00:12+00:00 | {"annotations_creators": ["Jordan Painter, Diptesh Kanojia"], "language": ["en"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["text-classification"], "pretty_name": "Utilising Weak Supervision to create S3D: A Sarcasm Annotated Dataset"} | 2022-12-17T18:17:27+00:00 |
65a7c106f0a8d42d69437eeef097d3ecba7909a1 | JohnMeta8/pandacraft_maker | [
"region:us"
]
| 2022-12-17T18:17:11+00:00 | {} | 2022-12-17T18:29:16+00:00 |
|
37beb97b7900281cd67ac189d4fb91c589b25582 | # Dataset Card for Birdsnap Dataset v 1.1.
Welcome to the Birdsnap dataset, consisting of 49,829 images of 500 species of North American birds, collected from Flickr, and corresponding species, bounding box, and part labels.
The dataset distribution also includes the following files:
1. species.txt
This file lists the species in the dataset. The first line is a header. Each subsequent line represents a species. Lines are tab-delimited, and the fields
are:
- id: An integer id for the species. These ids run from 1 to 500 for the 500 species.
- common: The common English name of the species, for example "Blue Jay."
- scientific: The scientific (Latin) name of the species, for example "Cyanocitta cristata."
- dir: The name of a directory in which to store the images of this species. This is just the common name with spaces and other dangerous-in-file-path characters replaced or removed.
2. images.txt
This file lists the images in the dataset, with the corresponding bounding boxes, part locations, and species labels. Like species.txt, it is tab-delimited with the first line giving field names (a parsing sketch follows the file listing). The fields are:
- url: The URL from which the image was downloaded.
- md5: An MD5 sum of the image file contents.
- path: The local path of the image.
- species_id: The id of the species of the labeled bird in the image.
- bb_x1, bb_y1, bb_x2, bb_y2: The coordinates of the top-left (bb_x1, bb_y1) and bottom-right (bb_x2, bb_y2) corners of the bounding box of the labeled
bird.
- ${part}_x, ${part}_y: The coordinates of part ${part}. Parts are back, beak, belly, breast, crown, forehead, left_cheek, left_eye, left_leg, left_wing, nape, right_cheek, right_eye, right_leg, right_wing, tail, throat.
3. test_images.txt
This file lists the 2443 test images used in the species identification experiments in the paper. It has a header line, then the "path" (from images.txt) of each test image, one per line.
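A minimal sketch (not official tooling) for reading these tab-delimited files; the field names follow the headers described above:
```python
import csv

def read_table(path):
    # species.txt and images.txt are tab-delimited with a header row
    with open(path, newline="") as f:
        yield from csv.DictReader(f, delimiter="\t")

id_to_common = {row["id"]: row["common"] for row in read_table("species.txt")}

with open("test_images.txt") as f:
    next(f)                                    # skip the header line
    test_paths = {line.strip() for line in f}

# everything not listed in test_images.txt belongs to the training split
train_rows = [r for r in read_table("images.txt") if r["path"] not in test_paths]
```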
### Citation
```
@inproceedings{berg2014birdsnap,
title={Birdsnap: Large-scale fine-grained visual categorization of birds},
author={Berg, Thomas and Liu, Jiongxin and Woo Lee, Seung and Alexander, Michelle L and Jacobs, David W and Belhumeur, Peter N},
booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
pages={2011--2018},
year={2014}
}
```
| sasha/birdsnap | [
"region:us"
]
| 2022-12-17T20:35:55+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 63184668691.7, "num_examples": 39860}], "download_size": 69093722465, "dataset_size": 63184668691.7}} | 2022-12-17T21:29:07+00:00 |
883fc9343999ef5f3d5b1f23a1ef78f517106433 | # Dataset Card for "unet-lsun-256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Dahoas/unet-lsun-256 | [
"region:us"
]
| 2022-12-17T21:18:33+00:00 | {"dataset_info": {"features": [{"name": "images", "sequence": {"sequence": {"sequence": "float32"}}}], "splits": [{"name": "train", "num_bytes": 39513896960, "num_examples": 50048}], "download_size": 39351524715, "dataset_size": 39513896960}} | 2022-12-19T16:02:31+00:00 |
de93a3333b46bcad6f5e35a456410b67b2ed4e01 | ghmfx/natural-questions-short | [
"license:wtfpl",
"region:us"
]
| 2022-12-17T21:28:43+00:00 | {"license": "wtfpl"} | 2022-12-17T21:29:07+00:00 |
|
a29a9757125f4bb1c26445ad0d2ef7d9b2cc9c4c | Preprocessed version of Super-Natural-Instructions from https://github.com/allenai/natural-instructions/tree/master/splits. The same inputs may appear with different outputs; to avoid duplicate inputs, you can deduplicate by the `id` or the `inputs` field, as shown in the sketch below.
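A minimal sketch of that deduplication with the `datasets` library (the repo id is a placeholder for this dataset's actual path):
```python
from datasets import load_dataset

ds = load_dataset("user/supernatural-instructions", split="train")  # hypothetical repo id

seen = set()
def first_occurrence(example):
    # keep only the first row per `id`; use example["inputs"] to dedupe on inputs instead
    key = example["id"]
    if key in seen:
        return False
    seen.add(key)
    return True

deduped = ds.filter(first_occurrence)
```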
Train Tasks:
```
['task001_quoref_question_generation', 'task002_quoref_answer_generation', 'task022_cosmosqa_passage_inappropriate_binary', 'task023_cosmosqa_question_generation', 'task024_cosmosqa_answer_generation', 'task025_cosmosqa_incorrect_answer_generation', 'task026_drop_question_generation', 'task027_drop_answer_type_generation', 'task028_drop_answer_generation', 'task043_essential_terms_answering_incomplete_questions', 'task044_essential_terms_identifying_essential_words', 'task045_miscellaneous_sentence_paraphrasing', 'task046_miscellaneous_question_typing', 'task047_miscellaneous_answering_science_questions', 'task059_ropes_story_generation', 'task060_ropes_question_generation', 'task061_ropes_answer_generation', 'task062_bigbench_repeat_copy_logic', 'task063_first_i_elements', 'task064_all_elements_except_first_i', 'task065_timetravel_consistent_sentence_classification', 'task066_timetravel_binary_consistency_classification', 'task067_abductivenli_answer_generation', 'task068_abductivenli_incorrect_answer_generation', 'task069_abductivenli_classification', 'task070_abductivenli_incorrect_classification', 'task071_abductivenli_answer_generation', 'task072_abductivenli_answer_generation', 'task073_commonsenseqa_answer_generation', 'task074_squad1.1_question_generation', 'task075_squad1.1_answer_generation', 'task076_splash_correcting_sql_mistake', 'task077_splash_explanation_to_sql', 'task078_all_elements_except_last_i', 'task079_conala_concat_strings', 'task080_piqa_answer_generation', 'task081_piqa_wrong_answer_generation', 'task082_babi_t1_single_supporting_fact_question_generation', 'task083_babi_t1_single_supporting_fact_answer_generation', 'task084_babi_t1_single_supporting_fact_identify_relevant_fact', 'task085_unnatural_addsub_arithmetic', 'task087_new_operator_addsub_arithmetic', 'task088_identify_typo_verification', 'task089_swap_words_verification', 'task090_equation_learner_algebra', 'task091_all_elements_from_index_i_to_j', 'task092_check_prime_classification', 'task093_conala_normalize_lists', 'task094_conala_calculate_mean', 'task095_conala_max_absolute_value', 'task096_conala_list_index_subtraction', 'task097_conala_remove_duplicates', 'task098_conala_list_intersection', 'task099_reverse_elements_between_index_i_and_j', 'task100_concatenate_all_elements_from_index_i_to_j', 'task101_reverse_and_concatenate_all_elements_from_index_i_to_j', 'task103_facts2story_long_text_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task105_story_cloze-rocstories_sentence_generation', 'task107_splash_question_to_sql', 'task1087_two_number_sum', 'task1088_array_of_products', 'task1089_check_monotonic_array', 'task108_contextualabusedetection_classification', 'task109_smsspamcollection_spamsmsdetection', 'task110_logic2text_sentence_generation', 'task111_asset_sentence_simplification', 'task112_asset_simple_sentence_identification', 'task1135_xcsr_en_commonsense_mc_classification', 'task113_count_frequency_of_letter', 'task1146_country_capital', 'task1147_country_currency', 'task1148_maximum_ascii_value', 'task1149_item_check_edible', 'task114_is_the_given_word_longest', 'task1150_delete_max_min', 'task1151_swap_max_min', 'task115_help_advice_classification', 'task1167_penn_treebank_coarse_pos_tagging', 'task1168_brown_coarse_pos_tagging', 'task116_com2sense_commonsense_reasoning', 'task1186_nne_hrngo_classification', 'task1188_count_max_freq_char', 'task1189_check_char_in_string', 
'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1190_add_integer_to_list', 'task1191_food_veg_nonveg', 'task1192_food_flavor_profile', 'task1193_food_course_classification', 'task1194_kth_largest_element', 'task1196_atomic_classification_oeffect', 'task1197_atomic_classification_oreact', 'task1198_atomic_classification_owant', 'task1199_atomic_classification_xattr', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 'task1200_atomic_classification_xeffect', 'task1201_atomic_classification_xintent', 'task1202_atomic_classification_xneed', 'task1203_atomic_classification_xreact', 'task1204_atomic_classification_hinderedby', 'task1205_atomic_classification_isafter', 'task1206_atomic_classification_isbefore', 'task1207_atomic_classification_atlocation', 'task1208_atomic_classification_xreason', 'task1209_atomic_classification_objectuse', 'task1210_atomic_classification_madeupof', 'task1211_atomic_classification_hassubevent', 'task1212_atomic_classification_hasproperty', 'task1213_atomic_classification_desires', 'task1214_atomic_classification_xwant', 'task1215_atomic_classification_capableof', 'task1216_atomic_classification_causes', 'task1217_atomic_answer_generation', 'task122_conala_list_index_addition', 'task123_conala_sort_dictionary', 'task124_conala_pair_averages', 'task125_conala_pair_differences', 'task126_scan_structured_text_generation_command_action_all', 'task127_scan_long_text_generation_action_command_all', 'task1283_hrngo_quality_classification', 'task1284_hrngo_informativeness_classification', 'task1285_kpa_keypoint_matching', 'task1286_openbookqa_question_answering', 'task1288_glue_mrpc_paraphrasing', 'task1289_trec_classification', 'task128_scan_structured_text_generation_command_action_short', 'task1290_xsum_summarization', 'task1291_multi_news_summarization', 'task1292_yelp_review_full_text_categorization', 'task1293_kilt_tasks_hotpotqa_question_answering', 'task1294_wiki_qa_answer_verification', 'task1295_adversarial_qa_question_answering', 'task1296_wiki_hop_question_answering', 'task129_scan_long_text_generation_action_command_short', 'task1308_amazonreview_category_classification', 'task1309_amazonreview_summary_classification', 'task130_scan_structured_text_generation_command_action_long', 'task1310_amazonreview_rating_classification', 'task1311_amazonreview_rating_classification', 'task1312_amazonreview_polarity_classification', 'task1313_amazonreview_polarity_classification', 'task1314_country_abbreviation', 'task1315_find_range_array', 'task1316_remove_duplicates_string', 'task1317_country_calling_code', 'task1318_country_national_dish', 'task1319_country_by_barcode_prefix', 'task131_scan_long_text_generation_action_command_long', 'task1320_country_domain_tld', 'task1321_country_continent', 'task1322_country_government_type', 'task1325_qa_zre_question_generation_on_subject_relation', 'task1326_qa_zre_question_generation_from_answer', 'task1327_qa_zre_answer_generation_from_question', 'task1328_qa_zre_relation_generation_from_question', 'task132_dais_text_modification', 'task1331_reverse_array', 'task1332_check_leap_year', 'task1333_check_validity_date_ddmmyyyy', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task1340_msr_text_compression_compression', 'task1341_msr_text_classification', 'task1346_glue_cola_grammatical_correctness_classification', 
'task1347_glue_sts-b_similarity_classification', 'task1354_sent_comp_classification', 'task1355_sent_comp_summarization', 'task1359_numer_sense_answer_generation', 'task1360_numer_sense_multiple_choice_qa_generation', 'task1361_movierationales_classification', 'task1364_hans_answer_generation', 'task1366_healthfact_classification', 'task1368_healthfact_sentence_generation', 'task1369_healthfact_sentence_generation', 'task1378_quarel_correct_answer_generation', 'task1379_quarel_incorrect_answer_generation', 'task137_detoxifying-lms_classification_toxicity', 'task1380_quarel_correct_option_generation', 'task1381_quarel_incorrect_option_generation', 'task1382_quarel_write_correct_answer', 'task1383_quarel_write_incorrect_answer', 'task1384_deal_or_no_dialog_classification', 'task1389_hellaswag_completion', 'task138_detoxifying-lms_classification_fluency', 'task1398_obqa_question_generation', 'task1399_obqa_answer_generation', 'task139_detoxifying-lms_classification_topicality', 'task1400_obqa_incorrect_answer_generation', 'task1401_obqa_sentence_generation', 'task1403_check_validity_date_mmddyyyy', 'task1404_date_conversion', 'task1405_find_median', 'task1406_kth_smallest_element', 'task140_detoxifying-lms_classification_style', 'task1412_web_questions_question_answering', 'task1418_bless_semantic_relation_classification', 'task1419_mathqa_gain', 'task141_odd-man-out_classification_category', 'task1420_mathqa_general', 'task1421_mathqa_other', 'task1422_mathqa_physics', 'task1423_mathqa_geometry', 'task1424_mathqa_probability', 'task1425_country_iso_numeric', 'task1426_country_independence_year', 'task1427_country_region_in_world', 'task1428_country_surface_area', 'task1429_evalution_semantic_relation_classification', 'task142_odd-man-out_classification_no_category', 'task1431_head_qa_answer_generation', 'task1434_head_qa_classification', 'task143_odd-man-out_classification_generate_category', 'task1443_string_to_number', 'task1444_round_power_of_two', 'task1445_closest_integers', 'task1446_farthest_integers', 'task1447_drug_extraction_ade', 'task1448_disease_entity_extraction_ncbi_dataset', 'task1449_disease_entity_extraction_bc5cdr_dataset', 'task144_subjqa_question_answering', 'task1451_drug_dose_extraction', 'task1452_location_entity_extraction_btc_corpus', 'task1453_person_entity_extraction_btc_corpus', 'task145_afs_argument_similarity_death_penalty', 'task146_afs_argument_similarity_gun_control', 'task1479_organization_entity_extraction_btc_corpus', 'task147_afs_argument_similarity_gay_marriage', 'task1480_gene_extraction_jnlpba_dataset', 'task1481_gene_extraction_bc2gm_dataset', 'task1482_gene_extraction_chemprot_dataset', 'task1483_chemical_extraction_chemprot_dataset', 'task1484_gene_extraction_linnaeus_dataset', 'task1485_organ_extraction_anem_dataset', 'task1486_cell_extraction_anem_dataset', 'task1487_organism_substance_extraction_anem_dataset', 'task1488_sarcasmdetection_headline_classification', 'task1489_sarcasmdetection_tweet_classification', 'task148_afs_argument_quality_gay_marriage', 'task1495_adverse_drug_event_classification', 'task1498_24hour_to_12hour_clock', 'task1499_dstc3_summarization', 'task149_afs_argument_quality_death_penalty', 'task1500_dstc3_classification', 'task1501_dstc3_answer_generation', 'task1502_hatexplain_classification', 'task1503_hatexplain_classification', 'task1504_hatexplain_answer_generation', 'task1505_root09_semantic_relation_classification', 'task1506_celebrity_minimal_dob_span', 'task1507_boolean_temporal_reasoning', 
'task1508_wordnet_antonyms', 'task1509_evalution_antonyms', 'task150_afs_argument_quality_gun_control', 'task1510_evalution_relation_extraction', 'task1517_limit_classfication', 'task1518_limit_answer_generation', 'task1519_qa_srl_question_generation', 'task151_tomqa_find_location_easy_clean', 'task1520_qa_srl_answer_generation', 'task152_tomqa_find_location_easy_noise', 'task153_tomqa_find_location_hard_clean', 'task1541_agnews_classification', 'task1542_every_ith_element_from_starting', 'task1548_wiqa_binary_classification', 'task1549_wiqa_answer_generation_missing_step', 'task154_tomqa_find_location_hard_noise', 'task1551_every_ith_element_from_kth_element', 'task1553_cnn_dailymail_summarization', 'task1559_blimp_binary_classification', 'task155_count_nouns_verbs', 'task1560_blimp_binary_classification', 'task1564_triviaqa_answer_generation', 'task1565_triviaqa_classification', 'task1566_propara_structured_text_generation', 'task1567_propara_question_generation', 'task1568_propara_classification', 'task156_codah_classification_adversarial', 'task1572_samsum_summary', 'task1573_samsum_classification', 'task157_count_vowels_and_consonants', 'task1580_eqasc-perturbed_question_generation', 'task1581_eqasc-perturbed_answer_generation', 'task1582_bless_hypernym_generation', 'task1583_bless_meronym_classification', 'task1584_evalution_meronym_classification', 'task1585_root09_hypernym_generation', 'task158_count_frequency_of_words', 'task1590_diplomacy_text_generation', 'task1592_yahoo_answers_topics_classfication', 'task1593_yahoo_answers_topics_classification', 'task1594_yahoo_answers_topics_question_generation', 'task1595_event2mind_text_generation_1', 'task1596_event2mind_text_generation_2', 'task1599_smcalflow_classification', 'task159_check_frequency_of_words_in_sentence_pair', 'task1600_smcalflow_sentence_generation', 'task1601_webquestions_answer_generation', 'task1602_webquestion_question_genreation', 'task1603_smcalflow_sentence_generation', 'task1604_ethos_text_classification', 'task1605_ethos_text_classification', 'task1606_ethos_text_classification', 'task1607_ethos_text_classification', 'task1608_xquad_en_answer_generation', 'task1609_xquad_en_question_generation', 'task160_replace_letter_in_a_sentence', 'task161_count_words_containing_letter', 'task162_count_words_starting_with_letter', 'task163_count_words_ending_with_letter', 'task1645_medical_question_pair_dataset_text_classification', 'task164_mcscript_question_answering_text', 'task1656_gooaq_answer_generation', 'task1657_gooaq_question_generation', 'task165_mcscript_question_answering_commonsense', 'task1660_super_glue_question_generation', 'task1661_super_glue_classification', 'task1665_trainglecopa_question_generation', 'task1669_md_gender_bias_text_modification', 'task166_clariq_sentence_generation', 'task1670_md_gender_bias_text_modification', 'task1678_mathqa_answer_selection', 'task167_strategyqa_question_generation', 'task168_strategyqa_question_decomposition', 'task169_strategyqa_sentence_generation', 'task1703_ljspeech_textmodification', 'task1704_ljspeech_textmodification', 'task1705_ljspeech_classification', 'task1706_ljspeech_classification', 'task170_hotpotqa_answer_generation', 'task1711_poki_text_generation', 'task1712_poki_classification', 'task1713_convai3_sentence_generation', 'task1714_convai3_sentence_generation', 'task1720_civil_comments_toxicity_classification', 'task1721_civil_comments_obscenity_classification', 'task1722_civil_comments_threat_classification', 
'task1723_civil_comments_sexuallyexplicit_classification', 'task1724_civil_comments_insult_classification', 'task1725_civil_comments_severtoxicity_classification', 'task1726_mathqa_correct_answer_generation', 'task1727_wiqa_what_is_the_effect', 'task1729_personachat_generate_next', 'task1730_personachat_choose_next', 'task1731_quartz_question_answering', 'task176_break_decompose_questions', 'task177_para-nmt_paraphrasing', 'task178_quartz_question_answering', 'task179_participant_extraction', 'task180_intervention_extraction', 'task181_outcome_extraction', 'task182_duorc_question_generation', 'task183_rhyme_generation', 'task184_break_generate_question', 'task191_hotpotqa_question_generation', 'task192_hotpotqa_sentence_generation', 'task193_duorc_question_generation', 'task194_duorc_answer_generation', 'task195_sentiment140_classification', 'task196_sentiment140_answer_generation', 'task205_remove_even_elements', 'task206_collatz_conjecture', 'task207_max_element_lists', 'task208_combinations_of_list', 'task209_stancedetection_classification', 'task210_logic2text_structured_text_generation', 'task211_logic2text_classification', 'task212_logic2text_classification', 'task223_quartz_explanation_generation', 'task227_clariq_classification', 'task228_arc_answer_generation_easy', 'task229_arc_answer_generation_hard', 'task243_count_elements_in_set_intersection', 'task244_count_elements_in_set_union', 'task245_check_presence_in_set_intersection', 'task246_dream_question_generation', 'task247_dream_answer_generation', 'task248_dream_classification', 'task267_concatenate_and_reverse_all_elements_from_index_i_to_j', 'task268_casehold_legal_answer_generation', 'task269_csrg_counterfactual_story_generation', 'task270_csrg_counterfactual_context_generation', 'task274_overruling_legal_classification', 'task275_enhanced_wsc_paraphrase_generation', 'task276_enhanced_wsc_classification', 'task277_stereoset_sentence_generation_stereotype', 'task278_stereoset_sentence_generation_antistereotype', 'task279_stereoset_classification_stereotype', 'task280_stereoset_classification_stereotype_type', 'task283_dream_incorrect_answer_generation', 'task284_imdb_classification', 'task285_imdb_answer_generation', 'task286_olid_offense_judgment', 'task287_casehold_legal_incorrect_answer_generation', 'task291_semeval_2020_task4_commonsense_validation', 'task292_storycommonsense_character_text_generation', 'task293_storycommonsense_emotion_text_generation', 'task294_storycommonsense_motiv_text_generation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task296_storycloze_correct_end_classification', 'task297_storycloze_incorrect_end_classification', 'task298_storycloze_correct_end_classification', 'task299_storycloze_sentence_generation', 'task300_storycloze_order_generation', 'task301_record_question_generation', 'task302_record_classification', 'task303_record_incorrect_answer_generation', 'task305_jeopardy_answer_generation_normal', 'task306_jeopardy_answer_generation_double', 'task307_jeopardy_answer_generation_final', 'task308_jeopardy_answer_generation_all', 'task309_race_answer_generation', 'task310_race_classification', 'task311_race_question_generation', 'task316_crows-pairs_classification_stereotype', 'task317_crows-pairs_classification_stereotype_type', 'task318_stereoset_classification_gender', 'task319_stereoset_classification_profession', 'task320_stereoset_classification_race', 'task321_stereoset_classification_religion', 'task322_jigsaw_classification_threat', 
'task323_jigsaw_classification_sexually_explicit', 'task324_jigsaw_classification_disagree', 'task325_jigsaw_classification_identity_attack', 'task326_jigsaw_classification_obscene', 'task327_jigsaw_classification_toxic', 'task328_jigsaw_classification_insult', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 'task339_record_answer_generation', 'task340_winomt_classification_gender_pro', 'task341_winomt_classification_gender_anti', 'task342_winomt_classification_profession_pro', 'task343_winomt_classification_profession_anti', 'task344_hybridqa_answer_generation', 'task345_hybridqa_answer_generation', 'task346_hybridqa_classification', 'task347_hybridqa_incorrect_answer_generation', 'task350_winomt_classification_gender_identifiability_pro', 'task351_winomt_classification_gender_identifiability_anti', 'task353_casino_classification_negotiation_elicit_pref', 'task354_casino_classification_negotiation_no_need', 'task355_casino_classification_negotiation_other_need', 'task356_casino_classification_negotiation_self_need', 'task357_casino_classification_negotiation_small_talk', 'task358_casino_classification_negotiation_uv_part', 'task359_casino_classification_negotiation_vouch_fair', 'task363_sst2_polarity_classification', 'task364_regard_social_impact_classification', 'task365_synthetic_remove_vowels', 'task366_synthetic_return_primes', 'task367_synthetic_remove_floats', 'task368_synthetic_even_or_odd_calculation', 'task369_synthetic_remove_odds', 'task370_synthetic_remove_divisible_by_3', 'task371_synthetic_product_of_list', 'task372_synthetic_palindrome_numbers', 'task373_synthetic_round_tens_place', 'task374_synthetic_pos_or_neg_calculation', 'task375_classify_type_of_sentence_in_debate', 'task376_reverse_order_of_words', 'task377_remove_words_of_given_length', 'task378_reverse_words_of_given_length', 'task379_agnews_topic_classification', 'task380_boolq_yes_no_question', 'task381_boolq_question_generation', 'task382_hybridqa_answer_generation', 'task383_matres_classification', 'task384_socialiqa_question_classification', 'task385_socialiqa_incorrect_answer_generation', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task388_torque_token_classification', 'task389_torque_generate_temporal_question', 'task390_torque_text_span_selection', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task400_paws_paraphrase_classification', 'task403_creak_commonsense_inference', 'task405_narrativeqa_question_generation', 'task413_mickey_en_sentence_perturbation_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task453_swag_answer_generation', 'task454_swag_incorrect_answer_generation', 'task455_swag_context_generation', 'task456_matres_intention_classification', 'task457_matres_conditional_classification', 'task458_matres_negation_classification', 'task459_matres_static_classification', 'task460_qasper_answer_generation', 'task461_qasper_question_generation', 'task462_qasper_classification', 'task469_mrqa_answer_generation', 'task470_mrqa_question_generation', 'task471_haspart_answer_generation', 'task472_haspart_classification', 'task475_yelp_polarity_classification', 'task476_cls_english_books_classification', 'task477_cls_english_dvd_classification', 
'task478_cls_english_music_classification', 'task488_extract_all_alphabetical_elements_from_list_in_order', 'task489_mwsc_question_generation', 'task490_mwsc_options_generation', 'task491_mwsc_answer_generation', 'task492_mwsc_incorrect_answer_generation', 'task493_review_polarity_classification', 'task494_review_polarity_answer_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task497_extract_all_numbers_from_list_in_order', 'task499_extract_and_add_all_numbers_from_list', 'task504_count_all_alphabetical_elements_in_list', 'task505_count_all_numerical_elements_in_list', 'task506_position_of_all_alphabetical_elements_in_list', 'task507_position_of_all_numerical_elements_in_list', 'task509_collate_of_all_alphabetical_and_numerical_elements_in_list_separately', 'task512_twitter_emotion_classification', 'task513_argument_stance_classification', 'task514_argument_consequence_classification', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task517_emo_classify_emotion_of_dialogue', 'task518_emo_different_dialogue_emotions', 'task521_trivia_question_classification', 'task522_news_editorial_summary', 'task523_find_if_numbers_or_alphabets_are_more_in_list', 'task547_alt_translation_entk_en', 'task550_discofuse_sentence_generation', 'task560_alt_translation_en_entk', 'task563_discofuse_answer_generation', 'task564_discofuse_classification', 'task565_circa_answer_generation', 'task566_circa_classification', 'task567_circa_text_generation', 'task568_circa_question_generation', 'task573_air_dialogue_classification', 'task574_air_dialogue_sentence_generation', 'task575_air_dialogue_classification', 'task576_curiosity_dialogs_answer_generation', 'task577_curiosity_dialogs_classification', 'task578_curiosity_dialogs_answer_generation', 'task579_socialiqa_classification', 'task580_socialiqa_answer_generation', 'task581_socialiqa_question_generation', 'task582_naturalquestion_answer_generation', 'task583_udeps_eng_coarse_pos_tagging', 'task584_udeps_eng_fine_pos_tagging', 'task585_preposition_classification', 'task586_amazonfood_polarity_classification', 'task587_amazonfood_polarity_correction_classification', 'task588_amazonfood_rating_classification', 'task589_amazonfood_summary_text_generation', 'task590_amazonfood_summary_correction_classification', 'task591_sciq_answer_generation', 'task592_sciq_incorrect_answer_generation', 'task593_sciq_explanation_generation', 'task594_sciq_question_generation', 'task595_mocha_answer_generation', 'task596_mocha_question_generation', 'task597_cuad_answer_generation', 'task598_cuad_answer_generation', 'task599_cuad_question_generation', 'task600_find_the_longest_common_substring_in_two_strings', 'task605_find_the_longest_common_subsequence_in_two_lists', 'task606_sum_of_all_numbers_in_list_between_positions_i_and_j', 'task607_sbic_intentional_offense_binary_classification', 'task608_sbic_sexual_offense_binary_classification', 'task609_sbic_potentially_offense_binary_classification', 'task610_conllpp_ner', 'task611_mutual_multi_turn_dialogue', 'task615_moviesqa_answer_generation', 'task616_cola_classification', 'task617_amazonreview_category_text_generation', 'task618_amazonreview_summary_text_generation', 'task622_replace_alphabets_in_a_list_by_their_position_in_english_alphabet', 'task625_xlwic_true_or_false_answer_generation', 'task626_xlwic_sentence_based_on_given_word_sentence_generation', 'task627_xlwic_word_with_same_meaning_sentence_generation', 
'task628_xlwic_word_with_different_meaning_sentence_generation', 'task629_dbpedia_14_classification', 'task630_dbpedia_14_classification', 'task631_dbpedia_14_incorrect_answer_generation', 'task632_dbpedia_14_classification', 'task633_dbpedia_14_answer_generation', 'task636_extract_and_sort_unique_alphabets_in_a_list', 'task637_extract_and_sort_unique_digits_in_a_list', 'task638_multi_woz_classification', 'task639_multi_woz_user_utterance_generation', 'task649_race_blank_question_generation', 'task664_mmmlu_answer_generation_abstract_algebra', 'task665_mmmlu_answer_generation_anatomy', 'task666_mmmlu_answer_generation_astronomy', 'task667_mmmlu_answer_generation_business_ethics', 'task668_extreme_abstract_summarization', 'task672_amazon_and_yelp_summarization_dataset_summarization', 'task672_nummersense', 'task673_google_wellformed_query_classification', 'task674_google_wellformed_query_sentence_generation', 'task675_google_wellformed_query_sentence_generation', 'task679_hope_edi_english_text_classification', 'task681_hope_edi_malayalam_text_classification', 'task682_online_privacy_policy_text_classification', 'task683_online_privacy_policy_text_purpose_answer_generation', 'task684_online_privacy_policy_text_information_type_generation', 'task685_mmmlu_answer_generation_clinical_knowledge', 'task686_mmmlu_answer_generation_college_biology', 'task687_mmmlu_answer_generation_college_chemistry', 'task688_mmmlu_answer_generation_college_computer_science', 'task689_mmmlu_answer_generation_college_mathematics', 'task690_mmmlu_answer_generation_college_medicine', 'task691_mmmlu_answer_generation_college_physics', 'task692_mmmlu_answer_generation_computer_security', 'task693_mmmlu_answer_generation_conceptual_physics', 'task694_mmmlu_answer_generation_econometrics', 'task695_mmmlu_answer_generation_electrical_engineering', 'task696_mmmlu_answer_generation_elementary_mathematics', 'task697_mmmlu_answer_generation_formal_logic', 'task698_mmmlu_answer_generation_global_facts', 'task699_mmmlu_answer_generation_high_school_biology', 'task700_mmmlu_answer_generation_high_school_chemistry', 'task701_mmmlu_answer_generation_high_school_computer_science', 'task702_mmmlu_answer_generation_high_school_european_history', 'task703_mmmlu_answer_generation_high_school_geography', 'task704_mmmlu_answer_generation_high_school_government_and_politics', 'task705_mmmlu_answer_generation_high_school_macroeconomics', 'task706_mmmlu_answer_generation_high_school_mathematics', 'task707_mmmlu_answer_generation_high_school_microeconomics', 'task708_mmmlu_answer_generation_high_school_physics', 'task709_mmmlu_answer_generation_high_school_psychology', 'task710_mmmlu_answer_generation_high_school_statistics', 'task711_mmmlu_answer_generation_high_school_us_history', 'task712_mmmlu_answer_generation_high_school_world_history', 'task713_mmmlu_answer_generation_human_aging', 'task714_mmmlu_answer_generation_human_sexuality', 'task715_mmmlu_answer_generation_international_law', 'task716_mmmlu_answer_generation_jurisprudence', 'task717_mmmlu_answer_generation_logical_fallacies', 'task718_mmmlu_answer_generation_machine_learning', 'task719_mmmlu_answer_generation_management', 'task720_mmmlu_answer_generation_marketing', 'task721_mmmlu_answer_generation_medical_genetics', 'task722_mmmlu_answer_generation_random_topic', 'task723_mmmlu_answer_generation_moral_disputes', 'task724_mmmlu_answer_generation_moral_scenarios', 'task725_mmmlu_answer_generation_nutrition', 'task726_mmmlu_answer_generation_philosophy', 
'task727_mmmlu_answer_generation_prehistory', 'task728_mmmlu_answer_generation_professional_accounting', 'task729_mmmlu_answer_generation_professional_law', 'task730_mmmlu_answer_generation_professional_medicine', 'task731_mmmlu_answer_generation_professional_psychology', 'task732_mmmlu_answer_generation_public_relations', 'task733_mmmlu_answer_generation_security_studies', 'task734_mmmlu_answer_generation_sociology', 'task735_mmmlu_answer_generation_us_foreign_policy', 'task736_mmmlu_answer_generation_virology', 'task737_mmmlu_answer_generation_world_religions', 'task739_lhoestq_question_generation', 'task740_lhoestq_answer_generation_quantity', 'task741_lhoestq_answer_generation_place', 'task742_lhoestq_answer_generation_frequency', 'task745_ai2_arithmetic_questions_arithmetic', 'task746_yelp_restaurant_review_classification', 'task750_aqua_multiple_choice_answering', 'task751_svamp_subtraction_question_answering', 'task752_svamp_multiplication_question_answering', 'task753_svamp_addition_question_answering', 'task754_svamp_common-division_question_answering', 'task755_find_longest_substring_and_replace_its_sorted_lowercase_version_in_both_lists', 'task756_find_longert_substring_and_return_all_unique_alphabets_in_it', 'task761_app_review_classification', 'task766_craigslist_bargains_classification', 'task767_craigslist_bargains_classification', 'task770_pawsx_english_text_modification', 'task819_pec_sentiment_classification', 'task820_protoqa_answer_generation', 'task821_protoqa_question_generation', 'task823_peixian-rtgender_sentiment_analysis', 'task833_poem_sentiment_classification', 'task834_mathdataset_classification', 'task835_mathdataset_answer_generation', 'task843_financial_phrasebank_classification', 'task844_financial_phrasebank_classification', 'task845_pubmedqa_question_generation', 'task846_pubmedqa_classification', 'task847_pubmedqa_question_generation', 'task848_pubmedqa_classification', 'task849_pubmedqa_answer_generation', 'task850_synthetic_longest_palindrome', 'task851_synthetic_multiply_evens', 'task852_synthetic_multiply_odds', 'task853_hippocorpus_long_text_generation', 'task854_hippocorpus_classification', 'task855_conv_ai_2_classification', 'task856_conv_ai_2_classification', 'task857_inquisitive_question_generation', 'task858_inquisitive_span_detection', 'task859_prost_question_generation', 'task860_prost_mcq_generation', 'task861_asdiv_addsub_question_answering', 'task861_prost_mcq_answers_generation', 'task862_asdiv_multidiv_question_answering', 'task863_asdiv_multiop_question_answering', 'task864_asdiv_singleop_question_answering', 'task865_mawps_addsub_question_answering', 'task866_mawps_multidiv_question_answering', 'task867_mawps_multiop_question_answering', 'task868_cfq_mcd1_explanation_to_sql', 'task868_mawps_singleop_question_answering', 'task869_cfq_mcd1_sql_to_explanation', 'task870_msmarco_answer_generation', 'task871_msmarco_question_generation', 'task874_opus_xhosanavy_sr', 'task875_emotion_classification', 'task886_quail_question_generation', 'task887_quail_answer_generation', 'task888_reviews_classification', 'task889_goemotions_classification', 'task897_freebase_qa_topic_question_generation', 'task898_freebase_qa_answer_generation', 'task899_freebase_qa_topic_generation', 'task900_freebase_qa_category_classification', 'task901_freebase_qa_category_question_generation', 'task902_deceptive_opinion_spam_classification', 'task903_deceptive_opinion_spam_classification', 'task904_hate_speech_offensive_classification', 
'task905_hate_speech_offensive_classification', 'task906_dialogre_identify_names', 'task907_dialogre_identify_relationships', 'task908_dialogre_identify_familial_relationships', 'task909_dialogre_prevalent_speakers', 'task917_coqa_question_generation', 'task918_coqa_answer_generation', 'task919_coqa_incorrect_answer_generation', 'task921_code_x_glue_information_retreival', 'task922_event2mind_word_generation', 'task923_event2mind_classifier', 'task924_event2mind_word_generation', 'task925_coached_conv_pref_classifier', 'task926_coached_conv_pref_word_generation', 'task927_yelp_negative_to_positive_style_transfer', 'task928_yelp_positive_to_negative_style_transfer', 'task929_products_reviews_classification', 'task933_wiki_auto_style_transfer', 'task934_turk_simplification', 'task955_wiki_auto_style_transfer', 'task956_leetcode_420_strong_password_check', 'task963_librispeech_asr_next_word_prediction', 'task964_librispeech_asr_text_auto_completion', 'task965_librispeech_asr_missing_word_prediction', 'task966_ruletaker_fact_checking_based_on_given_context', 'task967_ruletaker_incorrect_fact_generation_based_on_given_paragraph']
```
Validation Tasks:
```
['task1333_check_validity_date_ddmmyyyy', 'task1403_check_validity_date_mmddyyyy', 'task291_semeval_2020_task4_commonsense_validation']
```
Test Tasks:
```
['task020_mctaco_span_based_question', 'task033_winogrande_answer_generation', 'task034_winogrande_question_modification_object', 'task035_winogrande_question_modification_person', 'task036_qasc_topic_word_to_generate_related_fact', 'task039_qasc_find_overlapping_words', 'task050_multirc_answerability', 'task102_commongen_sentence_generation', 'task104_semeval_2019_task10_closed_vocabulary_mathematical_answer_generation', 'task1152_bard_analogical_reasoning_causation', 'task1153_bard_analogical_reasoning_affordance', 'task1154_bard_analogical_reasoning_travel', 'task1155_bard_analogical_reasoning_trash_or_treasure', 'task1156_bard_analogical_reasoning_tools', 'task1157_bard_analogical_reasoning_rooms_for_containers', 'task1158_bard_analogical_reasoning_manipulating_items', 'task1159_bard_analogical_reasoning_containers', 'task1161_coda19_title_generation', 'task118_semeval_2019_task10_open_vocabulary_mathematical_answer_generation', 'task1195_disflqa_disfluent_to_fluent_conversion', 'task119_semeval_2019_task10_geometric_mathematical_answer_generation', 'task121_zest_text_modification', 'task1336_peixian_equity_evaluation_corpus_gender_classifier', 'task1338_peixian_equity_evaluation_corpus_sentiment_classifier', 'task1339_peixian_equity_evaluation_corpus_text_completion', 'task133_winowhy_reason_plausibility_detection', 'task1342_amazon_us_reviews_title', 'task1344_glue_entailment_classification', 'task1345_glue_qqp_question_paraprashing', 'task1356_xlsum_title_generation', 'task1358_xlsum_title_generation', 'task1385_anli_r1_entailment', 'task1386_anli_r2_entailment', 'task1387_anli_r3_entailment', 'task1388_cb_entailment', 'task1390_wscfixed_coreference', 'task1391_winogrande_easy_answer_generation', 'task1393_superglue_copa_text_completion', 'task1394_meta_woz_task_classification', 'task1407_dart_question_generation', 'task1409_dart_text_generation', 'task1429_evalution_semantic_relation_classification', 'task1439_doqa_cooking_isanswerable', 'task1442_doqa_movies_isanswerable', 'task1509_evalution_antonyms', 'task1510_evalution_relation_extraction', 'task1516_imppres_naturallanguageinference', 'task1529_scitail1.1_classification', 'task1531_daily_dialog_type_classification', 'task1533_daily_dialog_formal_classification', 'task1534_daily_dialog_question_classification', 'task1540_parsed_pdfs_summarization', 'task1554_scitail_classification', 'task1557_jfleg_answer_generation', 'task1562_zest_text_modification', 'task1584_evalution_meronym_classification', 'task1586_scifact_title_generation', 'task1598_nyc_long_text_generation', 'task1612_sick_label_classification', 'task1615_sick_tclassify_b_relation_a', 'task1622_disfl_qa_text_modication', 'task1624_disfl_qa_question_yesno_classification', 'task1631_openpi_answer_generation', 'task1640_aqa1.0_answerable_unanswerable_question_classification', 'task1659_title_generation', 'task1664_winobias_text_generation', 'task1728_web_nlg_data_to_text', 'task190_snli_classification', 'task199_mnli_classification', 'task200_mnli_entailment_classification', 'task201_mnli_neutral_classification', 'task202_mnli_contradiction_classification', 'task219_rocstories_title_answer_generation', 'task220_rocstories_title_classification', 'task226_english_language_answer_relevance_classification', 'task232_iirc_link_number_classification', 'task233_iirc_link_exists_classification', 'task242_tweetqa_classification', 'task249_enhanced_wsc_pronoun_disambiguation', 'task281_points_of_correspondence', 'task288_gigaword_summarization', 
'task290_tellmewhy_question_answerability', 'task291_semeval_2020_task4_commonsense_validation', 'task295_semeval_2020_task4_commonsense_reasoning', 'task304_numeric_fused_head_resolution', 'task329_gap_classification', 'task330_gap_answer_generation', 'task333_hateeval_classification_hate_en', 'task335_hateeval_classification_aggresive_en', 'task337_hateeval_classification_individual_en', 'task349_squad2.0_answerable_unanswerable_question_classification', 'task362_spolin_yesand_prompt_response_sub_classification', 'task386_semeval_2018_task3_irony_detection', 'task387_semeval_2018_task3_irony_classification', 'task391_causal_relationship', 'task392_inverse_causal_relationship', 'task393_plausible_result_generation', 'task397_semeval_2018_task1_tweet_anger_detection', 'task398_semeval_2018_task1_tweet_joy_detection', 'task399_semeval_2018_task1_tweet_sadness_detection', 'task401_numeric_fused_head_reference', 'task402_grailqa_paraphrase_generation', 'task418_persent_title_generation', 'task428_senteval_inversion', 'task429_senteval_tense', 'task430_senteval_subject_count', 'task431_senteval_object_count', 'task442_com_qa_paraphrase_question_generation', 'task495_semeval_headline_classification', 'task496_semeval_answer_generation', 'task500_scruples_anecdotes_title_generation', 'task510_reddit_tifu_title_summarization', 'task515_senteval_odd_word_out', 'task516_senteval_conjoints_inversion', 'task520_aquamuse_answer_given_in_passage', 'task569_recipe_nlg_text_generation', 'task602_wikitext-103_answer_generation', 'task613_politifact_text_generation', 'task614_glucose_cause_event_detection', 'task619_ohsumed_abstract_title_generation', 'task620_ohsumed_medical_subject_headings_answer_generation', 'task623_ohsumed_yes_no_answer_generation', 'task640_esnli_classification', 'task641_esnli_classification', 'task642_esnli_classification', 'task645_summarization', 'task648_answer_generation', 'task670_ambigqa_question_generation', 'task671_ambigqa_text_generation', 'task677_ollie_sentence_answer_generation', 'task738_perspectrum_classification', 'task743_eurlex_summarization', 'task760_msr_sqa_long_text_generation', 'task769_qed_summarization', 'task827_copa_commonsense_reasoning', 'task828_copa_commonsense_cause_effect', 'task879_schema_guided_dstc8_classification', 'task880_schema_guided_dstc8_classification', 'task890_gcwd_classification', 'task891_gap_coreference_resolution', 'task892_gap_reverse_coreference_resolution', 'task893_gap_fill_the_blank_coreference_resolution', 'task909_dialogre_prevalent_speakers', 'task935_defeasible_nli_atomic_classification', 'task936_defeasible_nli_snli_classification', 'task937_defeasible_nli_social_classification', 'task957_e2e_nlg_text_generation_generate', 'task970_sherliic_causal_relationship']
``` | Muennighoff/natural-instructions | [
"task_categories:other",
"annotations_creators:crowdsourced",
"annotations_creators:expert-generated",
"multilinguality:monolingual",
"size_categories:100M<n<1B",
"language:en",
"region:us"
]
| 2022-12-17T21:45:01+00:00 | {"annotations_creators": ["crowdsourced", "expert-generated"], "language": ["en"], "multilinguality": ["monolingual"], "size_categories": ["100M<n<1B"], "task_categories": ["other"]} | 2022-12-23T20:08:44+00:00 |
c73f6b34d3dc906e32cb8aff66bf7ccc1fbd3a33 | slotonline22/slotonline | [
"license:bigscience-openrail-m",
"region:us"
]
| 2022-12-17T22:14:44+00:00 | {"license": "bigscience-openrail-m"} | 2022-12-17T22:14:49+00:00 |
|
af494fe1b62762178d37c0b71b4a7160f0534f1a |
# Dataset Card for "squad_v2_dutch"
## Dataset Description
- **Homepage:** [https://rajpurkar.github.io/SQuAD-explorer/](https://rajpurkar.github.io/SQuAD-explorer/)
### Dataset Summary
The squad_v2_dutch dataset is a machine-translated version of the SQuAD v2 dataset from English to Dutch.
The SQuAD v2 dataset combines the 100,000 questions in SQuAD1.1 with over 50,000 unanswerable questions written adversarially by crowdworkers
to look similar to answerable ones. To do well on SQuAD2.0, systems must not only answer questions when possible, but
also determine when no answer is supported by the paragraph and abstain from answering.
## Challenges and Solutions
One of the main challenges in translating the SQuAD v2 dataset to Dutch was accurately translating the answers, which are often short phrases or single words.
Translating the answers individually would result in obvious mistakes, for example:
* Destiny's Child -> Het kind van Destiny
* Dangerously in Love -> Gevaarlijk in de liefde
* Imagine -> Stel je voor
* Men in Black -> Mannen in zwart
* Hottest Female Singer of All Time -> De heetste vrouwelijke zanger aller tijden
The correct translation of these phrases often depends on the context in which they are used.
To address this, the title, question, answers, and context were concatenated as a single sequence, separated by the newline character.
When the translated version had the correct number of newlines and did not contain any apparent mixups of the answers with the question and title, it was used.
Otherwise, the one-by-one context-less translation was used as a fallback.
Most examples (~95%) were translated with the context-rich method:
* train split: context: 123898, no context: 6406
* validation split: context: 10196, no context: 1644
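In code, the approach looks roughly like this (a minimal sketch; `translate_en_nl` stands in for whatever translation model was actually used and is hypothetical):
```
def translate_example(example, translate_en_nl):
    # Concatenate title, question, answers, and context so the answers are
    # translated together with their surrounding context.
    fields = [example["title"], example["question"], *example["answers"]["text"], example["context"]]
    translated = translate_en_nl("\n".join(fields)).split("\n")
    if len(translated) == len(fields):
        return translated  # context-rich translation kept the field boundaries intact
    # Fallback: translate each field on its own, without context.
    return [translate_en_nl(field) for field in fields]
```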
### Data Fields
The data fields are the same among all splits.
#### squad_v2
- `id`: a `string` feature.
- `title`: a `string` feature.
- `title_en`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `answers`: a dictionary feature containing:
- `text`: a list of `string` feature.
- `text_en`: a list of `string` feature.
- `answer_start_en`: an `int32` feature.
### Citation Information
```
@article{2016arXiv160605250R,
author = {{Rajpurkar}, Pranav and {Zhang}, Jian and {Lopyrev},
Konstantin and {Liang}, Percy},
title = "{SQuAD: 100,000+ Questions for Machine Comprehension of Text}",
journal = {arXiv e-prints},
year = 2016,
eid = {arXiv:1606.05250},
pages = {arXiv:1606.05250},
archivePrefix = {arXiv},
eprint = {1606.05250},
}
```
### Contributions
Thanks to [@lewtun](https://github.com/lewtun), [@albertvillanova](https://github.com/albertvillanova), [@patrickvonplaten](https://github.com/patrickvonplaten),
[@thomwolf](https://github.com/thomwolf) for adding the https://huggingface.co/datasets/squad_v2 dataset.
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/).
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
| yhavinga/squad_v2_dutch | [
"task_categories:question-answering",
"task_ids:open-domain-qa",
"task_ids:extractive-qa",
"annotations_creators:crowdsourced",
"language_creators:crowdsourced",
"multilinguality:monolingual",
"size_categories:100K<n<1M",
"source_datasets:original",
"language:nl",
"license:cc-by-sa-4.0",
"arxiv:1606.05250",
"region:us"
]
| 2022-12-17T22:50:45+00:00 | {"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["nl"], "license": ["cc-by-sa-4.0"], "multilinguality": ["monolingual"], "size_categories": ["100K<n<1M"], "source_datasets": ["original"], "task_categories": ["question-answering"], "task_ids": ["open-domain-qa", "extractive-qa"], "paperswithcode_id": "squad_v2_dutch", "pretty_name": "SQuAD2.0 Dutch", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "title_en", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "sequence": [{"name": "text", "dtype": "string"}, {"name": "text_en", "dtype": "string"}, {"name": "answer_start_en", "dtype": "int32"}]}]}} | 2023-01-21T13:53:27+00:00 |
c47ace8e29d2712c22ddb223653b377f4175b2da | # Quick!Draw! 1pct Sample (per-row bin format)
This is a 1% sample of the entire 50M-row [QuickDraw! dataset](https://github.com/googlecreativelab/quickdraw-dataset). The row for each drawing contains a byte-packed representation of the drawing and its metadata, which you can unpack using the following snippet:
```
from struct import unpack

def unpack_drawing(file_handle):
    # Fixed-size header: drawing id, country, recognized flag, timestamp, stroke count
    key_id, = unpack('Q', file_handle.read(8))
    country_code, = unpack('2s', file_handle.read(2))
    recognized, = unpack('b', file_handle.read(1))
    timestamp, = unpack('I', file_handle.read(4))
    n_strokes, = unpack('H', file_handle.read(2))
    # Each stroke is a point count followed by that many x bytes and y bytes
    image = []
    for i in range(n_strokes):
        n_points, = unpack('H', file_handle.read(2))
        fmt = str(n_points) + 'B'
        x = unpack(fmt, file_handle.read(n_points))
        y = unpack(fmt, file_handle.read(n_points))
        image.append((x, y))
    return {
        'key_id': key_id,
        'country_code': country_code,
        'recognized': recognized,
        'timestamp': timestamp,
        'image': image,
    }
```
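For example, here's a minimal sketch of unpacking the first row (it assumes the `packed_drawing` binary feature listed in the dataset info below):
```
import io
from datasets import load_dataset

ds = load_dataset("kmewhort/quickdraw-bins-1pct-sample", split="train")
drawing = unpack_drawing(io.BytesIO(ds[0]["packed_drawing"]))
print(drawing["key_id"], len(drawing["image"]))  # drawing id and stroke count
```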
The `image` in the above is still in line-vector format. To render this as a raster image (I recommend you do this on-the-fly in a pre-processor):
```
import io

import cv2
import numpy as np
from PIL import Image

# packed bin -> RGB PIL image (224x224, black strokes on white)
def binToPIL(packed_drawing):
    padding = 8
    radius = 7
    scale = (224.0 - (2 * padding)) / 256  # stroke coordinates are 0..255
    unpacked = unpack_drawing(io.BytesIO(packed_drawing))
    image = np.full((224, 224), 255, np.uint8)
    for stroke in unpacked['image']:
        prevX = round(stroke[0][0] * scale)
        prevY = round(stroke[1][0] * scale)
        for i in range(1, len(stroke[0])):
            x = round(stroke[0][i] * scale)
            y = round(stroke[1][i] * scale)
            cv2.line(image, (padding + prevX, padding + prevY), (padding + x, padding + y), 0, radius, -1)
            prevX = x
            prevY = y
    return Image.fromarray(image).convert("RGB")
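# Example usage (continuing the loading sketch above):
# pil_img = binToPIL(ds[0]["packed_drawing"])
# pil_img.save("drawing.png")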
``` | kmewhort/quickdraw-bins-1pct-sample | [
"region:us"
]
| 2022-12-18T02:37:21+00:00 | {"dataset_info": {"features": [{"name": "label", "dtype": {"class_label": {"names": {"0": "The Eiffel Tower", "1": "The Great Wall of China", "2": "The Mona Lisa", "3": "aircraft carrier", "4": "airplane", "5": "alarm clock", "6": "ambulance", "7": "angel", "8": "animal migration", "9": "ant", "10": "anvil", "11": "apple", "12": "arm", "13": "asparagus", "14": "axe", "15": "backpack", "16": "banana", "17": "bandage", "18": "barn", "19": "baseball", "20": "baseball bat", "21": "basket", "22": "basketball", "23": "bat", "24": "bathtub", "25": "beach", "26": "bear", "27": "beard", "28": "bed", "29": "bee", "30": "belt", "31": "bench", "32": "bicycle", "33": "binoculars", "34": "bird", "35": "birthday cake", "36": "blackberry", "37": "blueberry", "38": "book", "39": "boomerang", "40": "bottlecap", "41": "bowtie", "42": "bracelet", "43": "brain", "44": "bread", "45": "bridge", "46": "broccoli", "47": "broom", "48": "bucket", "49": "bulldozer", "50": "bus", "51": "bush", "52": "butterfly", "53": "cactus", "54": "cake", "55": "calculator", "56": "calendar", "57": "camel", "58": "camera", "59": "camouflage", "60": "campfire", "61": "candle", "62": "cannon", "63": "canoe", "64": "car", "65": "carrot", "66": "castle", "67": "cat", "68": "ceiling fan", "69": "cell phone", "70": "cello", "71": "chair", "72": "chandelier", "73": "church", "74": "circle", "75": "clarinet", "76": "clock", "77": "cloud", "78": "coffee cup", "79": "compass", "80": "computer", "81": "cookie", "82": "cooler", "83": "couch", "84": "cow", "85": "crab", "86": "crayon", "87": "crocodile", "88": "crown", "89": "cruise ship", "90": "cup", "91": "diamond", "92": "dishwasher", "93": "diving board", "94": "dog", "95": "dolphin", "96": "donut", "97": "door", "98": "dragon", "99": "dresser", "100": "drill", "101": "drums", "102": "duck", "103": "dumbbell", "104": "ear", "105": "elbow", "106": "elephant", "107": "envelope", "108": "eraser", "109": "eye", "110": "eyeglasses", "111": "face", "112": "fan", "113": "feather", "114": "fence", "115": "finger", "116": "fire hydrant", "117": "fireplace", "118": "firetruck", "119": "fish", "120": "flamingo", "121": "flashlight", "122": "flip flops", "123": "floor lamp", "124": "flower", "125": "flying saucer", "126": "foot", "127": "fork", "128": "frog", "129": "frying pan", "130": "garden", "131": "garden hose", "132": "giraffe", "133": "goatee", "134": "golf club", "135": "grapes", "136": "grass", "137": "guitar", "138": "hamburger", "139": "hammer", "140": "hand", "141": "harp", "142": "hat", "143": "headphones", "144": "hedgehog", "145": "helicopter", "146": "helmet", "147": "hexagon", "148": "hockey puck", "149": "hockey stick", "150": "horse", "151": "hospital", "152": "hot air balloon", "153": "hot dog", "154": "hot tub", "155": "hourglass", "156": "house", "157": "house plant", "158": "hurricane", "159": "ice cream", "160": "jacket", "161": "jail", "162": "kangaroo", "163": "key", "164": "keyboard", "165": "knee", "166": "knife", "167": "ladder", "168": "lantern", "169": "laptop", "170": "leaf", "171": "leg", "172": "light bulb", "173": "lighter", "174": "lighthouse", "175": "lightning", "176": "line", "177": "lion", "178": "lipstick", "179": "lobster", "180": "lollipop", "181": "mailbox", "182": "map", "183": "marker", "184": "matches", "185": "megaphone", "186": "mermaid", "187": "microphone", "188": "microwave", "189": "monkey", "190": "moon", "191": "mosquito", "192": "motorbike", "193": "mountain", "194": "mouse", "195": "moustache", "196": "mouth", 
"197": "mug", "198": "mushroom", "199": "nail", "200": "necklace", "201": "nose", "202": "ocean", "203": "octagon", "204": "octopus", "205": "onion", "206": "oven", "207": "owl", "208": "paint can", "209": "paintbrush", "210": "palm tree", "211": "panda", "212": "pants", "213": "paper clip", "214": "parachute", "215": "parrot", "216": "passport", "217": "peanut", "218": "pear", "219": "peas", "220": "pencil", "221": "penguin", "222": "piano", "223": "pickup truck", "224": "picture frame", "225": "pig", "226": "pillow", "227": "pineapple", "228": "pizza", "229": "pliers", "230": "police car", "231": "pond", "232": "pool", "233": "popsicle", "234": "postcard", "235": "potato", "236": "power outlet", "237": "purse", "238": "rabbit", "239": "raccoon", "240": "radio", "241": "rain", "242": "rainbow", "243": "rake", "244": "remote control", "245": "rhinoceros", "246": "rifle", "247": "river", "248": "roller coaster", "249": "rollerskates", "250": "sailboat", "251": "sandwich", "252": "saw", "253": "saxophone", "254": "school bus", "255": "scissors", "256": "scorpion", "257": "screwdriver", "258": "sea turtle", "259": "see saw", "260": "shark", "261": "sheep", "262": "shoe", "263": "shorts", "264": "shovel", "265": "sink", "266": "skateboard", "267": "skull", "268": "skyscraper", "269": "sleeping bag", "270": "smiley face", "271": "snail", "272": "snake", "273": "snorkel", "274": "snowflake", "275": "snowman", "276": "soccer ball", "277": "sock", "278": "speedboat", "279": "spider", "280": "spoon", "281": "spreadsheet", "282": "square", "283": "squiggle", "284": "squirrel", "285": "stairs", "286": "star", "287": "steak", "288": "stereo", "289": "stethoscope", "290": "stitches", "291": "stop sign", "292": "stove", "293": "strawberry", "294": "streetlight", "295": "string bean", "296": "submarine", "297": "suitcase", "298": "sun", "299": "swan", "300": "sweater", "301": "swing set", "302": "sword", "303": "syringe", "304": "t-shirt", "305": "table", "306": "teapot", "307": "teddy-bear", "308": "telephone", "309": "television", "310": "tennis racquet", "311": "tent", "312": "tiger", "313": "toaster", "314": "toe", "315": "toilet", "316": "tooth", "317": "toothbrush", "318": "toothpaste", "319": "tornado", "320": "tractor", "321": "traffic light", "322": "train", "323": "tree", "324": "triangle", "325": "trombone", "326": "truck", "327": "trumpet", "328": "umbrella", "329": "underwear", "330": "van", "331": "vase", "332": "violin", "333": "washing machine", "334": "watermelon", "335": "waterslide", "336": "whale", "337": "wheel", "338": "windmill", "339": "wine bottle", "340": "wine glass", "341": "wristwatch", "342": "yoga", "343": "zebra", "344": "zigzag"}}}}, {"name": "packed_drawing", "dtype": "binary"}], "splits": [{"name": "train", "num_bytes": 51960652.42514169, "num_examples": 403410}, {"name": "test", "num_bytes": 12990227.508075692, "num_examples": 100853}], "download_size": 62877590, "dataset_size": 64950879.933217384}} | 2022-12-19T15:09:12+00:00 |
06342c6cbe4ea3550f46c568076d04153aab9052 | aiarttsukuruyo/gfpgan | [
"license:other",
"region:us"
]
| 2022-12-18T03:56:29+00:00 | {"license": "other"} | 2022-12-18T03:57:29+00:00 |
|
e63637e4ae28e74c8a2d4da90809764ce104428d | # AutoTrain Dataset for project: honor
## Dataset Description
This dataset has been automatically processed by AutoTrain for project honor.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "\"Kimchi (kimchee) is a Korean dish which is well known throughout the world. It is a spicy, tangy and pungent food that contains pickled vegetables. The word \"Kimchi\" comes from the Korean \"Kim\" meaning \"turn\" and \"Chi\" meaning \"sauce\".\\n\\nKimchi consists of vegetables which are salted, fermented and seasoned. It is an important part of the Korean diet. The two main methods of preparing Kimchi are fermentation and salting. Fermented Kimchi is made by mixing cabbage, radish and other vegetables with a specific kind of salt and sugar. Salted Kimchi is made by mixing cabbage, radish and other vegetables with a specific amount of salt and some vinegar.\\n\\nThe standard vegetables used in preparing Kimchi include cabbage, radish, turnip and Chinese cabbage. However, there are many different variations of Kimchi. Some of the variations include Kimchi with beef, Kimchi with fish and Kimchi with soybean paste.\\n\\nThe preparation of Kimchi is considered to be an important part of Korean culture. It is prepared in a ritualistic manner. The Korean culture also consider it as a \"doorway\" to a family's hearth.",
"target": 1,
"feat_meta.pile_set_name": "GPT-3"
},
{
"text": "So how did you survive the terrible British summer of 2015? (Mine was miserable. There were too many weekends at home in the garden, that's all I can say.) Well, it's a new year and a new season of Doctor Who, with Peter Capaldi as our time-travelling hero.\\n\\nHere's the first photo of Capaldi in costume:\\n\\nAnd here's how it all begins...\\n\\nThis story is called The Magician's Apprentice and features Missy (the Master, if you didn't know).\\n\\nAnd here's a trailer:\\n\\nAll we can say is: A spooky church? The Doctor having to answer questions about his mistakes? Yes, please! We can't wait to see more.\\n\\nDoctor Who series 9 begins on Saturday 19 September on BBC One.",
"target": 1,
"feat_meta.pile_set_name": "GPT-3"
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"text": "Value(dtype='string', id=None)",
"target": "ClassLabel(names=['human', 'machine'], id=None)",
"feat_meta.pile_set_name": "Value(dtype='string', id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 3212 |
| valid | 804 |
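The splits can be loaded with `datasets` (a minimal sketch; it assumes the data is hosted under this repository id with the split names from the table above):
```
from datasets import load_dataset

# `target` is a ClassLabel where 0 = human and 1 = machine (see the fields above)
ds = load_dataset("freddiezhang/honordata")
print(ds["train"][0]["text"][:80], ds["train"][0]["target"])
```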
| freddiezhang/honordata | [
"task_categories:text-classification",
"language:en",
"region:us"
]
| 2022-12-18T05:56:24+00:00 | {"language": ["en"], "task_categories": ["text-classification"]} | 2022-12-19T04:48:12+00:00 |
ec2643c933e548a0b2476e9601d53d7e27ff4b54 | Glac1er/May | [
"license:unknown",
"region:us"
]
| 2022-12-18T06:47:01+00:00 | {"license": "unknown"} | 2022-12-18T18:46:23+00:00 |
|
49d1f5cd8716bd0780f61bd12caacb48242cd06d |
# Dataset Card for "lmqg/qag_dequad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the DEQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
German (de)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": "51._Bundesstaat === District of Columbia === Der District of Columbia gilt neben Puerto Rico als einer der aussichtsreichen Kandidaten fรผr die Anerkennung als Bundesstaat in naher Zukunft. Die Einwohner des Bundesdistrikts gelten als grรถรte Befรผrworter dieser Entscheidung, die jedoch einer Verfassungsรคnderung bedรผrfte. Die Anhรคnger nutzen das Motto des Unabhรคngigkeitskrieges in abgewandelter Form โ โTaxation without representationโ โ, um auf die mangelnde Reprรคsentation im Kongress hinzuweisen. Das Motto wird heute auf die Nummernschilder neu zugelassener Autos gedruckt (wobei der Fahrer alternativ die Internet-Adresse des D.C. wรคhlen kann). Bill Clintons Prรคsidenten-Limousine hatte ein solches Nummernschild kurz vor Ende seiner Amtszeit. George W. Bush lieร diese Nummernschilder nach seinem Amtsantritt wieder entfernen. Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grรผnen zur ''D.C. Statehood Green Party''. 1978 kamen sie ihrem Ziel am nรคchsten, als der Kongress das ''District of Columbia Voting Rights Amendment'' verabschiedete. Zwei Jahre spรคter beriefen lokale Bรผrger mit einer Initiative eine konstitutionelle Versammlung fรผr einen neuen Bundesstaat. 1982 ratifizierten die Wรคhler die Verfassung des Bundesstaates, der ''New Columbia'' heiรen sollte. 1985 wurde der Plan jedoch gestoppt, als das Amendment scheiterte, weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde. Eine andere Mรถglichkeit wรคre die Rรผckgliederung des Gebietes in den Bundesstaat Maryland. Damit wรผrden die Einwohner des D.C. in den Genuss der Vorteile kommen, in einem Bundesstaat zu leben, ohne dass ein 51. Bundesstaat geschaffen werden mรผsste. Am 26. Juni 2020 stimmte das US-Reprรคsentantenhaus mit 232 zu 180 Stimmen dafรผr, den District of Columbia als 51. Bundesstaat anzuerkennen. Ein positives Votum des durch die Republikaner dominierten US-Senats gilt als unwahrscheinlich. Auรerdem kรผndigte Prรคsident Trump sein Veto gegen ein solches, potenzielles Vorhaben an. Dennoch war es das erste positive Votum einer der beiden Kammern des US-Kongresses fรผr eine Anerkennung als Bundesstaat.",
"questions": [ "Was ist das Motto der Befรผrworter der Anerkennung von District of Columbia als neuer US-Bundesstaat?", "Warum hat die Anerkennung von District of Columbia zu einem neuen US-Bundesstaat 1985 nicht geklappt?", "Was war der potenzielle Name fรผr den neuen US-Bundesstaat anstelle von District of Columbia?", "Aus welchen ehemaligen Parteien bestand die D.C. Statehood Green Party?" ],
"answers": [ "das Motto des Unabhรคngigkeitskrieges in abgewandelter Form โ โTaxation without representationโ ", "weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde", " ''New Columbia'' ", "Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grรผnen" ],
"questions_answers": "question: Was ist das Motto der Befรผrworter der Anerkennung von District of Columbia als neuer US-Bundesstaat?, answer: das Motto des Unabhรคngigkeitskrieges in abgewandelter Form โ โTaxation without representationโ | question: Warum hat die Anerkennung von District of Columbia zu einem neuen US-Bundesstaat 1985 nicht geklappt?, answer: weil es nicht von genug Staaten innerhalb von sieben Jahren ratifiziert wurde | question: Was war der potenzielle Name fรผr den neuen US-Bundesstaat anstelle von District of Columbia?, answer: ''New Columbia'' | question: Aus welchen ehemaligen Parteien bestand die D.C. Statehood Green Party?, answer: Die kleine ''D.C. Statehood Party'' vertrat diese Ansicht und vereinte sich mit den Grรผnen"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
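The `questions_answers` field is simply the `questions` and `answers` lists flattened into one string, in the format shown in the example above. A minimal sketch of that mapping (reconstructed from the example format, not taken from the lmqg code):
```
def flatten_qa(questions, answers):
    # "question: ..., answer: ..." pairs joined by " | "
    return " | ".join(f"question: {q}, answer: {a}" for q, a in zip(questions, answers))
```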
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|2489 | 1476 | 474 |
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_dequad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_dequad",
"language:de",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T07:04:59+00:00 | {"language": "de", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_dequad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T08:14:09+00:00 |
cc90041343272411122d092d816a2eabb9f8d9d1 |
# Dataset Card for "lmqg/qag_koquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushio.com/)
### Dataset Summary
This is the question & answer generation dataset based on the KOQuAD.
### Supported Tasks and Leaderboards
* `question-answer-generation`: The dataset can be used to train a model for question & answer generation.
Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more detail).
### Languages
Korean (ko)
## Dataset Structure
An example of 'train' looks as follows.
```
{
"paragraph": ""3.13 ๋ง์ธ์ด๋" ์ 1919๋
3.13์ผ ์ ์ฃผ์์ ์ผ์ด๋ ๋ง์ธ์ด๋์ด๋ค. ์ง์ญ ์ธ์ฌ๋ค๊ณผ ํจ๊ป ์ ํฅํ๊ต ํ์๋ค์ด ์ฃผ๋์ ์ธ ์ญํ ์ ํ๋ฉฐ, ๋ง์ธ์ด๋์ ์ด๋์๋ค. ๋ฐํ๋ จ, ๊น์ ๊ทน ๋ฑ ์ ์ฃผ ์ง๋์๋ค์ ๊ตฐ์ฐ์์ 4์ผ๊ณผ 5์ผ ๋
๋ฆฝ๋ง์ธ ์์๊ฐ ๊ฐํ๋๋ค๋ ์์์ ๋ฃ๊ณ ์ค๋นํ๊ณ ์์๋ค. ์ฒ๋๊ต์ ๋ฐํ๋ จ ์ ๊ฐํ ์ด๋ฌด์ง์์ ํ์ํ ํ๊ทน๊ธฐ๋ฅผ ์ธ์ํ๊ธฐ๋ก ํ์๋ค. ์์ธ์ ๋น๋กฏํ ๋ค๋ฅธ ์ง๋ฐฉ์์ ์์๊ฐ ๊ณ์๋์ ์ผ๋ณธ๊ฒฝ์ฐฐ์ ์ ํฅํ๊ต์ ๊ธฐ์ ํ๊ต๋ฅผ ๋น๋กฏํ ์ ์ฃผ์๋ด ํ๊ต์ ๊ฐ์ ๋ฐฉํ์กฐ์น๋ฅผ ์ทจํ๋ค. ์ด์ ์ต์ข
์ผ ๋ฑ ์ ํฅํ๊ต ํ์ 5๋ช
์ ๋ฐค์ ์ด์ฉํด ์ ํฅํ๊ต ์งํ์ค์์ ํ๊ทน๊ธฐ ๋ฑ ์ธ์๋ฌผ์ ๋ง๋ค์๋ค. ์ค๋น๋ฅผ ๋ง์น ์ด๋ค์ 13์ผ ์ฅํฐ๋ก ๋ชจ์ด๊ธฐ ์์ํ๊ณ , ์ฑ์๊ฐ๋ง๋๋ก ์์ฅํ ํ๊ทน๊ธฐ๋ฅผ ์ฅํฐ๋ก ์ค์ด ๋๋ฅด๊ณ ๊ฑฐ์ฌ ์ง์ ์์ฅ ์
๊ตฌ์ธ ์์ฐ๋๊ณผ ์ ์ฃผ๊ต ๊ฑด๋ํธ์์ ๊ตฐ์ค๋ค์๊ฒ ์๋ฐํ ๋ฐฐ๋ถํ๋ค. ๋ฎ 12์20๋ถ๊ป ์ ํฅํ๊ต์ ๊ธฐ์ ํ๊ต ํ์ ๋ฐ ์ฒ๋๊ต๋ ๋ฑ์ ํ๊ทน๊ธฐ๋ฅผ ๋ค๊ณ ๋ง์ธ๋ฅผ ๋ถ๋ ๋ค. ๋จ๋ฌธ ๋ฐ ์์ฅ, ์ 2๋ณดํตํ๊ต(ํ ์์ฐ์ด๋ฑํ๊ต)์์ ๋ชจ์ฌ ์ธ์๋ฌผ์ ๋ฟ๋ฆฌ๋ฉฐ ์๊ฐ์ง๋ก ๊ตฌ๋ณด๋ก ํ์งํ๋ค. ์์๋ ์คํ 11์๊น์ง ์๋์ฐจ๋ก ๊ณ์๋๋ค. ๋ ๋ค์๋ ์คํ 3์์๋ ๊ตฐ์ค์ด ๋ชจ์ฌ ๋ง์ธ๋ฅผ ๋ถ๋ ๋ค. ์ดํ ๊ณ ํ์ง, ๋จ๊ถํ, ๊น๋ณํ, ๊น์ ์ , ์ด๊ธฐ๊ณค, ๊น๊ฒฝ์ ๋ฑ ์ ํฅํ๊ต ํ์๋ค์ ์์๋ฅผ ์ฃผ๋ํ๋ค๋ ํ์๋ก ๋ชจ๋ ์คํ 1๋
์ ์ธ๋ ๋ฐ์๋ค. ์ด์ธ ์ ํฅํ๊ต ํ์ 3๋ช
์ ์ผ์ ์ ๊ณ ๋ฌธ์ ์ฅ์ฌํ ๊ฒ์ผ๋ก ์๋ ค์ก๋ค. ๋ ์์๋ฅผ ์ง๋ํ ๊น์ธ์ ๋ชฉ์ฌ๋ ์ดํ ์ค๊ตญ ์ํด๋ก ๊ฑฐ์ฒ๋ฅผ ์ฎ๊ฒจ ์์์ ๋ถ์์ ํ๋ํ๋ค. ํ์ฌ ์ ํฅํ๊ต ๊ต๋ฌธ ์์ ๋ง์ธ์ด๋ ๊ธฐ๋
๋น๊ฐ ์ธ์์ ธ ์๋ค.",
"questions": [ "๋ง์ธ์ด๋ ๊ธฐ๋
๋น๊ฐ ์ธ์์ ธ ์๋ ๊ณณ์?", "์ผ๋ณธ๊ฒฝ์ฐฐ์ ๊ฐ์ ๋ฐฉํ์กฐ์น์๋ ๋ถ๊ตฌํ๊ณ ํ์๋ค์ ์ ํฅํ๊ต ์งํ์ค์ ๋ชจ์ฌ์ ์ด๋ค ์ธ์๋ฌผ์ ๋ง๋ค์๋๊ฐ?", "์ฌ๋ฌ ์ง๋ฐฉ์์ ์์๊ฐ ์ผ์ด๋์ ์ผ๋ณธ๊ฒฝ์ฐฐ์ด ์ ์ฃผ์๋ด ํ๊ต์ ๊ฐํํ ์กฐ์น๋ ๋ฌด์์ธ๊ฐ?", "์ง์ญ์ธ์ฌ๋ค๊ณผ ์ ํฅ๊ณ ๋ฑํ๊ต ํ์๋ค์ด ์ฃผ๋์ ์ธ ์ญํ ์ ํ 3.13 ๋ง์ธ์ด๋์ด ์ผ์ด๋ ํด๋?", "์ ํฅํ๊ต ํ์๋ค์ ์์๋ฅผ ์ฃผ๋ํ๋ค๋ ํ์๋ก ๋ชจ๋ ์คํ ๋ช๋
์ ์ธ๋ ๋ฐ์๋๊ฐ?", "๋ง์ธ์ด๋์์ ์ฃผ๋์ ์ธ ์ญํ ์ ํ ์ด๋ค์?", "1919๋
3.1 ์ด๋์ด ์ผ์ด๋ ์ง์ญ์ ์ด๋์ธ๊ฐ?", "3.13 ๋ง์ธ์ด๋์ด ์ผ์ด๋ ๊ณณ์?" ],
"answers": [ "์ ํฅํ๊ต ๊ต๋ฌธ ์", "ํ๊ทน๊ธฐ", "๊ฐ์ ๋ฐฉํ์กฐ์น", "1919๋
", "1๋
", "์ ํฅํ๊ต ํ์๋ค", "์ ์ฃผ", "์ ์ฃผ" ],
"questions_answers": "question: ๋ง์ธ์ด๋ ๊ธฐ๋
๋น๊ฐ ์ธ์์ ธ ์๋ ๊ณณ์?, answer: ์ ํฅํ๊ต ๊ต๋ฌธ ์ | question: ์ผ๋ณธ๊ฒฝ์ฐฐ์ ๊ฐ์ ๋ฐฉํ์กฐ์น์๋ ๋ถ๊ตฌํ๊ณ ํ์๋ค์ ์ ํฅํ๊ต ์งํ์ค์ ๋ชจ์ฌ์ ์ด๋ค ์ธ์๋ฌผ์ ๋ง๋ค์๋๊ฐ?, answer: ํ๊ทน๊ธฐ | question: ์ฌ๋ฌ ์ง๋ฐฉ์์ ์์๊ฐ ์ผ์ด๋์ ์ผ๋ณธ๊ฒฝ์ฐฐ์ด ์ ์ฃผ์๋ด ํ๊ต์ ๊ฐํํ ์กฐ์น๋ ๋ฌด์์ธ๊ฐ?, answer: ๊ฐ์ ๋ฐฉํ์กฐ์น | question: ์ง์ญ์ธ์ฌ๋ค๊ณผ ์ ํฅ๊ณ ๋ฑํ๊ต ํ์๋ค์ด ์ฃผ๋์ ์ธ ์ญํ ์ ํ 3.13 ๋ง์ธ์ด๋์ด ์ผ์ด๋ ํด๋?, answer: 1919๋
| question: ์ ํฅํ๊ต ํ์๋ค์ ์์๋ฅผ ์ฃผ๋ํ๋ค๋ ํ์๋ก ๋ชจ๋ ์คํ ๋ช๋
์ ์ธ๋ ๋ฐ์๋๊ฐ?, answer: 1๋
| question: ๋ง์ธ์ด๋์์ ์ฃผ๋์ ์ธ ์ญํ ์ ํ ์ด๋ค์?, answer: ์ ํฅํ๊ต ํ์๋ค | question: 1919๋
3.1 ์ด๋์ด ์ผ์ด๋ ์ง์ญ์ ์ด๋์ธ๊ฐ?, answer: ์ ์ฃผ | question: 3.13 ๋ง์ธ์ด๋์ด ์ผ์ด๋ ๊ณณ์?, answer: ์ ์ฃผ"
}
```
The data fields are the same among all splits.
- `questions`: a `list` of `string` features.
- `answers`: a `list` of `string` features.
- `paragraph`: a `string` feature.
- `questions_answers`: a `string` feature.
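To check the parallel structure of the two list fields, a minimal sketch (it assumes the dataset is hosted under the id in the card title):
```
from datasets import load_dataset

ds = load_dataset("lmqg/qag_koquad", split="train")
example = ds[0]
assert len(example["questions"]) == len(example["answers"])
```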
## Data Splits
|train|validation|test |
|----:|---------:|----:|
|9600 | 960 | 4442|
## Citation Information
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
``` | lmqg/qag_koquad | [
"task_categories:text-generation",
"task_ids:language-modeling",
"multilinguality:monolingual",
"size_categories:1k<n<10K",
"source_datasets:lmqg/qg_koquad",
"language:ko",
"license:cc-by-sa-4.0",
"question-generation",
"arxiv:2210.03992",
"region:us"
]
| 2022-12-18T07:05:17+00:00 | {"language": "ko", "license": "cc-by-sa-4.0", "multilinguality": "monolingual", "size_categories": "1k<n<10K", "source_datasets": "lmqg/qg_koquad", "task_categories": ["text-generation"], "task_ids": ["language-modeling"], "pretty_name": "SQuAD for question generation", "tags": ["question-generation"]} | 2022-12-18T08:03:53+00:00 |